Interaction analysis on contact center conversations
Perustieteiden korkeakoulu (School of Science) | Master's thesis
Mcode
SCI3044
Language
en
Pages
50
Abstract
Customer satisfaction is key to success in every business. Contact center agents assist customers who encounter issues while using the services of enterprise clients. To improve this process, a post-conversation evaluation of agents is performed and meaningful insights are extracted from each conversation. A supervisor reviews the conversation and evaluates how efficiently the agent assisted the customer. This evaluation targets several metrics that are important both to the clients' business and to improving the quality of assistance that contact centers provide. We focused on automating these manually filled evaluation forms, targeting i) Call Driver (predicting the primary reason for contacting the support center) and ii) First Call Resolution (FCR; predicting whether the customer's issue was resolved). Besides this post-conversation evaluation, a short survey is presented to the customer as soon as the conversation ends. The customer is typically asked about their satisfaction with the quality of service and their intention to recommend the product or service to friends and family. Fewer than 30% of customers fill out the survey; the rest ignore it. Our goal was to predict the customer's intention based on the conversation that had just ended with the agent. Hence, we also targeted iii) Net Promoter Score (NPS; predicting whether a customer is likely to promote, detract from, or stay neutral about the services). All three problem areas help clients improve their products and, likewise, help contact centers target the areas where improvement is needed. We used transformer-based models to solve these problems. Due to the mismatch between the pre-training data and the downstream task (contact center conversations), we applied a domain adaptation strategy to update the language model of the pre-trained model. We achieved more than 80 percent accuracy in all three problem areas.
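The NPS target in iii) groups customers into promoter, neutral, and detractor classes. As a minimal sketch of how such labels are conventionally derived, the following assumes the standard 0-10 NPS survey scale (the abstract does not state the exact scale used in the thesis):

```python
def nps_category(score: int) -> str:
    """Map a 0-10 survey score to the conventional NPS bucket."""
    if not 0 <= score <= 10:
        raise ValueError("NPS survey scores range from 0 to 10")
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "neutral"  # often called "passive"
    return "detractor"

def net_promoter_score(scores: list[int]) -> float:
    """NPS = (% promoters) - (% detractors), over all respondents."""
    cats = [nps_category(s) for s in scores]
    promoters = cats.count("promoter") / len(cats)
    detractors = cats.count("detractor") / len(cats)
    return 100.0 * (promoters - detractors)

# e.g. net_promoter_score([10, 9, 8, 3]) -> 25.0
# (50% promoters - 25% detractors)
```

In this setting the classifier predicts `nps_category` directly from the conversation text for the roughly 70% of customers who never answer the survey.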
For i) and ii), our goal was to support cross-lingual prediction as well, without having any training data from other languages. A single cross-lingual transformer model, XLM-RoBERTa, was enough to achieve this goal, and the results on other languages were also quite convincing.
Supervisor
Kurimo, Mikko
Thesis advisor
Kamran Malik, Muhammad
Munir, Syed Taha