Features
21 Jun 2024

Analysis

- Text and acoustical analysis: CA converts audio files of conversations into text using a speech recognition service, calculates acoustical scores, detects sentiment, and categorizes the conversations using AI-based models. Separating the customer and the agent through stereo call recording allows two-channel analysis of all features. Mono recordings can also be analyzed using a speaker detection (diarization) service. The emotional dimension of the conversations provides insight into customer satisfaction through the evaluation of a rich set of parameters such as tension ratio, monotony of speech, interruption, block ratio, speed, hesitation, and silence. These parameters are explained in the table below.
- Sentiment analysis: The overall customer and agent sentiment, the course of the sentiment on both channels during the conversation, the existence of negative content, and the agreement status are presented in the conversation details. This information can be included in categories as well.
- Conversation summarization: Conversation summarization via OpenAI is available to provide digested information about a conversation.
- Diarization: Mono-channel audio files can be analyzed as multiple channels via the diarization service. (Note that some analysis results available for stereo recordings, such as overlap ratio and interruption count, may be lost.)
- Named entity recognition (NER): Entities in the conversation are detected and presented in the conversation details, providing an overview of the conversation at a glance.
- Text normalization: CA is capable of correcting typos in chats, understanding abbreviations, and issuing the queries accordingly.
- Video call analysis: Video call recordings can be analyzed and played in the UI. The diarization service detects the speakers in the conversation and separates the transcript accordingly.
- Screen recording: To further analyze the agent's actions during the conversation, it is possible to record screen events. The screen recorder is an additional feature of the call recorder.
- Supported audio formats: MP3, WAV, and Opus.
- Supported video formats: MP4.
Acoustic parameters:
- Call overlap duration (sec): the total duration where both parties are speaking at the same time.
- Call overlap ratio (%): the ratio of the “call overlap duration” to the “total call duration”.
- Call max overlap duration (sec): the maximum duration where both parties are speaking at the same time during an interaction.
- Call maximum silence start (sec): the second at which the maximum gap of silence starts.
- Call maximum silence end (sec): the second at which the maximum gap of silence ends.
- Call maximum silence duration (sec): the duration of the longest (maximum) uninterrupted gap of silence.
- Call silence ratio (%): the ratio of the “total silence duration” to the “total call duration”.
- Call silence duration beginning (sec): the duration from the beginning of the call until the first word is spoken by either party.
- Call silence duration end (sec): the duration from the last spoken word until the call is ended by either party.
- Call hold(s): the number of holds, if any.
- Agent tension ratio (%): the ratio of tension in the agent’s voice in a call.
- Agent monotonicity ratio (%): the ratio of monotonicity in the agent’s voice in a call.
- Agent interruption count per minute (count/min): the number of times the agent interrupts the customer per minute.
- Agent speed (letter/sec): the speaking speed of the agent (the number of letters pronounced per second by the agent).
- Agent block count per minute (count/min): the number of times the agent has block (uninterrupted/continuous) speech per minute.
- Agent block ratio (%): the ratio of the agent's block (uninterrupted/continuous) speaking time to the total call time.
- Agent hesitation count per minute (count/min): the number of times the agent hesitates per minute.
- Customer tension ratio (%): the ratio of tension in the customer’s voice in a call.
- Customer share (%): the percentage of talk time attributed to the customer.
- Customer interruption count per minute (count/min): the number of times the customer interrupts the agent per minute.
- Customer gender (male/female): the gender of the customer predicted by the Sestek AI engine.
- Customer block count per minute (count/min): the number of times the customer has block (uninterrupted/continuous) speech per minute.
- Customer block ratio (%): the ratio of the customer's “block speaking time” to the “total call time”.
- Tension: Tension is measured through spectrograms. It is a machine learning concept in which a computational method calculates the strain rate in the call by classifying frequency-based features segment by segment with artificial neural networks.
- Monotonicity: Monotonicity is measured through variations in the main frequency value of the agent's voice, and a value between 0 and 100 is returned. The more the sound deviates from its average frequency value, the less monotonous (the more lively) it is marked to be. In other words, regardless of whether agents speak loudly or quietly, agents who speak in a monotonous voice are more likely to have a high monotonicity score.
- Interruption vs. overlap: An interruption happens when the second speaker cuts off the first speaker's statement. An overlap happens when both parties speak at the same time.
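The duration and ratio parameters above follow directly from the speakers' timestamped speech segments. As a minimal illustration of how such metrics can be derived (a sketch with hypothetical segment data, not Sestek's implementation):

```python
def merge(intervals):
    """Merge overlapping (start, end) intervals and return them sorted."""
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def overlap_and_silence(agent, customer, call_end):
    """Compute overlap and silence duration/ratio for one call.

    `agent` and `customer` are lists of (start_sec, end_sec) speech
    segments; `call_end` is the total call duration in seconds.
    """
    speech = merge(agent + customer)
    talk = sum(e - s for s, e in speech)
    silence = call_end - talk  # total silence duration
    # Overlap: pairwise intersection of agent and customer segments.
    overlap = 0.0
    for a_s, a_e in merge(agent):
        for c_s, c_e in merge(customer):
            overlap += max(0.0, min(a_e, c_e) - max(a_s, c_s))
    return {
        "overlap_sec": overlap,
        "overlap_ratio_pct": 100 * overlap / call_end,
        "silence_sec": silence,
        "silence_ratio_pct": 100 * silence / call_end,
    }

# Hypothetical 10-second call: one second of overlap, one of silence.
m = overlap_and_silence(agent=[(0, 4), (6, 9)], customer=[(3, 6)], call_end=10)
```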

Categorization

- User defined category builder: CA allows users to search phrases using logical operators in queries. The custom category builder lets a user dig deeper and customize the analysis rather than settling for canned categories/queries that may not capture key insights. The category builder has a no-code, drag-and-drop UI that a trained power user can administer easily. Queries can be supported by various filters such as date, agent group, word group, agent/customer block ratio, hesitation count, interruption count, agent speed, overlap duration/ratio, silence duration/ratio, customer gender, monotonicity ratio, and agent/customer tension ratio. Query results are presented rapidly because the conversational data is indexed before the search request. Categorization has some limits:
  - The maximum number of active queries per tenant is 250.
  - The maximum number of text search items a query can have is 40.
  - The maximum number of text search items a query group can have is 10.
  - The maximum number of queries that can be added to a query group is 1.
- AI generated categories: CA evaluates the whole content of the speech and classifies analyzed conversations into specific categories generated by AI. The AI categorization service creates a topic list from tenant data, and new conversations are matched with those classes. Note that CA supports both the OpenAI and Sestek services in the categorization process.
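The logical operators in the category builder can be thought of as boolean predicates over a transcript, optionally combined with acoustic filters. A minimal sketch (the function names, phrases, and silence-ratio filter are hypothetical illustrations, not the product's query syntax):

```python
def contains_all(transcript, *phrases):
    """AND: every phrase must appear in the transcript."""
    text = transcript.lower()
    return all(p.lower() in text for p in phrases)

def contains_any(transcript, *phrases):
    """OR: at least one phrase must appear in the transcript."""
    text = transcript.lower()
    return any(p.lower() in text for p in phrases)

def matches_category(transcript, metrics, max_silence_ratio=None):
    """Combine text search operators with an acoustic filter."""
    # Example rule: mentions cancellation or refund, but not "thank you".
    text_ok = (contains_any(transcript, "cancel", "refund")
               and not contains_all(transcript, "thank you"))
    if max_silence_ratio is not None and metrics["silence_ratio"] > max_silence_ratio:
        return False
    return text_ok
```

A query like this would be evaluated against the pre-built index rather than scanning transcripts one by one, which is why results return quickly.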

Root Cause Analysis

- Ask GenAI: Ask GenAI enables users to conduct comprehensive analysis of up to 500 conversations. With this feature, users can extract detailed insights from vast transcript data. To enable this feature, add GenAI to your tenants using OpenAI credentials.
- Statistical comparison: CA enables users to analyze recorded calls statistically by applying root-cause analysis. The feature allows users to define two different cases, typically a positive and a negative one, and compare them in terms of textual and emotional features. The comparison can also be done along a time dimension to track the progress of the features. During statistical comparison, the user can deep-dive to get more information on a certain case; as a result, it is possible to detect the conversation content of a specific phrase.
- Trend analysis: CA helps companies measure the effectiveness of their marketing efforts and evaluate their position relative to competitors by applying root-cause analysis and focusing on the categories within a given period. Trend analysis can also be used to track the behavior of any custom category during a selected period.
- Non-FCR analysis: CA can detect non-FCR conversations and helps the user understand the root cause of the consequent conversations. The non-FCR algorithm consists of two main parts: filtering and grouping. After a conversation is filtered as non-FCR, the algorithm searches for a group to match the conversation with.
  Filtering:
  - Non-FCR measures repetitive conversations within a given time interval.
  - If the customer-specific data is set to ‘caller/called number’ (the default option), calls and chats are tagged as non-FCR conversations separately. If this parameter is set to ‘customer ID’, calls and chats can be tagged as non-FCR conversations jointly.
  - As the non-FCR module is integrated with CA, it is empowered by the categorization feature: conversations that match at least one of the chosen categories are included for tagging as non-FCR conversations.
  Grouping:
  - If neither attached data nor categories are used for grouping, the conversations are grouped according to the customer-specific data.
  - If attached data is used for grouping, conversations that have the same value for all of the chosen attached data are grouped; conversations that do not have all of the chosen attached data are excluded.
  - If categories are used for grouping, conversations that match at least one of the chosen categories are grouped.
  In this way, it is possible to measure whether a customer has interacted many times for the same category and/or attached data. Note that the filtering and grouping options are configurable from the settings and represent real advancements in non-FCR measurement. Non-FCR is also applicable to chat and video interactions.
- Email reports: CA proactively notifies any user or department with daily, hourly, or immediate reports that include the selected query results. It also sends immediate notifications for urgent cases, such as the detection of undesired events like customer complaints.
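The non-FCR grouping rules above can be sketched as a small routine. This is a simplified illustration under assumed field names ("customer", "attached", "categories") and one reading of the category-grouping rule, not the product's implementation:

```python
from collections import defaultdict

def group_non_fcr(conversations, attached_keys=None, categories=None):
    """Group conversations already filtered as non-FCR.

    Each conversation is a dict with "customer" (customer-specific data),
    "attached" (dict of attached data), and "categories" (set of names).
    """
    groups = defaultdict(list)
    for conv in conversations:
        if attached_keys:
            # Exclude conversations missing any of the chosen attached data;
            # group the rest by the values of all chosen keys.
            if not all(k in conv["attached"] for k in attached_keys):
                continue
            key = tuple(conv["attached"][k] for k in attached_keys)
        elif categories:
            # Group conversations matching at least one chosen category
            # (here keyed by the set of matched categories).
            matched = conv["categories"] & set(categories)
            if not matched:
                continue
            key = tuple(sorted(matched))
        else:
            # Default: group by customer-specific data.
            key = conv["customer"]
        groups[key].append(conv)
    return dict(groups)
```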

Automated Quality Management (AQM)

- Automatic questions: A conversation can be evaluated automatically by considering the acoustic parameter results, category compatibility, script adherence, and sentiment results. 100% of conversations can be evaluated automatically.
- Manual questions: Evaluators can evaluate manually by selecting the suitable option for each evaluation criterion defined in an evaluation form.
- Objection: An agent can raise an objection to an evaluation and request a re-evaluation.
- Calibration: An evaluator can also use AQM to calibrate assessments conducted by others by providing an additional perspective, or modify their own evaluation in case of any inaccuracies. Furthermore, the agent has the opportunity to challenge the evaluation of the conversation and request a secondary review.
- Assignments: AQM empowers managers to create assignments and foster a dependable QM process while optimizing resource management. It allows agents to review their evaluated conversations and receive feedback for improvement.
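One way to picture automatic evaluation is as a weighted score over per-question checks against the analysis results. The sketch below is purely illustrative; the question names, thresholds, and weights are invented, not taken from the product:

```python
def auto_score(conversation, form):
    """Score a conversation against a form of automatic questions.

    `form` maps question name -> (predicate, weight); each predicate
    inspects the analyzed conversation and returns True or False.
    """
    total = sum(w for _, w in form.values())
    earned = sum(w for pred, w in form.values() if pred(conversation))
    return round(100 * earned / total, 1)

# Hypothetical checks built on the kinds of results described above.
form = {
    "Greeting used":     (lambda c: "greeting" in c["categories"], 20),
    "Low agent tension": (lambda c: c["agent_tension_pct"] < 30, 40),
    "No long silence":   (lambda c: c["max_silence_sec"] < 15, 40),
}
conv = {"categories": {"greeting"}, "agent_tension_pct": 25, "max_silence_sec": 20}
score = auto_score(conv, form)  # greeting and tension pass; silence fails
```

Because every check runs on already-computed analysis results, such a form can be applied to 100% of conversations with no manual effort.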

Security

- Dynamic data masking: Personally identifiable information in conversation transcriptions is masked and cannot be accessed through the UI.
- Audio encryption: Knovvu Analytics is capable of encrypting audio files after the analysis is completed.
