LLM Use Cases

Important Note

LLM use cases are currently available only in the cloud, using OpenAI and Azure OpenAI services. Work is underway to make all of these use cases available on-premise as well; for on-premise deployment, we plan to install and use LLaMA-3 and Gemma-2. We continuously test the latest models and explore support for additional ones. The choice of on-premise models will depend on the use cases, performance, and hardware requirements.

There are no restrictions on specific use cases or languages. We expect higher success rates in European languages, including Turkish, and the success rate will likely vary with the complexity of each use case.

Features Available in the Demo

| Subject | Details |
| --- | --- |
| Text Summarization | Shortens the supervisors' analysis process. During a conversation, supervisors can view its details and, to grasp the issues faster, use the "Generate Summary" button in the UI. This feature can be used at any time once it is enabled for the tenant. |
| Generative Q&A | Generates instant responses to customer queries during conversations with the LLM. |
| Generating Call Notes and Promise Management | Generates key information from customer-agent interactions during conversations with the LLM: conversation summary, reason for the conversation, solution offered by the agent, and actions taken during the conversation. |
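The call-notes fields above map naturally to a structured-extraction prompt sent to the LLM. A minimal sketch in Python, assuming a generic chat-completion backend; the function and field names here are illustrative, not the product's actual API:

```python
import json

# Illustrative field names for the four call-note items listed above.
CALL_NOTES_FIELDS = ["summary", "reason", "agent_solution", "actions_taken"]

def build_call_notes_prompt(transcript: str) -> str:
    """Build a prompt asking the model to return the call-note fields as strict JSON."""
    fields = ", ".join(f'"{f}"' for f in CALL_NOTES_FIELDS)
    return (
        "Extract the following information from the customer-agent conversation "
        f"below, and reply with a JSON object containing exactly the keys {fields}.\n\n"
        f"Conversation:\n{transcript}"
    )

def parse_call_notes(model_output: str) -> dict:
    """Parse the model's JSON reply and verify every expected field is present."""
    notes = json.loads(model_output)
    missing = [f for f in CALL_NOTES_FIELDS if f not in notes]
    if missing:
        raise ValueError(f"model response missing fields: {missing}")
    return notes
```

The prompt returned by `build_call_notes_prompt` would then be sent to the chosen backend (OpenAI or Azure OpenAI in the cloud, LLaMA-3 or Gemma-2 on-premise), and `parse_call_notes` validates the reply before the notes are stored against the conversation.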

Features Not Yet Available in the Demo (in the Product Roadmap)

| Subject | Status | Details |
| --- | --- | --- |
| Topic Summarization (AI-Generated Topics) | Feature in roadmap | During the classification of conversations, the LLM summarization service is used as a pre-step of category training and inference for each conversation subjected to inference. |
| Ask-for-Help Reason Detection | Feature in roadmap | The reason an agent requests instant help from a supervisor during a conversation can be detected with the LLM. |
| Negative Customer Sentiment Analysis | Feature in roadmap | Detects the reasons behind negative customer sentiment and formulates responses during conversations with the LLM. |