The Real-Time Translation feature ensures that language barriers no longer restrict user engagement. Whatever language the user speaks, Knovvu Virtual Agent provides accurate translations in real time, facilitating seamless communication.
Key Benefits
- Expanded Language Support: Communicate with users in languages not natively supported by your project's NLU models.
- Improved User Experience: Eliminate language barriers, making your chatbot more accessible to a global audience.
- Enhanced Engagement: Engage users in their native language, enriching interactions and improving satisfaction.
Before You Start
Configure Google Translation Service: Navigate to the Integrations page and set up a service integration for Google Translate. This integration will serve projects that require Real-Time Translation. Ensure you enter a valid Subscription Key to complete the setup.
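For context, the sketch below shows the kind of request a key like this authorizes, using Google's public Cloud Translation v2 REST endpoint. It is a minimal illustration only; once the integration is configured, Knovvu makes such calls on your behalf, so you do not issue them yourself.

```python
import requests

# Google's public Cloud Translation v2 endpoint.
TRANSLATE_URL = "https://translation.googleapis.com/language/translate/v2"

def translate(text: str, target: str, api_key: str, source: str | None = None) -> str:
    """Translate `text` into `target`, optionally hinting the source language."""
    payload = {"q": text, "target": target, "format": "text"}
    if source:
        payload["source"] = source
    resp = requests.post(TRANSLATE_URL, params={"key": api_key}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["translations"][0]["translatedText"]

# Example: translate a Turkish utterance into English.
print(translate("Merhaba, nasıl yardımcı olabilirim?", target="en", api_key="<SUBSCRIPTION_KEY>"))
```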
Add Text-Language Detection Service to Your Project: The Text-Language Detection service is essential for identifying the user's spoken language and triggering the translation process. If the user's language is detected as one of the project's primary or additional languages, the Real-Time Translation feature will not interfere, allowing all processes to continue seamlessly.
Activate Real-Time Translation: Once you've successfully integrated the Translation and Text-Language Detection services into your project, you can enable the Real-Time Translation feature. Simply go to the project settings and switch the toggle to "On."
How It Works
User and Virtual Agent Interactions
- When the first user utterance reaches the Virtual Agent, it is sent to the Text-Language Detection service to determine the user's language. If the detected language is one of the project's primary or additional languages, the conversation continues smoothly without employing the Real-Time Translation feature.
- When the user's language is detected by the Text-Language Detection service, the "lastUserLanguage" variable is automatically set. This value may change if a different language is detected in subsequent interactions.
- If the user's language is not among the project's supported languages, Real-Time Translation is activated. The user's detected language is used as the source language and the project's primary language as the target language when translating the user's input for the bot.
- Once the user's utterance is translated into the Virtual Agent's language, it undergoes the Natural Language Understanding (NLU) process. After the Virtual Agent's response is prepared, it is translated back into the user's spoken language.
- Note: Because the user utterance is translated before the NLU process runs, the "lastUserInput" variable holds the translated text rather than the original utterance. If translation is not activated, the behavior of the "lastUserInput" variable remains unchanged.
- This process is repeated after each user utterance, allowing for dynamic language switching throughout the conversation; the sketch below traces a single turn.
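The following sketch condenses the flow above into one turn-handling function. All helper names (detect_language, run_nlu, translate), the language codes, and the session dictionary are hypothetical stand-ins for the platform's Text-Language Detection, NLU, and Google Translate services; they are not real Knovvu APIs.

```python
# Example values for the project's primary + additional languages.
SUPPORTED_LANGUAGES = {"en", "de"}
PRIMARY_LANGUAGE = "en"

def detect_language(text: str) -> str:
    return "fr"  # stand-in for the Text-Language Detection service

def run_nlu(text: str) -> str:
    return f"echo: {text}"  # stand-in for the NLU pipeline (runs in the primary language)

def translate(text: str, source: str, target: str) -> str:
    return f"[{source}->{target}] {text}"  # stand-in for the Google Translate integration

def handle_user_turn(utterance: str, session: dict) -> str:
    detected = detect_language(utterance)
    session["lastUserLanguage"] = detected       # updated on every turn
    if detected in SUPPORTED_LANGUAGES:
        session["lastUserInput"] = utterance     # translation not activated
        return run_nlu(utterance)
    translated_input = translate(utterance, source=detected, target=PRIMARY_LANGUAGE)
    session["lastUserInput"] = translated_input  # holds the translated text (see note above)
    bot_response = run_nlu(translated_input)
    return translate(bot_response, source=PRIMARY_LANGUAGE, target=detected)
```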
User and Live Agent Interactions
- Real-Time Translation can also enhance interactions between the human agent and the user. If, during the conversation with the Virtual Agent, the user expresses a desire to connect with a human agent, the conversation is transferred to a Live Agent.
- When the conversation history between the user and the Virtual Agent is sent to the Live Agent, the human agent views the past session history translated into their own language.
- The process mirrors the interaction between the user and the Virtual Agent. The user's utterance is first analyzed by the Text-Language Detection service. If the user's language differs from the project's supported languages, it undergoes translation: the agent receives the user's input translated into the agent's own language, and vice versa, the user receives the agent's messages translated into their own language (see the sketch after this list).
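A short sketch of this bridging logic, again with a hypothetical translate stand-in rather than a real Knovvu API: each message is rendered into the recipient's language before delivery, in both directions.

```python
def translate(text: str, source: str, target: str) -> str:
    return f"[{source}->{target}] {text}"  # stand-in for the Google Translate integration

def relay_to_agent(user_message: str, user_lang: str, agent_lang: str) -> str:
    # The live agent sees the user's message in the agent's own language.
    if user_lang == agent_lang:
        return user_message
    return translate(user_message, source=user_lang, target=agent_lang)

def relay_to_user(agent_message: str, agent_lang: str, user_lang: str) -> str:
    # The user sees the agent's reply translated back into their own language.
    if agent_lang == user_lang:
        return agent_message
    return translate(agent_message, source=agent_lang, target=user_lang)
```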
Note: Audio channels are not supported for projects where Real-Time Translation is enabled. Consequently, Text-to-Speech services cannot be added to these projects.
Reporting
From the conversations page, you can choose to view the original messages as they appear in the customer chat or switch the toggle to view the translated responses in the project's primary language.
Matched intents and entities of the conversation appear on the left side in the project's primary language, regardless of the toggle selection. Similarly, dashboard graphs, such as Fallback intents, are also displayed in the project's primary language.
When you export the conversations page, the exported file will reflect the view you selected—either the original messages or the translated responses.
Conversations page with the toggle set to "Original"
Conversations page with the toggle set to "Translated"