Avatar Web Component


1. Overview

The Knovvu Web Component is a conversational interface component developed for websites, mobile applications, and kiosk systems. It is packaged using the Web Component (Custom Element) standard, allowing it to run consistently across different platforms and integrate easily into any frontend framework or plain HTML page.

Built on a React-based architecture, the Knovvu Web Component delivers a flexible Virtual Agent experience that can operate with or without a video avatar, depending on the configuration. It supports both text and voice-based interactions and is designed for enterprise-scale deployments.

Key Features

| Feature | Description |
| --- | --- |
| 🌐 Multi-Platform Support | Works seamlessly on Web, Mobile, and Kiosk environments |
| 💬 Chat & Avatar Modes | Supports text-based chat as well as video avatar experiences, configurable per project |
| 🌍 Multi-Language Support | Enables conversations in English, Arabic, Turkish, and additional languages as required |
| ⚙️ Centralized Configuration | Configuration can be managed server-side using an Integration ID, eliminating the need for client-side code changes |

2. High-Level Architecture & Workflow

When the Web Component is initialized, it establishes a WebSocket connection with the DataFlow system. All communication between the user and the Virtual Agent is handled through this connection.

2.1 Step-by-Step Workflow

  1. Socket Initialization — The Web Component connects to the configured DataFlow project via WebSocket.
  2. Session Start (Audio or Text Input) — The chat session begins when the user sends an audio or text message.
  3. Virtual Agent Response — The Virtual Agent processes the input and sends a response back through DataFlow.
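The routing decision in this loop (also described in section 2.3) can be sketched as follows. This is purely illustrative: the actual DataFlow message schema is internal, and the field names used here (`type`, `payload`) are hypothetical.

```javascript
// Illustrative sketch of how user input is routed to DataFlow.
// The real wire format is internal to the Knovvu Web Component;
// "type" and "payload" are placeholder field names.
function buildDataFlowMessage(input) {
  if (input.kind === "audio") {
    // Audio is forwarded as-is; Speech Recognition (SR) runs inside DataFlow,
    // and the recognized text is then passed to the Virtual Agent.
    return { type: "audio", payload: input.buffer };
  }
  // Text input is sent directly to the Virtual Agent.
  return { type: "text", payload: input.text };
}
```

Either way, the Virtual Agent's response travels back over the same WebSocket connection.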

2.2 Avatar-Enabled Flow (enableAvatar = true)

When the avatar feature is enabled, the following steps occur after receiving the VA response:

  1. The received message is sent to the Uneeq Avatar Service using the Speak Avatar method.
  2. If Sestek TTS is configured:
    • Uneeq calls the Sestek BotHub TTS service.
    • BotHub generates speech audio based on the response text.
    • The generated audio is returned to the avatar service.
  3. The avatar service synchronizes video and audio.
  4. The final rendered stream is sent to the frontend.
  5. The user sees and hears the avatar response.

2.3 User Response Loop

  • Audio Input — Speech Recognition (SR) is performed inside DataFlow, and the recognized text is forwarded to the Virtual Agent.
  • Text Input — The text message is sent directly to the Virtual Agent.

This loop continues until the session ends.


3. Text-to-Speech (TTS) Scenarios

3.1 Avatar Enabled + Sestek TTS

  • Standard and recommended setup
  • All TTS operations are handled by Sestek BotHub
  • Uneeq receives generated audio and renders synchronized video

3.2 Avatar Enabled + External TTS

(e.g., ElevenLabs, Azure TTS)

  • The avatar service does not call Sestek BotHub
  • Uneeq uses its integrated external TTS engine
  • Audio and video are generated together
  • The rest of the workflow remains unchanged

3.3 Avatar Disabled (enableAvatar = false)

  • WebSocket connection is established as usual
  • Only the chat interface is displayed
  • TTS generation occurs within DataFlow
  • Audio responses are played using Sestek TTS
  • No avatar video is rendered

3.4 Avatar Disabled + External TTS

  • Sestek TTS inside DataFlow can be replaced with an external TTS engine
  • Voice playback is still available even without an avatar
  • Useful for lightweight voice-only experiences
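The four scenarios above map onto two configuration flags. The sketch below assumes that `sestektts: false` is what selects an externally configured TTS engine; the exact parameter combination for external TTS may differ in your deployment.

```javascript
// Hedged sketches of the four TTS scenarios (sections 3.1–3.4),
// expressed with the parameters documented in sections 5 and 6.
const avatarWithSestekTts   = { enableAvatar: true,  sestektts: true  }; // 3.1 (recommended)
const avatarWithExternalTts = { enableAvatar: true,  sestektts: false }; // 3.2 (TTS engine configured on the Uneeq side)
const chatWithSestekTts     = { enableAvatar: false, sestektts: true  }; // 3.3 (chat UI, audio from DataFlow)
const chatWithExternalTts   = { enableAvatar: false, sestektts: false }; // 3.4 (voice-only, external engine)
```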

4. Getting Started

4.1 Adding the Script File

Add the following script at the bottom of your HTML page:

<script src="https://<ENV_BASE_URL>/sestekavatar.js"></script>

4.2 Initial Configuration

Before loading the script, define the global configuration object:

<script>
  window.sestekInitConfig = {
    // configuration goes here
  };
</script>
<script src="https://<ENV_BASE_URL>/sestekavatar.js"></script>

5. Configuration Parameters

5.1 Mandatory Parameters

These parameters must be provided:

| Parameter | Description | Example |
| --- | --- | --- |
| channel | Target platform | web, mobile, kiosk |
| vaCredentialId | Virtual Agent credential | UUID |
| dataFlowCredentialID | DataFlow credential | UUID |
| projectName | VA project name | TestMobile |
| sestektts | Enable TTS | true / false |
| defaultLanguage | Default language | en-US, tr-TR |
| dataFlowProjectName | DataFlow project | av-test |
| dataFlowTenantId | DataFlow tenant ID | UUID |
| dataFlowTokenCredential | Base64 encoded token | YWptYW5r... |
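A minimal `window.sestekInitConfig` using only the mandatory parameters above might look like this. All IDs, names, and the token are placeholders; substitute the values provisioned for your project.

```javascript
// Minimal configuration with only the mandatory parameters (section 5.1).
// Every value below is a placeholder.
window.sestekInitConfig = {
  channel: "web",
  vaCredentialId: "00000000-0000-0000-0000-000000000000",
  dataFlowCredentialID: "00000000-0000-0000-0000-000000000000",
  projectName: "TestMobile",
  sestektts: true,
  defaultLanguage: "en-US",
  dataFlowProjectName: "av-test",
  dataFlowTenantId: "00000000-0000-0000-0000-000000000000",
  dataFlowTokenCredential: "BASE64-ENCODED-TOKEN"
};
```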

5.2 Recommended Parameters

| Parameter | Description |
| --- | --- |
| customMetaData | User or session metadata |

5.3 Optional Parameters

| Parameter | Description | Default |
| --- | --- | --- |
| initialOptionsData | Input placeholder text | "Type here..." |
| chatPageTitle | Chat header title | "Knovvu Virtual Agent" |
| frontendProjectName | Frontend identifier | |
| barge | Allow interruption during speech | false |
| chatWidth | Chat width (web only, %) | 30 |
| tenantName | Tenant name for file downloads | |
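The recommended and optional parameters extend the mandatory configuration. In the sketch below the shape of `customMetaData` (here `userId` and `segment`) is an assumption chosen for illustration; pass whatever user or session metadata your project defines.

```javascript
// Recommended and optional parameters (sections 5.2–5.3) layered on top
// of the mandatory ones. The customMetaData shape is a hypothetical example.
window.sestekInitConfig = {
  // ...mandatory parameters from section 5.1...
  customMetaData: { userId: "12345", segment: "gold" }, // recommended
  initialOptionsData: "Type here...",
  chatPageTitle: "Knovvu Virtual Agent",
  barge: false,   // disallow interrupting the agent mid-speech
  chatWidth: 30   // web only, percentage of viewport width
};
```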

5.4 Frontend Project Name — chatModeOnly

If the value chatModeOnly is sent in the frontendProjectName field, the avatar will not be initialized at all.

In this mode:

  • The avatar initialization flow is completely skipped
  • No connection is established with the avatar service
  • The Web Component starts directly in chat-only mode
  • Only the text/voice chat interface is rendered
  • This behavior is applied regardless of the enableAvatar setting

This option is recommended for:

  • Lightweight web integrations
  • Mobile WebView usage
  • Scenarios where avatar rendering is not required
  • Performance-optimized chat-only experiences

Example:

window.sestekInitConfig = {
  frontendProjectName: "chatModeOnly",
  integrationId: "YOUR-INTEGRATION-ID"
};

Note: When frontendProjectName is set to chatModeOnly, any avatar-related parameters (enableAvatar, personaId, connectionUrl, etc.) are ignored.


6. Avatar Configuration (Optional)

| Parameter | Description | Example |
| --- | --- | --- |
| enableAvatar | Enable avatar | true |
| personaId | Uneeq persona ID | UUID |
| connectionUrl | Uneeq API endpoint | https://api-eu.uneeq.io |
| cameraAnchorDistance | Camera zoom level | full_shot |
| cameraAnchorHorizontal | Camera position | center |
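Putting the avatar parameters together, an avatar-enabled configuration might look like the following. The `personaId` is a placeholder; see section 10 for how a real one is obtained.

```javascript
// Avatar-enabled configuration sketch (section 6 parameters).
// personaId below is a placeholder, not a working ID.
window.sestekInitConfig = {
  // ...mandatory parameters from section 5.1...
  enableAvatar: true,
  personaId: "your-persona-id",
  connectionUrl: "https://api-eu.uneeq.io",
  cameraAnchorDistance: "full_shot",
  cameraAnchorHorizontal: "center"
};
```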

7. Multi-Language Support

Custom language-specific messages can be defined:

languageStrings: {
  en: {
    sestekErrorMes: "I'm sorry, I didn't catch that. Could you repeat?"
  },
  ar: {
    sestekErrorMes: "عذرًا، لم أفهم ما قلته. هل يمكنك تكراره؟"
  },
  tr: {
    sestekErrorMes: "Üzgünüm, anlayamadım. Tekrar eder misiniz?"
  }
}

8. Custom Styling

UI styling can be fully customized using CSS:

customStyles: `
  :root {
    --primary-color: #007bff;
  }
  .chat-input-wrapper {
    background-color: var(--primary-color);
  }
`

9. Platform-Specific Examples

Web

{
  channel: 'web',
  chatWidth: 30,
  enableAvatar: false
}

Mobile (WebView)

{
  channel: 'mobile',
  enableAvatar: false
}

Kiosk

{
  channel: 'kiosk',
  kioskEntryLanguages: [
    { title: "English", value: "en-US" },
    { title: "العربية", value: "ar-AE" }
  ],
  enableAvatar: true,
  personaId: "your-persona-id"
}

10. Persona Creation (Important Note)

Note: To create a new avatar persona, you must access the Uneeq CDN management interface and create a new avatar under the Sestek tenant. Once created, a personaId will be generated.

For this process, please contact the Sestek Solution Team, as access and configuration are managed centrally.


11. Summary

The Avatar Web Component provides a flexible, scalable, and platform-independent solution for delivering conversational AI experiences with optional avatar support. Thanks to centralized configuration, multiple TTS options, and full UI customization, it can be adapted to a wide range of enterprise use cases.