Reports

Document Number: KN.GU.19.EN
Revision Number: Rev19
Revision Date: 03.04.2026

Dashboard

The Dashboard in AI Agents offers a quick snapshot and a deeper understanding of how the virtual agent is performing. It provides key metrics and visualizations for understanding user behavior, measuring effectiveness, and identifying areas for improvement.

The Dashboard consists of three main tabs:

  • Business Metrics: Provides overall performance indicators and conversation quality metrics.

Business Metrics

  • AI Agent: Focuses on AI agent and tool performance metrics.

AI Agent

  • Intents: Focuses on intent detection performance and user behavior patterns.

Intents

Filters

Virtual Agent: This filter restricts results to specific AI Agents (NLU-based) that are part of the project. If no project has been filtered, the Virtual Agent filter is inactive, as is the ‘Filter’ button below it. If no Virtual Agent filter is selected, data is retrieved for all the Virtual Agents that are part of the filtered project.

Channel: The integrated channel used for communication in a project, such as IVR, Facebook, or Webchat.

Date: The date filter allows data retrieval for a desired time range. The range can either be chosen from the provided standard options (“today”, “yesterday”, “last 7 days”, “last 30 days”) or set as a custom range between any two dates of choice.

AI Agent: This filter allows filtering results for specific AI agents within the project. Select from the dropdown to view metrics for individual agents. This filter is available in the AI Agent tab.

Filters are combined with ‘and’ logic, which means the information displayed on the dashboard comes from data that satisfies all the constraints set by the filters.
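
As an illustration of this ‘and’ behavior, here is a minimal sketch over hypothetical session records; the field names and values are invented for the example and do not reflect the product’s actual schema.

```python
from datetime import date

# Hypothetical session records; field names are illustrative only.
sessions = [
    {"virtual_agent": "SupportVA", "channel": "Webchat", "date": date(2026, 3, 1)},
    {"virtual_agent": "SalesVA",   "channel": "IVR",     "date": date(2026, 3, 2)},
    {"virtual_agent": "SupportVA", "channel": "IVR",     "date": date(2026, 3, 3)},
]

def apply_filters(records, virtual_agent=None, channel=None, start=None, end=None):
    """Keep only records that satisfy ALL provided constraints ('and' logic).
    A filter left as None imposes no constraint, mirroring the dashboard's
    behavior of retrieving data for all Virtual Agents when none is selected."""
    kept = []
    for r in records:
        if virtual_agent and r["virtual_agent"] != virtual_agent:
            continue
        if channel and r["channel"] != channel:
            continue
        if start and r["date"] < start:
            continue
        if end and r["date"] > end:
            continue
        kept.append(r)
    return kept

# Both the Virtual Agent AND the Channel constraint must hold:
print(apply_filters(sessions, virtual_agent="SupportVA", channel="IVR"))
# -> the single SupportVA/IVR session from 03.03.2026
```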

Business Metrics Tab

The Business Metrics tab provides a comprehensive overview of AI Agents performance, with a focus on session-level metrics, resolution outcomes, customer sentiment, and ratings.

Key Metrics

At the top of the dashboard page, four boxes placed side by side display the following information:

  • Total Sessions: It displays the total number of user sessions that occurred within the selected filter range. A session represents a continuous interaction between a user and the virtual agent.

  • Average Session Duration: It represents the average length of the sessions.

  • Average Messages per Session: It denotes the average number of messages a customer sends during one session.

  • Self-Service Rate: It measures the percentage of sessions resolved without human agent intervention.
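
As an illustration, the sketch below derives these four figures from raw session data. The record fields, and the assumption that “self-service” means no live-agent handover, are invented for the example.

```python
# Hypothetical per-session records; field names are illustrative only.
sessions = [
    {"duration_sec": 120, "customer_messages": 6,  "handover": False},
    {"duration_sec": 300, "customer_messages": 10, "handover": True},
    {"duration_sec": 90,  "customer_messages": 4,  "handover": False},
]

total_sessions = len(sessions)
avg_duration = sum(s["duration_sec"] for s in sessions) / total_sessions
avg_messages = sum(s["customer_messages"] for s in sessions) / total_sessions
self_service_rate = 100 * sum(not s["handover"] for s in sessions) / total_sessions

print(f"Total Sessions: {total_sessions}")                  # 3
print(f"Average Session Duration: {avg_duration:.0f} s")    # 170 s
print(f"Average Messages per Session: {avg_messages:.1f}")  # 6.7
print(f"Self-Service Rate: {self_service_rate:.1f}%")       # 66.7%
```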

Main Metrics

Resolution Type Metrics

This section provides comprehensive insights into how customer sessions are resolved.

Resolution Type Trends: This line chart displays how different resolution types evolved over time.

  • Self-service: Sessions where the customer's inquiry was successfully resolved by the AI Agents without human intervention.

  • No Customer Interaction: Sessions where no meaningful customer engagement occurred.

  • Live Agent Handover: Sessions that were transferred to a human agent due to complexity or customer request.

Resolution Type Trends
Resolution Type: A pie chart that shows the distribution of the different resolution types.
Resolution Type

Session Trends Metrics

This combined visualization uses bar charts to display total session volumes for specific dates, while a line chart overlay shows the average duration trend (measured in seconds). This dual representation helps identify peak usage times and correlate session volume with average duration, revealing patterns in user behavior and system load.

Session Trends

Sentiment Trend Metrics

This section analyzes customer sentiment throughout interactions, helping track emotional responses and overall satisfaction trends over time.

  • Sentiment Trends:

Sentiment trends are depicted using a line graph, with the vertical axis representing sentiment values ranging from -1 to 1, and the horizontal axis indicating dates. Placing the cursor on the trend line reveals the total number and percentage of positive, negative, and neutral conversations within the chosen date range. Additionally, the tooltip displays the average sentiment of all conversations, without categorizing them or considering the total number of conversations. It is important to note that the sentiment averages solely reflect the sentiment of user inputs.

Sentiment Trends
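
To make the averaging concrete, here is a minimal sketch, assuming per-message sentiment scores in [-1, 1] for user inputs only. The category cutoffs are illustrative; the actual boundaries are not documented here.

```python
# Hypothetical sentiment scores of user inputs within the date range.
user_sentiments = [0.6, -0.2, 0.0, 0.8, -0.5]

average = sum(user_sentiments) / len(user_sentiments)  # shown in the tooltip

def categorize(score, eps=0.1):
    """Illustrative cutoffs only; the real category boundaries may differ."""
    if score > eps:
        return "positive"
    if score < -eps:
        return "negative"
    return "neutral"

counts = {"positive": 0, "neutral": 0, "negative": 0}
for s in user_sentiments:
    counts[categorize(s)] += 1

shares = {k: 100 * v / len(user_sentiments) for k, v in counts.items()}
print(f"average sentiment: {average:+.2f}")  # +0.14
print(shares)  # {'positive': 40.0, 'neutral': 20.0, 'negative': 40.0}
```
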
  • Average Sentiment:

The average sentiment gauge is divided into three segments: negative, neutral, and positive. The size of each segment represents the proportion of session scores falling within that segment. The value displayed at the center of the graph is the average score of all sentiment data within the selected date range.

Average Sentiment

Customer Trend Metrics

This section tracks customer satisfaction ratings and their trends over time, providing insight into service quality and customer experience.

  • Customer Rating Trends: This line chart displays how customer ratings evolved over time. The vertical axis represents rating values, and the horizontal axis shows dates. The trend line reveals patterns in customer satisfaction, helping identify periods of improvement or decline in service quality.
Customer Rating Trends
  • Average Customer Rating: This circular gauge displays the overall average customer rating across all sessions. The value at the center (e.g., "4.06") represents the average rating within the selected date range. The gauge is divided into five color-coded segments (1 through 5 stars), with each segment's size representing the proportion of ratings in that category, providing a visual breakdown of the rating distribution.
Average Customer Rating
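
For illustration, the numbers behind such a gauge could be derived as in the sketch below; the ratings are made up for the example.

```python
from collections import Counter

# Hypothetical 1-5 star ratings collected within the selected date range.
ratings = [5, 4, 4, 3, 5, 5, 4, 2, 5, 4]

average = sum(ratings) / len(ratings)   # the value at the gauge center
distribution = Counter(ratings)         # sizes of the five color-coded segments

print(f"Average Customer Rating: {average:.2f}")  # 4.10
for stars in range(1, 6):
    pct = 100 * distribution.get(stars, 0) / len(ratings)
    print(f"{stars} stars: {pct:.0f}%")
```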

AI Agent Tab

The AI Agent tab provides comprehensive performance monitoring and usage analytics for the AI-powered agents in the project. This tab enables tracking of agent efficiency, resource consumption, and tool utilization patterns.

Beta Feature

This dashboard is in beta. Please note the following:

  • Do not use these metrics for billing purposes.
  • Metrics are enabled only after the first project publish following the deployment of this feature (if the project already has a published version).
  • The data displayed here is not real-time and is updated at regular intervals.
  • You may see smaller values in some metrics when agent/tool names are changed or during the first data collection (caused by difference calculations requiring at least two data points).
  • Time-based graphs will not display data if the dynamically generated time intervals based on the selected date range exceed the dashboard's active period.
  • Beta features are provided as-is and may be modified or discontinued. Data created during beta may not be preserved when the feature becomes generally available.

AI Agent Key Metrics

At the top of the AI Agent tab, six boxes display critical performance indicators:

  • Total Agent Requests: The total number of requests processed by AI agents within the selected filter range.
  • Error Rate: The percentage of agent requests that resulted in errors, helping identify reliability issues.
  • Average Response Time: The mean time taken by AI agents to generate responses, measured in seconds.
  • Total Tool Calls: The cumulative number of tool invocations made by AI agents during the selected period.
  • Input/Output Token Consumption: Displays the total input and output tokens consumed by AI agents, excluding guardrail token consumption.
  • Input Cache Savings: The percentage of input tokens saved through caching mechanisms.
AI Agents Tab
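
As one plausible reading of the two percentage metrics above, the sketch below computes an error rate and a cache-savings figure from hypothetical counters; the exact formulas used by the dashboard are not specified here.

```python
# Hypothetical counters for the selected period; names are illustrative only.
request_count = 8_400                # total agent requests
error_count = 42                     # requests that resulted in errors
input_tokens_processed = 1_250_000   # input tokens actually processed
input_tokens_cached = 300_000        # input tokens served from cache

error_rate = 100 * error_count / request_count
cache_savings = 100 * input_tokens_cached / (
    input_tokens_processed + input_tokens_cached
)

print(f"Error Rate: {error_rate:.1f}%")              # 0.5%
print(f"Input Cache Savings: {cache_savings:.1f}%")  # 19.4%
```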

Tool Performance (Run Count & Latency):

This chart displays tool usage frequency as bars and average latency as a line. The "Show top 10 results for" dropdown in the upper right corner allows sorting by Run Count or Average Latency.

Tool Performance

Agent Request Distribution:

This bar chart shows the number of requests handled by each AI agent. Agent names are displayed on the x-axis and request counts on the y-axis.

Agent Request Distribution

Tool Call Distribution:

This pie chart shows the percentage breakdown of tool calls by category:

  • Workflow: Calls to workflow-based tools and processes.
  • API Integration: External API calls made by the agent.
  • MCP: Model Context Protocol tool invocations.
  • Knowledge Retrieval: Calls to knowledge base and retrieval systems.
Tool Call Distribution

Input Token Consumption Trend:

Displays how input token usage changes over the selected date range.

Input Token Consumption Trend

Output Token Consumption Trend:

Displays how output token usage changes over the selected date range.

Output Token Consumption Trend

Response Time Trend:

This line chart measures how quickly the system responds to requests. It shows average, 95th, and 99th percentile response times.

Response Time Trend
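
For reference, here is a minimal sketch of computing the average, 95th, and 99th percentile from raw response-time samples, using the nearest-rank convention; the dashboard’s exact percentile method is not documented here.

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: the smallest value with at least p% of
    the data at or below it. Interpolating variants also exist."""
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical response times (seconds) for one time bucket.
response_times = [0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 1.2, 2.5, 4.0]

avg = sum(response_times) / len(response_times)
print(f"avg: {avg:.2f} s")                         # 1.21 s
print(f"p95: {percentile(response_times, 95)} s")  # 4.0 s
print(f"p99: {percentile(response_times, 99)} s")  # 4.0 s (tail percentiles
# coincide on small samples; at production volumes p95 and p99 usually differ)
```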

Tool Performance Overview:

A detailed table listing all tools with the following sortable columns:

  • Tool: The name of the tool.
  • AI Agent: The agent utilizing the tool.
  • Run Count: The number of times the tool was executed.
  • Average Latency: The mean execution time for the tool.
  • Error Rate: The percentage of tool executions that resulted in errors.
Tool Performance Overview
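
The sketch below shows how such a table could be aggregated from raw tool-execution logs; the log fields and tool names are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical tool-execution logs; field names are illustrative only.
runs = [
    {"tool": "OrderLookup", "agent": "SupportVA", "latency_ms": 340, "error": False},
    {"tool": "OrderLookup", "agent": "SupportVA", "latency_ms": 410, "error": True},
    {"tool": "FAQSearch",   "agent": "SupportVA", "latency_ms": 120, "error": False},
]

stats = defaultdict(lambda: {"runs": 0, "latency_sum": 0, "errors": 0})
for r in runs:
    row = stats[(r["tool"], r["agent"])]
    row["runs"] += 1
    row["latency_sum"] += r["latency_ms"]
    row["errors"] += r["error"]

for (tool, agent), s in stats.items():
    avg_latency = s["latency_sum"] / s["runs"]
    error_rate = 100 * s["errors"] / s["runs"]
    print(f"{tool} ({agent}): runs={s['runs']}, "
          f"avg latency={avg_latency:.0f} ms, error rate={error_rate:.0f}%")
# OrderLookup (SupportVA): runs=2, avg latency=375 ms, error rate=50%
# FAQSearch (SupportVA): runs=1, avg latency=120 ms, error rate=0%
```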

Intents Tab

The Intents tab provides detailed insights into intent detection performance, fallback occurrences, and intent usage patterns.

Fallback Messages


This graph lists the utterances that resulted in a fallback, that is, in unmatched intent detection. A fallback message is displayed when no intent can be detected for an utterance. There are two scenarios in which a fallback message is triggered:

  1. The VA cannot match any intent above the confidence threshold.
  2. The intent with the highest confidence score above the threshold is the fallback intent.

The width of each utterance’s bar is proportional to how frequently that utterance was used and resulted in a fallback. The Fallback Messages widget also has an option that allows the user to download this list.

Note that a fallback utterance can be added to an existing intent by clicking on it, which removes it from the fallback list; alternatively, users can create a new intent from the fallback utterance.
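
The two scenarios can be expressed as a small decision function, as sketched below; the threshold value and intent names are illustrative, not the product’s actual configuration.

```python
def resolve_intent(scores, threshold=0.6, fallback_name="fallback"):
    """Return the matched intent, or None when a fallback message is triggered.
    Mirrors the two fallback scenarios above; the threshold is illustrative."""
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score < threshold:        # scenario 1: nothing above the threshold
        return None
    if best_intent == fallback_name:  # scenario 2: top match is the fallback intent
        return None
    return best_intent

print(resolve_intent({"check_balance": 0.82, "fallback": 0.10}))  # 'check_balance'
print(resolve_intent({"check_balance": 0.41, "fallback": 0.30}))  # None -> fallback
print(resolve_intent({"fallback": 0.91, "check_balance": 0.55}))  # None -> fallback
```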

Intent Metrics

  • Intent Overview:


This chart shows the ratio between the intents that were matched and the ones that could not be detected, labeled as “fallback”. This helps in analyzing the AI Agent’s intent detection performance.

  • Top Intent:


It lists the intents according to how frequently they have been used by customers. The top 7 intents are displayed here, and hovering the cursor over any of them shows the number of times that intent was used.

  • Intent Trends:


It is a graph that displays the trend of intent usage over time. The x-axis displays the dates, and the graph shows how frequently each intent was used over the selected time range. The trends displayed here correspond to the Top Intents.

Insights

The Insights tab in AI Agents provides a detailed view into individual conversations and AI-driven responses, enabling a deeper understanding of how the virtual agent interacts with users.

Conversations

The Conversation page provides full visibility into end-to-end user interactions. It is designed to review how the virtual agent processes inputs and navigates through the dialogue.


Key Features

  • Full Conversation History: View entire user sessions, including both user messages and bot responses, in chronological order. If speech services are enabled, it is also possible to listen to the conversation recordings.

  • Matched Intents and Entities: View the recognized intents and extracted entities for each user message, along with clear indicators of fallback messages or errors where the agent failed to understand or respond correctly.

  • User Journey Flow: Shows a visual timeline of the conversation, highlighting which flows the user passed through and when, with message timestamps for each transition. It helps identify how users navigate the virtual agent, including any fallback points or repeated flows.

  • Customer Sentiment: Displays the overall sentiment detected for the customer during the conversation (e.g., Positive, Neutral, Negative), helping track emotional tone and satisfaction throughout the interaction.

Conversation

Generative Q&A

The Generative Q&A page provides visibility into the performance of LLM-powered question answering. This feature is especially useful if the virtual agent uses generative models to answer knowledge-based user queries.

Displayed Fields:

  • User Question: The original question asked by the user.
  • Generated Response: The response provided by the generative model.
  • Status: Indicates whether a response was successfully found.
  • Average Confidence: Shows the average confidence score of the response, which helps evaluate the reliability of the generated answer.
