Evaluates the emotional state in the input voice.
Parameters:
name | description | default |
---|---|---|
Address | Address of the emotion service | http://core-audio-emotion |
IgnoreSslErrors | Ignore certificate errors when the emotion service address uses HTTPS | false |
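For illustration, the node's settings could be modeled as in the sketch below; the `EmotionNodeConfig` shape is a hypothetical stand-in for the flow designer's actual configuration format:

```typescript
// Hypothetical representation of the Emotion node's parameters.
// Field names mirror the table above; the surrounding structure
// is an assumption for illustration only.
interface EmotionNodeConfig {
  address: string;          // Address of the emotion service
  ignoreSslErrors: boolean; // Skip certificate checks for HTTPS addresses
}

// Defaults as listed in the parameters table.
const defaultConfig: EmotionNodeConfig = {
  address: "http://core-audio-emotion",
  ignoreSslErrors: false,
};
```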
Inputs
Audio:
Accepts audio from a single channel.
Events:
none
Outputs
Audio:
none
Events:
name | description |
---|---|
Emotion | Includes information about the emotional state in the voice (such as normal or angry) and the level of monotony in the speaker's tone. |
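As an illustration, a consumer of this event might model the payload as below; the field names are assumptions inferred from the description above, not a documented schema:

```typescript
// Hypothetical shape of the Emotion event payload; field names
// are inferred from the event description, not documented.
interface EmotionEvent {
  emotion: string;       // emotional state, e.g. "normal" or "angry"
  monotonyLevel: number; // level of monotony in the speaker's tone
}

// Illustrative handler reacting to an incoming Emotion event.
function onEmotion(event: EmotionEvent): void {
  if (event.emotion === "angry") {
    console.log(`Angry speech detected (monotony: ${event.monotonyLevel})`);
  }
}
```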
Remarks:
- This node evaluates speech in segments of 3 seconds; silent periods do not count toward this duration. A sketch of this behavior follows below.
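The sketch below illustrates the segmentation rule, assuming frame-level speech/silence labels; it is a conceptual model of the behavior, not the node's implementation:

```typescript
// Conceptual model of the 3-second evaluation rule: only voiced
// frames count toward a segment; silent frames are skipped.
const SEGMENT_SECONDS = 3;

interface AudioFrame {
  durationSec: number; // duration of this frame in seconds
  isSpeech: boolean;   // whether the frame contains speech
}

function* speechSegments(frames: AudioFrame[]): Generator<AudioFrame[]> {
  let segment: AudioFrame[] = [];
  let accumulated = 0;
  for (const frame of frames) {
    if (!frame.isSpeech) continue; // silence does not count toward the 3 s
    segment.push(frame);
    accumulated += frame.durationSec;
    if (accumulated >= SEGMENT_SECONDS) {
      yield segment; // one segment holding 3 s of actual speech
      segment = [];
      accumulated = 0;
    }
  }
}
```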
Project Structure
The Emotion node only needs an audio input, and the audio does not have to be segmented, so you can place the node at any point where the desired audio flows. A simple project can be built as follows:
However, a better approach would be:
Why is this approach better?
In the second example, the only difference is that the audio first passes through a VAD (Voice Activity Detection) node, which filters the stream so that only speech reaches the Emotion node. Because random silences are removed, this approach yields higher confidence levels in emotion identification. A sketch of the two layouts follows below.
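The following sketch contrasts the two layouts; `createNode` and `connect` are hypothetical stand-ins for the flow designer's wiring, used only to illustrate the topology:

```typescript
// Hypothetical wiring helpers; they stand in for the flow
// designer's actual node/connection mechanism.
interface FlowNode {
  type: string;
  downstream: FlowNode[];
}

function createNode(type: string): FlowNode {
  return { type, downstream: [] };
}

function connect(from: FlowNode, to: FlowNode): void {
  from.downstream.push(to);
}

// Simple layout: raw audio (including silences) feeds Emotion directly.
const inputA = createNode("AudioInput");
connect(inputA, createNode("Emotion"));

// Better layout: VAD removes non-speech first, so the Emotion node
// receives only speech and produces higher-confidence results.
const inputB = createNode("AudioInput");
const vad = createNode("VAD");
connect(inputB, vad);
connect(vad, createNode("Emotion"));
```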
Supported flow types: Stream, Batch