On May 8th, Dr. Yong Ma from the University of Bergen delivered a seminar titled “Designing Emotion-Aware Voice Interfaces: Challenges, Insights, and Future Directions”.
Dr. Ma explored how speech signal processing can be used to detect emotional cues and cognitive states non-intrusively. He shared findings from studies on speech emotion recognition, discussing the difficulty of distinguishing genuine from acted emotions and the role of machine learning models in picking up subtle emotional differences.
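To give a flavor of what a speech emotion recognition pipeline can look like, here is a minimal illustrative sketch: it extracts MFCC features from labeled audio clips and trains a simple classifier. The file list, the `extract_features` helper, and the choice of librosa and scikit-learn are assumptions made for this example, not the tools or methods used in Dr. Ma's studies.

```python
# Illustrative sketch only: a minimal speech emotion recognition pipeline.
# The clip list below is a hypothetical placeholder dataset.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

def extract_features(path: str, sr: int = 16000) -> np.ndarray:
    """Load an audio clip and summarize it as mean MFCC coefficients."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one fixed-length vector per clip

# Hypothetical (file path, emotion label) pairs; a real corpus would be far larger.
clips = [("clip_001.wav", "neutral"), ("clip_002.wav", "angry")]  # ...
X = np.stack([extract_features(path) for path, _ in clips])
y = [label for _, label in clips]

# Hold out a test split, fit an SVM, and report per-emotion accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

In practice, as the seminar highlighted, the hard part is not the classifier itself but the data: acted emotions are easier to recognize than spontaneous ones, and subtle or mixed states are easily confused.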
The seminar also touched upon ethical concerns, including potential biases and the broader societal impact of emotionally intelligent systems. Dr. Ma underscored the importance of user-centered design in crafting empathetic and culturally inclusive voice technologies.
For those interested in further exploring this research, Dr. Ma’s academic contributions can be accessed here.
Stay tuned for more discussions on the future of human-computer interaction in upcoming events!