Can you briefly explain what your project is all about? What’s unique about it?
ALIVE aims to develop Empathic Virtual Assistants (EVAs) for customer-support use cases. EVAs are capable of recognizing, interpreting and responding to human emotions by applying state-of-the-art Deep Learning (DL) algorithms for vision-, text- and voice-based emotion recognition. The outputs of the models are aggregated into a single emotional state that is fed in real time into the dialogue state of a Large Language Model (LLM) for empathic text generation, and into the state machine of the Avatar, which adapts accordingly and tailors each answer and interaction (e.g., facial expressions) to the user's needs.
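As a rough illustration of how such a late-fusion step might work, the sketch below fuses per-modality emotion distributions into one state and injects it into an LLM prompt. It assumes each modality model emits a probability distribution over a shared set of five basic emotions; all function names, weights and the prompt format are hypothetical, not the project's actual implementation.

```python
import numpy as np

# Assumed shared label set across the vision, text and voice models.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

def aggregate_emotions(modality_probs: dict[str, np.ndarray],
                       weights: dict[str, float]) -> str:
    """Fuse per-modality emotion distributions into a single state.

    modality_probs maps a modality name ("vision", "text", "voice") to a
    probability vector over EMOTIONS; weights are illustrative per-modality
    confidences used for a weighted average (late fusion).
    """
    fused = np.zeros(len(EMOTIONS))
    total = 0.0
    for modality, probs in modality_probs.items():
        w = weights.get(modality, 1.0)
        fused += w * probs
        total += w
    fused /= total
    return EMOTIONS[int(np.argmax(fused))]

def build_llm_prompt(user_utterance: str, emotion: str) -> str:
    """Feed the fused emotional state into the dialogue state so the LLM
    can condition its reply on it (hypothetical prompt format)."""
    return (f"The user currently appears {emotion}. Respond empathetically.\n"
            f"User: {user_utterance}\nAssistant:")

# Example: both available modalities point to an angry user.
state = aggregate_emotions(
    {"vision": np.array([0.1, 0.0, 0.1, 0.7, 0.1]),
     "voice":  np.array([0.2, 0.0, 0.1, 0.6, 0.1])},
    weights={"vision": 1.0, "voice": 0.8},
)
print(build_llm_prompt("I've been waiting for hours!", state))
```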
What’s the biggest milestone your startup(s) have achieved with your project so far, and what has surprised you most on this journey?
Our biggest milestone is the development of a pipeline that takes audio, text and images as input, recognizes the user's emotion, and provides it as input to an LLM. The project has received a lot of interest from the research community and industry, even from stakeholders with diverse backgrounds (e.g., neuroscientists and philosophers).
How did you measure success?
First, we developed five basic emotional states that are recognizable by the user. Second, we use at least three modalities as input to perceive the user's presence and interaction, and we aggregate at least two of them into the final emotional state.
What are your goals over the next three and six months?
Provide the generated empathetic text to an Avatar, which will adapt to it using empathetically relevant facial expressions (a toy sketch of this mapping follows). Prepare a small-scale pilot for validation purposes. Disseminate and exploit the project's outcomes.
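A minimal sketch of that first goal, assuming the avatar's state machine selects an expression from the detected user emotion; the expression names, the transition table and the interface are illustrative only.

```python
from enum import Enum

class Expression(Enum):
    SMILE = "smile"
    CONCERNED = "concerned"
    CALM = "calm"
    ATTENTIVE = "attentive"

# Hypothetical transition table: detected user emotion -> avatar expression.
# Note the avatar de-escalates anger rather than mirroring it.
EXPRESSION_FOR_EMOTION = {
    "happy": Expression.SMILE,
    "sad": Expression.CONCERNED,
    "angry": Expression.CALM,
    "surprised": Expression.ATTENTIVE,
    "neutral": Expression.ATTENTIVE,
}

def update_avatar(emotion: str) -> Expression:
    """Pick the facial expression the avatar renders while it speaks
    the empathetic text generated by the LLM."""
    return EXPRESSION_FOR_EMOTION.get(emotion, Expression.ATTENTIVE)

print(update_avatar("angry"))  # Expression.CALM
```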
How has SERMAS helped you during the past few months?
Apart from the funding, we receive fruitful feedback and guidance from our mentor, Viktor Schmuck, every month.
Companies: Thingenious and IGODI