Author: Viktor Schmuck, Research Associate in Robotics at King's College London
On the 21st of May 2024, the European Council formally adopted the EU Artificial Intelligence (AI) Act. But what does it cover, and how does it impact projects such as Socially-acceptable Extended Reality Models And Systems (SERMAS), which deals with biometric identification and emotion recognition?
According to the official website of the EU Artificial Intelligence Act, the purpose of the regulation is to promote the adoption of human-centric AI solutions that people can trust, ensuring that such products are developed with the health, safety, and fundamental rights of people in mind. As such, the EU AI Act outlines, among others, a set of rules, prohibitions, and requirements for AI systems and their operators.
When analysing something like the result of the SERMAS project, the SERMAS Toolkit, from the EU AI Act’s point of view, we not only need to verify whether the designed solution complies with the regulations laid down, but also assess whether the outcomes of the project, such as exploitable results (e.g., a software product, or solutions deployed “in the wild”), will comply with them.
Parts of the Toolkit are being developed in research institutions, such as SUPSI, TUDa, or KCL, which makes it fall under the exclusion criteria of scientific research and development according to Article 2(6). In addition, since the AI systems and models are not yet placed on the market or put into service, the AI Act regulations are not yet applicable to the Toolkit under Article 2(8) either. As a general rule of thumb, if a solution is either being developed solely as a scientific research activity or is not yet placed on the market, it does not need to go through the administrative process and risk assessment outlined by the Act.
That being said, if a solution involving AI is planned to be put on the market or be provided as a service for those who wish to deploy it, even if it’s open source, it is better to design its components with the regulations in mind and prepare the necessary documentation required for the legal release of the software. This article outlines some of these aspects using the example of the SERMAS Toolkit. Still, it’s important to emphasize that AI systems need to be individually evaluated (and, e.g., registered) against the regulations of the Act.
First, we should look at the terminology relevant to the SERMAS Toolkit. These terms are all outlined in Article 3 of the regulations. To begin with, the SERMAS Toolkit is considered an “AI system” since it is a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for ... implicit objectives, infers, from the input it receives” (Article 3(1)). A part of the SERMAS Toolkit is a virtual agent whose facial expressions and behaviour are adjusted based on how it perceives users, and it is capable of user identification. However, the system is not fully autonomous, as it also has predefined knowledge and behaviour that is not governed by AI components. Therefore, it is considered an AI system with varying levels of autonomy. Speaking of user identification, the Toolkit also deals with:
- “biometric data” - “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images” (Article 3(34));
- “biometric identification” - “the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database” (Article 3(35));
- and has a component that is an “emotion recognition system” - “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data” (Article 3(39)).
While Article 6 outlines that systems performing biometric identification and emotion recognition count as High-Risk AI systems, the SERMAS Toolkit is only considered one due to its emotion recognition. Annex III states that “biometric identification systems ... shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be” (Annex III(1a)), and such verification is the Toolkit’s only use-case of biometric identification. However, Article 6(2) and Annex III also specify that an AI system is considered high risk if it is “intended to be used for emotion recognition” (Annex III(1c)) to any extent. While according to the EU AI Act an AI system with emotion recognition would be prohibited, the SERMAS Toolkit, like any other such system, can be deployed if a review body finds that it does not negatively affect its users (e.g., cause harm or discriminate against them). Highlighting the factors relevant to the SERMAS Toolkit: each AI system is evaluated based on the “intended purpose of the AI system”, the extent of its use, the amount of data it processes, and the extent to which it acts autonomously (Article 7(2a-d)). Moreover, the review body evaluates the potential for harm or adverse impact on people using the system, and the possibility of a person overriding a “decision or recommendation that may lead to potential harm” (Article 7(2g)). Finally, “the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety” (Article 7(2j)) is also taken into account.
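The distinction the Act draws between biometric verification (1:1) and biometric identification (1:N) can be illustrated with a short sketch. All names here (`verify`, `identify`) are hypothetical illustrations, not part of the SERMAS Toolkit, and we assume biometric templates are pre-computed feature vectors compared by distance:

```python
import math

def verify(probe, claimed_template, threshold=0.6):
    """1:1 biometric verification: confirm that a person is who they
    claim to be. Annex III(1a) excludes this sole use from high risk."""
    return math.dist(probe, claimed_template) < threshold

def identify(probe, database, threshold=0.6):
    """1:N biometric identification in the sense of Article 3(35):
    compare the probe against every template stored in a database."""
    best_id, best_dist = None, float("inf")
    for person_id, template in database.items():
        d = math.dist(probe, template)
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id if best_dist < threshold else None

# The same probe used both ways:
db = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}
probe = [0.12, 0.88]
print(verify(probe, db["alice"]))  # 1:1 check against a claimed identity
print(identify(probe, db))         # 1:N search over the whole database
```

The regulatory point is visible in the code: verification touches a single claimed template, while identification searches the whole database, which is why only the latter falls under the high-risk biometric identification category.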
But how does the SERMAS Toolkit compare to the above criteria? To begin with, it only processes the minimum amount of data absolutely necessary and is undergoing multiple reviews during its conception to ensure the secure management of any identifiable personal data. Moreover, it only uses user identification to enhance a building’s security in an access-control scenario. And finally, it solely uses emotion recognition with the aim of providing and enhancing user experience by adapting the behaviour of a virtual agent. This tailored experience does not change the output of the agent, only its body language and facial expressions – which can only take shape as neutral to positive nonverbal features. As such, the agent does not and cannot discriminate against people.
So, while the Toolkit is not a prohibited solution after all, it is still considered a High-Risk system, which means that upon deployment, or when provided to deployers, some regulations need to be complied with, as outlined by the Act. For instance, a risk-management system needs to be implemented and periodically revised, with a focus on the system’s biometric identification and emotion recognition sub-systems (Article 9(1-2)). Since these sub-systems are model-based classification or identification methods, the regulations outlined by Article 10(1-6) for the training, validation, and testing of the underlying models and datasets should be followed (e.g., datasets used for training the models should be ethically collected, with as few errors as possible, and without bias). Moreover, thorough technical documentation is expected to be published and kept up to date according to Article 11(1), and the system should come with appropriate logging of interactions (Article 12(3)) and oversight tools (Article 14(1-2)), especially during identification processes. Lastly, when the Toolkit is put on the market, which is part of the exploitable results of the project, it needs to undergo a conformity assessment and be registered in an EU database together with its accompanying documentation (Article 49).
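As a minimal sketch of what the interaction-logging requirement might look like in practice, the snippet below records identification and emotion-recognition events with a reference to the input, the outcome, and whether a human overseer intervened. The `AuditLog` class, field names, and example values are hypothetical, and are neither part of the SERMAS Toolkit nor prescribed by the Act:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class InteractionRecord:
    timestamp: float          # when the interaction occurred
    component: str            # e.g. "biometric_identification"
    input_ref: str            # reference to the separately stored input data
    outcome: str              # decision or classification produced
    operator_override: bool   # whether a human overseer intervened

class AuditLog:
    """Hypothetical audit trail for the kind of record-keeping
    Article 12 calls for; not an official or prescribed format."""

    def __init__(self):
        self._records = []

    def record(self, component, input_ref, outcome, operator_override=False):
        self._records.append(InteractionRecord(
            time.time(), component, input_ref, outcome, operator_override))

    def export(self):
        # Serialise the trail, e.g. for review by an oversight body
        return json.dumps([asdict(r) for r in self._records], indent=2)

log = AuditLog()
log.record("biometric_identification", "frame-0042", "match:staff-member")
log.record("emotion_recognition", "frame-0042", "neutral")
print(log.export())
```

Keeping a reference to the input rather than the raw biometric data itself also fits the data-minimisation stance described above, since the log can be exported for review without duplicating personal data.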
In conclusion, the EU AI Act means that AI systems dealing with personal data, especially when it comes to emotion recognition or identification that affects how an autonomous system operates, need to comply with the laid-out regulations. Solutions put into service or on the market may be classified as High-Risk and would be prohibited by default. However, a thorough assessment procedure, compliance with additional security requirements, logging, and documentation may mean that an AI system can be cleared for deployment after all. As for systems developed for nonprofessional or scientific research purposes, such as the still-under-development SERMAS Toolkit, voluntary compliance with the requirements outlined by the Act is encouraged (Recital 109), and in the long run they will benefit from being developed with those requirements in mind.
The EU AI Act can be explored in detail here, and to get a quick assessment of whether an AI system that will be put on the market or in service is affected by it, and to what extent, they also provide this compliance checker tool.
Do you know this song? Some of you probably do, others maybe not. Some are probably searching for it online at this very moment, and maybe most of you just don’t care about the type of music we listen to while we work. In any case, the message is the same: meeting you is all we want to do!
If you are an innovator, device manufacturer, or technology provider and integrator in the AI/ML/VR/XR space, connect with us, because our OC2 DEMONSTRATE call is running and “hey, we’ve been trying to meet you”. 🎶
Why should you meet us, you ask?
SERMAS offers an innovative, collaborative environment with specialised infrastructure, technology, knowledge, and the chance of being funded with up to EUR 150 000 per sub-project (lump sum per consortium).
Our OC2 DEMONSTRATE sub-projects must provide solutions as autonomous applications built on the SERMAS API and using the SERMAS Toolkit for web-based digital avatars and/or the SERMAS ROS2 Proxy for robotics applications.
We cover nine domains/industries: education, energy, tourism, cultural heritage, health, manufacturing, bank & insurance, retail and marketing & advertising. These domains should be considered in one or multiple of the overarching fields of application: training, services, and guidance.
Applicant consortia are encouraged to use the XR technologies from the SERMAS Toolkit and adapt them to the domain of their choice. They may select more than one model and/or tool to apply to their proposed agent and facilitate the demonstration of the pilot.
When should we do it?
Now is the time. Read the guidelines for applicants, and apply by 26 June 2024 - 17:00 CEST.
Author: Megha Quamara, King’s College London
Societal acceptance of eXtended Reality (XR) systems will intrinsically depend upon the security of the interaction between the user and the system, which encompasses aspects such as privacy and trustworthiness. Establishing security thus necessitates treating the XR system as a socio-technical entity, wherein technology and human users engage in the exchange of messages and data. Both technology and users contribute to the overall security of the system, but they also have the potential to introduce vulnerabilities through unexpected or mutated behaviour. For instance, an XR system may misinterpret human actions due to limitations in its algorithms or understanding of human behaviour. Conversely, the users may make mistakes by deviating from the expected communication or interaction norms, which can trigger unintended responses or cause the system to start behaving unpredictably, thus disrupting the immersive experience and unknowingly compromising the system’s security.
Security developers and analysts have so far focused on XR systems primarily as technical systems, constructed upon software processes, digital communication protocols, cryptographic algorithms, and so forth. They concentrate on addressing the complexity of the system they are developing or analysing, often neglecting to consider the human user as an integral part of the system’s security. In essence, they overlook the importance of human factors and their impact on security. There exists an intricate interplay between the technical aspects and the social dynamics, such as user interaction processes and behaviours, but state-of-the-art approaches are not adequately equipped to consider human behavioural or cognitive aspects in relation to the technical security of XR systems, as they typically focus on modelling basic communication systems.
To sum up, addressing security concerns of XR systems through a socio-technical lens, rather than a purely technical one, remains terra incognita, with no recognised methodologies or comprehensive toolset. Thus, formal and automated methods and tools need to be extended, or new ones developed from scratch, to tackle the challenges in designing secure content-sharing for XR systems and their interaction with humans who can misunderstand or misbehave. The Explainable Security (XSec) paradigm, which extends Explainable AI (XAI), can be applied to explain the system’s security decisions and to reason about the security of the explanations themselves, thereby contributing to the overall trustworthiness of the system. Moreover, since the composition of secure system components might still yield an insecure system, existing methods and tools must scale to verify that such a composition indeed yields a secure XR system.
The SERMAS project aims to contribute by carrying out research and development in all of these directions.
From the floor of Immersive Tech Week in Rotterdam, we interviewed the partners from SERMAS, XR2Learn, VOXReality, CORTEX2, and XR4ED about their predictions for the future of XR, project expectations for 2024, and the benefits of XR technologies.
Dive into this insightful discussion with Axel Primavesi, Ioannis Chatzigiannakis, Jordane Richter, Charles Gosme, Olga Chatzifoti, Fotis Liarokapis, Alain Pagani, and Moonisa Ahsan. Thank you for your valuable insights!
Watch the full interview here.
From November 28th to December 1st, the SERMAS team was at the heart of innovation at the Immersive Tech Week 2023 in Rotterdam. This international gathering served as a convergence point for developers in the extended reality (XR) field, fostering collaboration, and highlighting projects and technologies that will shape the future of immersive technology.
One of the standout moments during the event was SERMAS's active participation in the F6S Innovation session titled "Connecting Founders to Horizon Europe Funding Opportunities." This session explained the role played by European funds in driving XR innovation to new heights and facilitating collaborative ventures. Besides showcasing SERMAS's objectives and opportunities, the session also showed vast opportunities for XR enthusiasts to access funding and support from Horizon Europe.
SERMAS shared the spotlight with other XR projects, each contributing to the immersive technology landscape in unique ways: VOX Reality, XR2Learn, XR4ED, and CORTEX2. This collective showcase demonstrated the diversity and depth of XR applications, illustrating how this technology reshapes industries and experiences across the board.
At the vibrant heart of the event, booths 18 and 19 served as the dynamic playground for all participating projects to share their technologies and captivating demos. Here, SERMAS, joined by the partners from DW Innovation and Spindox Labs, shared the preliminary outcomes of our project, shedding light on the latest developments in our pilots and presenting the innovative SERMAS Toolkit. These booths became a meeting point where attendees were not only introduced to our current endeavours but also gained insight into the trajectory we envision for the future of the SERMAS project.
Summing up, the Immersive Tech Week 2023 event was not merely a showcase of technology; it was an exciting glimpse into the future of the power of XR. We share here our thank you to the entire VRDays team for organising this event and providing a space for innovation and collaboration.
Let’s see what the future holds for next year in XR and the SERMAS project. Don’t forget to join us on this journey as we continue to push the boundaries to make XR systems more socially acceptable.
See you next year!
Author: DW Innovation team
The most excited we've seen the teenage brother of a consortium member this year was when Larian Studios, the makers of Baldur's Gate 3, a famous role-playing game (RPG), made the following announcement: henceforth, players can change their avatars during the game. The studio explained that with the new update, Patch 3, it'll be possible to customize characters even after exiting the respective editor. That's a novel thing in the world of avatar-based games, a feature enabled by the progress made in the field of generative AI. This technology now greatly simplifies the process of building virtual environments and their characters, thus enhancing the range of creative choices.
Avatars, AI, and the metaverse
So what is an avatar? Simply put, it's a virtual (maybe slightly fictitious) representation of yourself, capable of performing actions (e.g. engaging in conversations) in a digital space. It can also function as a bot modelled on a human counterpart. Or as an entirely virtual entity/agent without any blueprint in the physical world. Depending on the use case, generative AI can help design and change the appearance, voices, movements, as well as thoughts and dialogues of an avatar in what is often referred to as the metaverse now. The concept of the avatar is actually decades old (think: Ultima, Second Life, World of Warcraft etc.), but it's only now – in the 2020s – that avatars are becoming an integral part of all kinds of digital services and entertainment. How did we get to this point?
3D, XR, the pandemic, and "the way of the future"
A very brief explanation could go like this: Advancements in computer graphics and streaming had already enabled a shift from 2D to 3D in terms of avatars and agents and catapulted video calls into the mainstream in the late 2010s. And then, in 2020, the pandemic hit. People were isolated. And people quickly realized that social interaction is an essential and fundamental need. So a lot of companies started pumping a lot of money into extended reality (XR) tech and collaborative online spaces. In this context, popular gaming platforms like Roblox also invested heavily into the expansion of avatar tech (and there didn't seem to be any problems in funding all this).
Last year, an AI avatar of Elvis Presley thrilled the audience of "America's Got Talent". Created by a singer and a technologist working together, it gave a convincing performance of the song "Hound Dog". In London’s West End, the ABBA show "Voyage", which features all four members of the Swedish band in avatar form, has passed the one-million-spectator mark. ABBA's Björn Ulvaeus says that avatars are "the way of the future". And Tom Hanks is convinced that he will act beyond the grave – in the form of an AI-driven avatar.
Metaverse platform Ready Player Me (named after the 2011 VR sci-fi bestseller Ready Player One) recently launched its generative AI avatar creator. And similar tools are popping up everywhere. Another one is Magic Avatars, a service that creates realistic 3D avatars from a single photo. There's also Synthesia, one of the most advanced AI avatar video makers on the market right now, with more than 120 languages and accents available. The NBA created new AI tech that allows fans to have their very own avatar replace a regular player in a game – and Microsoft recently introduced an avatar generator for MS Teams. These avatars are the harbingers of Microsoft Mesh, the Microsoft metaverse that's not available yet. With their avatar, users are supposed to take part in XR meetings and work collaboratively, without having to turn on their cameras. Microsoft advertises that employees can appear in any form they choose and that feels most comfortable, which is intended to promote diversity and inclusion.
Inclusive avatars
Speaking of which, avatars are also used as sign language interpreters now. Israeli startup CODA aims to make video watching a lot easier for the deaf and hearing-impaired community. To this end, the company enlists AI-driven avatars able to translate spoken language into sign language almost instantaneously. That's next level because up until recently, similar companies like signer.ai (India) or Signapse (UK) only offered to translate written text into sign language.
What's next?
In a nutshell, there are all kinds of (more or less) sophisticated avatars for all kinds of purposes now, but one big challenge remains: creating an avatar / 3D character / virtual agent (call it what you like) that is portable, sustainable, versatile – and can be used in multiple environments. A solution like this would be very appealing to users facing a complex, rapidly growing digital/immersive space. Hopefully, SERMAS can make a positive contribution here.
In the meantime, let's all enjoy playing a little with 3D authoring tools, generative AI and immersive RPG worlds.