Author: Megha Quamara, King’s College London
Societal acceptance of eXtended Reality (XR) systems will intrinsically depend upon the security of the interaction between the user and the system, which encompasses aspects such as privacy and trustworthiness. Establishing security thus necessitates treating the XR system as a socio-technical entity, wherein technology and human users engage in the exchange of messages and data. Both technology and users contribute to the overall security of the system, but both can also introduce vulnerabilities through unexpected or erroneous behaviour. For instance, an XR system may misinterpret human actions due to limitations in its algorithms or in its understanding of human behaviour. Conversely, users may make mistakes by deviating from the expected communication or interaction norms, which can trigger unintended responses or cause the system to behave unpredictably, thus disrupting the immersive experience and inadvertently compromising the system’s security.
Security developers and analysts have so far treated XR systems primarily as technical systems, built upon software processes, digital communication protocols, cryptographic algorithms, and so forth. They concentrate on addressing the complexity of the system they are developing or analysing, often neglecting the human user as an integral part of the system’s security; in other words, they overlook human factors and their impact on security. Yet there is an intricate interplay between the technical aspects and the social dynamics, such as user interaction processes and behaviours, and state-of-the-art approaches are not adequately equipped to consider human behavioural or cognitive aspects in relation to the technical security of XR systems, as they typically focus on modelling basic communication systems.
To sum up, addressing the security concerns of XR systems through a socio-technical lens, rather than a purely technical one, remains terra incognita, with no recognised methodologies or comprehensive toolset. Formal and automated methods and tools thus need to be extended, or new ones developed from scratch, to tackle the challenges of designing secure content-sharing for XR systems and their interaction with humans who can misunderstand or misbehave. The Explainable Security (XSec) paradigm, which extends Explainable AI (XAI) to the security domain, can be leveraged to explain security decisions to users and to reason about the security of those explanations themselves, thereby contributing to the overall trustworthiness of the system. Moreover, since the composition of secure system components might still yield an insecure system, existing methods and tools must scale to verify that such a composition indeed yields a secure XR system, as the toy example below illustrates.
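To make the composition problem concrete, here is a minimal, self-contained sketch (our illustration, not a SERMAS artefact or an XR-specific scenario): two components that each leak nothing when observed in isolation, yet reveal a secret when their outputs are combined.

```python
"""Toy illustration: secure components, insecure composition.

Component A masks a secret byte with a one-time pad; its output alone
is uniformly random, so an observer learns nothing. Component B
publishes the pad, which alone is also just a random value. An observer
who sees BOTH outputs recovers the secret with a single XOR.
"""
import secrets

SECRET = 0b10110010          # a sensitive byte, e.g. part of a session key
pad = secrets.randbits(8)    # fresh one-time pad


def component_a(secret: int, pad: int) -> int:
    """Secure in isolation: secret XOR one-time pad is uniformly random."""
    return secret ^ pad


def component_b(pad: int) -> int:
    """Secure in isolation: the pad carries no information about the secret."""
    return pad


# Each output, observed alone, reveals nothing about SECRET...
out_a = component_a(SECRET, pad)
out_b = component_b(pad)

# ...but the composed system leaks the secret to anyone seeing both channels.
recovered = out_a ^ out_b
assert recovered == SECRET
print(f"Observer recovered the secret: {recovered:#010b}")
```

The security property of each component does not survive composition, which is precisely why verification methods must reason about the composed system rather than certifying components one by one.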
The SERMAS project aims to contribute by carrying out research and development in all of these directions.