Deep learning (DL)-based automated cybersickness detection, combined with adaptive mitigation techniques, can enhance user comfort and interaction. However, recent studies show that these DL-based systems are susceptible to adversarial attacks: small perturbations to sensor inputs can degrade model performance, trigger incorrect mitigation, and disrupt the user's immersive experience (UIX). Moreover, there is a lack of dedicated open-source testbeds for evaluating the robustness of these systems under adversarial conditions, limiting the ability to assess their real-world effectiveness. To address this gap, this paper introduces Adversarial-VR, a novel real-time VR testbed for evaluating DL-based cybersickness detection and mitigation strategies under adversarial conditions. Developed in Unity, the testbed integrates two state-of-the-art (SOTA) DL models, DeepTCN and Transformer, trained on the open-source MazeSick dataset for real-time cybersickness severity detection, and applies a dynamic visual tunneling mechanism that adjusts the field of view based on model outputs. To assess robustness, we incorporate three SOTA adversarial attacks: MI-FGSM, PGD, and C&W, which prevent cybersickness mitigation by corrupting the outputs of the DL-based cybersickness models. We implement these attacks in a testbed with a custom-built VR Maze simulation and an HTC Vive Pro Eye headset, and we open-source our implementation for widespread adoption by VR developers and researchers. Results show that these adversarial attacks successfully fool the system; for instance, the C&W attack causes a 5.94× decrease in accuracy for the Transformer-based cybersickness model compared to its accuracy without the attack.
Virtual Reality (VR) promises unparalleled immersive experiences across domains ranging from gaming [28] and education [37] to healthcare [42]. However, as VR technology advances, it introduces significant security vulnerabilities. Researchers have highlighted critical risks, such as security and privacy attacks [37] and network- and GPU-based attacks [39], which can compromise the integrity of VR systems and disrupt the user experience by inducing cybersickness. User experience is a vital aspect of VR, and cybersickness remains a significant obstacle to its broader acceptance. Several machine learning (ML) and deep learning (DL) methods have been proposed for the automatic detection and mitigation of cybersickness to improve user comfort and safety [16,17,20]. In an ideal setup (without adversarial interference), these ML/DL models automatically detect the severity of cybersickness from VR sensor data and trigger adaptive mitigation techniques, such as dynamically narrowing the field of view (FOV) [7,15,20]. However, ML/DL algorithms are vulnerable to carefully crafted adversarial examples [24], which also applies to VR cybersickness use cases [18]. This underscores the importance of analyzing and evaluating the robustness of DL models used for automatic cybersickness detection and mitigation.
Despite the growing importance of ML/DL-driven cybersickness detection and mitigation, there is a significant gap in the availability of open-source testbeds capable of evaluating these real-time ML/DL-based detection methods. Furthermore, the research community has yet to develop comprehensive testbeds that automatically implement and assess the entire cybersickness detection and mitigation pipeline using ML/DL techniques. While some testbeds exist [4,11,35,43], they do not support the automatic integration of detection and mitigation systems. Nor are there testbeds designed to evaluate the adversarial robustness of these AI models in the context of cybersickness detection and mitigation. This gap in available resources and evaluation frameworks underscores the need for an open-source testbed that not only supports real-time ML/DL-based detection but also evaluates the resilience of these systems against adversarial attacks, thereby motivating this work.
Motivated by these limitations of existing work, this paper introduces Adversarial-VR, a novel real-time VR testbed for evaluating DL-based automatic cybersickness detection and mitigation strategies under adversarial attack conditions, as shown in Figure 1. To the best of our knowledge, it is the first open-source testbed to assess the robustness of DL-based automatic cybersickness detection and mitigation systems against adversarial attacks. Our key contributions are as follows:
• We develop our testbed by incorporating two state-of-the-art (SOTA) DL models: Deep Temporal Convolutional Network (DeepTCN) and the Transformer; any other DL model can also be integrated into the proposed testbed. These models are trained on MazeSick, a publicly available SOTA VR cybersickness dataset [20]. Upon detecting cybersickness, the system automatically triggers mitigation using Unity's Tunneling Vignette system [36]. Specifically, we implement a dynamic field-of-view (FOV) adjustment technique that adapts to the severity of the user's symptoms (a minimal sketch of this detection-to-mitigation loop follows the list below).
Our evaluation confirms that the system performs effectively under ideal conditions, i.e., when no adversarial attacks are present, as shown in Figure 2.
• Our testbed supports generating adversarial examples and injecting them into the cybersickness detection models, thereby manipulating the outcome of cybersickness detection. To craft these adversarial inputs, we employ three widely used attack algorithms: the Momentum Iterative Fast Gradient Sign Method (MI-FGSM) [8], Projected Gradient Descent (PGD) [23], and the Carlini & Wagner (C&W) method [5]; any other adversarial example generation algorithm can likewise be integrated into the testbed (a sketch of one such attack also follows the list below). Our evaluation covers both white-box and black-box attack scenarios. Results on the MazeSick dataset demonstrate that the proposed adversarial approach can effectively deceive the detection system. For instance, the C&W attack results in a 5.94× drop in detection accuracy for the Transformer-based cybersickness detection model compared to its accuracy without adversarial attacks. This manipulation prevents the activation of cybersickness mitigation mechanisms, significantly degrading the UIX.
• Finally, to support widespread adoption by VR developers and researchers, we release our system as an open-source testbed. By making our implementation publicly available¹, we aim to foster community-driven experimentation and advancement in adversarial robustness for VR. The testbed is implemented within a custom-built VR Maze simulation using an HTC Vive Pro Eye headset.
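As referenced in the first contribution above, the following minimal Python sketch illustrates the detection-to-mitigation loop. It is illustrative only: the model handle, its `predict` interface, and the severity-to-vignette mapping are assumptions for exposition rather than the testbed's actual code; in the testbed itself, the returned intensity would drive Unity's Tunneling Vignette.

```python
import numpy as np

# Hypothetical mapping from predicted severity class (0 = none .. 3 = severe)
# to how strongly the FOV vignette closes in (0 = fully open, 1 = fully closed).
SEVERITY_TO_VIGNETTE = {0: 0.0, 1: 0.25, 2: 0.45, 3: 0.65}

def mitigation_step(model, sensor_window: np.ndarray) -> float:
    """Map one window of VR sensor readings to a vignette intensity.

    `model` is any trained classifier exposing a `predict` method that
    returns per-class scores (assumed interface). The caller feeds the
    returned intensity to the VR runtime's FOV-restriction effect.
    """
    scores = model.predict(sensor_window[None, ...])  # add batch dimension
    severity = int(np.argmax(scores))
    return SEVERITY_TO_VIGNETTE.get(severity, 0.0)
```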
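On the attack side, as referenced in the second contribution, a minimal PGD [23] loop against such a classifier could look like the PyTorch sketch below. The model interface, tensor shapes, and hyperparameters are illustrative assumptions, not the paper's settings; MI-FGSM and C&W plug into the same injection path.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """L-infinity PGD: repeatedly step along the sign of the loss gradient,
    projecting the perturbed input back into the eps-ball around x.

    `x` is a batch of sensor windows, `y` the true severity labels, and
    `model` returns class logits (assumed interface). Range clipping of the
    perturbed input depends on the sensor normalization and is omitted here.
    """
    # Random start inside the eps-ball, as in standard PGD.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # ascend the loss
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project to eps-ball
    return x_adv.detach()
```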