The metaverse ecosystem, an integrated fusion of physical, digital, and virtual realities, presents a range of advantages for both research and industry, including accelerated system development, enhanced productivity, and global real-time interactions. Despite extensive research efforts toward implementing metaverse systems across various domains, a notable deficiency exists in addressing the critical security and privacy issues within this immersive mixed-reality landscape. The intersection of virtual and physical realms gives rise to distinctive security concerns, including inter-realm adversarial attacks that create counterfeit representations, the risk of information leakage across distributed virtual reality devices, and the exposure of user privacy at both the network and application levels. These less-recognized threats arise from the intricate interplay and added complexity of the digital realm, coupled with the reliance on cutting-edge technologies such as digital mapping, machine learning, and data analysis in mixed environments. In response to these challenges, the project's novelties are pioneering strategies to fortify the metaverse ecosystem, with the goal of creating secure, resilient, and privacy-enhanced digital-world experiences. The project's broader significance and importance span digital twin networks, manufacturing, and automation testing, where the developed security and privacy protection techniques can be employed in various data analysis tasks, including medical data processing, road traffic prediction, user mobility, and trajectory prediction. The project integrates its research insights into new modules for computer security and privacy courses and hosts outreach activities such as Data Privacy Week, with the vision of advancing the participation of underrepresented minorities in STEM fields and improving STEM education.
This project addresses security and privacy challenges in emerging metaverse ecosystems, drawing on interdisciplinary knowledge in cyber-physical systems, machine learning, traffic analysis, and usable privacy. The project emphasizes advancing three interconnected facets: 1) Developing innovative defense mechanisms that protect digital mapping and synchronization against adversarial manipulation, including a resilient digital representation approach and a mask-and-trim strategy that replaces adversarial inputs during the synchronization process. 2) Creating novel privacy-preserving distributed training methods that enable collaborative ML model training for virtual reality users. This involves quantization-based federated learning methods that balance the trade-offs among learning accuracy, data leakage, bit quantization levels, and energy consumption. 3) Introducing novel privacy controls for virtual reality end users. This entails developing techniques that thwart behavioral inference at both the network layer (through traffic analysis) and the application layer (through embedded sensors) using obfuscation techniques such as differential privacy; illustrative sketches of facets 2 and 3 follow below. The overall design aims to establish a safe and trustworthy digital environment for users worldwide.
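To make facet 2 concrete, the following minimal sketch (in Python with NumPy) shows the kind of quantization step a client could apply to its model update before sharing it in a federated round. The function names, the 4-bit setting, and the toy averaging loop are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def quantize_update(update, num_bits=4, rng=None):
    """Stochastic uniform quantization of a client's model update.

    Maps each coordinate onto one of 2**num_bits levels spanning the
    update's range; stochastic rounding keeps the quantizer unbiased.
    Fewer bits mean less information leaves the device (and lower
    transmission energy) at the cost of added quantization noise.
    """
    rng = rng or np.random.default_rng()
    levels = 2 ** num_bits - 1
    lo, hi = update.min(), update.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    # Position of each value on the quantization grid.
    pos = (update - lo) / scale
    floor = np.floor(pos)
    # Round up with probability equal to the fractional remainder.
    q = floor + (rng.random(update.shape) < (pos - floor))
    return q.astype(np.uint8), lo, scale

def dequantize_update(q, lo, scale):
    """Server-side reconstruction of a quantized update."""
    return lo + q.astype(np.float64) * scale

# Toy federated round: each client's locally computed update (mocked
# here as random numbers) is quantized on-device, then the server
# averages the dequantized updates into the global model.
rng = np.random.default_rng(0)
global_model = np.zeros(10)
client_updates = [rng.normal(size=10) for _ in range(5)]

recovered = []
for upd in client_updates:
    q, lo, scale = quantize_update(upd, num_bits=4, rng=rng)
    recovered.append(dequantize_update(q, lo, scale))

global_model += np.mean(recovered, axis=0)
print(global_model)
```

Lowering the bit width shrinks both the transmitted payload and the information an observer can recover from any single update, at the cost of extra noise in the aggregated model, which is the accuracy/leakage/energy trade-off facet 2 targets. For facet 3, the next sketch shows one standard obfuscation primitive, the Laplace mechanism from differential privacy, applied to a sensor trace before it reaches an application; the sensitivity and epsilon values and the head-yaw example are placeholders, and the project's actual privacy controls may differ.

```python
import numpy as np

def obfuscate_sensor_stream(readings, sensitivity, epsilon, rng=None):
    """Laplace-mechanism perturbation of per-sample sensor readings.

    Adds noise with scale sensitivity/epsilon to each reading before it
    leaves the headset, limiting what an application-layer observer can
    infer about the underlying motion. Smaller epsilon means stronger
    obfuscation and lower utility.
    """
    rng = rng or np.random.default_rng()
    readings = np.asarray(readings, dtype=np.float64)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                        size=readings.shape)
    return readings + noise

# Example: perturb a short head-yaw trace (degrees) before reporting it.
yaw_trace = [1.2, 1.5, 2.1, 2.0, 1.8]
print(obfuscate_sensor_stream(yaw_trace, sensitivity=5.0, epsilon=1.0))
```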