Rapid 3D Reconstruction of Indoor Environments to Generate Virtual Reality Serious Games Scenarios


Virtual Reality (VR) for Serious Games (SGs) is attracting increasing attention for training applications due to its potential to provide significantly enhanced learning to users. Examples of the application of VR for SGs include complex evacuation training problems such as indoor earthquake or fire evacuation. The indoor 3D geometry of existing buildings can largely influence evacuees' behaviour, making it instrumental in the design of VR SG storylines and simulation scenarios. The VR scenarios of existing buildings can be generated from drawings and models. However, these data may not reflect the 'as-is' state of the indoor environment and may not be suitable for representing dynamic changes in the system (e.g. earthquakes), resulting in excessive development effort to design a credible and meaningful user experience. This paper explores several workflows for the rapid and effective reconstruction of 3D indoor environments of existing buildings that are suitable for earthquake simulations. These workflows start from Building Information Modelling (BIM), laser scanning and 360-degree panoramas. We evaluated the feasibility and efficiency of the different approaches using an earthquake-based case study developed for VR SGs.


💡 Research Summary

The paper investigates rapid 3D reconstruction techniques for indoor environments to support Virtual Reality (VR) Serious Games (SGs) focused on earthquake evacuation training. Recognizing that traditional sources such as architectural drawings or Building Information Modeling (BIM) often fail to capture the “as‑is” condition of a building and are ill‑suited for dynamic disaster scenarios, the authors evaluate three distinct workflows: (1) BIM‑based modeling, (2) laser‑scanning (point‑cloud) reconstruction, and (3) 360‑degree panoramic imaging.

In the BIM workflow, the authors use Autodesk Revit to generate parametric building models, export them as FBX files, and import them into the Unity game engine. They note that Revit’s proprietary material definitions are not directly readable by Unity, requiring an intermediate conversion step (e.g., via 3ds Max or the Walk‑Through‑3D plugin). While BIM provides rich semantic data and facilitates dynamic manipulation of individual building components (walls, doors, furniture), the resulting models are often overly detailed, containing multiple construction layers and excessive polygon counts. This can degrade real‑time performance, necessitating optimization techniques such as occlusion culling and hardware instancing.
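To make the instancing idea concrete, the sketch below (illustrative only, not from the paper, which works inside Unity) shows the core data transformation behind hardware instancing: repeated furniture meshes exported from a BIM model are grouped by shared geometry, so the renderer issues one draw call per mesh with a list of per-instance transforms instead of one call per object. The mesh names and transforms are invented for the example.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class SceneObject:
    mesh: str        # shared mesh identifier, e.g. a BIM family type like "chair"
    transform: tuple  # per-instance placement (x, y, z); rotation/scale omitted


def batch_for_instancing(objects):
    """Group identical meshes so each is drawn once with many transforms,
    collapsing N objects into one instanced draw call per unique mesh."""
    batches = defaultdict(list)
    for obj in objects:
        batches[obj.mesh].append(obj.transform)
    return dict(batches)


# A toy scene: three identical chairs and one desk (hypothetical data)
scene = [
    SceneObject("chair", (0.0, 0.0, 0.0)),
    SceneObject("chair", (1.0, 0.0, 0.0)),
    SceneObject("chair", (2.0, 0.0, 0.0)),
    SceneObject("desk",  (0.0, 0.0, 1.5)),
]
batches = batch_for_instancing(scene)
# Four objects collapse into two instanced batches ("chair" and "desk")
```

In a game engine the same grouping happens on the GPU side, but the benefit is identical: draw-call count scales with the number of unique meshes rather than the number of placed objects.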

The laser‑scanning workflow captures the actual geometry of the space using a terrestrial laser scanner, producing dense point clouds stored in the E57 format. These clouds are processed with Autodesk ReCap Pro to generate polygonal meshes and texture maps, which are then exported as FBX for Unity. This method yields high visual fidelity and accurately reflects the current state of the environment. However, the generated mesh is typically a single, undivided surface, limiting the ability to manipulate individual objects for dynamic changes. Moreover, scan blind spots create holes in the mesh, and the sheer number of polygons can strain rendering performance. The authors suggest post‑processing in 3D modeling software to segment objects or augment the scene with virtual assets to compensate for missing geometry.
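The polygon-count problem starts upstream, with the density of the point cloud itself. The sketch below shows voxel-grid downsampling, a common first step before meshing a dense scan; it is a generic illustration, not the ReCap Pro pipeline used in the paper, and the synthetic "room" dimensions are assumptions.

```python
import numpy as np


def voxel_downsample(points, voxel_size):
    """Thin a dense point cloud by keeping one representative point
    (the centroid) per occupied cubic voxel of side `voxel_size` metres."""
    # Assign each point to an integer voxel key
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Find unique voxels and which voxel each point belongs to
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    # Average all points that share a voxel
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]


rng = np.random.default_rng(0)
# Synthetic stand-in for a scan of a ~6 m room (real E57 clouds are far denser)
cloud = rng.uniform(0.0, 6.0, size=(100_000, 3))
thinned = voxel_downsample(cloud, voxel_size=0.5)
# The thinned cloud has at most (6 / 0.5)^3 = 1728 points
```

A coarser `voxel_size` trades geometric detail for lighter meshes and higher frame rates, which is exactly the balance the authors describe when targeting real-time VR rendering.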

The 360‑degree panorama workflow involves capturing omnidirectional images with a panoramic camera, which can be directly imported into Unity as JPEG textures. This approach is the fastest and most cost‑effective, requiring no conversion pipeline. Nevertheless, panoramas lack depth information and are inherently 2D, preventing users from navigating inside the scene or interacting with objects. To mitigate these limitations, the paper discusses teleportation navigation (linking sequential panoramas) and overlaying augmented reality layers to introduce interactive virtual elements.
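The teleportation navigation described above amounts to a small graph: each panorama is a node, and each hotspot is a directed edge to the panorama captured at a neighbouring position. A minimal sketch, with invented panorama IDs and labels (the paper implements this inside Unity, not in Python):

```python
class PanoramaTour:
    """A graph of 360-degree panoramas linked by teleport hotspots."""

    def __init__(self):
        self.links = {}      # pano_id -> {hotspot_label: target pano_id}
        self.current = None  # the panorama the user is "standing in"

    def add_panorama(self, pano_id, image_path):
        # image_path would be loaded as a skybox texture; unused in this sketch
        self.links.setdefault(pano_id, {})
        if self.current is None:
            self.current = pano_id

    def link(self, src, label, dst):
        """Place a hotspot in panorama `src` that teleports to `dst`."""
        self.links[src][label] = dst

    def teleport(self, label):
        """Follow a hotspot from the current panorama, if one matches."""
        target = self.links[self.current].get(label)
        if target is not None:
            self.current = target
        return self.current


# Hypothetical two-panorama tour of a lab
tour = PanoramaTour()
tour.add_panorama("lab_entrance", "pano_01.jpg")
tour.add_panorama("lab_centre", "pano_02.jpg")
tour.link("lab_entrance", "doorway", "lab_centre")
tour.link("lab_centre", "doorway", "lab_entrance")
```

Because movement is restricted to these discrete jumps, the approach sidesteps the missing depth information entirely; interactive elements would be layered on top as separate virtual objects, as the paper suggests.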

A pilot case study was conducted in the Engineering VR/AR Lab at the University of Auckland (approximately 6 m × 5 m, 2.6 m ceiling height) containing desks, chairs, computers, partitions, and boxes. Each workflow was applied to reconstruct the lab, and the authors measured construction time, data size, visual fidelity, interactivity, and frame‑rate performance. Findings indicate that BIM offers the greatest flexibility for dynamic scenario creation but incurs the longest preprocessing time and requires significant model simplification. Laser scanning provides the highest realism but suffers from limited object-level manipulation and high polygon counts, demanding mesh simplification and possible hybrid augmentation. 360‑degree panoramas achieve the fastest setup but are constrained to static visual experiences with limited interaction.

The authors conclude that the choice of workflow should align with project priorities: if dynamic, physics‑based interactions are paramount, BIM (potentially combined with selective laser‑scan data) is preferable; if visual realism of the existing environment is critical, laser scanning is optimal; for rapid prototyping or low‑budget applications, panoramas suffice. They advocate for hybrid pipelines that blend BIM’s semantic richness with laser‑scan fidelity, and suggest future work on automating data conversion, integrating real‑time physics‑based destruction models, and expanding the approach to larger, more complex indoor spaces.
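The decision logic in the conclusion can be condensed into a simple lookup. The priority labels below are assumptions introduced for illustration; the mapping itself follows the study's stated recommendations.

```python
def choose_workflow(priority):
    """Map a project priority to the reconstruction workflow the study
    found most suitable (assumed labels summarising the conclusions)."""
    table = {
        "dynamic_interaction": "BIM (optionally with selective laser-scan data)",
        "visual_realism": "laser scanning",
        "rapid_prototyping": "360-degree panoramas",
    }
    # Default to the hybrid pipeline the authors advocate for mixed needs
    return table.get(priority, "hybrid BIM + laser-scan pipeline")
```

For example, `choose_workflow("visual_realism")` returns `"laser scanning"`, while an unlisted priority falls back to the hybrid pipeline.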

