Article

Virtual Reality in Cultural Heritage: A Setup for Balzi Rossi Museum

1 Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genova, 16145 Genova, Italy
2 Department of Languages and Modern Culture, University of Genova, 16145 Genova, Italy
3 DRML-MiC Direzione Regionale Musei Liguria, Ministero della Cultura, 16126 Genova, Italy
4 Eurodrone Academy, 12012 Boves, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(9), 3562; https://doi.org/10.3390/app14093562
Submission received: 22 March 2024 / Revised: 11 April 2024 / Accepted: 18 April 2024 / Published: 23 April 2024
(This article belongs to the Special Issue Recent Advances in 3D Reconstruction, 3D Imaging and Virtual Reality)

Abstract

This study presents the creation of a virtual reality experience for the Museo Preistorico dei Balzi Rossi e Zona Archeologica (hereafter Balzi Rossi Museum) commemorating the centenary of Prince Albert I Grimaldi’s archaeological work at the site. The project aims to preserve and convey the site’s heritage through advanced VR technology. Photogrammetry was used for the 3D reconstruction of the entire Balzi Rossi coastal cliffs, including the notable “Caviglione” and “Florestano” caves, known for their Upper Paleolithic rock engravings. Two subsequent development phases produced the final public VR experience, incorporating Nanite technology for enhanced visual fidelity. This advancement resulted in a more detailed and immersive VR experience, presenting the Balzi Rossi cliffs across different historical periods, including the Würm glaciation. Key to this phase was optimizing the VR experience for performance, focusing on stable frame rates and minimizing motion sickness, and integrating a multi-lingual interface for broader accessibility. Since November 2023, the VR setup at the Balzi Rossi Museum has been an educational and interactive feature enabling visitors to virtually explore the site’s history. This study describes a process that enables small teams to create and optimize VR experiences while maintaining a high polygon count.

1. Introduction

Virtual reality (VR) is an emerging technology that has the potential to revolutionize the way we experience and interact with the world around us. In recent years, VR has gained popularity in a variety of sectors [1], including entertainment, education [2], and training [3]. However, the potential of VR is not limited to these sectors. VR also has the potential to transform the way we experience and interact with cultural heritage, and its use in this field is attracting interest year by year [4].
Cultural heritage (CH) is an important part of our history and identity. It can include archaeological sites, museums, monuments, and artworks; however, many cultural heritage assets are often inaccessible or difficult to visit. VR can help overcome these barriers by providing a way to experience CH in a realistic and engaging way. VR can be used to create immersive experiences that allow users to explore archaeological sites, visit museums, and safely interact with their artworks and artifacts. This can be particularly useful for students and researchers who want to study cultural heritage in a new way.
VR can also be used to create educational experiences that allow users to learn about these topics in a fun and engaging way, and it has the potential to help preserve cultural heritage [5]. Many cultural heritage assets are fragile and subject to deterioration. This technology can be used to create digital copies of CH assets that can be stored and shared with future generations. Moreover, VR can provide access to these assets without any risk of damage, creating experiences that can outlast the loss and deterioration of the original CH resources. It is in this context that the Balzi Rossi Museum project has taken shape. The project started in 2021 to celebrate the centenary of the death of Prince Albert I Grimaldi of Monaco, one of the first to conduct archaeological excavations at the Balzi Rossi coastal cliffs, located close to the border between Italy and France near the town of Ventimiglia (Imperia, Liguria, Italy; see the map in Figure 1), where important evidence of Upper Paleolithic culture was found. The site was chosen because it is the most important example of Upper Paleolithic parietal art in northwestern Italy.
The project has three main objectives: (1) to preserve and monitor the site and its findings, (2) to look for new forms of parietal art in the caves, and (3) to communicate and disseminate the CH of the site through the use of new technologies, in particular VR. The project is the result of a collaboration between the 3D Lab Factory of the University of Genoa, Eurodrone of Boves (Cuneo, Piemonte, Italy), and the Balzi Rossi Museum. This paper outlines the results of the third objective, which consisted mainly of communication and outreach, and thus the creation of a virtual experience in which it is possible to explore the entire archaeological site and examine the parietal art engravings present in the caves, which are normally not visitable or easy to access. In addition, it was also important to highlight the new findings discovered during the photogrammetric campaign.
Two versions of the immersive VR experience were realized in two distinct phases during the three-year project. In the initial phase, the huge static georeferenced 3D point cloud of the entire Balzi Rossi cliffs and caves, generated through aerial photogrammetry, was transformed into a large high-density polygonal mesh, i.e., a static 3D model, using RealityCapture 1.2. In the implementation of this first release for the commemorative event of February 2022, the resulting large mesh was practically unmanageable within the chosen real-time game engine, Unreal Engine 4 from Epic Games. The technical complexities of optimizing high-density polygonal meshes for immersive virtual reality environments were, at the time, a well-known general limitation of real-time 3D rendering. In this phase, it was crucial to guarantee that the detail originating from photogrammetry was maintained without detriment to real-time performance and the immersive quality of the VR experience.
Starting from the foundational work of the first phase, the project underwent a significant evolution with the transition from Unreal Engine 4 to Unreal Engine 5, a new version of the game engine released in April 2022. This transition marked the beginning of the second phase and the realization of the first public version. The challenge focused on preserving the original 3D point cloud and producing a significantly larger high-density polygonal mesh of the site, moving from 250 K polygons to more than 100 M polygons, while ensuring the smoothness and usability of the VR experience. Key to this phase was the emphasis on increasing the aesthetics and visual fidelity of the virtual environment with respect to the real location. The advanced features of Unreal Engine 5 facilitated the creation of a more realistic and visually immersive virtual representation, which is now on exhibit inside the museum.

2. Related Works

The advancement of VR and augmented reality (AR) technologies has been a cornerstone in the preservation of cultural heritage, as evidenced by a diverse array of scholarly contributions. Noh, Sunar, and Pan’s early research [6] was instrumental in illustrating the initial applications of AR in the digital reconstruction of historical sites, setting a precedent for subsequent studies in digital cultural heritage preservation. Building upon these foundations, Bekele et al. [7] undertook a comprehensive evaluation of augmented, virtual, and mixed-reality systems, primarily focusing on their educational utility and their transformative impact on museum exhibitions and the creation of virtual museums.
Specialized research in the domain further expanded with Haydar et al.’s exploration of the use of VR and AR for the preservation of underwater archaeological sites, marking a significant shift towards 3D visualization and interactive techniques in studying submerged cultural heritage [8]. Concurrently, Obradović et al. [9] demonstrated the efficacy of photogrammetry in VR through the detailed reconstruction of cultural heritage objects, exemplified by the iconostasis of the Serbian Orthodox Cathedral Church, thus underscoring the potential of virtual heritage reconstructions in achieving high levels of precision and interactivity. In a related context, Bruno et al. [10] outlined a comprehensive methodology for digital archaeological exhibitions, blending 3D reconstruction with VR to enhance the presentation of archaeological objects from Calabria (Italy) dating back to a period between the 17th and 8th centuries BC.
The evolution of AR applications in cultural heritage, particularly over the last decade, was thoroughly reviewed by Boboc et al. [11], highlighting significant advancements in fields such as 3D artifact reconstruction and the development of virtual museums. Complementing these technological perspectives, Bekele and Champion [12] engaged in a comparative study of immersive reality technologies, assessing their effectiveness in promoting cultural learning within virtual heritage applications. Their analysis delved into various interaction methods, offering insights into the educational implications of these technologies. Collectively, these scholarly endeavors narrate the continuous evolution and diversification of VR and AR technologies in cultural heritage. This narrative is not just a reflection of technological progress but also an expansion in application scopes, encompassing preservation techniques, educational platforms, and interactive experiences. This collection of previous research serves both as a foundational reference and a contextual framework for ongoing inquiries into the emerging frontiers of virtual heritage.
The Balzi Rossi Museum project represents a significant innovation in the field of virtual heritage, leveraging advanced VR technology to transform the understanding and experience of cultural heritage. Utilizing Unreal Engine 5 and its Nanite technology, this initiative transcends traditional approaches to digital reconstruction and offers an immersive and interactive exploration of archaeological heritage. In particular, the application of Nanite represents a paradigm shift in 3D modeling for VR: owing to its ability to handle complex geometries effortlessly, Nanite remarkably obviates the need for traditional model optimization techniques such as retopology. This allows for a more streamlined and efficient workflow, further enhancing the visual fidelity and performance of the VR experience without the intensive labor typically associated with model optimization. This innovative approach stands in stark contrast to the methodologies employed in prior related works, which predominantly relied on standard model optimization techniques, underscoring the significant advancements brought by Nanite in VR development.
The project not only enhances visual fidelity but also emphasizes educational and interactive experiences. By enabling visitors to virtually explore and learn about the Balzi Rossi site’s history, the project marks a novel integration of VR into cultural heritage, pushing the boundaries of how historical and archaeological studies are presented and experienced. This approach reflects a significant leap in the application of VR technologies, moving beyond static displays to create a dynamic and engaging virtual experience.

3. Materials and Methods

3.1. Data Acquisition and Photogrammetry

The first phase of the Balzi Rossi Museum project entailed the implementation of photogrammetry, a pivotal process for both the preservation of the site and the development of a VR narrative. To this purpose, we executed every stage of photogrammetry, from drone-based data acquisition to 3D point cloud computation and 3D mesh generation, using several kinds of high-precision sensors for multiphase, multi-spectral data collection. The following drones and related sensing devices were used in three different campaigns, performed between September 2021 and November 2022:
  • DJI Matrice 210 V2 RTK with thermal sensors and 4K DJI cameras: An enterprise-level drone, equipped with Zenmuse XT2 thermal sensors, Z30 ultra-HD cameras, and an RTK ground station to improve GPS precision.
  • DJI Phantom P4 multispectral RTK: A compact prosumer-level drone, equipped with custom 5 + 1 multispectral sensors (NIR 0.75–1.4 µm, Red-Edge, R, G, B channels and visible light), originally designed for precision agriculture and able to capture images at various wavelengths to facilitate the discovery of engravings inside the Balzi Rossi caves.
  • DJI Matrice 300 RTK with Zenmuse LIDAR L1: An enterprise-level drone capable of carrying heavy instrumentation like the Zenmuse L1, which integrates a Livox Lidar module, a high-accuracy IMU, and a camera with a 1-inch CMOS on a three-axis stabilized gimbal. It also features artificial intelligence for enhanced stability and the OcuSync video transmission system for rapid data transfer.
  • DJI Mini: An ultra-compact consumer-level drone with a 12 MP 4K camera, used to take pictures in very cluttered spaces like the top and inner parts of the caves.
These drones collected approximately 4000 images clustered in 11 groups and more than 100 million LIDAR points over three different flights, with an RTK precision of ±0.5 cm horizontally and ±1.5 cm vertically.
Multispectral data acquisition was tested across various wavelengths and modes. Aerial data acquisitions made during the day were affected by sunlight shadows that changed dynamically throughout the day, because the Balzi Rossi cliffs face south. Candidate findings discovered during aerial inspections were therefore analyzed in detail inside the caves during nightly data captures, using a custom fixed installation based on four high-precision sensors (NIR, R, G, B) geometrically aligned with a professional Panasonic Lumix DC-S5 and surrounded by coherent-light LED panels operating at a color temperature of 4500 K.
All of the collected data were processed in two distinct ways to meet the conservation and communication objectives of the Balzi Rossi Museum project.
For Preservation: The data processing for site conservation employed two main software tools:
  • DJI Terra V.4.0: This proprietary software from DJI facilitates mission planning, area mapping, and processing of data collected by drones. It allowed for the creation of a three-dimensional, georeferenced virtual model, as well as the autonomous generation of a 3D model from the data.
  • CloudCompare V 2.12.0: Used for processing point clouds and 3D models, this software turns a set of points with spatial coordinates into detailed three-dimensional models.
The photogrammetry provided an extremely precise point cloud. By establishing a reference scale based on the known distance between the railway tracks, we achieved a geolocated digital model with millimeter-level precision, ready for future analyses and allowing precise monitoring of changes over time.
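As a minimal sketch of this rescaling step (assuming, purely for illustration, the 1.435 m standard rail gauge as the known distance; the exact reference measurement used in the project is not stated here), a single known real-world distance is enough to convert an arbitrarily scaled photogrammetric model into metric units:
```python
# Hedged sketch: rescaling a photogrammetric point cloud to metric units
# using one known real-world distance. The 1.435 m standard rail gauge is
# assumed here purely for illustration; the exact reference distance
# measured between the tracks in the project is not stated.
import numpy as np

def rescale_to_metric(points, p_a, p_b, real_distance_m=1.435):
    """points: (N, 3) array in arbitrary model units.
    p_a, p_b: two picked points whose true separation is known."""
    model_distance = np.linalg.norm(np.asarray(p_a) - np.asarray(p_b))
    scale = real_distance_m / model_distance          # model units -> metres
    return np.asarray(points) * scale, scale

# Hypothetical example: the rails appear 0.287 model units apart
cloud = np.random.rand(1000, 3)
scaled_cloud, s = rescale_to_metric(cloud, [0.0, 0.0, 0.0], [0.287, 0.0, 0.0])
print(f"scale factor: {s:.3f} m per model unit")
```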
For Communication: The second process, focused on communication, aimed to produce the VR experience. The data processed with the previous software were not directly compatible with the Unreal Engine graphics engine used for VR. Moreover, most of the data were superfluous for the required 3D model. Therefore, we selected RealityCapture (RC), a photogrammetry software package specialized in producing 3D models, which was acquired by Epic Games in 2021 and is thus fully integrated with Unreal Engine. Not all of the previously acquired photos were usable, as the set included multispectral images and images covering a wider area than the one selected for the model. Therefore, 955 photos were selected from the original set, allowing the best result for our objectives. The software created a cloud of about 3 billion 3D points, optimized for use with Unreal Engine.

3.2. First Iteration of the Balzi Rossi VR Experience

The first version of the Balzi Rossi VR Experience (Figure 1) was developed using the Unreal Engine 4 graphics engine; it included the “Caviglione” and the “Florestano” caves. The archaeological site of Balzi Rossi is one of the most important Paleolithic complexes in Europe and, between the 19th and 20th centuries, several caves were excavated there.
The first step was to import the 3D reconstruction produced by the photogrammetry in RC, as previously mentioned. The point cloud was converted into a polygonal mesh of around 250 million triangles. Due to the limited capabilities of Unreal Engine 4 and of the hardware at our disposal, it was then necessary to decrease the polygon count of the mesh, bringing it down to 250,000 triangles. The reduction was balanced by high-definition texturing, resulting in a model of sufficiently good quality for the purpose of the project at this stage (Figure 2 and Figure 3).
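As an illustration of this decimation step only (the project performed it within RealityCapture; the sketch below uses the open-source Open3D library, hypothetical file names, and the 250,000-triangle budget quoted above), quadric-error decimation reduces a dense photogrammetric mesh to the target polygon count:
```python
# Illustrative sketch of the polygon-reduction step using Open3D's quadric
# decimation. The project performed this step inside RealityCapture; the
# file names below are hypothetical, and the target count mirrors the
# 250,000-triangle budget quoted in the text.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("balzi_rossi_full.obj")    # hypothetical export
print(f"input triangles: {len(mesh.triangles)}")             # ~250 million in the paper

decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=250_000)
decimated.compute_vertex_normals()                            # normals for shading/export
o3d.io.write_triangle_mesh("balzi_rossi_250k.obj", decimated)
```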
We started with an Unreal 4.27 VR Template, which already has basic interaction, a teleport motion system, forward rendering, and MSAA anti-aliasing implemented by default, settings which are particularly relevant for VR [13].
The 3D model was then imported into it. However, the model was still too complex to be displayed in real time, so it was divided into 25 pieces. Once the model was imported, it was necessary to create a level in which to place it. A number of assets were then added to the level, including an atmospheric system and a landscape simulating a beach, to evoke an environment similar to the Balzi Rossi area, even though at this stage there was not yet a faithful reproduction of the real landscape apart from the cliff. The base area of the level was composed of 24 square planes, each 100 m per side, creating an area of 600 × 400 m on which the various sections of the level were built. After modeling the beach, a flying platform was added about 200 m from the shore, which acted as a starting base and a panoramic viewpoint. To build the sea, the free asset “WaterMaterial” from the Epic Games Marketplace, containing a collection of assets for crafting rivers, lakes, and seas, was employed. By combining various meshes from this package, a vast expanse of seawater was created, spanning approximately 800 m, to the extent that the water’s edge remains unseen in virtual reality. To set up the assets within the level, a selection was made among the available materials, and the “M_Ocean_Cheaper” material was chosen because it was less complex and had less impact on the overall performance of the experience. Despite the material’s name, a consistent sea was obtained by using vertex paint within the material: in this case, the wave motion was attenuated where the water meets the beach and progressively increased and varied moving away from the shore.
The next step was to make the map navigable. To do this, a series of navigation volumes (NavMeshBoundsVolumes) were positioned, which outlined the areas where teleportation was allowed. Volumes were also positioned to prevent teleportation in areas where players were not supposed to go. The cliff’s collisions were then created using simplified collision volumes to allow for better performance.
A menu was then added, which allowed two actions: “Restart” to return to the initial level and “Real Life” to close the simulation.
One of the objectives of this project is to highlight the rock art of the archaeological site and show it in a completely new way. Due to the passage of centuries and the removal of the sediments once present in the caves, some engravings are now located well above the current ground level (about 5 m up), making it difficult to observe them closely.
Using very high-definition images taken during a photography session with controlled and uniform light, decals were created: the images were applied as textures that rest directly on the mesh and follow its shape. Decals are a type of material that can project properties such as texture, color, roughness, and normals onto other meshes in the scene, following the shape of the surface on which they are applied. Decals can be used to add details, such as dirt, damage, or graffiti, to the underlying surfaces; in this case, they were used to emulate the appearance of the engravings following the shape of the mesh.
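As a minimal, engine-agnostic sketch of the projection a decal performs (generic planar-projection math for illustration, not Unreal’s actual decal implementation; all names and values are hypothetical), vertices falling inside the projector volume receive UV coordinates from their position on the projection plane, so a flat engraving photograph wraps onto the cliff surface:
```python
# Hedged sketch of the planar projection that a decal conceptually performs:
# points inside the projector volume receive UVs from their coordinates on
# the projection plane, so a flat engraving photo follows the cliff surface.
# Generic math for illustration only, not Unreal's decal implementation.
import numpy as np

def project_decal_uv(points, origin, right, up, size=(1.0, 1.0)):
    """points: (N, 3) mesh vertices; origin/right/up define the decal plane;
    size: decal width/height in metres. Returns UVs and a mask of hit points."""
    rel = points - origin
    u = rel @ right / size[0] + 0.5
    v = rel @ up / size[1] + 0.5
    mask = (u >= 0) & (u <= 1) & (v >= 0) & (v <= 1)
    return np.stack([u, v], axis=1), mask

verts = np.random.rand(500, 3) * 5.0                  # hypothetical cliff vertices
uvs, hit = project_decal_uv(verts,
                            origin=np.array([2.5, 2.5, 2.5]),
                            right=np.array([1.0, 0.0, 0.0]),
                            up=np.array([0.0, 0.0, 1.0]),
                            size=(2.0, 1.5))
print(f"{hit.sum()} vertices receive the engraving texture")
```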
These images have been positioned as similarly as possible to how they are found on the real site. To improve the view of the engravings, a ladder with a platform has been positioned to reach the height of the engravings. Finally, signs have been added as another element that improves the experience and highlights the engravings, which are difficult to distinguish due to their nature.
Based on this first level, two additional levels were created: the first consisted of a reconstruction of the Balzi Rossi cliff in the Pleniglacial phase in the MIS 7-4 (Figure 4); the second consisted of an embellished but less archaeologically correct version, aimed at showing how the demo could be improved.
During the glaciation, the sea level was much lower than today, so the coastline lay about 2 km seaward of its current position, creating a large plain in front of the cliff. Furthermore, the landscape that is now coastal Mediterranean was at the time more similar to Alpine or Nordic landscapes. Pollen analyses from several excavations conducted in the Balzi Rossi caves were used as indicators of vegetation type and cover.
For the reconstruction, we shifted the coastline by only 200 m rather than the full 2 km, enough to give an idea of the setting while avoiding a more precise reconstruction that would have severely affected the performance of the VR experience. In addition, the textures were modified to simulate a more glacial landscape and all the human constructions visible on the cliff were removed. The embellished version was created as a very similar copy of the original, but with some adjustments that made it more appealing to look at: the placeholder cubes were replaced by rock models, the connection between the beach and the path was replaced by a series of rocks, many areas where the vegetation looked extremely artificial were removed, and the path to follow was rebuilt as a stone bridge.
All levels are accessible through interactions from the observation platform. The work lasted more than a month and a half in total, and it was presented on 10 February 2022 as part of the commemoration for the centenary of the death of Prince Albert I.

3.3. Second Iteration of the Balzi Rossi VR Experience

As of February 2023, following the release of Unreal Engine 5 and its plethora of innovative features, the need emerged to develop a second iteration of the application, aiming to surpass the visual excellence and fidelity of its predecessor. A prominent hurdle in the first version was the need to segment the cliff model and confine the polygon count to a quarter-million in order to streamline the importation process into the game engine and secure adequate performance for virtual reality. The introduction of Unreal Engine 5 opened new avenues to elevate the Balzi Rossi VR experience, notably through Nanite technology, enabling a significant enhancement in the cliff model’s quality and in various other facets of the VR application.
Nanite represents a revolutionary rendering paradigm, markedly augmenting the engine’s capacity to generate intricate and detailed 3D environments [14]. This virtualized geometry system empowers creators to fabricate highly detailed assets unshackled from conventional polygon budget limitations. Nanite facilitates the rendering of immense geometric detail by streaming and processing only visible specifics at any given time, achieved by aggregating triangles into clusters. It enables developers to actualize cinema-grade visuals in real time, utilizing high-resolution source art composed of millions, even billions, of polygons directly in the game engine. Despite this intricate detail, Nanite maintains efficient performance by optimizing and scaling geometry relative to the camera’s perspective, processing only essential data. This system potentially obviates the traditional levels of detail (LOD) approach for performance optimization. Additionally, Nanite allows for real-time, dynamic alterations in lighting, shadows, and other environmental variables, maintaining high fidelity without dependence on precomputed data.
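To make the idea concrete, the sketch below illustrates the general principle of error-driven detail selection that virtualized geometry relies on: render the coarsest representation whose projected geometric error stays below a pixel threshold. All numbers and structures are hypothetical and greatly simplified; this is not Nanite’s actual algorithm or data layout.
```python
# Greatly simplified, hypothetical sketch of error-driven cluster-level
# selection, the idea underlying virtualized geometry: pick the coarsest
# level whose projected error is below a pixel threshold. Not Nanite's
# actual algorithm or data structures.
from dataclasses import dataclass

@dataclass
class ClusterLevel:
    triangles: int           # triangles at this level of the hierarchy
    geometric_error: float   # world-space error in metres vs. full detail

def pick_level(levels, distance_m, px_per_m_at_1m=1000.0, max_error_px=1.0):
    """Choose the coarsest level whose screen-space error is acceptable."""
    for level in sorted(levels, key=lambda l: l.triangles):        # coarse -> fine
        error_px = level.geometric_error * px_per_m_at_1m / max(distance_m, 0.01)
        if error_px <= max_error_px:
            return level
    return max(levels, key=lambda l: l.triangles)                  # fall back to finest

hierarchy = [ClusterLevel(128, 0.20), ClusterLevel(512, 0.05), ClusterLevel(2048, 0.01)]
for d in (5.0, 50.0, 500.0):
    chosen = pick_level(hierarchy, d)
    print(f"distance {d:5.0f} m -> render {chosen.triangles} triangles")
```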
To harness Nanite, a transition from forward to deferred rendering was necessary, as the former does not support Nanite’s functionalities [15]. Although the light pass with deferred rendering is computationally more intensive than with forward rendering, Nanite’s utilization still fostered significant performance gains. Given Nanite’s exceptional rendering optimization and the consequent substantial savings in graphics resources, it was decided to increase the polygon count of the cliff model, aspiring to superior graphic rendering. The digital photogrammetry process was resumed with RealityCapture, initially yielding a cliff model comprising 60 million triangles. Subsequent trials revealed that, beyond a certain polygon threshold, further increases had only a marginal effect on the rendered image. Consequently, a five million-polygon model was exported from RC and imported into the engine. This model, with its higher polygon count, was markedly more detailed than the previous 250,000-polygon version, rendering it significantly more photorealistic.
During the initial importation attempt, an issue in the conversion of the mesh into Nanite clusters caused Unreal to shut down abruptly. After multiple resizing trials in external programs, it became evident that the error stemmed from the exceedingly large dimensions of the mesh. Scaling the model down by a factor of 100 subsequently allowed a smooth import into the engine. As commonly observed after 3D model generation via photogrammetry, the imported mesh appeared ‘dirty’, with numerous unwanted parts. For the cleanup process, Unreal’s internal modeling tool was employed, which adeptly handles ordinary mesh optimization operations.
Nanite’s real-time streaming ensures fluid rendering despite the high polygon count, conserving significant hardware resources for the cliff rendering. This led to increased creative liberty in level design, providing a higher computational budget for incorporating distinctive landscape elements. A primary limitation of the first version of the Balzi Rossi VR experience was landscape constraints: nearly all hardware resources were consumed for cliff rendering, leaving a scant budget for level design. This resulted in a complete absence of the coastal strip, including museum buildings, plants, rocks, and other natural features. For level construction, free assets from Quixel Megascans and the Epic Games Marketplace were utilized.
For the coastal part, the game map was modeled using Unreal’s Landscape mode to achieve shapes, colors, and hues akin to reality. In the landscape material, a function blending two different textures was used to minimize tiling as much as possible. For the construction of two staircases, a walkway crossing the railway, and a small stone utility shed (Figure 5), the meshes and their materials were modified, also using the modeling tools available in Unreal.
Additionally, for a more visually seamless integration of the meshes into the landscape, Vertex Paint and Runtime Virtual Texturing tools were employed. Streaming Virtual Texturing (SVT), an alternative method in Unreal Engine 5 for streaming textures from disk, offers several advantages and some disadvantages compared to the traditional mipmap-based method [16]. The latter conducts an offline analysis of UV material usage and decides at runtime which MIP levels of a texture to load based on visibility and object distance from the observer. This can be limiting because the streaming data considered are the MIP levels of the native-resolution textures: using high-resolution textures can significantly impact performance and memory when a higher MIP level is loaded. Additionally, the CPU makes decisions on MIP texture streaming using object visibility and culling settings. Here, visibility is handled conservatively, meaning the system is more likely to load an object to avoid sudden appearances (popping) as the player moves. Therefore, even if only a small part of an object is visible, the entire object is considered visible, and all of its associated textures become candidates for streaming.
In contrast, the SVT system streams only the texture parts that are actually visible to the user. This is achieved by dividing all MIP levels into small, fixed-size sections (tiles) whose visibility is determined by the GPU from the observer’s perspective; consequently, when Unreal considers an object visible, the GPU is instructed to load only the required tiles into a GPU memory cache. Regardless of the texture size, only the tiles that are actually visible are considered; the GPU calculates their visibility using the Z-buffer, ensuring that texture streaming occurs only for the parts relevant to certain pixels on the screen. Runtime Virtual Textures (RVT), which were used in the Balzi Rossi level to better integrate the meshes with the landscape, operate similarly to SVT but differ in some specifics [17]: here, the texture texels are generated in real time by the GPU rather than pre-calculated and stored on disk as in SVT. RVT is designed for textures that need to be generated on demand, such as procedural textures or landscape materials with multiple layers.
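The difference in resident memory between the two strategies can be sketched with simple arithmetic (the texture and tile sizes below are hypothetical illustration values, not the project’s settings):
```python
# Hedged sketch contrasting the two streaming strategies described above:
# whole-MIP streaming keeps every texel of the chosen MIP resident once the
# object is visible, while virtual texturing keeps only the sampled tiles.
# Texture size and tile size are hypothetical illustration values.
TEX = 8192          # native texture resolution (texels per side)
TILE = 128          # virtual-texture tile size (texels per side)

def mip_streaming_cost(mip_level):
    side = TEX >> mip_level
    return side * side                     # texels resident for that MIP

def virtual_texturing_cost(visible_tiles):
    return visible_tiles * TILE * TILE     # only sampled tiles are resident

# An object filling a small part of the screen might touch ~40 tiles of MIP 0
print("MIP 0 fully streamed :", mip_streaming_cost(0), "texels")
print("40 visible VT tiles  :", virtual_texturing_cost(40), "texels")
```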
For the insertion of plants and rocks (Figure 5), Unreal’s Foliage mode was used, ensuring better asset management; notably, all copies of a mesh painted as foliage are rendered as instances of a single mesh, costing far less than positioning each mesh individually from the content browser. To achieve a minimum of 60 frames per second (FPS), extensive work on LOD optimization and mesh simplification was necessary. Instead of using four or five LOD levels, only two were employed: a level 0 with a mesh not exceeding 2500–3000 triangles and a level 1 with a 2D billboard image always oriented towards the observer’s camera. Additionally, the lightmap resolution was reduced from 8 to 4, and dynamic shadows were deactivated, retaining only static ones.
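A minimal sketch of this two-level scheme follows (the screen-size threshold, field of view, and object radius are hypothetical values): the full mesh is used while an instance covers enough of the view, and the billboard takes over beyond that.
```python
# Hedged sketch of a two-level foliage LOD scheme: full mesh while an
# instance covers enough of the screen, a camera-facing billboard otherwise.
# Threshold, FOV, and radius are hypothetical illustration values.
import math

def screen_size(bounds_radius_m, distance_m, fov_deg=90.0):
    """Approximate fraction of the vertical view covered by the object."""
    half_fov = math.radians(fov_deg) / 2.0
    return bounds_radius_m / (distance_m * math.tan(half_fov) + 1e-6)

def select_lod(bounds_radius_m, distance_m, billboard_threshold=0.05):
    if screen_size(bounds_radius_m, distance_m) > billboard_threshold:
        return "LOD0 mesh (~2500-3000 tris)"
    return "LOD1 billboard"

for d in (3.0, 15.0, 80.0):
    print(f"pine at {d:4.1f} m -> {select_lod(1.5, d)}")
```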
Moving to the subject of lighting, Lumen and any other form of global illumination were disabled to maintain acceptable performance. Lumen, a new feature of Unreal Engine 5 capable of generating real-time light reflections using ray tracing, is too resource-intensive for VR rendering. As detailed in the official Unreal documentation [18], to generate consistent shadows, Nanite-controlled meshes must work in combination with light sources set to “movable”; consequently, the level’s directional light and sky light actors were configured this way. Since the scene is planned around a single fixed light source, the sun, and a minimal number of dynamic elements, the adopted approach relied on precomputed lighting wherever real-time lighting was not required.
As previously mentioned, for the Balzi Rossi VR experience a target of no less than 70 FPS was set to ensure a smooth, enjoyable view and to prevent motion sickness [19]. After adding and optimizing all landscape elements, setting LODs, and employing Nanite, a problem with FPS and particularly with frame time was encountered: the average FPS hovered around 60, but the frame time was highly unstable, with frequent and noticeable stuttering. After trying various solutions, it was concluded that temporal upscaling was appropriate; temporal upscalers are algorithms that analyze data from previous and current frames, together with motion vectors, to produce an image at a higher resolution than the one rendered. Temporal upscalers share the same general goal but differ in the methodology by which they achieve it. Both Unreal’s proprietary TSR (Temporal Super Resolution) and AMD’s better-known FSR (FidelityFX Super Resolution) reconstruct higher-resolution images essentially by accumulating and reprojecting data from previous frames. In contrast, Nvidia’s DLSS (Deep Learning Super Sampling) uses AI deep learning algorithms to upscale frames: DLSS employs a neural network pre-trained on thousands of images to learn how to upscale images correctly [20]. Additionally, DLSS’s efficiency is boosted by running on dedicated AI processing units on the GPU, called Tensor Cores. As recently discussed [21], neither Unreal’s TSR nor AMD’s FSR is comparable to DLSS, which can achieve substantial performance boosts while preserving excellent image quality. For these reasons, DLSS was implemented in the Balzi Rossi VR experience, gaining a stable frame rate in all situations.
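The arithmetic behind this decision can be sketched as follows (the per-eye output resolution is a hypothetical value, and DLSS “ultra performance” is taken as rendering internally at roughly one third of the output resolution per axis before upscaling):
```python
# Hedged arithmetic sketch of the frame budget and the pixel savings that a
# temporal upscaler provides. The per-eye output resolution is a hypothetical
# value; the 1/3-per-axis internal scale approximates an "ultra performance"
# upscaling mode.
TARGET_FPS = 72
frame_budget_ms = 1000.0 / TARGET_FPS
print(f"frame budget at {TARGET_FPS} FPS: {frame_budget_ms:.1f} ms")

out_w, out_h = 1832, 1920          # hypothetical per-eye output resolution
scale = 1.0 / 3.0                  # approximate ultra-performance axis scale
in_w, in_h = int(out_w * scale), int(out_h * scale)

stereo_output_px = 2 * out_w * out_h
stereo_internal_px = 2 * in_w * in_h
print(f"shaded pixels per frame: {stereo_internal_px:,} "
      f"instead of {stereo_output_px:,} "
      f"({stereo_internal_px / stereo_output_px:.0%} of native)")
```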
Following the remarkable results achieved through the adoption of Nanite and DLSS (Figure 6), there was an impetus to enhance and innovate the second iteration of the Würm glaciation level. It is crucial to note that during the Würm period the sea level was significantly lower, placing the coastline an estimated 2 km from the cliffs. To create a map of such extension while maintaining a stable frame rate, the World Partition (WP) feature of Unreal Engine 5 was employed. WP, an automatic data management and distance-based level streaming system, offers a comprehensive solution for managing large worlds. Unlike the previous approach of dividing large levels into sublevels, WP stores the world in a single persistent level divided into grid cells, automatically streaming these cells based on their distance from a streaming source [22].
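A minimal sketch of distance-based grid-cell streaming (cell size, loading range, and world extent below are hypothetical illustration values, not the project’s settings) shows why only a small fraction of a 2 km map needs to be resident at any time:
```python
# Hedged sketch of distance-based grid-cell streaming, the idea behind World
# Partition: the level is split into fixed-size cells and only cells within a
# loading range of the streaming source stay loaded. Cell size, range, and
# world extent are hypothetical illustration values.
import math

CELL_SIZE_M = 100.0
LOADING_RANGE_M = 400.0

def cells_to_load(player_xy, world_size_m=2000.0):
    loaded = []
    n = int(world_size_m // CELL_SIZE_M)
    for ix in range(n):
        for iy in range(n):
            center = ((ix + 0.5) * CELL_SIZE_M, (iy + 0.5) * CELL_SIZE_M)
            if math.dist(player_xy, center) <= LOADING_RANGE_M:
                loaded.append((ix, iy))
    return loaded

# Player standing near the cliff, with the 2 km glacial plain in front
print(f"{len(cells_to_load((1000.0, 50.0)))} of 400 cells loaded")
```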
For the cliffs, to provide winter characteristics such as snow and less saturated colors, various modifications were made to the mesh material. These included a slight desaturation of the albedo texture, obtained by altering brightness, RGB values, and saturation, as well as the application of snow to the cliffs using the VertexNormalWS node in combination with snow textures. This approach maintained the original albedo texture of the cliffs while applying the snow texture only to mesh vertices facing upward along the z-axis. The intensity at which snow is applied beyond a certain degree of inclination is controlled using the Lerp node, which performs a linear interpolation between two known values (A and B) with an alpha value ranging from 0 to 1.
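In plain math, the slope-driven blend can be sketched as follows (the thresholds are hypothetical values; this mirrors the material-graph logic rather than reproducing the actual material):
```python
# Hedged sketch of a slope-driven snow blend: the z component of the
# world-space normal is remapped into a 0-1 mask that drives a linear
# interpolation between rock albedo and snow. Thresholds are hypothetical.
import numpy as np

def snow_blend(rock_rgb, snow_rgb, normal_ws, start=0.55, full=0.85):
    """normal_ws: (N, 3) unit normals; start/full: normal.z values where
    snow coverage begins and where it becomes complete."""
    up = normal_ws[:, 2]
    alpha = np.clip((up - start) / (full - start), 0.0, 1.0)   # blend mask
    return rock_rgb * (1.0 - alpha[:, None]) + snow_rgb * alpha[:, None]

normals = np.array([[0.0, 0.0, 1.0],     # flat ledge    -> full snow
                    [0.0, 0.7, 0.714],   # gentle slope  -> partial snow
                    [1.0, 0.0, 0.0]])    # vertical wall -> bare rock
rock = np.full((3, 3), 0.35)             # desaturated cliff albedo
snow = np.full((3, 3), 0.95)
print(snow_blend(rock, snow, normals))
```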
To recreate a winter environment (Figure 7) with ongoing snowfall, the Exponential Height Fog and Sky Atmosphere components were properly configured. The former creates a fog system with higher density in lower altitude areas and vice versa, managing two colors: one for the hemisphere facing the main directional light and one for the opposite side. The Sky Atmosphere component simulates light dispersion through the atmosphere, considering factors like sky color and directional light, and includes Mie scattering and Rayleigh scattering effects for dynamic sky simulation [23].
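The fog component’s behaviour follows an exponential density law with altitude, which can be sketched as below (coefficients are hypothetical illustration values, not the component settings used in the project):
```python
# Hedged sketch of the density law behind an exponential height fog: density
# decays exponentially with altitude, so the low glacial plain sits in thicker
# fog than the cliff top. Coefficients are hypothetical illustration values.
import math

def fog_density(height_m, base_density=0.05, height_falloff=0.02, base_height_m=0.0):
    return base_density * math.exp(-height_falloff * (height_m - base_height_m))

for h in (0.0, 20.0, 80.0):       # plain, mid-cliff, cliff top
    print(f"height {h:5.1f} m -> fog density {fog_density(h):.4f}")
```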
For the snowflakes, a Niagara system with modified textures was created to simulate real snow instead of simple white circles. This particle system was attached to the VR pawn, so the emitter created particles locally, maintaining good performance and graphic rendering. The same optimizations applied to the foliage in the main level were used for the forest, but with fewer elements, allowing the retention of dynamic shadows.
The movement system was enhanced and modified in the latest version. Initially, the basic system provided by Unreal was used, allowing movement through teleportation to minimize motion sickness. In the current iteration, this system has been completely redeveloped to incorporate smooth, continuous movement using the analog sticks, offering users greater freedom of movement and exploration despite the potential increase in motion sickness. This was accomplished by converting the VR player pawn into a player character, ensuring a more robust structure that prevents passing through objects, and the movement programming was updated to incorporate continuous motion. Additionally, a tutorial was introduced to guide users in both methods of movement, a feature absent in the previous version.
For debugging and performance assessment during landscape construction and DLSS adoption, Unreal Insights was used. This tool provides highly detailed, frame-by-frame data on draw calls, CPU, GPU, and the resources used by each scene element. Another essential VR debugging tool was Oculus Debug Tool, which provides an overlay in the head-mounted display (HMD) containing basic in-game performance information; it was used to monitor FPS and hardware usage in the packaged executable of the experience, where Unreal’s tools are difficult to use.
Another major improvement over the first version was making the experience multi-language, with tutorials and infographics available in Italian, English, and French. Italian and French were chosen as the primary languages of the project, given the archaeological site’s position near the border between Italy and France, while English was adopted as a lingua franca to facilitate broader international engagement. Our team includes fluent speakers of these languages, who validated the translations.

3.4. User Experience

The Balzi Rossi VR experience presents a user interface crafted to ensure inclusivity and accessibility, thereby resonating with a broad spectrum of individuals possessing varying degrees of virtual reality proficiency. The interface offers two distinct modalities of locomotion: teleportation, which caters to individuals susceptible to motion-induced discomfort, and continuous movement, tailored for users seeking an immersive exploration experience.
The foundational principle of the project is accessibility, highlighting the imperative to facilitate uninterrupted access to the cliff caves, which frequently remain inaccessible owing to variables such as adverse weather conditions or ongoing research activities. Furthermore, the initiative is specifically designed to accommodate individuals with restricted mobility.
The digital experience is crafted to stand independently, providing a rich, solitary exploration while also serving as an enhancement to the physical visit. It provides opportunities that are otherwise unattainable: it facilitates a detailed and straightforward examination of the engravings, situated at a height of 5 m, with an option to illuminate them; it further provides the opportunity to traverse a virtual reconstruction of the Würm glacial period.
The user interface adopts a minimalist style intentionally to amplify immersion and realism. The primary interface displays the hands, indicating the position of the controllers, which are consistently visible. An additional interface appears when the game menus are “summoned”, where it is also possible to select by pointing with the hand that is not holding the menu, creating a laser indicator that assists in selection. The same indicator is used for language selection (Figure 8). The final interface available to the user is the teleport interface, which displays the teleportation destination (Figure 9 and Figure 10).
Participants engage in a variety of activities, available across the three levels of the experience:
  • Current Exploration of the Balzi Rossi area: This mode enables viewing the area as it appears today, with a particular focus on the Caviglione cave, where the engravings are located. Inside the cave, an elevator enables visitors to reach the level of the engravings to observe them closely and accurately. Also located at this level is the starting point: a platform where users can change languages, travel to different levels, and have a panoramic view of the cliff.
  • Post-Würm Glaciation Historical Reconstruction: In this section, users can immerse themselves in an accurate reconstruction of the area at the end of the Würm glacial era. They have the opportunity to move to the coastline, which extended 2 km from the current position according to archaeological data. A textbox located at this level provides information about the reconstruction.
  • Interactive Tutorial: Composed of two parts, the tutorial guides users through a series of intuitive textboxes and minigames designed to teach navigation within the experience.

3.5. Main Challenges and Solutions

This investigation was methodically partitioned into two critical phases, each designed to overcome specific technical obstacles associated with the refinement of polygonal meshes within virtual environments, with particular emphasis on the optimization of the cliff mesh. In the first phase, intensive efforts were directed towards the polygonal decimation process, facilitating smooth integration into the game engine. This procedure led to a marked reduction of the polygon count to 250,000 and made it necessary to divide the mesh into several segments, a measure compelled by the computational limitations of the hardware available at the time. The cliff’s dominant presence and its recurrent depiction in the foreground required it to be rendered continuously by the engine, thereby significantly taxing the GPU’s computational resources. This scenario posed a formidable challenge in a VR setting, where the scene must be rendered twice, once per eye, to achieve the stereoscopic illusion. As explicated by Obradović et al. [9], realizing such a complex task demanded comprehensive optimization beyond the initial capabilities of our small team; the result was therefore attained through a series of compromises and considerable effort.
The introduction of Unreal Engine 5 alongside its Nanite technology represented a pivotal shift, albeit one ushering in fresh challenges that required inventive resolutions. The primary objective was to significantly increase the polygon count to enhance the digital representation of the cliff by utilizing Nanite technology. For the second iteration of the VR experience, a mesh comprising 5 million polygons was employed. Preliminary assessments did not indicate any performance detriments with meshes containing 50 or 60 million polygons; nevertheless, the feasibility of employing such high-density meshes with Nanite merits further exploration to ascertain its boundaries comprehensively. It is pertinent to note that Nanite does not support forward rendering, which is conventionally advocated for VR experiences. Consequently, the main issue was to secure a consistent 60 FPS throughout the experience, thereby mitigating the risk of VR-induced discomfort. To ensure a stable frame rate of at least 60 FPS, we relied on Nvidia graphics cards featuring DLSS technology. By setting DLSS to ‘ultra performance’ mode, we not only achieved our target but also surpassed it, reaching up to 72 FPS on the Meta Quest 2. This allowed our small research team to create a stable and visually appealing experience.

4. Conclusions and Future Works

The Balzi Rossi VR experience has been in use at the Balzi Rossi Museum since November 2023. Visitors can explore the ancient ruins in virtual reality, learning more about the area’s history. The experience has been designed to be interactive, allowing visitors to explore and take photos of the ruins, as well as educational, providing information about the area’s history. The final version of the experience targets PC VR because it is currently not possible to take advantage of Nanite in standalone mobile VR. To avoid the cable issues typical of the first generation of HMDs released from 2016 onward, we chose the Meta Quest 2 with Air Link for the final setup, leaving open the possibility of connecting the HMD to a dedicated workstation via Meta Quest Link.
The Balzi Rossi VR experience in this form remains open to further development and corrective maintenance, and we are already investigating possible new features to make the experience more immersive, which could involve MetaHuman and generative AI.
The Balzi Rossi Museum project provides an overview of how new technologies can be employed both in communication, through the immersive experience of the Balzi Rossi VR, and in conservation, by creating a digital twin and using a georeferenced point cloud and advanced photogrammetry techniques. However, our methodology goes beyond the Balzi Rossi Museum project; it is designed to be flexible and adaptable. Its purpose is not only to enhance and strengthen the archaeological site in question but also to demonstrate the transformative potential of innovative technologies in the cultural heritage sector. Our commitment highlights the versatility of these methodologies when applied to large structures, such as cliffs or archaeological sites, ensuring a level of scalability and quality equal, if not superior, to that for smaller-sized objects. To improve the experience for all the museum visitors, another future goal is to collect data on the main linguistic groups of visitors in order to add more languages.

Author Contributions

Conceptualization, S.I.; Methodology, S.I.; Software, S.I., M.S. and L.M.; Validation, S.I., C.P. and D.Z.; Formal analysis, S.I. and M.P.; Investigation, M.P.; Resources, C.P. and M.P.; Data curation, M.P.; Writing—original draft, S.I., M.S. and L.M.; Writing—review & editing, S.I., M.S., L.M., C.P., D.Z., M.P., A.T. and G.V.V.; Supervision, S.I., A.T. and G.V.V.; Project administration, A.T. and G.V.V.; Funding acquisition, A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Ecosystem “RAISE—Robotics and Artificial Intelligence for Socio-Economic Empowerment” (NextGenerationEU code ECS 00000035) PNRR-M4C2-I1.5.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to Museo Preistorico dei Balzi Rossi e Zona Archeologica.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Azuma, R.T. A Survey of Augmented Reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  2. Rojas-Sánchez, M.A.; Palos-Sánchez, P.R.; Folgado-Fernández, J.A. Systematic Literature Review and Bibliometric Analysis on Virtual Reality and Education. Educ. Inf. Technol. 2023, 28, 155–192. [Google Scholar] [CrossRef]
  3. Sattar, M.; Palaniappan, S.; Lokman, A.; Shah, N.; Khalid, U.; Hasan, R. Motivating Medical Students Using Virtual Reality Based Education. Int. J. Emerg. Technol. Learn. 2020, 15, 160. [Google Scholar] [CrossRef]
  4. Chong, H.T.; Lim, C.K.; Rafi, A.; Tan, K.L.; Mokhtar, M. Comprehensive Systematic Review on Virtual Reality for Cultural Heritage Practices: Coherent Taxonomy and Motivations. Multimed. Syst. 2022, 28, 711–726. [Google Scholar] [CrossRef]
  5. Theodoropoulos, A.; Antoniou, A. VR Games in Cultural Heritage: A Systematic Review of the Emerging Fields of Virtual Reality and Culture Games. Appl. Sci. 2022, 12, 8476. [Google Scholar] [CrossRef]
  6. Noh, Z.; Sunar, M.S.; Pan, Z. A Review on Augmented Reality for Virtual Heritage System. In Learning by Playing. Game-Based Education System Design and Development; Springer: Berlin/Heidelberg, Germany, 2009; pp. 50–61. [Google Scholar] [CrossRef]
  7. Bekele, M.; Pierdicca, R.; Frontoni, E.; Malinverni, E.; Gain, J. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. J. Comput. Cult. Herit. 2018, 11, 7. [Google Scholar] [CrossRef]
  8. Haydar, M.; Roussel, D.; Maidi, M.; Otmane, S.; Mallem, M. Virtual and Augmented Reality for Cultural Computing and Heritage: A Case Study of Virtual Exploration of Underwater Archaeological Sites (Preprint). Virtual Real. 2011, 15, 311–327. [Google Scholar] [CrossRef]
  9. Obradović, M.; Vasiljević, I.; Đurić, I.; Kićanović, J.; Stojaković, V.; Obradović, R. Virtual Reality Models Based on Photogrammetric Surveys—A Case Study of the Iconostasis of the Serbian Orthodox Cathedral Church of Saint Nicholas in Sremski Karlovci (Serbia). Appl. Sci. 2020, 10, 2743. [Google Scholar] [CrossRef]
  10. Bruno, F.; Bruno, S.; De Sensi, G.; Luchi, M.-L.; Mancuso, S.; Muzzupappa, M. From 3D Reconstruction to Virtual Reality: A Complete Methodology for Digital Archaeological Exhibition. J. Cult. Herit. 2010, 11, 42–49. [Google Scholar] [CrossRef]
  11. Boboc, R.; Bautu, E.; Florin, G.; Popovici, N.; Popovici, D. Augmented Reality in Cultural Heritage: An Overview of the Last Decade of Applications. Appl. Sci. 2022, 12, 9859. [Google Scholar] [CrossRef]
  12. Bekele, M.; Champion, E. A Comparison of Immersive Realities and Interaction Methods: Cultural Learning in Virtual Heritage. Front. Robot. AI 2019, 6, 91. [Google Scholar] [CrossRef] [PubMed]
  13. Epic Developer Community. Forward Shading Renderer. Available online: https://dev.epicgames.com/documentation/en-us/unreal-engine/forward-shading-renderer-in-unreal-engine (accessed on 21 March 2024).
  14. Nanite Virtualized Geometry. Available online: https://docs.unrealengine.com/5.0/en-US/nanite-virtualized-geometry-in-unreal-engine/ (accessed on 30 January 2024).
  15. Forward vs. Deferred Shading. Available online: https://unrealartoptimization.github.io/book/pipelines/forward-vs-deferred/ (accessed on 4 February 2024).
  16. Streaming Virtual Texturing. Available online: https://docs.unrealengine.com/4.27/en-US/RenderingAndGraphics/VirtualTexturing/Streaming/ (accessed on 26 February 2024).
  17. Runtime Virtual Texturing. Available online: https://docs.unrealengine.com/5.0/en-US/runtime-virtual-texturing-in-unreal-engine/ (accessed on 28 February 2024).
  18. Movable Light Mobility. Available online: https://docs.unrealengine.com/5.0/en-US/movable-light-mobility-in-unreal-engine/ (accessed on 29 January 2024).
  19. Oculus Developers. Guidelines for VR Performance Optimization. Available online: https://developer.oculus.com/documentation/native/pc/dg-performance-guidelines/ (accessed on 7 March 2024).
  20. Watson, A. Deep Learning Techniques for Super-Resolution in Video Games. arXiv 2020. [Google Scholar] [CrossRef]
  21. Walton, J. AMD FSR vs. Nvidia DLSS: Which Upscaler Reigns Supreme? Available online: https://www.tomshardware.com/features/amd-fsr-vs-nvidia-dlss (accessed on 26 February 2024).
  22. World Partition. Available online: https://docs.unrealengine.com/5.0/en-US/world-partition-in-unreal-engine/ (accessed on 28 February 2024).
  23. Sky Atmosphere Component. Available online: https://docs.unrealengine.com/5.0/en-US/sky-atmosphere-component-in-unreal-engine/ (accessed on 28 February 2024).
Figure 1. Map of the Balzi Rossi Museum (WGS84 coordinates: 43°47′1.61″ N, 7°32′3.42″ E), and its location in Liguria region (bottom left).
Figure 2. Virtual view from Unreal Engine 4 Editor of the first iteration from an air platform above the sea, providing an unusual site perspective thanks to VR.
Figure 3. Comparison between a drone orthophoto of the Balzi Rossi cliff (left) and the first virtual reconstruction (right).
Figure 4. Virtual view from Unreal Engine 4 Editor of the first iteration from the aerial platform, with the sea replaced by land and weather conditions adjusted as presumed during the Würm glaciation.
Figure 5. View of the reconstructed staircases that lead to Caviglione and Florestano caves and foliage. First version (left) vs. second version (right).
Figure 6. Comparison of Balzi Rossi cliff between a drone orthophoto (left), the first virtual reconstruction (center), and the second virtual reconstruction (right).
Figure 7. Comparison between the first virtual reconstruction of the Balzi Rossi cliff as it could have appeared during the Würm glaciation (left) and the second reconstruction (right).
Figure 8. Language selection menu.
Figure 9. Screenshot that shows the teleport locomotion system in action: teleport destination is where the light blue disk is placed.
Figure 10. The interface of the movement selection menu and game menu.
