1. Introduction
The digital transformation of survey, modeling, and management processes for built heritage is redefining the tools that support the knowledge, conservation, and transmission of these assets. In this context, Heritage Building Information Modeling (HBIM) acts as an information infrastructure in which metric data, historical documentary sources, material characterizations, diagnostic results, and management contents converge in a queryable model [
1,
2,
3,
4]. Within this framework, Scan-to-HBIM workflows based on integrated three-dimensional acquisitions are now enhanced by the use of advanced texturing strategies and Physically Based Rendering (PBR) materials [
5,
6,
7]. Texture, in fact, is not merely a support for visual rendering: it can assume an informative function associated with the surface of the model, conveying chromatic, material, and conservation data. Recent studies show how high-resolution orthophotos, textures derived from photogrammetric data, and mappings consistent with parametric geometry can improve the readability of the as-is state and support the identification of degradation phenomena and constructive discontinuities, increasing the informational density of the model and reducing the distance between direct observation, survey data, and digital representation [
4,
8,
9,
10].
Despite these advances, the literature still presents differentiated and not yet fully consolidated approaches to the integration of the material component within HBIM workflows. In some cases, photogrammetric textures are projected onto parametric surfaces by means of metrically controlled orthophotos; in others, the generation and optimization of material maps are delegated to external software, confirming the limits that native BIM environments still present in the advanced management of texturing and PBR materials. In further applications, materials are employed mainly to guarantee visual continuity and interoperability toward real-time platforms, transferring predominantly basic appearance attributes. Overall, some recurring critical issues emerge: dependence on manual procedures, the limited scalability of processes, the difficulty in managing high-resolution datasets and, above all, the still partial integration of texture as a stable informational component of the model [
11,
12].
In parallel, the integration of HBIM into immersive and interactive visualization environments is fostering the development of hybrid pipelines in which information models, photogrammetric textures, and real-time graphic engines converge in digital ecosystems oriented toward consultation and operational support [
8,
10]. Such solutions show significant potential for planned conservation, cultural mediation, accessibility, and immersive public engagement [
13,
14,
15]. Some studies have also underlined how transfer toward XR and real-time environments requires the maintenance of semantic coherence and interoperability among geometry, materials, and metadata [
16,
17,
18]. However, there has been insufficient investigation into how photogrammetric texturing can be integrated into Scan-to-HBIM workflows not only as a visual improvement, but as a controlled, replicable, and interoperable documentary and informational component, capable of also accompanying the model in the subsequent phases of visualization and consultation in a real-time environment.
It is around this issue that the present contribution is positioned. The research addresses the following question: to what extent can the integration of photogrammetric texturing into a Scan-to-HBIM workflow increase the informational content of the model, improve the recording of the material and conservation conditions of the asset, and keep this body of information interoperable in a real-time environment? Starting from this question, the research hypothesis is that photogrammetric texturing, if integrated according to controlled procedures within the workflow, does not operate solely as support for realistic rendering but contributes to increasing the level of informational development of the model, strengthening its capacity to record and convey the material and conservation conditions of the asset, as well as to support its immersive exploration [
19,
20,
21].
The originality of the work therefore does not lie in the use of texturing itself, already widely discussed in the literature, but in its interpretation and experimentation as an informational component of the HBIM model within an integrated Scan-to-HBIM-to-Real-Time-Rendering pipeline. The scientific contribution of the article consists of proposing and discussing a workflow aimed at verifying how photogrammetric texturing can simultaneously contribute to the representation of the as-is state, the organization of data, and the informational continuity between the HBIM model and immersive visualization environments. The experimentation takes as its case study the Fountain of the 99 Cannelle in L’Aquila, chosen for the richness of its sculpted mascarons, the complexity of its historical-constructive stratification, and the recognized identity value of the artifact at the urban and territorial scale [
16,
18,
21].
2. Materials and Methods
In this section, the methodology adopted for the documentation and digital modeling of the Fountain of the 99 Cannelle is illustrated. The process is structured into an integrated sequence of phases, which includes the definition of information requirements, the acquisition and processing of survey data, the development of the HBIM model, photogrammetric texturing, and subsequent geometric validation. This methodological framework is aimed at ensuring metric accuracy, informational coherence, and support for the activities of analysis, management, and enhancement of the artifact (
Figure 1).
2.1. From Monument to Data: Case Study and Definition of the Information Requirements
The Fountain of the 99 Cannelle (Fontana delle 99 Cannelle), located in the Rivera district of L’Aquila, is one of the most representative medieval civic monuments of central Italy, emblematic for the local community and strongly tied to the urban fabric. Built starting in 1272, to a design attributed to Tancredi da Pentima, it was conceived as a manifesto of the political unity of the refounded city, alluding, through its ninety-three stone masks and six spouts, to the ninety-nine castles that contributed to the formation of the contado of L’Aquila [
21]. The sum of the ninety-three masks and the six spouts in fact yields the symbolic number “99”, reflected in a marked seriality of elements and in a high frequency of detail, with direct implications for acquisition and modeling choices. The layout unfolds across three stone masonry sides in bichrome stone (white and pink), arranged according to an open trapezoidal plan toward Piazza San Vito, creating an intimate yet permeable space in relation to the surrounding urban fabric. The surfaces are organized into a continuous sculptural apparatus that alternates geometric motifs with anthropomorphic and zoomorphic masks, defining a rhythmic articulation of the elevation and reinforcing the symbolic dimension of the work. The system of cascading basins, articulated on staggered levels, integrates formal and functional solutions, highlighting the design attention to the hydraulic arrangement and documenting the continuity of use of the fountain up to the 20th century. Starting from the definition of the required level of information need (LOIN), assumed within the framework of the UNI EN ISO 19650 [
22] series and specified according to UNI EN ISO 7817-1:2024 [
23], the information collected within the digital container was structured to support the analysis and management of the asset through the reading of the state of conservation, enable the systematic querying of data, and feed real-time and immersive visualization environments (
Figure 2) [
24,
25,
26].
2.2. Acquisition and Management of Data
The Fountain of the 99 Cannelle was the subject of an investigation initially developed within the framework of a study on the medieval fountains of central Italy, with particular attention given to morphological aspects and the decorative apparatus. Subsequently, the research evolved into a methodological study aimed at the definition and verification of an integrated Scan-to-HBIM-to-Real-Time-Rendering workflow, oriented toward the management, documentation, and planning of interventions on the artifact, also with a view toward future restoration and maintenance activities. Within this framework, the survey campaign was not conceived as a simple routine monitoring activity but as a cognitive and experimental acquisition, aimed both at the construction of a geometric and material basis and at the evaluation of the effectiveness of the integration among different digital modeling techniques.
To this end, the monument was documented by means of an integrated survey carried out through terrestrial laser scanning (TLS) and Structure from Motion (SfM), from the ground and by UAV, in order to acquire a geometric basis and a high-definition description of the wall surfaces, the bichrome facing, and the mascarons [
7,
27].
The datasets were acquired and managed according to distinct but coordinated processes, maintaining as constant requirements scale coherence, control of the reference system, and compatibility with the subsequent HBIM phases [
28,
29].
The TLS point cloud was acquired with a FARO Focus M70 through a closed traverse of 23 stations (average mesh 0.62 × 0.62 cm) and registered in Leica Cyclone Register360 [
27]; the dataset was then exported in .e57 format and assumed as the metric reference. Terrestrial photogrammetry, conducted with a Sony Alpha camera (24 MP CMOS sensor) and lenses dedicated to detail and overall shots, reached resolutions up to 40 px/cm for the mascarons and 9–20 px/cm for the elevations. Finally, aerial photogrammetric acquisition required two distinct flight campaigns:
• The first was carried out in 2021, through the use of a DJI Mavic 2 Pro (20 MP), during which a total of 220 images were collected, at a resolution of about 2 px/cm, at two different altitudes (25 m and 120 m).
• The second, carried out in 2024 with a DJI Mini 3 (12 MP), was aimed at integrating the data relating to the horizontal surfaces. In this campaign, 675 images were collected and a flight altitude of 4 m was set.
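As a plausibility check on the reported ground resolutions, the nominal ground sample distance (GSD) of a nadir shot follows the standard pinhole relation. The sketch below uses nominal sensor parameters for the DJI Mavic 2 Pro (1″ sensor, 13.2 mm wide, 10.26 mm focal length, 5472 px image width), which are assumptions for illustration, not values taken from the survey report:

```python
def gsd_m_per_px(sensor_width_m, altitude_m, focal_m, image_width_px):
    """Nominal ground sample distance for a nadir shot (pinhole model)."""
    return (sensor_width_m * altitude_m) / (focal_m * image_width_px)

# Nominal DJI Mavic 2 Pro parameters, assumed for illustration:
gsd_25m = gsd_m_per_px(13.2e-3, 25.0, 10.26e-3, 5472)
px_per_cm = 0.01 / gsd_25m  # invert m/px to px/cm
print(f"GSD at 25 m: {gsd_25m * 1000:.1f} mm/px (~{px_per_cm:.1f} px/cm)")
```

The result (~1.7 px/cm at 25 m) is consistent with the "about 2 px/cm" reported for the 2021 campaign.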
Photogrammetric processing was carried out through RealityScan 2.0.1 software on 895 images. The integration between TLS and photogrammetry was constrained to a common reference system (local Cartesian UCS: local1—Euclidean, units in meters): for the TLS dataset, 9 Ground Control Points (GCPs) and 11 Control Points (CPs) were inserted, while for the photogrammetric dataset, 9 GCPs and 94 CPs were used, distributed on horizontal and vertical surfaces and marked on a variable number of images (3–10 per point). At the end of the procedure, the photogrammetric model was therefore roto-translated into the same reference system as the TLS point cloud, assumed as the reference [
7,
30,
31] (
Figure 3). The alignment coherence between the two datasets was verified in the open-source software CloudCompare v2.13.alpha by means of the Multi-scale Model-to-Model Cloud Comparison (M3C2) algorithm [
31,
32].
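As a conceptual illustration of such a coherence check, the sketch below computes plain nearest-neighbour distances between two toy clouds and summarizes them. The actual M3C2 algorithm is considerably more refined (signed, normal-oriented distances averaged over local scales), so this is a simplified stand-in, not a reimplementation:

```python
import math

def nearest_distances(source, reference):
    """Brute-force nearest-neighbour distance from each source point to a
    reference cloud; a crude stand-in for M3C2, which instead measures
    signed distances along locally estimated surface normals."""
    return [min(math.dist(p, q) for q in reference) for p in source]

def summarize(dists):
    """Mean and (population) standard deviation of a distance set."""
    n = len(dists)
    mean = sum(dists) / n
    sigma = math.sqrt(sum((d - mean) ** 2 for d in dists) / n)
    return mean, sigma

# Toy example: a copy of a small grid, offset vertically by 1 cm.
ref = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
src = [(x, y, z + 0.01) for (x, y, z) in ref]
mean, sigma = summarize(nearest_distances(src, ref))
print(f"mean = {mean:.4f} m, sigma = {sigma:.4f} m")
```

A uniform offset yields a constant distance and zero dispersion; in real data, localized dispersion peaks flag registration or acquisition problems.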
Once the unified point cloud had been obtained, it was processed in RealityScan 2.1.0 to generate a mesh composed of 101 million triangles and 50.6 million vertices, which was then textured by means of 80 textures at 16,384 × 16,384 px, with a texel size equal to 0.000712 m/px (0.712 mm/px, equivalent to ~14 px/cm) and surface coverage equal to 100% [
33]. The point cloud was subsequently decimated and classified into functional regions (Context, Square, and Fountain, the latter further subdivided into horizontal registers) through the use of ReCap Pro 2025 software. The regions were then exported as .RCS and recomposed into a single .RCP project to be imported into Revit 2025 software, in line with the requirements of data management and modeling in the BIM environment (
Figure 4) [
34].
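The reported texel size can be cross-checked with simple arithmetic; note that the total-area figure below is only an upper bound, since it assumes full utilization of every texture sheet:

```python
texel_m = 0.000712        # reported texel size (m/px)
tex_px = 16384            # texture side length (px)
n_textures = 80

px_per_cm = 0.01 / texel_m                  # ~14 px/cm, as reported
side_m = tex_px * texel_m                   # metres covered per texture side
total_area = n_textures * side_m ** 2       # upper bound: assumes 100% UV use
print(f"{px_per_cm:.1f} px/cm, {side_m:.2f} m per texture side, "
      f"at most ~{total_area:.0f} m2 of textured surface")
```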
2.3. HBIM Modeling
The entire modeling phase was managed through a preliminary process of environment organization: starting from the import of the point cloud, the same reference system as the survey was maintained, in order to preserve the geolocation of the data; the elevation levels and cutting planes were then defined in correspondence with the key elements of the fountain, to support the production of derived outputs [
35,
36]. Pursuing the same objective, the complex was organized according to a spatial and functional coding system oriented toward the traceability and univocal identification of information. The information was therefore organized into various levels:
• Spatial units: corresponding to the façades and context areas (F99-N, F99-E, F99-S, F99-O and F99-Cx);
• Sub-units: referring to the horizontal registers of the elevation;
• Technological units: in which building components (UE), systems (UI), artistic components (UA), decorative components (UD), and functional components (UF) are distinguished.
The integration between spatial subdivision, horizontal registers, and technological units generated a univocal coding of the components (ID), used for localization within the model, the construction of thematic sheets, and the structured querying of the dataset [
20,
35,
36] (
Figure 5).
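The coding logic can be sketched as a simple composition of the three levels; the separator and field order shown are illustrative assumptions, since the text does not specify the exact string format of the IDs:

```python
def component_id(spatial_unit, register, tech_unit, number):
    """Compose a univocal component ID from spatial unit, horizontal
    register, and technological unit. The concrete format is illustrative,
    not the project's actual convention."""
    return f"{spatial_unit}.R{register:02d}.{tech_unit}-{number:03d}"

# e.g. the 17th artistic component (UA) on register 2 of the north facade:
cid = component_id("F99-N", 2, "UA", 17)
print(cid)  # F99-N.R02.UA-017
```

Deterministic IDs of this kind make schedules, thematic sheets, and queries joinable on a single key.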
Entering the core of the operational modeling phase, a hybrid approach was adopted, aimed at combining operational efficiency and adherence to the survey data. The main architectural elements, such as masonry, basins, pavements, steps, and containment elements, were modeled by means of Revit native system families, while recurring or typologically defined elements, such as spouts, channels, molded elements, and ashlars, were developed as loadable families, equipped with type and instance parameters dedicated to the management of geometry, materials, and informational coding.
Particular attention was instead devoted to the mascarons, elements that strongly characterize the monument and are distinguished by marked formal and plastic complexity. Since these are stone elements with a high degree of irregularity, not reducible to regular parametric schemes, their restitution was addressed through the controlled integration of meshes within the BIM environment. This entailed a preliminary decimation in RealityScan to 5000 triangles, in order to reduce the computational load while maintaining the morphological readability of the elements.
Subsequently, in order to balance geometric fidelity and performance sustainability according to the different scales of observation, a three-level detail visualization strategy was implemented directly within the BIM environment (high, medium, low):
▪ High detail: the mesh was imported as geometry within a loadable family (Generic Model category), in order to preserve the rendering of material and chromatic data in close-up views and during documentation and dissemination phases.
▪ Medium detail: the mesh was managed through Dynamo (Visual Programming Language) by means of a node-based data-flow script that allowed decimation down to about 2500 triangles and automatic placement within the model [
37] (
Figure 6).
▪ Low detail: with an analogous workflow in Dynamo, the mesh was further decimated down to about 1500 triangles and correctly positioned for visualization in overall views and in work phases in which performance constitutes a priority.
The three versions (high-/medium-/low-detail) were finally nested within a container family, in which each mesh was associated with the corresponding visualization mode in the BIM environment [
18,
31,
38]. The integration of the meshes into the information structure was accompanied by coding and parametric association operations, so that each element could be queried.
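The container-family logic can be summarized as a small lookup keyed on the active detail level; the triangle budgets follow the values reported above, while the structure and names are illustrative:

```python
# Each mascaron container holds three mesh versions; the active view detail
# level resolves which one is displayed. Budgets follow the reported values.
MASCARON_LODS = {
    "high":   {"triangles": 5000, "source": "loadable family (Generic Model)"},
    "medium": {"triangles": 2500, "source": "Dynamo-decimated mesh"},
    "low":    {"triangles": 1500, "source": "Dynamo-decimated mesh"},
}

def resolve_lod(detail_level):
    """Return the mesh version matching the active view detail level."""
    if detail_level not in MASCARON_LODS:
        raise ValueError(f"unknown detail level: {detail_level}")
    return MASCARON_LODS[detail_level]

print(resolve_lod("medium")["triangles"])  # 2500
```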
In this sense, the Level of Information (LOI) was implemented through a set of shared parameters, aimed at systematically describing each component. The main information fields include the ID, the spatial and technological affiliation, the typology, the dimensional and material characteristics, the dating, the survey information, the state of conservation, and the links to external digital resources (URLs and QR codes). This parametric structure supports the production of thematic sheets, quality controls, and targeted queries, maintaining the alignment between geometric representation, informational content, and supporting documentation [
39,
40,
41].
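A minimal sketch of such a component sheet as a data structure; the field names mirror the informational fields listed above but are assumptions, not the project's actual shared-parameter names:

```python
from dataclasses import dataclass, field

@dataclass
class ComponentSheet:
    """Illustrative mirror of the shared-parameter (LOI) set described in
    the text; field names are assumptions for the sketch."""
    component_id: str
    spatial_unit: str
    tech_unit: str
    typology: str
    material: str
    dating: str
    conservation_state: str
    external_urls: list = field(default_factory=list)

sheet = ComponentSheet(
    component_id="F99-E.R01.UA-003",
    spatial_unit="F99-E",
    tech_unit="UA",
    typology="mascaron",
    material="white stone",
    dating="13th century",
    conservation_state="surface erosion, biological patina",
)
print(sheet.component_id, "->", sheet.conservation_state)
```

Keeping every record keyed on the coded ID preserves the alignment between geometry, informational content, and documentation that the text describes.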
2.4. Photogrammetric Texturing of the Information Model
Photogrammetric texturing represented a central phase of the adopted workflow, aimed at restituting the material and chromatic quality of the Fountain of the 99 Cannelle and at integrating the visual component within the information model [
42,
43]. Through the RealityScan 2.1.0 software, orthophotos were generated for each surface, subsequently subjected to a graphic optimization phase aimed at reducing reflections, accidental shadows, and chromatic alterations. The orthophotos were then processed in the open-source software Materialize 2017.4.3 for the generation of the main Physically Based Rendering (PBR) maps: Diffuse, Height, Normal, and Ambient Occlusion [
44,
45,
46] (
Figure 7).
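The Height-to-Normal derivation used by tools of this kind follows the standard gradient construction; below is a minimal sketch on a toy height map (the principle, not Materialize's actual implementation):

```python
import math

def height_to_normals(height, strength=1.0):
    """Derive per-pixel unit normals from a height map via central
    differences; `height` is a 2D list of floats, borders are clamped."""
    rows, cols = len(height), len(height[0])
    normals = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            dx = (height[y][min(x + 1, cols - 1)] - height[y][max(x - 1, 0)]) / 2.0
            dy = (height[min(y + 1, rows - 1)][x] - height[max(y - 1, 0)][x]) / 2.0
            n = (-dx * strength, -dy * strength, 1.0)
            norm = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            normals[y][x] = tuple(c / norm for c in n)
    return normals

# A ramp rising along x should tilt every interior normal away from the slope:
ramp = [[float(x) for x in range(4)] for _ in range(4)]
nm = height_to_normals(ramp)
print(nm[1][1])
```

In a production map the three components are then remapped from [-1, 1] to RGB [0, 255].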
These maps were subsequently encoded and assigned as materials to the individual surfaces; where necessary, scale and orientation were calibrated manually in relation to the as-is condition of the artifact. Particular attention was devoted to the more articulated surfaces, which could not be managed through UV mapping within the environment; for this reason, the photographic planes were subdivided into sub-areas.
The integration of photogrammetric textures was also used to support detailed modeling, particularly in the areas where the point cloud presented gaps or lower reliability, as well as for controlling the positioning of the components. Through surface texturing, it was also possible to represent, at the geometric level, the superficial layer of the wall facings, modeled by means of in-place modeling and subsequently transformed into loadable families [
47,
48] (
Figure 8).
2.5. Geometric Validation Procedure
In order to verify the accuracy of the data, the geometric validation of the model was finally conducted, aimed at controlling the metric coherence between the digital model and the point cloud. For this purpose, the USIBD Level of Accuracy (LOA) Specification v3.1 was adopted, which defines five increasing classes of accuracy (from LOA10 to LOA50) [
27,
48]. In particular, the achievement of class LOA30 was assumed as the project objective, as it corresponds to a range between ±5 and ±15 mm. The main comparison was carried out in CloudCompare, through the Cloud-to-Mesh (C2M) function [
45]. The procedure compares the reference mesh, exported from the information model, with the point cloud, determining for each point the minimum distance to the modeled surface and returning a signed (positive or negative) deviation.
3. Results
The results of the experimentation are summarized through a series of outputs derived from a single HBIM model, set as a common and queryable information base. The outputs produced make explicit the logic of the workflow, showing how the same data structure can be delivered in differentiated forms according to users and use scenarios. In particular, the results include (i) a system of schedules for management, control, and report extraction; (ii) official 2D outputs (plans and sections) derived from the model; (iii) real-time visualization and publication products (renders, 360° panoramas, virtual tour, and standalone web viewer), used both for technical consultation and for immersive public engagement.
3.1. Quality Assessment of Survey Acquisition and Dataset Integration
The evaluation of the quality of the survey acquisition and of the integration of the datasets was based on the analysis of the registration, the photogrammetric processing, and the alignment coherence between the TLS and SfM datasets. The TLS registration carried out using Leica Cyclone Register360 returned average alignment residuals of less than 1 cm. The photogrammetric processing in RealityScan yielded a mean reprojection error of 0.46 px (median 0.39 px; maximum 1.00 px). Within the common local reference system, the Ground Control Points (GCPs) and the Control Points (CPs) used for the integration of the TLS and photogrammetric datasets returned residuals of less than 0.01 m in the processing reports. The alignment between the TLS and photogrammetric point clouds was further evaluated in CloudCompare through the Multi-scale Model-to-Model Cloud Comparison (M3C2) algorithm. On 236,620 measurements, the mean distance was equal to −0.0105 m (σ = 0.0247 m), while the highest deviations, reaching values close to ±0.15 m, were mainly located in marginal areas or in sectors affected by critical acquisition conditions (
Figure 9).
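A quick check shows that deviations near ±0.15 m lie far outside what the global statistics would predict under a (simplifying) normal assumption, consistent with their interpretation as localized acquisition artifacts rather than systematic misalignment:

```python
import math

def normal_tail_beyond(threshold, mean, sigma):
    """Two-sided probability that a normal variable falls outside ±threshold."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mean) / (sigma * math.sqrt(2))))
    return cdf(-threshold) + (1.0 - cdf(threshold))

# Reported M3C2 statistics: mean = -0.0105 m, sigma = 0.0247 m.
p_out = normal_tail_beyond(0.15, -0.0105, 0.0247)
print(f"P(|d| > 0.15 m) = {p_out:.2e} under a normal assumption")
```

A ±0.15 m deviation sits more than 5 sigma from the mean, so its observed occurrence points to local data quality rather than the registration itself.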
3.2. Schedules and Information Management
The design of the sheets was set in compliance with UNI 11337-3 and UNI 11337-4, organizing the body of information along complementary axes: thematic sheets, useful for reading the model according to the typology and function of the elements; topographic sheets, oriented toward localization by spatial unit, sub-unit, and horizontal registers; and detail sheets, such as those dedicated to the individual mascarons, which allow both a detailed and a synoptic reading of the state of conservation (
Figure 10). In this way, it was possible not only to verify the completeness and coherence of the information, but also to directly extract all the reports necessary for the planned maintenance and conservation of the elements. The sheets were generated starting from the coded information structure of the HBIM model and included unique IDs, data on the state of conservation, and links to external digital resources.
In this compilation phase, the availability of a textured information model proved particularly useful, since it made it possible to conduct an immediate visual comparison between geometric data, material data, and state of conservation directly on the model views, facilitating the compilation of the information fields, reducing the margin of interpretative error, and ensuring greater adherence between in situ observation, digital survey, and recording within the HBIM system. For completeness of information, in the sheets relating to elements of particular significance, such as the mascarons and Tancredi’s plaque, hypertext URLs were included that referred to external digital records, in which, for each object, IDs, denomination, historical framing, technical–material description, notes, bibliographic references, and further specialist insights were recalled.
Each sheet also integrates a QR code that allows access to 360° panoramic views and immersive paths, so as to continuously connect the visual component, the technical-descriptive component, and the interactive experiences, opening the model up for integration with cloud archives, virtual tours, or museum communication systems and making the information apparatus immediately usable also in lay and educational contexts. From a methodological point of view, the entire structure of the sheets was built starting from unique codes derived from the classification system, in order to guarantee the precise traceability of each element and maintain coherence among the model, the attached documentation, and the data architecture.
3.3. The 2D Output
Starting from the information model, the main two-dimensional drawings were derived, which are essential for describing the artifact from a graphic point of view and for supporting its architectural interpretation; these outputs fully constitute the official documents of the digital process. In the Revit environment, the main plans and sections were generated at a 1:50 scale, adopting two complementary representation modes: wireframe views, useful for highlighting the geometric structure, and views with the integration of photo planes, aimed at documenting the as-is condition and at reading the material and conservation characteristics. These drawings were subsequently optimized from a graphic point of view, in order to improve their readability and communicative effectiveness (
Figure 11).
3.4. Geometric Accuracy Assessment of the Model
The geometric validation of the model was carried out through CloudCompare using the Cloud-to-Mesh (C2M) function. The analysis, conducted on a sample of 888,530 points, returned a mean deviation of −0.56 mm and a standard deviation (RMS) of 8.21 mm. These values fall within the range corresponding to LOA30, confirming the coherence between the digital model and the survey data (
Figure 12).
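The LOA30 verdict can be expressed as a simple band lookup; the ranges below are a simplified reading of the USIBD accuracy classes cited in the text (LOA30 spanning ±5 to ±15 mm), not a transcription of the specification:

```python
def loa_class(sigma_mm):
    """Map a dispersion value (mm) to the USIBD LOA band it falls in.
    Band boundaries are a simplified reading of the cited specification."""
    bands = [("LOA50", 0, 1), ("LOA40", 1, 5), ("LOA30", 5, 15),
             ("LOA20", 15, 50), ("LOA10", 50, float("inf"))]
    for name, lo, hi in bands:
        if lo <= sigma_mm < hi:
            return name
    return None

# Reported C2M result: mean deviation -0.56 mm, dispersion 8.21 mm.
print(loa_class(8.21))  # LOA30, matching the project target
```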
3.5. Real-Time Visualization
The real-time visualization of the HBIM model was carried out using Enscape (Chaos), a plugin for Autodesk Revit [
49]. This solution allows real-time synchronization between modeling and the 3D environment, avoiding intermediate exports or conversions and enabling the production, from the same information base, of outputs for fruition (360° panoramas, videos, rendered images, and web publications). In the preliminary phases, Twinmotion had been taken into consideration; however, the need for geometric simplification and some critical issues in material rendering would have reduced the fidelity of the representation. For this reason, Enscape was preferred, as it was more consistent with the objective of maintaining geometries and textures faithful to the HBIM model. Real-time engines such as Unreal Engine and Unity would offer a higher level of customization, but they require skills and development times less compatible with an HBIM workflow intended for multidisciplinary teams [
8,
16,
18,
26]. Within this framework, immersive visualization was employed with a dual purpose: accessible, to allow intuitive exploration of the asset and its informational contents, and professional, as support for reading the as-is state and for sharing a common interpretative context among technical figures (restoration, design, protection). Publication on cloud platforms and access through browsers (and also by means of QR codes associated with 2D outputs) extends access without dedicated software, making HBIM not a static archive, but a navigable information environment that integrates documentary accuracy and accessibility [
26,
31,
50,
51].
3.5.1. From BIM to Immersive Vision: Information and Annotations
The direct integration between Autodesk Revit and Enscape made it possible to set up a fully synchronized workflow, in which every geometric, material, or informational modification made to the model was updated in real time within the immersive visualization environment, without the need for intermediate exports or format conversions. This bidirectional connection guaranteed constant coherence between technical modeling and three-dimensional visualization, allowing immediate control of the visual output and the geometric configuration throughout the entire process. This synchronization proved particularly effective during the information modeling and texturing phases: the possibility of verifying in real time the positioning of the photographic planes and the effect of the PBR maps ensured high visual accuracy and perfect alignment between informational data and three-dimensional representation. The platform, in fact, offers various tools useful for the management of information models: the BIM Information tool allows direct access to the informational parameters associated with the model components, such as materials, construction periods, state of conservation, and spatial classifications, providing an immediate and contextual reading of the architectural asset. Completing the experience, Enscape’s Comment tool makes it possible to insert geolocated annotations directly into the model, linking them to specific points or architectural elements (
Figure 13). Such notes can highlight conservation issues, propose interventions, or document observations during the process of analysis and management of the asset. This functionality introduces a collaborative and participatory dimension into the HBIM workflow, transforming the model into a shared platform of knowledge, where immersive vision combines with information management to support, in an integrated manner, the activities of conservation, planning, and enhancement of architectural heritage.
3.5.2. From the Informational Model to the Narrative: Digital Experiences for the Dissemination of Cultural Heritage
At the same time, the digital model was conceived not only as an information archive, but as a starting point for the construction of immersive digital experiences dedicated to the communication and enhancement of heritage. Starting from the model, five 360° spherical panoramas were generated, placed at strategic points around the fountain (main entrance, north side, eastern front corresponding to the oldest nucleus, south side, and a central overall view), and used to construct a virtual tour capable of continuously conveying the perception of space and the complexity of the site (
Figure 14).
The tour was developed through Chaos Cloud, a platform that makes it possible to arrange the panoramas in sequence, insert interactive hotspots, and integrate textual, graphic, and multimedia contents. By clicking on the information points, it is possible to access in-depth panels, historical images, comparisons between the current state and archival documentation, sheets on construction history, transformations over time, the symbolism of the masks, and post-earthquake restoration interventions. In this way, the route is not merely a virtual visit, but a guided narrative that interweaves, in a clear and accessible way, the different levels of reading of the monument. The tour is accessible via browser through link or QR code, without installing dedicated software, and is designed to be enjoyed by a broad audience, not only specialists.
At the same time, in order to further expand the possibilities of access, the model was exported in standalone web mode, allowing the exploration of the three-dimensional model directly from a browser, in real time, with free navigation of the space. Taken together, the virtual tour and the standalone web model contribute to overcoming the purely design-based dimension of the model, transforming it into a living platform of knowledge, participation, and dissemination, in which architecture becomes something to explore, understand, and narrate (
Figure 15).
4. Discussion
The application of the Scan-to-HBIM-to-Real-Time-Rendering workflow to the Fountain of the 99 Cannelle shows that photogrammetric texturing, when it is fully integrated into the information model, is not a simple visual covering but an active component of the processes of documentation, interpretation, and communication of heritage [
7,
19,
26,
52]. In the case under examination, the texture in fact contributes to the identification of constructive discontinuities, material integrations, degradation phenomena, and chromatic variations, reducing the distance between in situ observation, surveys, and digital representation. It follows that, in the HBIM environment, visual data can assume documentary value when it is critically controlled and connected to the semantic structure of the model.
This result is closely linked to the nature of the case study. The Fountain of the 99 Cannelle combines seriality, a high density of detail, and marked morphological irregularity, especially in the mascarons, and therefore constitutes a significant testing ground for hybrid modeling strategies. The results show that repetition does not automatically imply simplification: when seriality is associated with strong formal and material variability, a purely parametric approach is not sufficient and the representation requires the integration of parametric logic, reality-based meshes, and high-definition texturing. These results are situated within the debate on HBIM as an environment of integration among geometry, semantics, and multi-source documentation [1,23,35,36,40,53]. In line with these studies, the work confirms that the effectiveness of an HBIM model depends not only on geometric precision but on the capacity to organize heterogeneous data into a coherent and queryable structure. However, compared with approaches in which metric precision constitutes the prevailing aim, the case analyzed here shows that the interpretative value of the model increases when geometric reliability is accompanied by visual and material evidence that can be read directly within it.
A second issue concerns the relationship between parametric modeling and irregular reality-based components. The literature has repeatedly highlighted the limits of purely parametric approaches in the treatment of plastic, degraded, or morphologically complex elements [18,41,47]. In many cases, such difficulties have led to geometric simplifications or to the use of meshes as external references, not fully integrated into the information system. By contrast, the workflow proposed here shows that irregular elements such as the mascarons can be incorporated into the HBIM structure through semantic coding, parametric association, and differentiated management of levels of detail. The contribution of the work therefore does not consist in opposing parametric and mesh-based modeling, but in demonstrating the practicability of a hybrid strategy in which non-parametric components remain interoperable within the information model.
The use of Dynamo as a Visual Programming Language plays a relevant methodological role. In the present study, visual programming supported the management of the mascaron meshes, particularly in the decimation and positioning of the versions at different levels of detail. This suggests that VPL tools can constitute an effective interface between reality-based geometries and semantically structured models, helping to balance morphological fidelity, computational sustainability, and information organization [25,35,51].
The integration of visual, metric, and descriptive data also has effects at the management level. The structured comparison among these information layers made the compilation of the sheets more reliable and supported the organization of the model as a queryable system for maintenance and conservation purposes [20,35,37]. At the same time, the direct synchronization between Revit and Enscape shows that real-time rendering can be used not only for dissemination but as an operational extension of the information model, maintaining continuity among technical modeling, data consultation, and visualization oriented toward public use [8,16,51,52].
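The decimation step handled through Dynamo can be illustrated with a minimal, self-contained sketch. The function below performs a simple vertex-clustering decimation in plain Python; it is only an illustration of the kind of reduction applied to the mascaron meshes, not the actual Dynamo graph used in the study, and all names and parameters are hypothetical.

```python
# Minimal vertex-clustering decimation sketch (illustrative only).
# Vertices falling in the same grid cell are merged into one
# representative; triangles that degenerate, or that duplicate an
# already-kept face, are dropped. Winding order is not preserved.

def decimate_by_clustering(vertices, triangles, cell_size):
    """Return a coarser (vertices, triangles) pair for a triangle mesh."""
    cell_to_rep = {}   # grid cell -> index of its representative vertex
    vertex_to_rep = [] # original vertex index -> new vertex index
    new_vertices = []
    for (x, y, z) in vertices:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cell_to_rep:
            cell_to_rep[cell] = len(new_vertices)
            new_vertices.append((x, y, z))
        vertex_to_rep.append(cell_to_rep[cell])
    new_triangles = []
    seen = set()
    for (a, b, c) in triangles:
        ra, rb, rc = vertex_to_rep[a], vertex_to_rep[b], vertex_to_rep[c]
        key = frozenset((ra, rb, rc))
        # Keep only non-degenerate, not-yet-seen faces.
        if len(key) == 3 and key not in seen:
            seen.add(key)
            new_triangles.append((ra, rb, rc))
    return new_vertices, new_triangles
```

A larger `cell_size` yields a coarser level of detail, which mirrors how successive decimation passes can produce the multiple mesh versions mentioned above.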
Alongside these aspects, the case study also highlights significant limitations. The first concerns the still strongly manual nature of several texturing operations and of the integration between geometric and visual data. This critical issue depends not only on the setup of the workflow but also on the limited availability, within the Revit environment, of native tools for the advanced and automated management of textures, especially in the presence of irregular surfaces that cannot be effectively treated through standard UV mapping procedures [8,42]. Consequently, it was necessary to subdivide the photographic planes, manually calibrate scale and orientation, and verify the correct positioning of the textures, with an increase in processing times and dependence on specialist skills. Manual work therefore does not constitute a simple operational inconvenience, but a structural limitation that directly affects the replicability and scalability of the workflow, especially if applied to more extensive monuments or more articulated decorative systems. To overcome this limit, developments oriented toward the semi-automation or automation of texture projection, image-to-geometry alignment, and surface segmentation would be necessary [1,6,26,42,54].
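One direction for the semi-automation mentioned above is the scripted computation of texture coordinates. The sketch below projects 3D surface points onto a reference plane to obtain normalized (u, v) coordinates, assuming a planar photographic support of known physical extent; the plane parameterization and all names are illustrative assumptions, not the procedure actually used in the workflow.

```python
# Illustrative planar UV projection: map 3D points to (u, v) texture
# coordinates on a plane spanned by two unit axes at a given origin.
# Assumes the image covers u_extent x v_extent metres of the surface.

def planar_uv(points, origin, u_axis, v_axis, u_extent, v_extent):
    """Return normalized (u, v) pairs in [0, 1] for points on the plane."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    uvs = []
    for p in points:
        rel = tuple(pi - oi for pi, oi in zip(p, origin))
        u = dot(rel, u_axis) / u_extent  # fraction across the image width
        v = dot(rel, v_axis) / v_extent  # fraction across the image height
        uvs.append((u, v))
    return uvs
```

Scripting this mapping, for instance from Dynamo's Python node, would replace the manual calibration of scale and orientation for each photographic plane, though irregular surfaces would still require more sophisticated unwrapping.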
A second limitation concerns the computational sustainability of high-definition models. Although a model with high geometric and material quality was achieved, this inevitably raises the problem of balancing visual fidelity, informational density, and operational manageability. The multi-LOD strategy adopted for the mascarons is a concrete response to this difficulty, but it also points to a more general problem of HBIM applied to assets of high decorative richness: detail cannot be treated as an absolute value and must be calibrated according to analytical, conservation, and communicative objectives. This consideration has direct implications for the transferability of the method, which will depend on the possibility of defining appropriate thresholds between the necessary informational density and an acceptable computational cost. It will therefore be appropriate to develop more systematic criteria in relation to conservation needs, interactive performance, and the maintenance of the model over time [22,42,47,55].
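The idea of calibrating such thresholds can be made concrete with a simple distance-based LOD policy. The budgets and distances below are purely hypothetical values chosen for illustration, not figures from the study; the sketch only shows how explicit thresholds between informational density and computational cost might be encoded.

```python
# Hypothetical distance-based LOD policy (values are illustrative).
# Each tier pairs a maximum viewing distance with a triangle budget.

LOD_LEVELS = [
    (5.0, 200_000),          # close inspection: full-detail mesh
    (20.0, 20_000),          # mid-range: decimated mesh
    (float("inf"), 2_000),   # overview: coarse proxy
]

def select_lod(distance_m):
    """Return the triangle budget appropriate for a viewing distance."""
    for max_dist, budget in LOD_LEVELS:
        if distance_m <= max_dist:
            return budget
    raise ValueError("unreachable: last threshold is infinite")
```

Under such a policy, a mascaron seen from across the square would be served by the coarse proxy, while close-range inspection would load the full mesh; systematic criteria of this kind could tie the tiers to conservation and performance requirements rather than to ad hoc choices.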
Overall, the main scientific contribution of the work consists of demonstrating that, for assets characterized by the coexistence of seriality and irregularity, the most effective HBIM strategy is not the purely parametric one but a hybrid approach capable of integrating semantic structure, reality-based geometries, and documentary textures within a single interoperable environment. More generally, the work shows that texture can operate as documentary evidence, that irregular mesh-based components can become semantically operative within HBIM, and that real-time rendering can function as an extension of the information model. In this sense, the scientific value of the model depends not only on how accurately it reproduces the geometry but on how effectively it succeeds in relating form, material evidence, semantics, and uses of the model within a coherent digital system.
5. Conclusions
The adopted workflow showed that the integration of surveys, HBIM structuring, photogrammetric texturing, and real-time visualization can produce a model that is not limited to a digital replica of the artifact but functions as an information environment capable of connecting documentation, analysis, management, and public use. From this perspective, the main contribution of the work consists of having demonstrated the effectiveness of a hybrid approach in which reality-based geometries, semantic structure, and documentary textures contribute to the construction of a queryable, reliable model that is usable at multiple levels [7,10,37,52,56,57]. The results also suggest that texturing should not be considered a simple outcome of visualization but an informational component of the model, while the real-time environment can operate as an extension of the HBIM system without separating from it [8,10,16,51,52]. In this sense, the proposed workflow contributes to defining an operational mode in which knowledge, representation, and accessibility are closely integrated.
However, some research questions remain open. In particular, it will be necessary to reduce the weight of manual operations in the management of textures and meshes, to verify the transferability of the method across case studies differing in scale, complexity, and state of conservation, and to further investigate integration with collaborative platforms and XR environments for the monitoring, updating, and long-term use of heritage.