Proceeding Paper

Three-Dimensional Modelling and Visualization of Stone Inscriptions Using Close-Range Photogrammetry—A Case Study of Hero Stone †

by Suhas Muralidhar and Ashutosh Bhardwaj *

Indian Institute of Remote Sensing, Dehradun 248001, India

* Author to whom correspondence should be addressed.
Presented at the 9th International Electronic Conference on Sensors and Applications, 1–15 November 2022; Available online: https://ecsa-9.sciforum.net/.
Eng. Proc. 2022, 27(1), 35; https://doi.org/10.3390/ecsa-9-13343
Published: 1 November 2022

Abstract

Stone inscriptions and archaeological structures are assets to humankind that preserve the history of the past. Estampage is the traditional method of obtaining a replica of an inscription, used primarily to decipher texts and for documentation. At present, close-range photogrammetry is a useful remote sensing technique for digitizing these inscriptions for both study and preservation. The current study focuses on the creation of a 3D model of a hero stone using digital camera technology. The photographs were acquired using a Sony Alpha7 III camera with a 35 mm full-frame CMOS sensor. Two hundred and sixty-one images were acquired from different heights above ground and from various positions and angles around the stone inscription to cover it from all sides. The acquired data were processed in a series of steps: image matching, dense point cloud generation, mesh reconstruction, and texturing of the model. As the sensor is non-metric, two markers measured in the field were added to the scene to scale it accurately. The dimensions of the hero stone were computed as 2.3 × 1.3 ft, and the resulting model had a reprojection error of less than 0.011 pixels. The processed model has 10,915,514 facets (TIN) and 8000 × 8000 × 4 textures, providing a realistic appearance. Recent developments in computer vision using the structure from motion (SfM) approach enable accurate reconstruction of the hero stone with realistic textures and details useful for preservation work.

1. Introduction

Letters inscribed or engraved on stone, marble, metal, terracotta, or wood are called inscriptions. As a crucial historical source, inscriptions are texts found on stone (pillars, walls), metal (copper plates), or other materials (bronze coins) in temples, monuments, and historic places. They provide proof of the existence and activities of ancient rulers and empires, including details of specific religious practices. Epigraphy is the study of inscriptions, or the science of identifying the graphemes of a particular script. A hero stone is a memorial dedicated to eternalizing the honorable deaths of heroes, typically illustrating scenes of battle. The hero stone documented here was originally found in Halebidu, Karnataka, and is now preserved in the Government Museum, Bengaluru, Karnataka [1].
Estampage, or stamping, is the traditional method of creating a replica of an inscription used to decipher the script. However, preserving estampages is challenging, as they fade with time. Digital heritage [2] supports proper conservation as well as detailed preservation, and digital replicas can also be used for deciphering and studying texts [3]. Hence, digital preservation is a suitable alternative for saving inscriptions for future generations [4]. In this study, we used close-range photogrammetry as the primary tool for digitally documenting the hero stone. This can be carried out precisely with a basic setup of gear such as a camera, lights, and measuring tools. Once the data are processed, they can be visualized as a three-dimensional model.

2. Study Site and Dataset

The hero stone (Figure 1) documented in this study was originally found at the archaeological site of Halebidu, Karnataka, and was later shifted to the Government Museum, Bengaluru. It dates to the time of the Hoysala king Vishnuvardhana in the 12th century AD [5]. It is situated in an open area on the grounds of the museum, where it is carefully preserved. A mirrorless digital camera was chosen to scan the hero stone and acquire the datasets.

3. Methodology

The first task of this study was a preliminary reconnaissance survey to understand the hero stone’s location, lighting, and other basic variables for planning the image acquisition. A Sony Alpha7 III mirrorless camera with a 35 mm full-frame CMOS sensor was used to scan the hero stone. To cover the entire surface, 261 photos were taken from various heights above ground and from various positions and angles around the stone inscription. As the hero stone is situated outside the museum, natural lighting was sufficient for photography. Furthermore, the dimensions of the hero stone were recorded in the field; these measurements are essential for scaling the model during processing. Figure 2 shows the methodology used for processing the datasets.
All the data were transferred to the workstation and organized before being processed as a chunk in Agisoft Metashape (30-day free trial mode). The processing comprises several stages: image alignment, dense point cloud generation, mesh reconstruction, texture generation, scaling of the model, and simplification [6].
Following typical 3D reconstruction workflows [7], processing of the dataset was completed on a computer with 16 GB of RAM, 4 GB of graphics memory, and an Intel Core i7 processor. The images should not be geometrically transformed in any way (cropped, rotated, resized, etc.), because the software can process only unmodified photos as taken by the digital camera. Processing manually cropped or geometrically distorted photographs is likely to fail or produce highly erroneous results.

3.1. Image Matching and Alignment of the Dataset

The images were imported into Metashape as a chunk; separate chunks can be constructed for different datasets. At this stage, the software estimates the camera position at the instant each image was acquired, described by the interior and exterior orientation parameters. Interior orientation parameters include the camera focal length, the image principal point coordinates, and the lens distortion coefficients. Aerotriangulation with bundle block adjustment, based on the collinearity equations, is used to calculate the exterior and interior orientation parameters. Aerotriangulation allows onboard measurements and photogrammetric measurements of tie points to be adjusted jointly; as a result, the exterior orientation parameters of the images are determined more precisely and consistently [8]. The output of this stage comprises a sparse point cloud containing triangulated coordinates of matched image points, together with the estimated exterior (translation and rotation) and interior camera orientation parameters. Setting the alignment parameters to medium accuracy with generic preselection produced 134,364 tie points and a sparse point cloud of the scene. However, blurring or insufficient overlap between photos during acquisition can negatively affect the outcome.
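The collinearity condition underlying the bundle adjustment can be illustrated with a minimal pinhole-projection sketch. The camera pose, focal length, and 3D points below are hypothetical, not values from the survey:

```python
import numpy as np

def project(points_3d, R, t, f, c):
    """Collinearity (pinhole) model: rotate and translate a world point
    into the camera frame, then perspectively divide:
    u = f * X/Z + cx,  v = f * Y/Z + cy."""
    cam = points_3d @ R.T + t              # world -> camera frame
    return f * cam[:, :2] / cam[:, 2:3] + c

# hypothetical camera: identity pose, 1000 px focal length, centred principal point
R = np.eye(3)
t = np.zeros(3)
uv = project(np.array([[0.0, 0.0, 5.0],    # point on the optical axis
                       [0.5, 0.0, 5.0]]),  # point offset 0.5 m sideways
             R, t, 1000.0, np.array([0.0, 0.0]))
# the on-axis point lands at the principal point; the offset point
# shifts by f * 0.5/5 = 100 px in u
```

The bundle adjustment minimizes the residuals between such projected positions and the measured tie-point positions over all images simultaneously.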

3.2. Building of Dense Point Clouds Using Input Camera Data

This step builds on the detected and matched feature points, whose triangulated tie points form the sparse point cloud. To generate dense point clouds, dense stereo matching is used to construct depth maps [9]. Depth maps are produced for the overlapping image pairs, taking into account the relative exterior and interior orientation parameters obtained from the bundle adjustment. The pairwise depth maps computed for each camera are then merged into a combined depth map. In this work, 261 depth maps were produced at ultra-high quality using mild filtering, employing the color values of neighboring images and their pixels.
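The merging of pairwise depth maps into one combined map per camera can be sketched as a per-pixel robust average; the tiny 2 × 2 maps below are illustrative only (NaN marks pixels with no stereo match):

```python
import numpy as np

# two hypothetical pairwise depth maps for the same camera (metres; NaN = no match)
d_pair1 = np.array([[2.0, np.nan],
                    [2.2, 3.0]])
d_pair2 = np.array([[2.1, 4.0],
                    [np.nan, 3.1]])

# combine per pixel, ignoring missing values
combined = np.nanmedian(np.stack([d_pair1, d_pair2]), axis=0)
# pixels seen in both maps get the median depth; pixels seen in only
# one map keep their single valid depth
```

A median (rather than a mean) suppresses outlier depths from mismatched pairs, which is the motivation for this kind of fusion.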

3.3. Mesh Reconstruction Based on Depth Maps

The software can reconstruct polygonal mesh models from depth map data or from point cloud data (dense, sparse, or imported from external sources). In this study, depth maps were used as the input for mesh generation [10]. Each depth map records, per pixel, the distance from the camera; the software also generates quality maps indicating how many views each pixel was successfully matched in. For mesh generation, a 3D range grid is back-projected onto each depth map, and a rough mesh is created by joining nearby vertices on the grid. The vertices of all depth maps are then combined and fed into the reconstruction algorithm, which produces a watertight triangulated surface: it assumes a complete, closed surface and fills regions of low point density with additional triangles. Compared to dense-cloud-based reconstruction, the depth maps setting makes better use of the information in the input photos and uses fewer resources. The arbitrary surface type was used, which can model any form of object and is typically chosen for closed objects such as sculptures and structures; it produces a high-quality mesh since it makes no assumptions about the kind of object being modelled. The quality of the model increases with the number of photographs, but the final mesh will always contain some topology defects.
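The back-projection step that lifts each depth-map pixel into a 3D vertex can be sketched as follows; the 2 × 2 depth map and the intrinsics are hypothetical toy values, not those of the survey camera:

```python
import numpy as np

def backproject(depth, f, cx, cy):
    """Lift each depth-map pixel (u, v) with depth d to a camera-frame
    point: X = (u - cx) * d / f,  Y = (v - cy) * d / f,  Z = d."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]            # pixel row/column index grids
    X = (u - cx) * depth / f
    Y = (v - cy) * depth / f
    return np.dstack([X, Y, depth])      # (h, w, 3) grid of vertices

depth = np.full((2, 2), 2.0)             # flat surface 2 m from the camera
verts = backproject(depth, f=1000.0, cx=0.5, cy=0.5)
```

Joining neighboring vertices of this grid yields the rough per-depth-map mesh described above, before all maps are fused into the final watertight surface.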

3.4. Texture Generation and Decimation of the Model

Using the images as source data and the generic mapping mode, 8000 × 8000 × 4 textures were generated, giving the model a realistic appearance. The final model was too dense to visualize conveniently and was therefore decimated to 50,000 polygons, retaining vertex color [11]. The original textures were reprojected onto the decimated model to restore its realistic appearance. A precise scale bar was created from two markers placed in the scene to scale the model to its actual size.
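The scaling from the two markers reduces to a single ratio between the field-measured and model-space marker separation; the marker coordinates and tape distance below are hypothetical, not the survey values:

```python
import numpy as np

m1 = np.array([0.10, 0.00, 0.05])   # marker 1 in arbitrary model units (hypothetical)
m2 = np.array([0.45, 0.02, 0.06])   # marker 2 (hypothetical)
field_distance_m = 0.70             # tape-measured separation in metres (hypothetical)

# ratio of the true distance to the model-space distance
scale = field_distance_m / np.linalg.norm(m2 - m1)

# multiplying every mesh vertex by `scale` brings the model to true size
m1_scaled, m2_scaled = m1 * scale, m2 * scale
```

After scaling, the marker separation in the model matches the tape measurement, which is what makes metric measurements on the non-metric-sensor model possible.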

4. Results and Discussion

Figure 3 shows the reconstructed mesh with TIN (a) and the textured 3D model (b) of the hero stone. The processed model has 10,915,514 facets (TIN) and 8000 × 8000 × 4 textures providing a realistic appearance; it was further reduced to 50,000 polygons, and the same textures were reprojected onto the model [12]. Two markers were placed in the scene and a scale bar was generated from the field measurements to scale the model to its actual size [13]. Table 1 shows the accuracy, projections, and pixel errors of the two markers. Owing to the high geometric quality of the model, the reprojection errors were about 0.011 pixels [14]. Figure 4 shows the result of a filter applied to the inscriptions to enhance readability. These models are ready to be exported for further analysis in any desired software.
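The pixel error reported per marker in Table 1 can be understood as a root-mean-square distance between where the marker is observed in the images and where the adjusted model projects it; a minimal sketch with made-up residuals:

```python
import numpy as np

# hypothetical observed vs. projected marker positions (pixels) in two images
observed  = np.array([[512.30, 640.10], [500.00, 622.40]])
projected = np.array([[512.29, 640.11], [500.01, 622.39]])

residuals = observed - projected
# RMS of the per-image 2D residual lengths, in pixels
rms_pix = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
```

Sub-pixel residuals like these indicate that the estimated camera orientations and marker positions are mutually consistent across the image set.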

5. Conclusions

This study shows that SfM-based 3D models are a low-cost alternative for the 3D modelling of a hero stone and, in general, of any well-textured structure [15]. The method can be used in a variety of disciplines, each with its own applicability and ease of implementation. Heritage is described in UNESCO documents as “our legacy from the past, what we live with today, and what we pass on to future generations” [16]. Heritage is anything that is valued and passed down from one generation to the next. Any form of heritage, tangible or intangible, in an analogue state can be converted to digital form using computer processing and other techniques for future preservation [17]. The results obtained here are accurate to the centimetre level, and these models can be used to study and decipher texts as well as to preserve them digitally [18]. The model can be exported in a variety of formats, such as .obj, .fbx, and .stl. The STL (stereolithography) format can be used to 3D print the model with textures. Hence, close-range photogrammetry is one of the best methods to record, handle, and process data for the preservation of heritage [19].

Author Contributions

Data acquisition and processing was completed by S.M. The analysis and the manuscript were prepared by S.M. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data were acquired by the authors.

Acknowledgments

The authors would like to thank the Department of Archaeology, Museums and Heritage, Karnataka, for preserving the hero stone and also M. N. Muralidhar, a heritage enthusiast, who supported in equipment, data acquisition, and field measurements.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Annual Report of the Mysore Archaeological Department for the Year 1929. 1931. Available online: https://archive.org/details/in.gov.ignca.22774 (accessed on 1 March 2022).
  2. Ehtemami, A.; Park, S.B.; Bernadin, S.; Lescop, L.; Chin, A. Review of Visualizing Historical Architectural Knowledge through Virtual Reality. In Proceedings of the SoutheastCon 2021 (SoutheastCon), Virtual, 10–13 March 2021. [Google Scholar]
  3. Barmpoutis, A.; Bozia, E.; Wagman, R.S. A novel framework for 3D reconstruction and analysis of ancient inscriptions. Mach. Vis. Appl. 2009, 21, 989–998. [Google Scholar] [CrossRef]
  4. Kwoczynska, B.; Litwin, U.; Piech, I.; Obirek, P.; Sledz, J. The Use of Terrestrial Laser Scanning in Surveying Historic Buildings. In Proceedings of the 2016 Baltic Geodetic Congress (BGC Geomatics 2016), Gdansk, Poland, 2–4 June 2016; pp. 263–268. [Google Scholar] [CrossRef]
  5. Epigraphia Carnatica, B.L. Rice (Volume-9, Hassan District, Revised Edition), Beluru-348. Available online: https://archive.org/details/dli.ernet.213550/page/n5/mode/2up (accessed on 1 March 2022).
  6. Agisoft Metashape User Manual Professional Edition, Version 1.7. Available online: https://www.agisoft.com/pdf/metashape-pro_1_8_en.pdf (accessed on 15 June 2022).
  7. Pfarr-Harfst, M. Typical Workflows, Documentation Approaches and Principles of 3D Digital Reconstruction of Cultural Heritage. In 3D Research Challenges in Cultural Heritage II; Springer: Cham, Switzerland, 2016; pp. 32–46. [Google Scholar] [CrossRef]
  8. Jawahar, C.V.; Shan, S. (Eds.) Computer Vision—ACCV 2014 Workshops; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; Volume 9009. [Google Scholar] [CrossRef]
  9. Murtiyoso, A.; Grussenmeyer, P. Documentation of heritage buildings using close-range UAV images: Dense matching issues, comparison and case studies. Photogramm. Rec. 2017, 32, 206–229. [Google Scholar] [CrossRef] [Green Version]
  10. Remondino, F. Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef] [Green Version]
  11. Brutto, M.L.; Ebolese, D.; Fazio, L.; Dardanelli, G. 3D survey and modelling of the main portico of the Cathedral of Monreale. In Proceedings of the 2018 Metrology for Archaeology and Cultural Heritage (MetroArchaeo), Cassino, FR, Italy, 22–24 October 2018; pp. 271–276. [Google Scholar] [CrossRef]
  12. Kim, D.H.; Poropat, G.; Gratchev, I.; Balasubramaniam, A. Assessment of the Accuracy of Close Distance Photogrammetric JRC Data. Rock Mech. Rock Eng. 2016, 49, 4285–4301. [Google Scholar] [CrossRef] [Green Version]
  13. Surový, P.; Yoshimoto, A.; Panagiotidis, D. Accuracy of Reconstruction of the Tree Stem Surface Using Terrestrial Close-Range Photogrammetry. Remote Sens. 2016, 8, 123. [Google Scholar] [CrossRef] [Green Version]
  14. Li, X.Q.; Chen, Z.A.; Zhang, L.T.; Jia, D. Construction and Accuracy Test of a 3D Model of Non-Metric Camera Images Using Agisoft PhotoScan. Procedia Environ. Sci. 2016, 36, 184–190. [Google Scholar] [CrossRef]
  15. Jariwala, J.J.; Bhardwaj, A.; Raghavendra, S.; Khoshelham, K. Feasibility of Mobile Mapping System by Integrating Structure from Motion (SfM) Approach with Global Navigation Satellite System. In Proceedings of the Applied Geoinformatics for Society and Environment (AGSE); pp. 115–132. Available online: https://www.researchgate.net/publication/275771899 (accessed on 17 August 2022).
  16. Digital Heritage UNESCO. 2022. Available online: https://en.unesco.org/themes/information-preservation/digital-heritage/concept-digital-heritage (accessed on 20 August 2022).
  17. Tingdahl, D.; Van Gool, L. A Public System for Image Based 3D Model Generation. In Proceedings of the International Conference on Computer Vision/Computer Graphics Collaboration Techniques and Applications, Rocquencourt, France, 10–11 October 2011; pp. 262–273. [Google Scholar] [CrossRef]
  18. D’Amelio, S.; Lo Brutto, M. Close Range Photogrammetry for Measurement of Paintings Surface Deformations. 2009. Available online: https://www.researchgate.net/publication/237546517 (accessed on 25 June 2022).
  19. Kushwaha, S.K.P.; Dayal, K.R.; Sachchidanand; Raghavendra, S.; Pande, H.; Tiwari, P.S.; Agrawal, S.; Srivastava, S.K. 3D Digital Documentation of a Cultural Heritage Site Using Terrestrial Laser Scanner—A Case Study. In Applications of Geomatics in Civil Engineering; Ghosh, J.K., da Silva, I., Eds.; Lecture Notes in Civil Engineering; Springer: Singapore, 2020; Volume 33, pp. 49–58. [Google Scholar] [CrossRef]
Figure 1. Depiction of the perspective view of the hero stone.
Figure 2. Methodology flowchart.
Figure 3. Reconstructed mesh with TIN (a); 3D model with textures (b).
Figure 4. 3D model enhanced for readability.
Table 1. Accuracy of control points.
Markers | X Error (m) | Y Error (m) | Z Error (m) | Accuracy (m) | Error (m) | Projections | Error (pix)
Point 1 | −0.328100 | 0.141343 | 0.028392 | 0.005000 | 0.358377 | 110 | 0.011
Point 2 | −0.327890 | 0.121376 | 0.049821 | 0.005000 | 0.353166 | 98 | 0.011
Total   | 0.327995 | 0.131738 | 0.04058 | — | 0.355781 | — | 0.011
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
