Article

Machine Learning and Image-Processing-Based Method for the Detection of Archaeological Structures in Areas with Large Amounts of Vegetation Using Satellite Images

by José Alberto Fuentes-Carbajal *, Jesús Ariel Carrasco-Ochoa, José Francisco Martínez-Trinidad and Jorge Arturo Flores-López
Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Puebla 72840, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(11), 6663; https://doi.org/10.3390/app13116663
Submission received: 21 March 2023 / Revised: 13 May 2023 / Accepted: 15 May 2023 / Published: 30 May 2023

Abstract

The detection of archaeological structures in satellite images is beneficial for archaeologists, since it allows the quick identification of structures across large areas of land. To date, some methods have been proposed to solve this task; however, they do not give good results in areas with large amounts of vegetation, such as those found in the southeast of Mexico and in Guatemala. The method proposed in this paper works on satellite images obtained with SASPlanet. It uses two color spaces (RGB and HSL) and three filters (Canny, Sobel, and Laplacian) jointly with supervised machine learning to improve the detection of archaeological structures in areas with a lot of vegetation. The method obtains an average performance of at least 93% on precision, recall, F1 score, and accuracy. Thus, our proposal is a very good option compared with traditional techniques for the manual or semi-automatic detection of structures, identifying archaeological sites in a shorter time.

1. Introduction

Since the beginnings of aerial photography, which has been used for many different tasks [1,2,3,4,5], archaeologists have recognized the potential of this technique for studying and discovering new sites, because large areas of land can be analyzed quickly, allowing them to understand the spatial organization of archaeological structures [6]. Today, the range of image acquisition techniques in aerial archaeological research is vast; from satellites to small unmanned aerial vehicles (UAVs), images can be produced in different parts of the visible light spectrum (e.g., [7]). This data source has opened new research perspectives for cultural heritage and archaeology at much vaster scales [8,9,10]. The digital era has nevertheless produced a bottleneck: the large numbers of images are often labeled manually or semi-automatically, which is costly for humans due to the time required to label each archaeological structure [11]. Fortunately, this repetitive task can now be tackled using methods based on machine learning [12,13]. The problem, in a nutshell, consists of the binary classification of pixels (structure versus other types of objects); we focus on object detection through supervised machine learning, particularly the detection of archaeological structures in satellite images with abundant vegetation [14,15,16,17]. Mesoamerican civilizations developed in Mexico, Guatemala, Belize, western Honduras, and El Salvador, covering more than 300,000 km² [18,19]. There are still several unexplored and unstudied areas in this territory, with archaeological structures hidden among the vegetation [20].
Nowadays, digital image processing techniques, as well as machine learning algorithms, are increasingly used by archaeologists for the discovery of new archaeological structures. Recently, F. Monna et al. [21] proposed a method to detect, from orthomosaics, the enormous number of dry stones used by past societies to build funerary complexes in the steppes of Mongolia. In sites with large amounts of vegetation, however, this method struggles to obtain reliable DEM maps; as F. Monna et al. themselves note, it does not give good results in jungle or wooded areas. In this work, we propose a method that allows the detection of archaeological structures in areas with a large amount of vegetation, using satellite images obtained with SASPlanet. For detection in these types of images, two color spaces (RGB and HSL) and different filters (Canny, Sobel, and Laplacian) were used, and different supervised classifiers were tested. Experiments with images taken from the archaeological sites of Calakmul, Comalcalco, Tikal, Zaculeu, Xcambó, Mayapán, Aké, and Palenque show that the proposed method achieves good results.
The rest of this paper is organized as follows: Section 2 reviews related work; Section 3 describes our proposed method; Section 4 presents experiments carried out at different archaeological sites; Section 5 discusses our method and its limitations; and Section 6 presents our conclusions and future work.

2. Related Work

According to UNESCO, armed conflicts and war, earthquakes and other natural disasters, pollution, poaching, uncontrolled urbanization, and unchecked tourist development pose significant problems to world heritage sites [22,23,24,25]. Due to this, records of archaeological structures have been generated worldwide through images. However, the amount of data makes analysis difficult. The development of artificial intelligence methods and approaches can help to facilitate the collection and understanding of the data in less time. Today, machine learning and computer vision techniques have assisted archaeologists in evaluating, predicting, and classifying their findings using images [26,27].
The literature contains different types of works that have used these techniques in archaeology. For example, deep learning methods applied to elevation maps have been used to detect shipwrecks automatically in underwater archaeology, obtaining results that compete with or improve on those produced by archaeologists [28]. These methods have also been used for the classification of ancient artifacts, such as vessels from a Bronze Age cemetery in Saxony, where the results exceeded archaeologists’ expectations by eliminating systematic errors in the typology [15]. Artificial neural networks have likewise been applied to attribute the origins of archaeological ceramics by analyzing the composition of samples found in different groups of archaeological sites, with promising results [17]. In computer vision, a new representation of Maya hieroglyphs that includes information from both the foreground and the background obtained better results than the state of the art [29]; a later work improved this representation using thinned hieroglyphs and an improved selection of points of interest, which significantly improved the results of hieroglyph retrieval [30].
On the other hand, the work carried out in [31] monitors and documents archaeological sites in the Arctic using RGB and thermal images obtained with a drone; combining RGB images and digital surface models helped to map the ruins and structures of the study sites. Similarly, Ref. [32] focuses on identifying archaeological sites (tombs) in forested areas using time series of images and machine learning; the results indicate that the method works well for clustered tombs. Likewise, Ref. [33] uses machine learning techniques with satellite images to detect Roman fortified sites in arid environments, obtaining an overall accuracy of 0.93 against field study data. Finally, Monna et al. [21] use orthomosaics from the Jergalan area to analyze funerary complexes in Mongolia, applying different machine learning algorithms to detect stone structures. Their method uses different color spaces (RGB and HSV), textures (homogeneity, contrast, and entropy), and DEM maps, and produces results that compete with or improve on those obtained by archaeologists performing manual classification. However, this method does not work well in jungle or wooded regions.
Image analysis and supervised classification can thus support archaeological research in different areas, such as the detection of archaeological structures, even in sites with large amounts of vegetation. All of the above motivates us to propose a new method for detecting ancient structures in areas with large amounts of vegetation, such as those in the Mesoamerican region, which is introduced in the next section.

3. Proposed Method

This work proposes a method to detect archaeological structures in jungle regions, such as Mesoamerican ones. First, we will broadly describe the proposed method, which uses satellite images from which useful features are extracted. With these features, we perform semantic segmentation of pixels of interest, in our case the archaeological structures, through a supervised classifier trained to detect these pixels. Later, the stages of our method will be explained in detail.
The proposed method receives an image of the site of interest in the RGB color space. We apply a color space transformation (RGB to HSL) to generate a new image, and use this new image to generate additional images with different filters; the color space transformation is necessary because of the type of filters we use. Once all of the generated images (color spaces and filters) are available, their channel values are used as features. Based on these features, a classifier is trained to classify pixels and thereby detect the archaeological structure. As a result, we can build a new binary image in which each white pixel corresponds to a pixel of an archaeological structure and each black pixel corresponds to a pixel that does not belong to one. Finally, we use the RGB image and the binary image to remove all vegetation from the RGB image and isolate the archaeological structure, generating a segmented RGB image as a result. The general process followed by our proposed method can be seen in Figure 1. In addition, Algorithm 1 shows the process of our method in pseudocode. The following sections describe each stage in more detail.
Algorithm 1 Workflow of our proposed method.
1:  procedure ProposedMethod(rgbImage)
        ▷ Color space transformation
2:      hslImage ← TransformRGBtoHSL(rgbImage)
        ▷ Filters
3:      laplacian, canny, sobel ← ApplyFilters(hslImage)
        ▷ Detection of archaeological structures
4:      trainingSet ← GenerateTrainingSet(rgbImage, hslImage, laplacian, canny, sobel)
        ▷ Classify pixels
5:      pixelsClassified ← Classifier(trainingSet)
        ▷ Get structure as image
6:      structureImage ← GetStructureFromImage(rgbImage, pixelsClassified)
7:      return structureImage
8:  end procedure
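To make this workflow concrete, the following is a minimal end-to-end sketch in Python, assuming OpenCV and scikit-learn (the paper does not name its implementation). The function and variable names are ours, the Canny thresholds and filter parameters are illustrative, and `classifier` stands for any estimator trained on pixels labeled as in Section 3.4.

```python
import cv2
import numpy as np

def proposed_method(rgb_image, classifier):
    """Sketch of Algorithm 1: RGB -> HSL, filters, per-pixel features,
    pixel classification, and blacking out non-structure pixels."""
    hls = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HLS)  # OpenCV names this space HLS
    # 6 color features (RGB + HSL channels) ...
    planes = [rgb_image[:, :, i] for i in range(3)] + [hls[:, :, i] for i in range(3)]
    # ... plus 9 filter responses (Canny, Laplacian, Sobel on each HSL channel)
    for i in range(3):
        ch = hls[:, :, i]
        planes.append(cv2.Canny(ch, 100, 200))  # thresholds are illustrative
        planes.append(cv2.convertScaleAbs(cv2.Laplacian(ch, cv2.CV_16S)))
        gx = cv2.Sobel(ch, cv2.CV_16S, 1, 0)
        gy = cv2.Sobel(ch, cv2.CV_16S, 0, 1)
        planes.append(cv2.convertScaleAbs(cv2.addWeighted(gx, 0.5, gy, 0.5, 0)))
    features = np.dstack(planes).astype(np.float32)
    h, w, d = features.shape
    labels = classifier.predict(features.reshape(-1, d)).reshape(h, w)
    # Keep RGB values only where the classifier predicted "structure" (label 1 here)
    return rgb_image * (labels == 1)[:, :, None].astype(np.uint8)
```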

3.1. Obtaining Images

The free application SASPlanet [34] was used to obtain images of archaeological sites in the Mayan zone of Mexico and Central America. SASPlanet was designed for viewing and downloading high-resolution satellite images from servers such as Google Earth, Google Maps, Bing Maps, Nokia, Here, Yahoo, Yandex, OpenStreetMap, ESRI, and Navteq. The process followed to download the images was to identify the site of interest, select an area (see Figure 2a), choose the image quality, select the image format (in our work we used the .tif format, although any other format can be used), select the post-processing options, and save the georeferenced information (see Figure 2b). The generated image corresponds to the site of interest; the spectral range is 380 to 700 nm (visible light), using three bands (RGB), with a spatial resolution of 15 m and a swath width at the nadir of 13.1 km. Figure 2 shows the process followed to obtain the images using SASPlanet.

3.2. Color Space Transformation

Many color models are currently in use because the science of color is a broad field encompassing many application areas [7]. In this work, we use two color spaces (RGB and HSL), since transforming images to a non-RGB space has been shown to improve classification performance [35]. These two color spaces are described below:
  • RGB color space: The typical format uses a 24-bit encoding in which the image is built from three primary colors, red (R), green (G), and blue (B), each stored in a separate channel, as processed by cameras and computers. With 8 bits per channel, each channel takes 256 distinct values, giving 16,777,216 possible color combinations. This color space uses additive color mixing to create the final image; we split the image into its three channels (R, G, and B) and treat each one as a distinct feature.
  • HSL color space: This consists of three channels: H (hue), which represents the pure color (e.g., red, green, or blue); L (lightness), which accounts for the amount of light in the image, with low values tending toward black; and S (saturation), which determines whether a color appears washed-out (gray) or keeps its original vividness. This color space allows us to process the image in separate planes, because color is represented by hue and lightness, while saturation can be used as a masking image that isolates the area of interest [7].
In our experiments, changing the color space from RGB to HSL helped us obtain filtered images containing more information, due to the composition of HSL. This can be seen in Figure 3, which shows images generated from both color spaces with different filters. The figure shows that the filters (especially the Canny and Laplacian filters) emphasize more of the pixels corresponding to vegetation in the HSL color space than in the RGB color space.
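As a minimal sketch of this transformation, assuming OpenCV (the file name is hypothetical; note that OpenCV names this color space "HLS", with channel order H, L, S):

```python
import cv2

bgr = cv2.imread("site.tif")                # hypothetical file name
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # OpenCV loads images as BGR
hls = cv2.cvtColor(rgb, cv2.COLOR_RGB2HLS)  # H (hue), L (lightness), S (saturation)
h, l, s = cv2.split(hls)                    # individual planes for filtering
```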

3.3. Filters

Filters are used to achieve specific objectives when processing digital images. These objectives include reducing the intensity variation between adjacent pixels (smoothing the image); eliminating noise, i.e., identifying and removing pixels whose intensity differs significantly from that of their neighbors (such noise can originate in the acquisition or transmission of the image); boosting edges, i.e., highlighting the edges located in an image; and detecting edges, i.e., pixels where intensity changes sharply. These operations are applied to the pixels of an image to optimize it, emphasize specific information, or achieve a particular effect. Filtering can be performed in the spatial domain and/or the frequency domain [36].
For better feature extraction (structure versus non-structure), we tested the mean, median, variance, convolution, bilateral, Canny, Laplacian, and Sobel filters. We selected the Canny, Laplacian, and Sobel filters because they produced better results than the others, allowing us to improve the detection of archaeological sites in areas with a lot of vegetation.
  • The Canny filter is based on the first derivative of a Gaussian function. However, since raw image data is often affected by noise, the original image is preprocessed with a Gaussian filter to reduce the noise. This results in a slightly blurred version of the original image [37].
  • The Laplacian filter, on the other hand, is an edge detector that calculates the second derivatives of an image, measuring the rate of change of the first derivative. It is used to determine whether a change in adjacent pixel values represents an edge or a continuous progression [37].
  • Finally, the Sobel filter uses a small, separable, integer-valued filter in both the horizontal and vertical directions to convolve the image. This filter is relatively inexpensive in terms of runtime [37].
The objective of using the different images generated from the RGB image is to include different features (information) in images, to improve the detection of archaeological structures by a classification algorithm.
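A sketch of the filtering stage under the same OpenCV assumption follows; the hysteresis thresholds and kernel sizes are illustrative choices of ours, not values reported in the paper.

```python
import cv2

def apply_filters(hls):
    """Apply Canny, Laplacian, and Sobel to each HLS channel,
    returning one list of response images per filter."""
    canny, laplacian, sobel = [], [], []
    for i in range(3):
        ch = hls[:, :, i]
        canny.append(cv2.Canny(ch, 100, 200))  # hysteresis thresholds
        laplacian.append(cv2.convertScaleAbs(cv2.Laplacian(ch, cv2.CV_16S, ksize=3)))
        gx = cv2.Sobel(ch, cv2.CV_16S, 1, 0, ksize=3)  # horizontal gradient
        gy = cv2.Sobel(ch, cv2.CV_16S, 0, 1, ksize=3)  # vertical gradient
        sobel.append(cv2.convertScaleAbs(cv2.addWeighted(gx, 0.5, gy, 0.5, 0)))
    return canny, laplacian, sobel
```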

3.4. Detection of Archaeological Structures

This stage aims to determine whether or not a pixel in the image corresponds to an archaeological structure. This is achieved by applying a supervised classifier that uses a training set of previously labeled pixels, described using the images generated with the color spaces (RGB and HSL) and filters (Canny, Laplacian, and Sobel). In this way, the pixels of new images can be classified using the information of the pixels in the training set. In our proposal, the pixels in the training set were labeled manually using the ImageJ software (https://imagej.nih.gov/ij/, accessed on 14 May 2023). For this, we drew a few rectangles on areas defined as “structure” or “non-structure”, and all pixels in each rectangle were labeled accordingly. Each pixel of type “structure” or “non-structure” is described by a feature vector with 15 entries: the pixel values in each channel of the RGB and HSL images, and the pixel values in each channel after applying the three filters (Canny, Laplacian, and Sobel) to the HSL image. Thus, we obtain vectors of the form (R, G, B, H, S, L, C1, C2, C3, L1, L2, L3, S1, S2, S3), where C1–C3, L1–L3, and S1–S3 denote the Canny, Laplacian, and Sobel responses on the three HSL channels, respectively. This labeling produces a set of structure and non-structure pixels described by fifteen features, which constitutes the training set for a supervised classifier. Figure 4 shows an example of the pixels selected as a training set for the Calakmul area; the image is shown in the RGB color space, with structure pixels in red and non-structure pixels in blue. This example shows that the number of pixels selected as a training set is very small and easy to obtain compared with manually labeling the whole image. Additionally, using a small training set positively impacts the time required for training a classifier. Once the training set is built, a classifier is trained to classify all of the image pixels as structure or non-structure, i.e., to detect (if it exists) the archaeological structure based on the small training set.
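The construction of the training set from a few labeled rectangles can be sketched as follows; the rectangle coordinates are hypothetical stand-ins for selections exported from ImageJ, and `features` is the H × W × 15 stack described above.

```python
import numpy as np

def training_set_from_rectangles(features, structure_rects, background_rects):
    """Build (X, y) from manually drawn rectangles given as
    (x0, y0, x1, y1) tuples; label 1 = structure, 0 = non-structure."""
    X, y = [], []
    for rects, label in ((structure_rects, 1), (background_rects, 0)):
        for x0, y0, x1, y1 in rects:
            patch = features[y0:y1, x0:x1].reshape(-1, features.shape[2])
            X.append(patch)
            y.append(np.full(len(patch), label, dtype=np.int64))
    return np.vstack(X), np.concatenate(y)
```

Feeding the returned (X, y) to any of the classifiers used in Section 4 then yields the trained pixel model.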

4. Experiments

To assess the proposed method, we applied it to images of four archaeological sites: Calakmul, Palenque, Comalcalco, and Tikal. For this, we manually selected small training sets of pixels, as explained in the previous section. Some images of the archaeological sites used in this experiment can be seen in Figure 5. To show the performance of our proposal, we performed three evaluations. First, we evaluated our method by applying five-fold cross-validation on the selected training set. As a second evaluation, we used the chosen training set on an image to classify all of the pixels in the image. Then, we assessed the classification by comparing the classifier’s result against the fully labeled image. Finally, a third evaluation is presented to show the performance of our proposal in a more realistic scenario; for this, we used high-resolution satellite images and graphically show the archaeological structures detected by our proposed method.
For our first evaluation, we selected training sets from four archaeological sites: Calakmul, Palenque, Comalcalco, and Tikal. For this, a training set was created for each archaeological site, manually selecting rectangles of different sizes on areas with structures and non-structures. Table 1 shows the sizes of the training sets for each archaeological site. Since the selection was made manually, the number of pixels selected for each image varies due to the size of the selected rectangles. The pixels selected for the Calakmul area can be seen in Figure 4.
To evaluate the performance of our proposal, we applied five-fold cross-validation over the training set to obtain an evaluation of the classifiers. This process consists of creating five subsets, called folds, each with 20% of the training set and each containing the same proportion of structure and non-structure pixels as the whole training set. Then, we take four folds to train the classifier and one fold to test it and evaluate its classification quality. This process is repeated five times, taking each of the five folds as the test set, and the values reached by the quality measures over the five evaluations are averaged and reported. Although semantic segmentation is a major part of our work, we decided not to use segmentation-based performance measures, such as IoU and the BF score, since in a real scenario whole images are not segmented a priori. Thus, the measures used to evaluate the classifiers in our experiments were precision, recall, F1 score, and accuracy, since they are widely used to evaluate the performance of a classifier. To compute these measures, it is necessary to know the values TP (true positives = the number of structure pixels correctly classified as structure), TN (true negatives = the number of non-structure pixels correctly classified as non-structure), FP (false positives = the number of non-structure pixels wrongly classified as structure), and FN (false negatives = the number of structure pixels wrongly classified as non-structure), where the positive class corresponds to structure pixels and the negative class to non-structure pixels. The measures are computed with the following equations:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

$$\mathrm{F1\ Score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$
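Assuming scikit-learn, the five-fold stratified protocol and the four macro-averaged measures described above can be computed as follows (the fold seed is an arbitrary choice of ours):

```python
from sklearn.model_selection import StratifiedKFold, cross_validate

def evaluate(classifier, X, y):
    """Five-fold stratified cross-validation reporting the four measures
    above; macro averaging matches the reporting used in Table 2."""
    folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scoring = {"precision": "precision_macro", "recall": "recall_macro",
               "f1": "f1_macro", "accuracy": "accuracy"}
    results = cross_validate(classifier, X, y, cv=folds, scoring=scoring)
    return {m: results[f"test_{m}"].mean() for m in scoring}
```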
Table 2 shows the results obtained with the different measures using different supervised classifiers: KNN (K-nearest neighbors, using Euclidean distance with K = 5), NN (a neural network, with the Adam optimizer, hidden layer sizes of (5, 5, 7), a learning rate of 0.001, an alpha value of $1 \times 10^{-5}$, and an epsilon of $1 \times 10^{-8}$), RF (random forest, with 20 trees and a maximum depth of 3), SVM (support vector machine, using a polynomial kernel), and LR (logistic regression, with the liblinear optimizer and a maximum of 1000 iterations). In Table 2, most of the values are higher than 0.98, because the results were obtained based only on the pixels of the training set.
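The configurations listed above map naturally onto scikit-learn estimators; the following sketch assumes scikit-learn was the library used (the paper does not say) and leaves all unstated parameters at their defaults:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
    "NN": MLPClassifier(solver="adam", hidden_layer_sizes=(5, 5, 7),
                        learning_rate_init=0.001, alpha=1e-5, epsilon=1e-8),
    "RF": RandomForestClassifier(n_estimators=20, max_depth=3),
    "SVM": SVC(kernel="poly"),
    "LR": LogisticRegression(solver="liblinear", max_iter=1000),
}
```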
The second evaluation uses a fully labeled image to assess how our proposed method classifies unlabeled pixels. We select a small training set, as explained in the previous section, label all of the pixels in the image by applying our method, and evaluate the classifier’s quality using the same measures. These measures were obtained by comparing the labels assigned by the classifier against the correct labels in the fully, manually labeled image. The image used for this experiment corresponds to a small portion, 599 × 512 pixels in size, of the high-resolution image of the archaeological site of Calakmul. The training set selected from this image contained 142 structure pixels and 256 non-structure pixels. Figure 6 shows this small image and the pixels selected for the training set. Figure 7 shows the fully labeled image, which we labeled manually; the yellow outline delimits the areas corresponding to the archaeological structure. The results can be seen in Table 3; they are lower than those of the previous experiment because the classifier wrongly labeled some pixels as non-structure compared with the fully labeled image. This labeling error is due to the shadows cast on the structures. Nevertheless, these results are fairly good, ranging from 92% to 97%.
As a third evaluation, we show how our proposed method works in practice, using large images from each of the archaeological sites. As an example, we show all of the intermediate results only for the image of the Calakmul site; for the other sites, we show only the image of the detected archaeological site output by our method. We selected Calakmul to show all of the steps of our method because its structures are largely hidden by vegetation. The images of all the steps corresponding to the other archaeological sites, and their results, can be seen and downloaded from (https://ccc.inaoep.mx/~ariel/Detection-of-archaeological-structures/, accessed on 14 May 2023). For the Calakmul site, the image in the HSL color space and the filtered images (Laplacian, Canny, and Sobel) were generated; they can be seen in Figure 8. In this figure, especially in the HSL image, a large part of the vegetation can be differentiated, which is a good result for our method, although this does not happen with all images. It can also be seen that the Canny and Laplacian filters better represent the vegetation, while the Sobel filter emphasizes the edges of the structure.
Since all of the classifiers obtained similar results, we show only the results obtained with the neural network (NN) classifier. Figure 9b shows the binary image generated by this classifier. Comparing it with Figure 9a, a large part of the vegetation was correctly labeled, so our method detected the structure pixels successfully. However, it is important to mention that shadows in the images affected the classification of the pixels, since structure pixels in shadow are similar to vegetation pixels in shadow. Therefore, providing the classifier with shadow-free images is advisable, since this leads to better classification results and thus better detection of archaeological sites.
Using the binary image generated by the classifier, we segmented the archaeological site detected by our method in the RGB image by changing the color of the pixels labeled as non-structure to black, thus isolating the archaeological structure from the vegetation. The result can be seen in Figure 10b; comparing it with Figure 10a shows that a large part of the structure was correctly segmented.
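This masking step amounts to a per-pixel AND between the RGB image and the binary image; a minimal sketch, again assuming OpenCV:

```python
import cv2
import numpy as np

def segment_structure(rgb_image: np.ndarray, binary_mask: np.ndarray) -> np.ndarray:
    """Black out non-structure pixels: keep RGB values only where the
    classifier's binary mask (255 = structure, 0 = non-structure) is set."""
    return cv2.bitwise_and(rgb_image, rgb_image, mask=binary_mask)
```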
Figure 11 shows the results obtained at the Palenque, Comalcalco, and Tikal archaeological sites, showing both the RGB images and the segmented RGB images. The archaeological sites of Palenque and Comalcalco obtained the best results, since the quality of these images is better. The errors in the Tikal and Calakmul images are due to shadows and to the similarity between the soil and the archaeological structures, which made it difficult for the classifier to discriminate between structure and non-structure pixels.
Finally, to validate our proposed method, we trained our classifiers on another archaeological site (Dzibilchaltun) and evaluated the results on the Xcambó, Mayapán, and Aké sites. We used only one satellite source (Google Maps) to obtain these images, because using different satellites would produce images of different resolutions, affecting the classifiers’ performance. The results can be seen in Figure 12, which shows that our proposed method can detect archaeological sites without prior labeling of the target image, with good results.

5. Discussion

Although at first sight it seems that pixels belonging to an archaeological structure can be distinguished from the rest just by looking at the RGB image with the naked eye, for a computer this task is not so immediate. The experiments show the usefulness of our proposed method for quickly identifying archaeological structures across large areas of land with a lot of vegetation. Our method reduces the time archaeologists must spend selecting pixels when identifying structures in large images, freeing them for more significant duties. Our experiments also showed that appropriate filters and color spaces, in synergy with a supervised classifier, make the detection of archaeological structures possible. The difference between our method and other approaches is its focus on images covering large areas of land with a lot of vegetation; hence, using color spaces and filters was vital to highlight information helpful to the classifier.
The main limitation of our method concerns images with shadows (caused by vegetation, structures, or weather), which prevent the classifiers from discriminating between archaeological structures and vegetation. Additionally, we noticed that structures partially covered by grass, trees, or soil also cause classification errors. These types of images remain a challenge that deserves study in future work. Nevertheless, as our experiments show, the results obtained with our proposal are suitable for detecting archaeological structures in areas with a lot of vegetation, which are usually difficult to access.

6. Conclusions and Future Work

This paper proposes a method for identifying archaeological structures in satellite images of areas with extensive vegetation. We show that image processing techniques combined with machine learning provide an effective solution for detecting archaeological sites, given their low cost and how difficult it would be for archaeologists to perform this task manually. The proposed method significantly reduces the time spent manually labeling pixels: it is only necessary to label a small portion of the pixels, and a classifier then labels the whole image. Using different color spaces and filters helped to produce adequate results at archaeological sites with a lot of vegetation. The proposed method showed acceptable performance in detecting archaeological structures, although, according to our experiments, it is undoubtedly affected by shadows in the images obtained through SASPlanet. Using images with good resolution, captured under optimal environmental conditions, could help to achieve even better performance. The results obtained with our proposal are promising for detecting archaeological structures in areas that are difficult to access.
In future work, we will evaluate other color spaces and filters for detecting archaeological structures in images containing shadows. It is also important to develop a method for detecting archaeological structures in semi-desert areas or areas with less vegetation, because the soil type and the structures’ material degrade the detection performance of the method proposed in this paper.

Author Contributions

All authors contributed equally to the work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by the National Council of Humanities, Sciences, Technologies and Innovation of Mexico (CONAHCyT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research was partially supported by the National Council of Humanities, Sciences, Technologies and Innovation of Mexico (CONAHCyT) through its graduate study scholarship program.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chaudhuri, D.; Kushwaha, N.K.; Samal, A.; Agarwal, R.C. Automatic Building Detection From High-Resolution Satellite Images Based on Morphology and Internal Gray Variance. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1767–1779. [Google Scholar] [CrossRef]
  2. Sirmacek, B.; Unsalan, C. A Probabilistic Framework to Detect Buildings in Aerial and Satellite Images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 211–221. [Google Scholar] [CrossRef]
  3. Chen, X.; Xiang, S.; Liu, C.L.; Pan, C.H. Vehicle Detection in Satellite Images by Hybrid Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1797–1801. [Google Scholar] [CrossRef]
  4. Sun, X.; Wang, P.; Wang, C.; Liu, Y.; Fu, K. PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2021, 173, 50–65. [Google Scholar] [CrossRef]
  5. Can, G.; Mantegazza, D.; Abbate, G.; Chappuis, S.; Giusti, A. Semantic segmentation on Swiss3DCities: A benchmark study on aerial photogrammetric 3D pointcloud dataset. Pattern Recognit. Lett. 2021, 150, 108–114. [Google Scholar] [CrossRef]
  6. Bourgeois, J.; Meganck, M. Aerial Photography and Archaeology 2003: A Century of Information; Papers Presented During the Conference Held at the Ghent University, December 10th–12th, 2003; Academia Press: Cambridge, MA, USA, 2005; Volume 4. [Google Scholar]
  7. Gonzalez, R.C.; Richard, E. Woods Digital Image Processing; Pearson: London, UK, 2018. [Google Scholar]
  8. Caspari, G. Mapping and damage assessment of “royal” burial mounds in the Siberian Valley of the Kings. Remote Sens. 2020, 12, 773. [Google Scholar] [CrossRef]
  9. Luo, L.; Wang, X.; Guo, H.; Lasaponara, R.; Zong, X.; Masini, N.; Wang, G.; Shi, P.; Khatteli, H.; Chen, F.; et al. Airborne and spaceborne remote sensing for archaeological and cultural heritage applications: A review of the century (1907–2017). Remote Sens. Environ. 2019, 232, 111280. [Google Scholar] [CrossRef]
  10. Caspari, G.; Crespo, P. Convolutional neural networks for archaeological site detection–Finding “princely” tombs. J. Archaeol. Sci. 2019, 110, 104998. [Google Scholar] [CrossRef]
  11. Traviglia, A.; Torsello, A. Landscape pattern detection in archaeological remote sensing. Geosciences 2017, 7, 128. [Google Scholar] [CrossRef]
  12. Lambers, K.; Verschoof-van der Vaart, W.B.; Bourgeois, Q.P. Integrating remote sensing, machine learning, and citizen science in Dutch archaeological prospection. Remote Sens. 2019, 11, 794. [Google Scholar] [CrossRef]
  13. Soroush, M.; Mehrtash, A.; Khazraee, E.; Ur, J.A. Deep learning in archaeological remote sensing: Automated qanat detection in the kurdistan region of Iraq. Remote Sens. 2020, 12, 500. [Google Scholar] [CrossRef]
  14. Gansell, A.R.; van de Meent, J.W.; Zairis, S.; Wiggins, C.H. Stylistic clusters and the Syrian/South Syrian tradition of first-millennium BCE Levantine ivory carving: A machine learning approach. J. Archaeol. Sci. 2014, 44, 194–205. [Google Scholar] [CrossRef]
  15. Hörr, C.; Lindinger, E.; Brunnett, G. Machine learning based typology development in archaeology. J. Comput. Cult. Herit. (JOCCH) 2014, 7, 1–23. [Google Scholar] [CrossRef]
  16. Wilczek, J.; Monna, F.; Gabillot, M.; Navarro, N.; Rusch, L.; Chateau, C. Unsupervised model-based clustering for typological classification of Middle Bronze Age flanged axes. J. Archaeol. Sci. Rep. 2015, 3, 381–391. [Google Scholar] [CrossRef]
  17. Barone, G.; Mazzoleni, P.; Spagnolo, G.V.; Raneri, S. Artificial neural network for the provenance study of archaeological ceramics using clay sediment database. J. Cult. Herit. 2019, 38, 147–157. [Google Scholar] [CrossRef]
  18. Adams, R.E.; Brown, W.E.; Culbert, T.P. Radar mapping, archaeology, and ancient Maya land use. Science 1981, 213, 1457–1468. [Google Scholar] [CrossRef]
  19. Inomata, T.; Triadan, D.; López, V.A.V.; Fernandez-Diaz, J.C.; Omori, T.; Bauer, M.B.M.; Hernández, M.G.; Beach, T.; Cagnato, C.; Aoyama, K.; et al. Monumental architecture at Aguada Fénix and the rise of Maya civilization. Nature 2020, 582, 530–533. [Google Scholar] [CrossRef]
  20. Hansen, R.D.; Morales-Aguilar, C.; Thompson, J.; Ensley, R.; Hernández, E.; Schreiner, T.; Suyuc-Ley, E.; Martínez, G. LiDAR analyses in the contiguous Mirador-Calakmul Karst Basin, Guatemala: An introduction to new perspectives on regional early Maya socioeconomic and political organization. Anc. Mesoam. 2022, 1–40. [Google Scholar] [CrossRef]
  21. Monna, F.; Magail, J.; Rolland, T.; Navarro, N.; Wilczek, J.; Gantulga, J.O.; Esin, Y.; Granjon, L.; Allard, A.C.; Chateau-Smith, C. Machine learning for rapid mapping of archaeological structures made of dry stones–Example of burial monuments from the Khirgisuur culture, Mongolia–. J. Cult. Herit. 2020, 43, 118–128. [Google Scholar] [CrossRef]
  22. Lindsay, I.; Mkrtchyan, A. Free and Low-Cost Aerial Remote Sensing in Archaeology: An Overview of Data Sources and Recent Applications in the South Caucasus. Adv. Archaeol. Pract. 2023, 1–20. [Google Scholar] [CrossRef]
  23. Jiang, H.; Peng, M.; Zhong, Y.; Xie, H.; Hao, Z.; Lin, J.; Ma, X.; Hu, X. A Survey on Deep Learning-Based Change Detection from High-Resolution Remote Sensing Images. Remote Sens. 2022, 14, 1552. [Google Scholar] [CrossRef]
  24. UIS. World Heritage in Danger. 2021. Available online: https://whc.unesco.org/en/158/ (accessed on 14 May 2023).
  25. Levin, N.; Ali, S.; Crandall, D.; Kark, S. World Heritage in danger: Big data and remote sensing can help protect sites in conflict zones. Glob. Environ. Chang. 2019, 55, 97–104. [Google Scholar] [CrossRef]
  26. Davis, D.S. Geographic disparity in machine intelligence approaches for archaeological remote sensing research. Remote Sens. 2020, 12, 921. [Google Scholar] [CrossRef]
  27. Van der Maaten, L.; Boon, P.; Lange, G.; Paijmans, H.; Postma, E. Computer Vision and Machine Learning for Archaeology. In Proceedings of the 34th Computer Applications and Quantitative Methods in Archaeology, Fargo, ND, USA, 18–21 April 2006; pp. 112–130. [Google Scholar]
  28. Character, L.; Ortiz, A., Jr.; Beach, T.; Luzzadder-Beach, S. Archaeologic Machine Learning for Shipwreck Detection Using Lidar and Sonar. Remote Sens. 2021, 13, 1759. [Google Scholar] [CrossRef]
  29. Pinilla-Buitrago, L.A.; Carrasco-Ochoa, J.A.; Martinez-Trinidad, J.F. Including Foreground and Background Information in Maya Hieroglyph Representation. In Proceedings of the Mexican Conference on Pattern Recognition, Puebla, Mexico, 27–30 June 2018; pp. 238–247. [Google Scholar]
  30. Pinilla-Buitrago, L.A.; Carrasco-Ochoa, J.A.; Martínez-Trinidad, J.F.; Román-Rangel, E. Improved hieroglyph representation for image retrieval. J. Comput. Cult. Herit. (JOCCH) 2019, 12, 1–15. [Google Scholar] [CrossRef]
  31. Hollesen, J.; Jepsen, M.S.; Harmsen, H. The Application of RGB, Multispectral, and Thermal Imagery to Document and Monitor Archaeological Sites in the Arctic: A Case Study from South Greenland. Drones 2023, 7, 115. [Google Scholar] [CrossRef]
  32. Liu, Y.; Hu, Q.; Wang, S.; Zou, F.; Ai, M.; Zhao, P. Discovering the Ancient Tomb under the Forest Using Machine Learning with Timing-Series Features of Sentinel Images: Taking Baling Mountain in Jingzhou as an Example. Remote Sens. 2023, 15, 554. [Google Scholar] [CrossRef]
  33. Bachagha, N.; Elnashar, A.; Tababi, M.; Souei, F.; Xu, W. The Use of Machine Learning and Satellite Imagery to Detect Roman Fortified Sites: The Case Study of Blad Talh (Tunisia Section). Appl. Sci. 2023, 13, 2613. [Google Scholar] [CrossRef]
  34. Demydov, V. SAS Planet. Available online: http://www.sasgis.org/ (accessed on 14 May 2023).
  35. Seong, H.; Son, H.; Kim, C. A comparative study of machine learning classification for color-based safety vest detection on construction-site images. KSCE J. Civ. Eng. 2018, 22, 4254–4262. [Google Scholar] [CrossRef]
  36. Castillo, R.; Hernández, J.M.; Inzunza, E.; Torres, J.P. Procesamiento Digital de Imágenes Empleando Filtros Espaciales. In Proceedings of the Décima Segunda Conferencia Iberoamericana en Sistemas, Cibernética e Informática: CISCI 2013, Orlando, FL, USA, 9–12 July 2013. [Google Scholar]
  37. Bradski, G.; Kaehler, A.; et al. (Intel Corporation). Open Source Computer Vision Library. Available online: https://docs.opencv.org (accessed on 14 May 2023).
Figure 1. Proposed method for the detection of archaeological structures.
Figure 2. (a,b) The image acquisition process for the archaeological sites using SASPlanet.
Figure 3. Comparison of images of the archaeological site of Calakmul obtained by applying filters (Canny (a,d); Laplacian (b,e); Sobel (c,f)) in different color spaces (RGB (a–c) and HSL (d–f)).
Figure 4. Manually labeled image. Red represents structure samples, and blue represents non-structure samples.
Figure 5. Archaeological sites used for the experiments. (a) Calakmul, Mexico. (b) Palenque, Mexico. (c) Comalcalco, Mexico. (d) Tikal, Guatemala.
Figure 6. Training set for the manually labeled image. Red represents structure, and blue represents non-structure.
Figure 7. Fully, manually labeled image corresponding to a structure of Calakmul.
Figure 8. Results of changing the color space to HSL and applying the filters to the satellite image of Calakmul: (a) HSL; (b) Laplacian; (c) Canny; (d) Sobel.
Figure 9. Detection of the archaeological site of Calakmul. (a) Satellite image of Calakmul in RGB. (b) Structure of the archaeological site detected by our method.
Figure 10. (a) Satellite image of Calakmul in RGB, and (b) structure detected by our method in RGB.
Figure 11. Detection results of the proposed method for the Palenque, Comalcalco, and Tikal archaeological sites. (a) Satellite image of Palenque, Mexico. (b) Structure detected at Palenque, Mexico. (c) Satellite image of Comalcalco, Mexico. (d) Structure detected at Comalcalco, Mexico. (e) Satellite image of Tikal, Guatemala. (f) Structure detected at Tikal, Guatemala.
Figure 12. Detection results of the proposed method for the Dzibilchaltun, Xcambó, Mayapán, and Aké archaeological sites. (a) Satellite image of Dzibilchaltun, Mexico. (b) Structure detected at Dzibilchaltun, Mexico. (c) Satellite image of Aké, Mexico. (d) Structure detected at Aké, Mexico. (e) Satellite image of Xcambó, Mexico. (f) Structure detected at Xcambó, Mexico. (g) Satellite image of Mayapán, Mexico. (h) Structure detected at Mayapán, Mexico.
Table 1. Training set size for the four archaeological sites.

Archaeological Site | Structure | Non-Structure
Calakmul   | 178 | 222
Palenque   | 133 | 204
Comalcalco | 193 | 564
Tikal      | 113 | 192
Table 2. Results obtained for different classifiers performing five-fold cross-validation over the training set of each archaeological site. The results obtained for precision, recall, and F1 score correspond to the macro average.

Algorithm | Precision | Recall | F1 Score | Accuracy
Calakmul
KNN | 1.00 | 1.00 | 1.00 | 1.00
NN  | 1.00 | 1.00 | 1.00 | 1.00
RF  | 1.00 | 1.00 | 1.00 | 1.00
SVM | 1.00 | 1.00 | 1.00 | 1.00
LR  | 1.00 | 1.00 | 1.00 | 1.00
Palenque
KNN | 1.00 | 0.99 | 0.99 | 0.99
NN  | 1.00 | 1.00 | 1.00 | 1.00
RF  | 1.00 | 1.00 | 1.00 | 1.00
SVM | 1.00 | 0.96 | 0.98 | 0.98
LR  | 1.00 | 1.00 | 1.00 | 1.00
Comalcalco
KNN | 1.00 | 0.97 | 0.98 | 0.99
NN  | 0.99 | 0.98 | 0.99 | 0.99
RF  | 1.00 | 0.99 | 0.99 | 0.99
SVM | 0.98 | 1.00 | 0.99 | 0.99
LR  | 1.00 | 0.99 | 0.99 | 0.99
Tikal
KNN | 1.00 | 1.00 | 1.00 | 1.00
NN  | 1.00 | 1.00 | 1.00 | 1.00
RF  | 1.00 | 1.00 | 1.00 | 1.00
SVM | 1.00 | 1.00 | 1.00 | 1.00
LR  | 1.00 | 1.00 | 1.00 | 1.00
Table 3. Results obtained for the different classifiers over the whole image. The results obtained for precision, recall, and F1 score correspond to the macro average.

Algorithm | Precision | Recall | F1 Score | Accuracy
KNN | 0.97 | 0.92 | 0.94 | 0.94
NN  | 0.95 | 0.94 | 0.94 | 0.96
RF  | 0.96 | 0.92 | 0.94 | 0.96
SVM | 0.96 | 0.95 | 0.95 | 0.97
LR  | 0.96 | 0.94 | 0.95 | 0.97