Article

Automated Photogrammetric Tool for Landslide Recognition and Volume Calculation Using Time-Lapse Imagery

1 Department of Infrastructure, Second Xiangya Hospital, Central South University, Changsha 410011, China
2 Department of Civil, Environmental and Architectural Engineering, University of Padova, Via Ognissanti 39, 35129 Padova, Italy
3 IATE, University of Montpellier, INRAE, Institut Agro, 2 Place Pierre Viala, 34060 Montpellier, France
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(17), 3233; https://doi.org/10.3390/rs16173233
Submission received: 3 July 2024 / Revised: 23 August 2024 / Accepted: 27 August 2024 / Published: 31 August 2024
(This article belongs to the Special Issue Remote Sensing in Civil and Environmental Engineering)

Abstract

Digital photogrammetry has attracted widespread attention in geotechnical and geological surveying thanks to its low cost, ease of use, and contactless mode of operation. In this work, with the purpose of studying the progressive surficial block detachments of a landslide, we developed a monitoring system based on fixed multi-view time-lapse cameras. Thanks to a newly developed photogrammetric algorithm based on the comparison of photo sequences through a structural similarity metric and on the computation of the disparity map of two convergent views, we can quickly detect the occurrence of collapse events, determine their location, and calculate the collapsed volume. With the field data obtained at the Perarolo landslide site (Belluno Province, Italy), we conducted preliminary tests of the effectiveness of the algorithm and of its accuracy in the volume calculation. The method proposed in this paper for quickly and automatically obtaining collapse information can extend the potential of landslide monitoring systems based on videos or photo sequences, and it will be of great significance for further research on the link between the frequency of collapse events and their driving factors.

1. Introduction

In recent decades, digital photogrammetry has emerged as a powerful and cost-effective method for the 3D reconstruction of objects and surfaces across various scientific and engineering disciplines. Its advantages, including ease of use and the ability to mount cameras on drones, have led to its expanding application in engineering geology. This technology offers an affordable and efficient survey and monitoring solution compared to other terrestrial approaches such as laser scanning and radar interferometry. Most photogrammetric applications in this field focus on the morphological 3D reconstruction of slope surfaces and rock masses [1,2,3]. More recently, its use has been extended to landslide displacement monitoring [4,5,6], but, to the authors’ knowledge, there are still few applications specifically focused on sudden collapse events. In the mining sector, technologies using calibrated fixed-camera systems to monitor rock faces and quantify detached block volumes through point cloud comparison have been successfully tested [7]. Despite their success, these systems are not yet fully automatable. They typically rely on volume difference calculations and are prone to false positives when determining actual block detachment. Additionally, they may not be suitable for more complex surfaces such as landslides, where ongoing movement, vegetation growth, and weather events can significantly alter the color content of the images. The occurrence of sudden collapses and local surface sliding on nearly vertical slopes may represent precursor signals of landslide reactivation but is rarely taken into account due to the difficulty of detecting and quantifying these events with currently available technologies. Yet these events are crucial for understanding landslide behavior, assessing stability conditions, and forecasting potential evolution.
Conventional techniques, such as topographic surveys, generally fail to capture these events due to excessive deformations associated with collapses and the disruption of optical targets. Moreover, these methods often provide sparse spatial information, limited to the locations of optical prisms. Radar interferometry, while effective for detecting gradual movements, is inadequate for capturing large sudden displacements due to loss of signal coherence. Similarly, laser scanner surveys can estimate missing volumes but face challenges in long-term monitoring. In this context, time-lapse close-range photogrammetry offers a viable alternative. It allows for the prolonged observation of landslides, automatically obtaining detailed optical and spatial information from images. By comparing and analyzing image sequences from multiple cameras through appropriate image processing algorithms, local slip events and collapsed areas can be quickly identified.
The volume of collapsed material can be estimated based on the relative positions of the cameras. These data can then be used to hypothesize correlations between detachment events and driving factors such as rainfall and piezometric levels. Given the importance of automatic detection and susceptibility mapping for understanding landslide characteristics and risk assessment, numerous intelligent methods based on image analysis have been proposed [8,9,10]. Lei et al. [11] proposed an end-to-end change detection algorithm using a symmetric, fully convolutional network. Lv et al. [12] introduced a landslide detection method based on multiscale segmentation and the object-based majority voting of images. Convolutional neural network-based methods have been widely adopted in this field [10,13,14]. Additionally, Lu et al. [15] presented an object-oriented change detection method for rapid landslide mapping. While these algorithms can detect landslides over large areas using aerial or satellite images, they typically operate at low temporal resolution and often fail to detect detachments on nearly vertical slopes. In contrast, our approach focuses on terrestrial photo sequences at high spatial and medium-high temporal resolution. Terrestrial photos offer the advantage of adjustable spatial resolution (e.g., by using different lenses or changing the camera distance) and temporal resolution.
Among image comparison algorithms suitable for multitemporal sequences, simple pixel-based methods (e.g., RMSE, PSNR) are effective in controlled environments [16]. However, these methods perform poorly with natural slopes, where factors such as local erosion and variations in lighting and environmental conditions can significantly impact the images. Similarly, feature-based algorithms (e.g., SIFT [17], SURF [18,19]), which are commonly used for convergent image matching and the 3D reconstruction of landslide slopes, are also less effective for fixed-frame time-lapse imagery. While deep learning algorithms have the potential to be more robust, they are not applicable in this case due to the absence of a sufficiently large dataset of supporting images.
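As a point of reference for the pixel-based metrics mentioned above, RMSE and PSNR can be computed in a few lines. The sketch below is illustrative only; the images and constants are not from the monitoring system described here:

```python
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square error between two same-sized grayscale images."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means more similar."""
    err = rmse(a, b)
    return float("inf") if err == 0 else 20.0 * np.log10(max_val / err)

# Identical images give RMSE 0 and infinite PSNR; a shifted copy scores worse.
img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
shifted = np.roll(img, 1, axis=1)  # mimics a small scene change
print(rmse(img, shifted), psnr(img, shifted))
```

Both metrics respond to any global intensity change, which is exactly why they break down under the varying lighting conditions of a natural slope.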
In this paper, we introduce a novel algorithm that automatically analyzes and processes sequences of terrestrial images from multiple cameras by searching for areas where differences are evident in terms of luminance, contrast, and texture. Utilizing the structural similarity index measure (SSIM) algorithm [20] and a series of convolution filters, our program identifies collapse events and determines the collapsed areas. The stereo configuration of the cameras enables the calculation of the 3D shape and volume of the detached mass [7,21,22]. This stereo-camera system has been prototyped and tested in the field at the Perarolo landslide site (Belluno Province, Italy), where spontaneous slope collapses frequently occur. The system, comprising three cameras and a large spotlight, was installed on the opposite side of the valley. Nearly a year of daily and nightly photo sequences were analyzed. Additionally, three large-scale slope stabilization operations using blasting and excavators at the same site were employed to evaluate the effectiveness of the collapse detection method and the accuracy of volume estimation, which was compared to laser scanner surveys.

2. Methods

2.1. Architecture of the Algorithm

Our program is designed to work with fixed multi-view cameras to perform image acquisition, collapse detection, and the calculation of collapsed areas and volumes in a fully automatic and continuous manner, without any third-party intervention or manual operation. As shown in Figure 1, the process consists of two main phases: collapse recognition on the image plane through image comparison at different time steps, and identification and calculation of the collapsed area and volume. Before initiating the collapse recognition phase, wooded and grassy areas, if present, must be excluded from the analysis. The color and texture of these areas can change continuously due to wind and seasonal vegetation growth, hindering accurate comparison between successive images. This operation can be performed using a deep learning mask [23]. It should be noted that this limitation naturally restricts the method’s applicability in fully vegetated areas.
The second pre-processing step involves filtering out areas covered by shadows, which could introduce collapse artifacts, particularly due to changes in the sun position over time [24]. A detailed analysis of this aspect will be provided in Section 2.3.
The core of the algorithm uses the structural similarity index measure (SSIM) to compare pairs of images in sequence, generating a structural similarity map at each time interval [20,25]. To reduce noise caused by natural small variations in color intensity, brightness, and contrast, image convolution filters were applied to the similarity map. We employed a cascade of Gaussian and median convolution filters with different patch sizes for this purpose [26]. Based on the statistics of pixel values in the final structural similarity map, potential collapse areas can be identified, and a unique structural similarity index is obtained. This index is used to determine whether a collapse occurred between the images taken at times t(i) and t(i + 1). When the structural similarity index exceeds a certain threshold, the program signals the presence of a collapse and automatically initiates the calculation of the collapsed volume using images from a second camera. This second phase of the process involves comparing two point-clouds generated from the 3D reconstruction of the slope before and after the collapse. The details of this phase will be discussed in Section 2.4.

2.2. Structural Similarity Algorithm Approach

Among image comparison algorithms, the simplest and most reliable ones for this work seem to be those that compare, even locally (i.e., pixel by pixel or patch by patch), some of the key image components such as brightness, contrast, and texture. More advanced algorithms, such as those based on feature matching and commonly used to account for possible rotations or other spatial transformations of objects or cameras (e.g., SIFT [17], SURF [18,19]), are in fact overly complex for fixed-frame images such as those used in this paper. Other semantic-type algorithms that exploit convolutional neural networks are likewise over-refined and prone to over-fitting when changes are expected in only small portions of the images [27]. Indeed, note that the latter require a sufficiently large training set of images, whereas in this case, collapse events are generally few and site-dependent (i.e., the training set of images of one landslide cannot easily be used to refine the model for another type of landslide).
The structural similarity index measure (SSIM) algorithm is commonly used for assessing image quality and evaluating the performance of image compression algorithms and systems [28,29]. It measures the level of degradation of the image relative to a reference one, and although it is one of the most widely used metrics for defining loss functions during the training of (convolutional) neural networks [30,31], to the authors’ knowledge, it has never been used for this type of application.
In this study, we propose using SSIM as a local measure of structural changes to detect collapses in a sequence of images taken from a fixed camera oriented towards the slope. The SSIM calculates the structural similarity index (SSI) for each pixel in the image and generates a structural similarity map (SSM) by comparing a target photo with a reference photo [25,32]. The SSI is derived from the variation in the following three components between the two images or portions of images x and y: the luminance term, the contrast term, and the structural term, according to the following metric:
$$\mathrm{SSI}(x, y) = [l(x, y)]^{\alpha} \cdot [c(x, y)]^{\beta} \cdot [s(x, y)]^{\gamma}$$
where
$$l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}$$
$$c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}$$
$$s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}$$
and $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\sigma_{xy}$ are the local means, standard deviations, and cross-covariance for images x and y. The parameters $\alpha$, $\beta$, and $\gamma$ are the exponents for the luminance, contrast, and structural terms, respectively. Their value is usually set to one [33,34], and was kept as such in this work to give equal weight to the three visual perception components. $C_1$, $C_2$, and $C_3$ are small regularization constants for luminance, contrast, and texture and are used to avoid instability in image regions where the local mean or standard deviation is close to zero (i.e., denominator close to zero). In this case, their value was set to $1 \times 10^{-4}$.
The structural similarity map (SSM) is a grayscale map (values ranging from 0 to 1) with the same dimensions as the target image. It consists of the SSI values for each pixel of the target image. Figure 2 illustrates an example of an SSM applied to two consecutive images of a landslide taken at different time steps, where the darker pixels indicate the smallest similarity values, representing areas with a higher probability of erosion or collapse, while the lighter pixels indicate areas with high SSI values, signifying minimal structural and color changes. At first glance, it is evident that the vegetated parts are classified as dark areas, which could lead to erroneous collapse detection. Additionally, due to the complex and changing environmental conditions of the landslide scene, factors such as brightness variation and shadows—primarily caused by clouds, fog, and sun position—can significantly affect the structural and chromatic information of the image. This can result in artifacts in the assessment of collapse areas. Therefore, it was deemed necessary to investigate the impact of these variables on the calculation of the SSI in detail, using sets of images where no collapse events occurred.
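The three terms above translate almost directly into code. The following is a minimal NumPy sketch, not the authors' implementation: the window size is chosen here for illustration, the regularization constants follow the value reported in the text, and the map is computed over non-overlapping windows rather than per pixel for brevity:

```python
import numpy as np

C1 = C2 = C3 = 1e-4  # regularization constants, as reported in the text

def ssi(x: np.ndarray, y: np.ndarray) -> float:
    """Structural similarity index of two patches (alpha = beta = gamma = 1)."""
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()              # cross-covariance
    l = (2 * mx * my + C1) / (mx**2 + my**2 + C1)   # luminance term
    c = (2 * sx * sy + C2) / (sx**2 + sy**2 + C2)   # contrast term
    s = (sxy + C3) / (sx * sy + C3)                 # structural term
    return l * c * s

def ssm(x: np.ndarray, y: np.ndarray, win: int = 8) -> np.ndarray:
    """Coarse structural similarity map from non-overlapping windows."""
    h, w = x.shape[0] // win, x.shape[1] // win
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            sl = np.s_[i * win:(i + 1) * win, j * win:(j + 1) * win]
            out[i, j] = ssi(x[sl], y[sl])
    return out

# Demo: identical windows score 1; an altered window scores visibly lower.
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = a.copy()
b[:8, :8] = rng.random((8, 8))  # simulate a local surface change
print(ssm(a, b).round(3))
```

In the altered window the structural term collapses towards the sample correlation of two independent patches, which is what drives the SSI drop the detector looks for.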

2.2.1. Effect of Shadows

Shadow is well known as one of the main sources of error in the analysis of time-lapse aerial and satellite images, and its detection remains a challenging task [24,33,34,35]. The following two types of shadows may affect the images: cast shadows, produced by sunlight being blocked by asperities of the slope, which are regular and have a fairly constant periodicity, and shadows produced by clouds, which are generally less sharp and unpredictable. Essentially, there are property-based and model-based methods for identifying and extracting the shadows cast on the ground from digital images [34,35]. The first category of methods uses information about colors and textures in the images for shadow segmentation and can be applied to most shadows. The second category uses external models based on timing, latitude, longitude, and digital elevation models to estimate the sun’s position and forecast shadow casting. The latter approach is suitable for analyzing zenithal images and is particularly effective if elevation information can be integrated [36]. In our application, we will employ the first type of shadow detection methods.
To evaluate the influence of shadows cast by the sun on the SSI, we selected a study case involving five sunny days from 27 July 2020 to 31 July 2020, choosing consecutive hourly images taken from 11:00 a.m. to 5:00 p.m. For these five sets of images, we computed the SSI using the 3:00 p.m. shot as the reference image. In each group of images, the shadow cast on the slope gradually decreases from 11:00 a.m. to 3:00 p.m., with the images taken at 4:00 p.m. and 5:00 p.m. having almost no shadows, as shown in Figure 3.
As shown in Figure 4, the SSI is very high and changes little when the reference image is compared with images with less shadow cast (Δt = −1, +1, +2 h), while it significantly decreases and changes greatly when the reference image is compared with images with strong shadows. We also found in Figure 4 that as the shadow in the images gradually decreases (Δt = −4, −3, −2, −1 h), the difference between the SSIM results with and without the shadow filter decreases significantly. Moreover, the results of the five groups of tests maintained the same trend. The overall trend and the standard deviation shown in Figure 4 suggest that shadows have a significant impact on SSIM.

2.2.2. Effect of Illuminance

To evaluate the effect of illuminance on the structural similarity metric, we conducted five groups of image sequence comparison tests using photos with different illuminance levels. Vegetation and shadow filters were applied in these tests. Each group of tests comprised consecutive night images and day images taken within the same time interval. From 23 February 2021 to 28 February 2021, we selected four night-images (taken at 11:00 p.m., 1:00 a.m., 3:00 a.m., 5:00 a.m.) and four day-images (taken at 11:00 a.m., 1:00 p.m., 3:00 p.m., 5:00 p.m.) as test images to compare with the reference image (taken at 8:00 a.m.) in each group. Figure 5 shows an example set of images used in one of these tests. The SSI between the target image and the reference image is illustrated in Figure 6a. It can be observed that the SSI changes little when the reference image is compared with the night images (t = 11:00 p.m., 1:00 a.m., 3:00 a.m., 5:00 a.m.). This is due to the artificial illuminance provided by the spotlight, which ensures a constant level of brightness. Cast shadows are fixed and limited within the region of interest, since the illuminator is close to the cameras and directed almost along the same line-of-sight. On the other hand, the varying illuminance of the day images (t = 11:00 a.m., 1:00 p.m., 3:00 p.m., 5:00 p.m.) has a significant effect on the SSI due to the substantial changes in sunlight intensity and direction, as discussed earlier. Moreover, the results of the five groups of tests consistently showed the same trend. It is worth mentioning that the reference images of Group 5 have different illuminance levels compared to the reference images of the other groups (see Figure 6b), resulting in a significantly lower SSI for Group 5 compared to the others.

2.3. Image Filters

Several filters are used in our program to enhance the accuracy of the collapse detection phase from images. These filters can be broadly categorized into the following two groups: image pre-processing filters and noise filters applied directly to the structural similarity map (SSM). The image pre-processing filters include a vegetation filter and a shadow filter. The vegetation filter removes areas covered by vegetation, which were visually identified and sampled from a set of pre-learning images. These sampled areas were used to create a deep learning mask [37]. Similarly, the shadow filter removes areas covered by shadows, which were also visually identified and sampled from a set of pre-learning images. These sampled areas were used to create a color mask. These filters are applied before comparing two images to prevent false-positive identification of collapsed areas due to movement of vegetation and changes in shadows.
The filters applied to the SSM include various image convolution filters, which smooth the noise generated by SSIM to avoid detection of small spots with low SSI and to consolidate detected collapsed areas. In this case, we used multiple median filters with kernel sizes of 23 × 23 and 7 × 7 pixels to preserve edges of the SSM [38,39,40], threshold filters [41], and Gaussian filters with a 19 × 19 kernel and standard deviation of 3 [42,43] for final smoothing. Figure 7 illustrates the specific roles of these filters in image comparison by showing the evolution of the SSM at each step.
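The filter cascade can be sketched as follows, using SciPy's ndimage filters as stand-ins for the implementation actually used. The kernel sizes and Gaussian standard deviation follow the text; the threshold value and map sizes are illustrative:

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def denoise_ssm(ssm: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Filter cascade on a structural similarity map, in the order described
    in the text: large median -> small median -> threshold -> Gaussian."""
    out = median_filter(ssm, size=23)          # removes isolated low-SSI specks
    out = median_filter(out, size=7)           # consolidates remaining regions
    out = np.where(out < threshold, 0.0, 1.0)  # binary: 0 = candidate collapse
    out = gaussian_filter(out, sigma=3)        # 19x19-scale final smoothing
    return out

# Isolated noisy pixels are suppressed; a large low-similarity block survives.
clean = np.ones((100, 100))
noisy = clean.copy()
rng = np.random.default_rng(1)
idx = rng.integers(0, 100, size=(50, 2))
noisy[idx[:, 0], idx[:, 1]] = 0.0          # salt noise: no real collapse
blocky = clean.copy()
blocky[30:70, 30:70] = 0.0                 # a genuine low-SSI region
```

The large median kernel is what prevents single low-SSI pixels from being flagged, while the threshold-then-Gaussian pair turns the surviving region into a smooth detection mask.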

2.4. Calculation of Collapsed Volumes

2.4.1. The 3D Surface Reconstruction and Disparity Calculation

Once a collapse is detected, the program automatically calculates the collapsed volume. The first step in this phase involves the 3D reconstruction of the slope surface before and after the collapse using fixed calibrated cameras that enable a stereoscopic view of the scene [44,45,46]. Each camera must be calibrated with a series of shots taken from different angles to calculate the intrinsic parameters, including the focal length and lens distortion coefficients, as well as the extrinsic parameters, which include the position and orientation of each image [47]. The extrinsic parameters require the system to be geolocalized. This is achieved using fixed points and colored balls specially positioned on the landslide, whose exact positions are determined through a GPS survey [48]. The final image used for calibration is taken from the camera’s operational position to obtain its accurate pose.
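The role of the intrinsic and extrinsic parameters can be illustrated with a toy pinhole projection. All numeric values below are made up and are not those of the Perarolo setup:

```python
import numpy as np

# Illustrative pinhole model (invented values, not calibration results).
K = np.array([[1200.0,    0.0, 640.0],   # fx, skew, cx  (intrinsics)
              [   0.0, 1200.0, 360.0],   # fy, cy
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                            # camera aligned with world axes
t = np.array([0.0, 0.0, 0.0])            # camera at the world origin

def project(X: np.ndarray) -> np.ndarray:
    """World point -> pixel coordinates via x = K (R X + t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# A point 100 m ahead on the optical axis lands on the principal point.
print(project(np.array([0.0, 0.0, 100.0])))  # -> [640. 360.]
```

Calibration is the inverse problem: recovering K (and distortion) plus R and t from images of known points, such as the surveyed targets placed on the landslide.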
For the reconstruction of the scene based on only two images, the principles of stereophotogrammetry were used. Given the large area to be monitored and its distance from the cameras, a larger baseline with two slightly converging lines of sight was chosen. Using the intrinsic and extrinsic parameters of the two cameras, the stereo parameters can be derived [49,50]. This includes the intrinsic parameters of the individual cameras and a roto-translation matrix that allows conversion from the reference system of the left camera to the reference system of the right camera. This matrix provides the relative position between the two cameras.
The second step is to obtain the undistorted and rectified images from the left and right cameras using the stereo camera parameters and considering an epipolar geometry [51,52,53]. To ensure the stereo images are well-aligned and the 3D reconstruction is as accurate as possible, it is necessary to correct for any image oscillations. This correction is achieved by aligning fixed points in the scene using the calibration image as a reference.
The third and final step involves calculating the disparity value of each pixel in the rectified image to obtain the disparity map, which contains the depth information of each pixel in space. Using the disparity map and stereo camera parameters [54,55,56], the 3D position of each point is calculated, resulting in the 3D point cloud.
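For rectified stereo pairs, depth follows from disparity through the standard relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. The numbers in this minimal sketch are illustrative, not the actual camera geometry:

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Depth map from a rectified disparity map: Z = f * B / d.
    Zero or negative disparities (invalid matches) are mapped to NaN."""
    d = disparity.astype(float)
    z = np.full_like(d, np.nan)
    valid = d > 0
    z[valid] = focal_px * baseline_m / d[valid]
    return z

# Illustrative values: 1200 px focal length, 5 m baseline.
disp = np.array([[60.0, 30.0],
                 [ 0.0, 120.0]])
print(depth_from_disparity(disp, focal_px=1200.0, baseline_m=5.0))
```

Combining the depth of each pixel with its image coordinates and the stereo parameters then yields the 3D point cloud used in the next step.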

2.4.2. Point Cloud Comparison and Volume Calculation

After the 3D reconstruction, the next step is to combine the information of the collapse area obtained with SSIM and the point cloud of the entire landslide. As described in the workflow of Figure 8, the first step involves using a median filter with a 10-by-10 kernel to remove the noise generated in the 3D reconstruction stage. To determine the collapse volume, we compute the difference between the point clouds before and after the collapse and apply a filter with a 0.5 m threshold. Subsequently, a median filter with a 21-by-21 kernel is used to remove any remaining noise. Finally, we search for continuous areas in the binary map and use the largest area as the approximate collapsed area.
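The difference, threshold, denoise, and largest-component steps above can be sketched as follows. The 5 × 5 median kernel here is a small stand-in for the 21 × 21 kernel used in the paper, and the synthetic depth maps are illustrative:

```python
import numpy as np
from scipy.ndimage import label, median_filter

def collapse_mask(z_before: np.ndarray, z_after: np.ndarray,
                  threshold_m: float = 0.5) -> np.ndarray:
    """Binary mask of the largest region whose depth changed by more than
    the threshold: difference -> threshold -> median denoising ->
    largest connected component."""
    diff = np.abs(z_after - z_before)
    mask = (diff > threshold_m).astype(np.uint8)
    mask = median_filter(mask, size=5)        # stand-in for the 21x21 kernel
    labels, n = label(mask)                   # connected-component labeling
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = np.bincount(labels.ravel())[1:]   # sizes, background excluded
    return labels == (np.argmax(sizes) + 1)

# A 20x20 region drops by 2 m; a single stray pixel is filtered out.
z_before = np.zeros((50, 50))
z_after = z_before.copy()
z_after[10:30, 10:30] = 2.0
z_after[45, 45] = 2.0  # isolated noise point
```

Keeping only the largest connected area is what makes the step robust against residual reconstruction noise scattered across the map.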
Next, we dilate the boundary of the approximate collapsed area and fill the invalid points within the collapsed area on the depth information map (disparity map or Z-axis coordinate map), as illustrated in Figure 9 and Figure 10. We then compare the point clouds again to determine the exact collapsed area and its pixel coordinates. This step is crucial because invalid points in the disparity map can cause some real collapsed areas to go unrecognized; hence, identifying the lost real collapsed areas near the boundaries is essential. Figure 10 shows an example from the Perarolo landslide site.
Based on the pixel coordinates of the collapsed area on the rectified image, we can identify the points on the surface of the collapsed area. As shown in Figure 11, by using all the point clouds on the surface of the collapsed area before and after the collapse, we can create the alpha shape representing the collapsed bodies and easily estimate the collapsed volumes [57,58,59]. This approach also allows us to visualize the spatial shape of the collapsed bodies and calculate their 3D coordinates.
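As a simplified illustration of the volume step, the sketch below uses a convex hull (available in SciPy) in place of the alpha shape used in the paper; for concave detachment bodies the alpha shape follows the surface more closely, so the convex hull is only an upper-bound proxy. The point sets are synthetic:

```python
import numpy as np
from scipy.spatial import ConvexHull

def detached_volume(points_before: np.ndarray,
                    points_after: np.ndarray) -> float:
    """Volume of the detached body, approximated as the convex hull of the
    surface points before and after the collapse (alpha-shape stand-in)."""
    cloud = np.vstack([points_before, points_after])
    return float(ConvexHull(cloud).volume)

# Two parallel 1 m x 1 m surface patches 1 m apart bound a unit cube.
grid = np.array([[x, y] for x in (0.0, 1.0) for y in (0.0, 1.0)])
before = np.c_[grid, np.zeros(4)]   # surface at z = 0
after = np.c_[grid, np.ones(4)]     # surface receded to z = 1
print(detached_volume(before, after))
```

For the synthetic cube the result is 1 cubic meter; on real data, denser surface sampling and the alpha-shape concavity handling are what keep the estimate close to the laser-scanner reference.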

3. Case Study and Results

3.1. Perarolo Landslide Site

The Perarolo landslide, also known as the Sant’Andrea landslide, is located in the Cadore area of the Belluno province in northeastern Italy. This area is one of the most geohazard-prone regions in the southern Alpine mountains. The coordinates of the landslide are 46°23′44″N and 12°21′23″E close to the Alpi Carniche. The landslide is part of an older landslide on the southern flank of Mount Zucco, covering approximately 72,000 square meters and extending from an elevation of 490 to 580 m above sea level.
As shown in Figure 12, the active portion of the landslide is near the left bank of the Boite River. A village is on the right bank along the river, with its central area located downstream relative to the landslide. Additionally, about 100 m southeast from the bottom of the landslide, there is a road (SP42) that ascends the gentle slope on the southeast side of the landslide. This road features a bend adjacent to the northeast corner of the upper part of the landslide. In the 1980s, a small railway line ran along the upper section of the landslide. However, when landslide activity was detected, the railway was relocated to a tunnel inside the mountain. Furthermore, the Piave River flows southwest on the east side of the landslide body, intersecting with the Boite River approximately 150 m southeast of the landslide’s base. Behind the village, there is a hillside with a height comparable to that of the landslide, where the cameras used for visual monitoring in this study were installed. As shown in Figure 13, the camera system consists of three Canon EOS 1300D cameras (Manufacturer: Canon Inc., Taiwan. Italian distributor: Canon Italia S.p.A., Milan, Italy), each mounted in a covered iron box and equipped with remote connection and a solar charging panel. Moreover, a searchlight is also present for night photo acquisition. Near the photographic monitoring system, a terrestrial interferometric radar was later installed to measure slope displacements along the line of sight. To further enhance redundancy and, consequently, the robustness of the monitoring system, a topographic system was also present on-site, continuously collecting position measurements of reflectors specifically installed on the landslide body. Over the years, the landslide has exhibited intermittent activity, with significant accelerations that have sometimes caused collapses of the uppermost material, typically followed by periods of temporary return to an apparent stable condition [60,61,62,63].

3.2. Image-Based Collapse Detection at the Perarolo Landslide Site

To confirm the capability of our photogrammetric tool based on SSIM to detect collapses, we conducted a test using an image sequence acquired at the Perarolo landslide site from July 2020 to July 2021. One image was taken per day, except during the winter months when the slope was partially or totally covered by snow and interferometric data and total station data were unavailable. The trend in SSI between the images on day i and i + 1 is shown in Figure 14, where sudden changes above the threshold indicate the occurrence of a collapse, in four different periods (from 8 July 2020 to 30 September 2020; from 1 October 2020 to 1 December 2020; from 15 February 2021 to 30 April 2021; from 30 April 2021 to 19 July 2021). We defined a collapse as having occurred between two images when the SSI falls below 0.9998. This threshold is calculated based on the minimum area of detached mass that we aim to detect, depending on the ground pixel size. Considering the camera resolution, the distance from the slope surface, and the mean angle between the line of sight and the surface normal, an SSI lower than 0.9998 corresponds to a collapsed area larger than about 7 m². As shown in Figure 14, the program detected 13 distinct collapse events during the monitoring period. These events are consistent with on-site observations and the sudden acceleration of displacements obtained from ground-based radar interferometry and the topographic system. It should be noted, however, that these latter monitoring systems, although they can effectively measure the phases preceding the detachment, are generally unable to capture large displacements due to the loss of spatio-temporal coherence of the signal [64] or the excessive instability of the optical targets.
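The per-day detection logic reduces to thresholding the daily SSI series. A minimal sketch, where the SSI values are hypothetical and only the 0.9998 threshold comes from the text:

```python
SSI_THRESHOLD = 0.9998  # from the text: ~7 m^2 minimum detectable area

def detect_collapses(ssi_series):
    """Return the indices i at which the SSI between the images of day i
    and day i + 1 drops below the threshold, flagging a collapse."""
    return [i for i, v in enumerate(ssi_series) if v < SSI_THRESHOLD]

# Hypothetical daily SSI values; the drop at index 2 flags a collapse.
print(detect_collapses([0.99995, 0.99991, 0.9135, 0.99993]))  # -> [2]
```

Each flagged index then triggers the stereo reconstruction and volume calculation described in Section 2.4.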
The first nine events correspond to small natural collapses, as shown in Figure 15b. The comparison between Figure 15a and b allows the identification of changes in the slope’s condition before and after the event of 28 February 2021. Figure 15c–e illustrate other significant collapses that occurred in the Perarolo landslide area during June–July 2021. As mentioned earlier, a major natural collapse happened on 9 June 2021; the algorithm detected this extreme condition, assigning an SSI of 0.9135 (Figure 15b). Subsequently, on 25 June 2021, an artificial collapse was induced by explosives to restore the safety condition of the slope (SSI = 0.9449, Figure 15c). Finally, in July, anthropic activities were carried out, where a bulldozer reshaped the toe of the landslide. These operations were also correctly detected by the algorithm, with SSIs of 0.9936 and 0.9941 for July 13 and 14, respectively (Figure 15d,e).
For all the cases, the program identified the location of the collapsed area on the original images, as shown in Figure 16. We can assess its accuracy by comparing the original images obtained before and after the events. Since image availability may not be ensured on some days due to bad weather conditions or other technical reasons, we tested the robustness of our method by comparing pairs of images taken at different time intervals and computing the SSI, essentially estimating a temporal correlation function. For this purpose, we used a daily sequence of 25 images before a collapse event and 25 images after it, from 6 August 2020 to 24 September 2020, to calculate the SSI between the images on day i and day i + Δt. The possible pairs of images can be categorized according to Δt and grouped into the following two categories: pairs crossing the collapse event (shown in red in Figure 17 and called “with collapse”), and pairs not crossing the event (shown in black and called “without collapse”).
With boxplots using whiskers at 1.5 times the interquartile range, we found that the median of the “with collapse” group fluctuates little with the change in Δt, and there are very few outliers, as shown in Figure 18a. It was found that almost all the SSI values in the “with collapse” group were lower than the threshold value, indicating that the lack of images for several days after a collapse does not affect its detection. On the other hand, in the boxplot of image pairs “without collapse” (Figure 18b), we can see that there are cases where the first quartile (Q1) is below the threshold when Δt ≥ 14 days. When Δt = 20 days, the median is also below the threshold. This means that the time interval between the two consecutive images should be less than 14 days to ensure a 75% probability of detection. When the interval is 20 days, this probability drops to 50%.

3.3. Volume Calculation in Perarolo Landslide Site

Our photogrammetric tool automatically calculates the collapse volume of each detected event. Table 1 lists the volumes of the collapse events occurring after 9 April 2021, calculated with our photogrammetric tool. To assess the accuracy of this approach, we conducted laser scanner surveys before and after the collapses of 9 June 2021 and 25 June 2021. From the resulting point clouds, we measured the collapse volumes with the CloudCompare software (version 2.11.0, https://www.cloudcompare.org) (see Figure 19), evaluating the vertical distance between the slope surfaces before and after each event. Compared with the results of our photogrammetric approach, the relative errors were 6.6% and 3.1%, respectively.
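As a worked illustration of the vertical-differencing check performed in CloudCompare, the following sketch integrates the vertical distance between two gridded slope surfaces and computes the relative error against a reference volume. The grids, cell size, and function names are illustrative assumptions, not the site survey data.

```python
import numpy as np

def collapse_volume(z_before: np.ndarray, z_after: np.ndarray,
                    cell_size: float) -> float:
    """Estimate a collapse volume from two gridded slope surfaces
    (elevations sampled on the same regular grid, in metres) by
    integrating the vertical distance between them over the cells."""
    dz = z_before - z_after          # positive where material was lost
    dz = np.clip(dz, 0.0, None)      # discard accumulation/noise below zero
    return float(dz.sum() * cell_size ** 2)

def relative_error(measured: float, reference: float) -> float:
    """Relative error of a photogrammetric volume against a laser
    scanner reference volume."""
    return abs(measured - reference) / reference
```

With this convention, a photogrammetric volume of 106.6 m³ against a 100 m³ laser scanner reference gives a relative error of 6.6%, the same metric reported for the 9 June 2021 event.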

4. Discussion

The results of the case study at the landslide site demonstrate the capability of our photogrammetric tool for rapid collapse detection and volume calculation. This small-scale time-lapse photogrammetric tool allows for the quick collection of collapse information on landslides subject to recurrent collapses, which is crucial for understanding landslide characteristics, back-analyzing collapse causes, and further research toward a collapse prediction tool. The fully automatic, non-contact, and low-cost nature of this technology makes it easy to adopt and suitable for widespread use.
However, the multi-day tests in Section 3.2 have shown that, with the current threshold of 0.9998, comparing two images separated by more than 14 days may cause the program to fail. It is important to know the maximum allowable interval between two consecutive images for reliable operation, especially when natural environmental factors such as fog prevent image acquisition. In this case study, we found that the probability of program failure is 50% when the interval is 20 days. To characterize other failure probabilities (e.g., 75% or 90%), we plan to conduct tests with longer image sequences that include a collapse event in the middle, as soon as suitable data become available. To further improve the accuracy of collapse detection, it is also necessary to study the SSIM specifically for geotechnical materials and to use machine learning algorithms to identify the most suitable image sequences; these efforts are currently ongoing.
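A gap-aware detection loop consistent with these findings might look as follows. The input format, names, and the handling of the 14-day limit are illustrative assumptions, not the tool's actual code.

```python
THRESHOLD = 0.9998   # SSI detection threshold (Section 3.2)
MAX_GAP_DAYS = 14    # beyond this gap, Q1 of "without collapse" SSI
                     # falls below the threshold (Figure 18b)

def detect_collapses(daily_ssi: dict) -> list:
    """Scan a sparse daily record {day_index: SSI vs. previous available
    image} and flag collapse events, marking a detection as unreliable
    when the interval between the compared images exceeds the
    empirically reliable gap."""
    events = []
    last_day = None
    for day in sorted(daily_ssi):
        ssi = daily_ssi[day]
        gap = day - last_day if last_day is not None else 1
        if ssi < THRESHOLD:
            events.append({"day": day, "ssi": ssi,
                           "reliable": gap <= MAX_GAP_DAYS})
        last_day = day
    return events
```

In this scheme, a below-threshold SSI after a 20-day acquisition gap is still reported, but flagged for manual review rather than treated as a confirmed collapse.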

5. Conclusions

In this paper, a new automatic photogrammetric tool is presented, designed for rapid collapse detection and calculation of the collapsed volume of a landslide slope. The identification of the collapsed areas is fully automated and achieved through the structural similarity comparison of consecutive image sequences. We investigated the impact of natural environmental factors at the landslide site on the classic structural similarity algorithm (SSIM) and developed multiple filters within the tool to eliminate noise and optimize detection accuracy. The collapsed volume is then calculated automatically through 3D reconstruction and point cloud comparison, using several previously calibrated fixed cameras.
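For the 3D reconstruction step, depth on a rectified stereo pair follows from the disparity d, the focal length f (in pixels), and the camera baseline B as Z = f·B/d. A minimal sketch under these standard stereo assumptions (the parameters are illustrative, not the site calibration):

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Convert a disparity map (in pixels) from a rectified stereo pair
    into metric depth via Z = f * B / d. Non-positive disparities are
    treated as invalid and returned as NaN."""
    d = disparity.astype(np.float64)
    z = np.full_like(d, np.nan)
    valid = d > 0
    z[valid] = focal_px * baseline_m / d[valid]
    return z
```

The inverse relation between disparity and depth is one reason the minimum detectable volume grows with camera-to-slope distance, as noted in the accuracy discussion below.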
By analyzing image sequences taken over one year at the Perarolo landslide site, together with laser scanner surveys, we validated the tool's effectiveness under various conditions: day and night images, images captured at different time intervals, and the presence of vegetation and shadows. This on-site application allowed the potential and limitations of the method to be identified:
  • Full Automation. The collapse event detection algorithm based on the structural similarity metric, once calibrated, allows for reducing the need for human interaction and mitigating false positives that could arise from merely comparing 3D surfaces. This also makes it possible to potentially perform real-time detection by comparing images at very short time intervals.
  • Spatial accuracy and precision. The proposed algorithm has the advantage of processing the entire image area while excluding shaded and vegetated areas. The minimum identifiable collapsed volume was estimated for our test site; in general, however, it is not easily controllable and is certainly larger than what can be obtained by comparing laser scanner point clouds. It depends on a multitude of factors related to the collapse detection and 3D reconstruction processes. Among the most important are factors related to the instantaneous field of view (i.e., camera resolution, sensor size, focal length), factors related to the geometry of the framed subject and the 3D reconstruction process (i.e., distance from the camera, angle between the local plane normal and the line of sight, baseline between cameras), and factors related to the image analysis process (e.g., the type and kernel size of the filters applied). Further details on the quantification of the errors introduced by this method can be found in [6,65]. Further insights may come from the extensive use of this technique on different landslide surfaces.
  • Low-Cost and Long-Term. Digital cameras have significantly lower costs compared to laser scanner systems or interferometry. Moreover, their maintenance or replacement is easier, making the system potentially suitable for long-term monitoring.
In general, the optimal operating conditions for this method are those where the cameras can be positioned frontally to the area under investigation, sufficiently close to it to maximize resolution, with a line of sight almost normal to the slope surface. The best landslide surface is also a bare, non-vegetated surface with irregular texture (i.e., not a monochromatic surface), not exposed to direct sunlight (to minimize shaded areas, at least during certain hours of the day), and not too susceptible to shallow erosion (to avoid false positives). Heavy rain, snow, clouds, and fog can naturally compromise the method’s applicability, although a preliminary selection of the most suitable images could be made using deep learning algorithms.
Further analysis is needed to refine this tool and gain more precise insights into the dynamics of the Perarolo landslide, as well as of other landslide sites monitored with camera-based systems. Specifically, the ongoing work includes studying the SSIM specifically for geotechnical materials and applying machine learning algorithms to enhance the selection of the most suitable image sequences [63]. For the further development of this methodology, it will be particularly strategic to evaluate its performance across different types of landslides and varying environmental conditions. For instance, rainfall that frequently precedes collapse events can significantly alter the color and texture of landslide surfaces, depending on the material type, potentially leading to false positives during the collapse detection phase. Specific filters and algorithms will need to be developed to address the various scenarios that may arise, potentially using artificial intelligence both to identify the scenarios and to select the appropriate resolution strategies.

Author Contributions

Conceptualization, F.G., Z.L. and L.B.; methodology, Z.L., F.G., A.P. and L.B.; software, Z.L. and F.G.; validation, L.B., Z.L. and A.P.; formal analysis, Z.L., F.G. and L.B.; investigation, L.B., Z.L. and A.P.; resources, F.G. and L.B.; data curation, Z.L. and F.G.; writing—original draft preparation, L.B., F.G. and Z.L.; writing—review and editing, L.B. and A.P.; visualization, Z.L. and A.P.; supervision, F.G. and L.B.; project administration, F.G. and L.B.; funding acquisition, F.G. and L.B. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the Fondazione Cariverona (https://www.fondazionecariverona.org; ID: 11170, cod. SIME 2019.0430.209) through the research grant titled “Monitoring of Natural Hazards and Protective Structures Using Computer Vision Techniques for Environmental Safety and Preservation” and by the Veneto Region through the grant “Scientific Support for the Characterization of Hydrogeological Risk and the Evaluation of the Effectiveness of Interventions Related to the Landslide Phenomenon of Busa del Cristo in Perarolo di Cadore (BL) through the Development of Predictive Geo-Hydrological Models”. This study was also financially supported by the research project of Central South University funded by grant QH20230270.

Data Availability Statement

The time-lapse photos and other information are available at https://geotechlab.dicea.unipd.it/wp-content/uploads/perarolo/main.html (accessed on 2 July 2024). Additional data will be made available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of this study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Sturzenegger, M.; Stead, D. Close-Range Terrestrial Digital Photogrammetry and Terrestrial Laser Scanning for Discontinuity Characterization on Rock Cuts. Eng. Geol. 2009, 106, 163–182. [Google Scholar] [CrossRef]
  2. Stumpf, A.; Malet, J.P.; Allemand, P.; Ulrich, P. Surface Reconstruction and Landslide Displacement Measurements with Pléiades Satellite Images. ISPRS J. Photogramm. Remote Sens. 2014, 95, 1–12. [Google Scholar] [CrossRef]
  3. Livio, F.A.; Bovo, F.; Gabrieli, F.; Gambillara, R.; Rossato, S.; Martin, S.; Michetti, A.M. Stability Analysis of a Landslide Scarp by Means of Virtual Outcrops: The Mt. Peron Niche Area (Masiere Di Vedana Rock Avalanche, Eastern Southern Alps). Front. Earth Sci. 2022, 10, 863880. [Google Scholar] [CrossRef]
  4. Antonello, M.; Gabrieli, F.; Cola, S.; Menegatti, E. Automated Landslide Monitoring through a Low-Cost Stereo Vision System. In Proceedings of the CEUR Workshop Proceedings, Paris, France, 27–28 April 2013; Volume 1107. [Google Scholar]
  5. Travelletti, J.; Delacourt, C.; Allemand, P.; Malet, J.P.; Schmittbuhl, J.; Toussaint, R.; Bastard, M. Correlation of Multi-Temporal Ground-Based Optical Images for Landslide Monitoring: Application, Potential and Limitations. ISPRS J. Photogramm. Remote Sens. 2012, 70, 39–55. [Google Scholar] [CrossRef]
  6. Gabrieli, F.; Corain, L.; Vettore, L. A Low-Cost Landslide Displacement Activity Assessment from Time-Lapse Photogrammetry and Rainfall Data: Application to the Tessina Landslide Site. Geomorphology 2016, 269, 56–74. [Google Scholar] [CrossRef]
  7. Giacomini, A.; Thoeni, K.; Santise, M.; Diotri, F.; Booth, S.; Fityus, S.; Roncella, R. Temporal-Spatial Frequency Rockfall Data from Open-Pit Highwalls Using a Low-Cost Monitoring System. Remote Sens. 2020, 12, 2459. [Google Scholar] [CrossRef]
  8. Ding, A.; Zhang, Q.; Zhou, X.; Dai, B. Automatic Recognition of Landslide Based on CNN and Texture Change Detection. In Proceedings of the 2016 31st Youth Academic Annual Conference of Chinese Association of Automation, YAC, Wuhan, China, 11–13 November 2016. [Google Scholar]
  9. Ghorbanzadeh, O.; Meena, S.R.; Blaschke, T.; Aryal, J. UAV-Based Slope Failure Detection Using Deep-Learning Convolutional Neural Networks. Remote Sens. 2019, 11, 2046. [Google Scholar] [CrossRef]
  10. Ghorbanzadeh, O.; Blaschke, T.; Gholamnia, K.; Meena, S.R.; Tiede, D.; Aryal, J. Evaluation of Different Machine Learning Methods and Deep-Learning Convolutional Neural Networks for Landslide Detection. Remote Sens. 2019, 11, 196. [Google Scholar] [CrossRef]
  11. Lei, T.; Zhang, Q.; Xue, D.; Chen, T.; Meng, H.; Nandi, A.K. End-to-End Change Detection Using a Symmetric Fully Convolutional Network for Landslide Mapping. In Proceedings of the ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, 12–17 May 2019; Volume 2019. [Google Scholar]
  12. Lv, Z.Y.; Shi, W.; Zhang, X.; Benediktsson, J.A. Landslide Inventory Mapping from Bitemporal High-Resolution Remote Sensing Images Using Change Detection and Multiscale Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1520–1532. [Google Scholar] [CrossRef]
  13. Amit, S.N.K.B.; Aoki, Y. Disaster Detection from Aerial Imagery with Convolutional Neural Network. In Proceedings of the International Electronics Symposium on Knowledge Creation and Intelligent Computing, IES-KCIC 2017, Surabaya, Indonesia, 26–27 September 2017; Volume 2017. [Google Scholar]
  14. Ji, S.; Shen, Y.; Lu, M.; Zhang, Y. Building Instance Change Detection from Large-Scale Aerial Images Using Convolutional Neural Networks and Simulated Samples. Remote Sens. 2019, 11, 1343. [Google Scholar] [CrossRef]
  15. Lu, P.; Stumpf, A.; Kerle, N.; Casagli, N. Object-Oriented Change Detection for Landslide Rapid Mapping. IEEE Geosci. Remote Sens. Lett. 2011, 8, 701–705. [Google Scholar] [CrossRef]
  16. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  17. Lowe, D.G. Object Recognition from Local Scale-Invariant Features. In Proceedings of the IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2. [Google Scholar]
  18. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  19. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up Robust Features. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Berlin, Germany, 18–22 September 2006; Springer LNCS: Berlin, Germany, 2006; Volume 3951 LNCS. [Google Scholar]
  20. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  21. Blanch, X.; Abellan, A.; Guinau, M. Point Cloud Stacking: A Workflow to Enhance 3D Monitoring Capabilities Using Time-Lapse Cameras. Remote Sens. 2020, 12, 1240. [Google Scholar] [CrossRef]
  22. Blanch, X.; Eltner, A.; Guinau, M.; Abellan, A. Multi-Epoch and Multi-Imagery (Memi) Photogrammetric Workflow for Enhanced Change Detection Using Time-Lapse Cameras. Remote Sens. 2021, 13, 1460. [Google Scholar] [CrossRef]
  23. Shen, B.; Chen, S.; Yin, J.; Mao, H. Image Recognition of Green Weeds in Cotton Fields Based on Color Feature. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng. 2009, 25, 163–167. [Google Scholar] [CrossRef]
  24. Sanin, A.; Sanderson, C.; Lovell, B.C. Shadow Detection: A Survey and Comparative Evaluation of Recent Methods. Pattern Recognit 2012, 45, 1684–1695. [Google Scholar] [CrossRef]
  25. Wang, Z.; Bovik, A.; Sheikh, H. Structural Similarity Based Image Quality Assessment. In Digital Video Image Quality and Perceptual Coding, Ser. Series in Signal Processing and Communications; CRC Press: Boca Raton, FL, USA, 2005. [Google Scholar] [CrossRef]
  26. Lim, J.S. Two-Dimensional Signal and Image Processing; Prentice Hall Inc.: Englewood Cliffs, NJ, USA, 1990; Volume 710. [Google Scholar]
  27. Zagoruyko, S.; Komodakis, N. Learning to Compare Image Patches via Convolutional Neural Networks. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  28. Brunet, D.; Vrscay, E.R.; Wang, Z. On the Mathematical Properties of the Structural Similarity Index. IEEE Trans. Image Process. 2012, 21, 1488–1499. [Google Scholar] [CrossRef]
  29. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss Functions for Image Restoration with Neural Networks. IEEE Trans. Comput. Imaging 2017, 3, 47–57. [Google Scholar] [CrossRef]
  30. Zhang, Q.; Wang, T. Deep Learning for Exploring Landslides with Remote Sensing and Geo-Environmental Data: Frameworks, Progress, Challenges, and Opportunities. Remote Sens. 2024, 16, 1344. [Google Scholar] [CrossRef]
  31. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale Structural Similarity for Image Quality Assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
  32. Zujovic, J.; Pappas, T.N.; Neuhoff, D.L. Structural Texture Similarity Metrics for Image Analysis and Retrieval. IEEE Trans. Image Process. 2013, 22, 2545–2558. [Google Scholar] [CrossRef] [PubMed]
  33. Liasis, G.; Stavrou, S. Satellite Images Analysis for Shadow Detection and Building Height Estimation. ISPRS J. Photogramm. Remote Sens. 2016, 119, 437–450. [Google Scholar] [CrossRef]
  34. Arévalo, V.; González, J.; Ambrosio, G. Shadow Detection in Colour High-Resolution Satellite Images. Int. J. Remote Sens. 2008, 29, 1945–1963. [Google Scholar] [CrossRef]
  35. Adeline, K.R.M.; Chen, M.; Briottet, X.; Pang, S.K.; Paparoditis, N. Shadow Detection in Very High Spatial Resolution Aerial Images: A Comparative Study. ISPRS J. Photogramm. Remote Sens. 2013, 80, 21–38. [Google Scholar] [CrossRef]
  36. Li, F.; Jupp, D.L.B.; Thankappan, M.; Lymburner, L.; Mueller, N.; Lewis, A.; Held, A. A Physics-Based Atmospheric and BRDF Correction for Landsat Data over Mountainous Terrain. Remote Sens. Environ. 2012, 124, 756–770. [Google Scholar] [CrossRef]
  37. Hua, S.; Shi, P. GrabCut Color Image Segmentation Based on Region of Interest. In Proceedings of the 2014 7th International Congress on Image and Signal Processing, CISP 2014, Dalian, China, 14–16 October 2014. [Google Scholar]
  38. Ko, S.J.; Lee, Y.H. Center Weighted Median Filters and Their Applications to Image Enhancement. IEEE Trans. Circuits Syst. 1991, 38, 984–993. [Google Scholar] [CrossRef]
  39. Hwang, H.; Haddad, R.A. Adaptive Median Filters: New Algorithms and Results. IEEE Trans. Image Process. 1995, 4, 499–502. [Google Scholar] [CrossRef]
  40. Shrestha, S. Image Denoising Using New Adaptive Based Median Filter. Signal Image Process 2014, 5, 1–13. [Google Scholar] [CrossRef]
  41. Reddi, S.S.; Rudin, S.F.; Keshavan, H.R. An Optimal Multiple Threshold Scheme for Image Segmentation. IEEE Trans. Syst. Man. Cybern. 1984, 4, 661–665. [Google Scholar] [CrossRef]
  42. Deng, G.; Cahill, L.W. An Adaptive Gaussian Filter for Noise Reduction and Edge Detection. In Proceedings of the 1993 IEEE Conference Record Nuclear Science Symposium and Medical Imaging Conference, San Francisco, CA, USA, 31 October–6 November 1993. [Google Scholar]
  43. Shin, D.H.; Park, R.H.; Yang, S.; Jung, J.H. Block-Based Noise Estimation Using Adaptive Gaussian Filtering. IEEE Trans. Consum. Electron. 2005, 51, 218–226. [Google Scholar] [CrossRef]
  44. Tack, F.; Buyuksalih, G.; Goossens, R. 3D Building Reconstruction Based on given Ground Plan Information and Surface Models Extracted from Spaceborne Imagery. ISPRS J. Photogramm. Remote Sens. 2012, 67, 52–64. [Google Scholar] [CrossRef]
  45. Liu, S.; Zhao, L.; Li, J. The Applications and Summary of Three Dimensional Reconstruction Based on Stereo Vision. In Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, ICICEE 2012, Xi’an, China, 23–25 August 2012. [Google Scholar]
  46. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  47. Luhmann, T.; Fraser, C.; Maas, H.G. Sensor Modelling and Camera Calibration for Close-Range Photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 115, 37–46. [Google Scholar] [CrossRef]
  48. Colomina, I.; Molina, P. Unmanned Aerial Systems for Photogrammetry and Remote Sensing: A Review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  49. Heikkila, J.; Silven, O. Four-Step Camera Calibration Procedure with Implicit Image Correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997. [Google Scholar]
  50. Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L. Image-based reconstruction and analysis of dynamic scenes in a landslide simulation facility. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 63–70. [Google Scholar] [CrossRef]
  51. Kjær-Nielsen, A.; Jensen, L.B.W.; Sørensen, A.S.; Krüger, N. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA. In Proceedings of the 2008 International Conference on Reconfigurable Computing and FPGAs, ReConFig 2008, Cancun, Mexico, 3–5 December 2008. [Google Scholar]
  52. Junger, C.; Hess, A.; Rosenberger, M.; Notni, G. FPGA-Based Lens Undistortion and Image Rectification for Stereo Vision Applications. In Photonics and Education in Measurement Science; SPIE: New York, NY, USA, 2019; Volume 11144, pp. 284–291. [Google Scholar]
  53. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  54. Hamzah, R.A.; Ibrahim, H. Literature Survey on Stereo Vision Disparity Map Algorithms. J. Sens. 2016, 2016, 8742920. [Google Scholar] [CrossRef]
  55. Bleyer, M.; Gelautz, M. A Layered Stereo Matching Algorithm Using Image Segmentation and Global Visibility Constraints. ISPRS J. Photogramm. Remote Sens. 2005, 59, 128–150. [Google Scholar] [CrossRef]
  56. Georgoulas, C.; Kotoulas, L.; Sirakoulis, G.C.; Andreadis, I.; Gasteratos, A. Real-Time Disparity Map Computation Module. Microprocess. Microsyst. 2008, 32, 159–170. [Google Scholar] [CrossRef]
  57. Xu, X.; Harada, K. Automatic Surface Reconstruction with Alpha-Shape Method. Vis. Comput. 2003, 19, 431–443. [Google Scholar] [CrossRef]
  58. Hadas, E.; Borkowski, A.; Estornell, J.; Tymkow, P. Automatic Estimation of Olive Tree Dendrometric Parameters Based on Airborne Laser Scanning Data Using Alpha-Shape and Principal Component Analysis. GIsci. Remote Sens. 2017, 54, 898–917. [Google Scholar] [CrossRef]
  59. Carrea, D.; Abellan, A.; Derron, M.H.; Gauvin, N.; Jaboyedoff, M. Matlab Virtual Toolbox for Retrospective Rockfall Source Detection and Volume Estimation Using 3D Point Clouds: A Case Study of a Subalpine Molasse Cliff. Geosciences 2021, 11, 75. [Google Scholar] [CrossRef]
  60. Brezzi, L.; Carraro, E.; Pasa, D.; Teza, G.; Cola, S.; Galgaro, A. Post-Collapse Evolution of a Rapid Landslide from Sequential Analysis with FE and SPH-Based Models. Geosciences 2021, 11, 364. [Google Scholar] [CrossRef]
  61. Liu, Y.; Brezzi, L.; Liang, Z.; Gabrieli, F.; Zhou, Z.; Cola, S. Image Analysis and LSTM Methods for Forecasting Surficial Displacements of a Landslide Triggered by Snowfall and Rainfall. Landslides 2024. [Google Scholar] [CrossRef]
  62. Teza, G.; Cola, S.; Brezzi, L.; Galgaro, A. Wadenow: A Matlab Toolbox for Early Forecasting of the Velocity Trend of a Rainfall-Triggered Landslide by Means of Continuous Wavelet Transform and Deep Learning. Geosciences 2022, 12, 205. [Google Scholar] [CrossRef]
  63. Brezzi, L.; Gabrieli, F.; Vallisari, D.; Carraro, E.; Pol, A.; Galgaro, A.; Cola, S. DIPHORM: An Innovative DIgital PHOtogrammetRic Monitoring Technique for Detecting Surficial Displacements of Landslides. Remote Sens. 2024, 16, 3199. [Google Scholar] [CrossRef]
  64. Touzi, R.; Lopes, A.; Bruniquel, J.; Vachon, P.W. Coherence Estimation for SAR Imagery. IEEE Trans. Geosci. Remote Sens. 1999, 37, 135–149. [Google Scholar] [CrossRef]
  65. Guccione, D.E.; Turvey, E.; Roncella, R.; Thoeni, K.; Giacomini, A. Proficient Calibration Methodologies for Fixed Photogrammetric Monitoring Systems. Remote Sens. 2024, 16, 2281. [Google Scholar] [CrossRef]
Figure 1. Scheme of the various steps composing the collapse detection algorithm.
Figure 2. Example of the input images of the monitoring area used for comparison and the corresponding structural similarity map (SSM) generated by the SSIM algorithm.
Figure 3. Example set of images used for the shadow test. Consecutive hourly images taken from 11:00 a.m. to 5:00 p.m. on sunny days from 27 July 2020 to 31 July 2020. The shadow cast on the slope gradually decreases from 11:00 a.m. to 3:00 p.m., with minimal shadows in the images at 4:00 p.m. and 5:00 p.m. The image with red borders is the reference one.
Figure 4. Influence of shadow cast on the structural similarity index (SSI). The SSI values for images with varying degrees of shadow cast compared to a reference image taken at 3:00 p.m. The difference (∆t) indicates the time difference from the reference image.
Figure 5. Images used in one group (Group 1) of image sequence comparison tests. The images from all groups show a consistent trend of illuminance change. The image with red borders is the reference one.
Figure 6. (a) SSI between reference image and target images taken at different times within one day. (b) Reference images used in image sequence comparison tests to identify the influence of illuminance. The reference images in Group 5 have different illuminance levels compared to the reference images in the other groups.
Figure 7. Scheme of filter application in the algorithm, highlighting the effects of each filtering step. The image illustrates the application of various filters in the image comparison program. SSI is lowest when no filters are applied and increases with the addition of filters.
Figure 8. Steps of the algorithm for calculating the volume of collapsed material.
Figure 9. Method for identifying and filling invalid points with the nearest valid points in the collapsed area. Note: The red arrows indicate an example of the method for locating the nearest valid point.
Figure 10. Process of refining the approximate collapse area to determine the exact collapse area, illustrated with data from the Perarolo landslide site collapse on 9 June 2021.
Figure 11. Method for creating an alpha shape to represent collapsed bodies for volume calculation, demonstrated with data from the Perarolo landslide site collapse on 9 June 2021.
Figure 12. Overview of the Perarolo landslide site, showing the location of the monitoring system, the village on the right bank of the Boite river, and other geographic features.
Figure 13. (a) Searchlight for night photos; (b) view of one of the three photographic systems used for monitoring the Perarolo landslide; (c) detailed view of the hardware components of the system for the automatic acquisition and transmission of the time-lapse images.
Figure 14. The trend in SSI between consecutive daily images over the monitoring period shows that values below the threshold (red dots) indicate a collapse event, while those above the threshold (blue dots) indicate no collapse. (a) From 9 July 2020 to 30 September 2020; (b) from 1 October 2020 to 1 December 2020; (c) from 16 February 2021 to 30 April 2021; (d) from 1 May 2021 to 19 July 2021.
Figure 15. Images of the major collapse events recorded on the main landslide scarp. (a) Slope condition on 9 June 2021; (b) after the major natural collapse on 9 June 2021 (SSI = 0.9135); (c) after the artificial collapse induced by explosives on 25 June 2021 to restore slope safety (SSI = 0.9449); (d) after the anthropic activity on 13 July 2021, where a bulldozer reshaped the toe of the landslide (SSI = 0.9936); (e) during the continuation of anthropic activity on 14 July 2021 (SSI = 0.9941).
Figure 16. Location of the collapsed area identified on the original images (in black, (a,b)), showing the accuracy of the photogrammetric tool by comparing images (c) before and (d) after the events.
Figure 16. Location of the collapsed area identified on the original images (in black, (a,b)), showing the accuracy of the photogrammetric tool by comparing images (c) before and (d) after the events.
Figure 17. Test of the accuracy of our program at different time lags, using 50 consecutive images taken from 6 August 2020 to 24 September 2020; a collapse event occurred between 30 August 2020 and 31 August 2020. The red connectors identify the pairs of images in which a collapse is detected, while the black ones represent the pairs of images without collapse.
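The lag test of Figure 17 amounts to pairing a date-sorted image sequence at increasing time lags Δt and checking, for each pair, whether the known collapse date falls inside the interval the two images span. A minimal sketch under that reading (the daily acquisition sequence and the helper names are illustrative, not the paper's code):

```python
from datetime import date, timedelta

def lag_pairs(dates, max_lag):
    """Yield (earlier, later, lag_days) for every pair of acquisition
    dates separated by at most `max_lag` days."""
    for i, d0 in enumerate(dates):
        for d1 in dates[i + 1:]:
            lag = (d1 - d0).days
            if lag <= max_lag:
                yield d0, d1, lag

def spans_collapse(d0, d1, collapse_date):
    """A pair should show the collapse if the event date lies after the
    first image and no later than the second one."""
    return d0 < collapse_date <= d1

# Illustrative sequence: one image per day, collapse on 31 August 2020,
# mirroring the 50-image window of the test.
dates = [date(2020, 8, 6) + timedelta(days=k) for k in range(50)]
collapse = date(2020, 8, 31)
pairs = list(lag_pairs(dates, max_lag=5))
with_collapse = [p for p in pairs if spans_collapse(p[0], p[1], collapse)]
```

Pairs in `with_collapse` are the ones expected to drop below the SSI threshold (the red connectors); all other pairs should stay above it.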
Figure 18. Boxplot showing the distribution of structural similarity index (SSI) values at different time lags (∆t) for image pairs (a) with and (b) without collapse events. The central blue box represents the interquartile range (IQR), the blue line identifies the median (Q2), and the whiskers extend to 1.5 times the IQR. Outliers are shown in red.
Figure 19. Comparison of the collapse volumes measured using a laser scanner and our photogrammetric algorithm for the collapse events on (a) 9 June 2021 and (b) 25 June 2021, using the CloudCompare software. The contour plots indicate the vertical distance, in the z-direction, between the slope surfaces before and after each collapse.
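The volume check in Figure 19 rests on integrating the vertical (z) distance between the slope surfaces before and after a collapse. A minimal numpy sketch of that integration, assuming both surfaces have already been rasterized onto the same regular grid of cell size `cell` (in metres); this mirrors the vertical-distance comparison performed in CloudCompare, not the authors' exact workflow:

```python
import numpy as np

def collapse_volume(z_before, z_after, cell):
    """Volume lost between two elevation grids sharing the same layout:
    the positive surface lowering (z_before - z_after) summed over all
    cells, times the cell area. NaN cells (no data) are ignored, and
    areas of accumulation (negative lowering) do not contribute."""
    dz = np.asarray(z_before, dtype=float) - np.asarray(z_after, dtype=float)
    dz = np.where(np.isnan(dz), 0.0, dz)
    return float(np.sum(np.clip(dz, 0.0, None)) * cell**2)
```

For example, a uniform 1 m lowering over a 10 × 10 grid of 2 m cells yields 100 × 1 × 4 = 400 m³, the order of magnitude of the smaller events in Table 1.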
Table 1. Volumes of some collapse events detected after 9 April 2021, calculated by our photogrammetric tool.
Date       | Type of Collapse   | Volume (m³) | SSI
09/04/2021 | Natural            | 353.2       | 0.9971
09/06/2021 | Natural            | 8558.7      | 0.9135
25/06/2021 | Explosive          | 2739.6      | 0.9449
13/07/2021 | Anthropic activity | 217.9       | 0.9936
14/07/2021 | Anthropic activity | 457.9       | 0.9941
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Liang, Z.; Gabrieli, F.; Pol, A.; Brezzi, L. Automated Photogrammetric Tool for Landslide Recognition and Volume Calculation Using Time-Lapse Imagery. Remote Sens. 2024, 16, 3233. https://doi.org/10.3390/rs16173233
