Article

Nearshore Bathymetry from ICESat-2 LiDAR and Sentinel-2 Imagery Datasets Using Physics-Informed CNN

1 Donghai Laboratory, No. 1 Zhejiang Da Rd., Zhoushan 310030, China
2 State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, Ministry of Natural Resources, 36 Baochubeilu, Hangzhou 310012, China
3 School of Oceanography, Shanghai Jiao Tong University, Shanghai 200030, China
4 Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(3), 511; https://doi.org/10.3390/rs16030511
Submission received: 20 December 2023 / Revised: 20 January 2024 / Accepted: 22 January 2024 / Published: 29 January 2024
(This article belongs to the Section Environmental Remote Sensing)

Abstract

The recently launched Ice, Cloud, and Land Elevation Satellite 2 (ICESat-2), equipped with the Advanced Topographic Laser Altimeter System (ATLAS), offers considerable benefits in providing accurate bathymetric data across extensive geographical regions. By integrating active lidar-derived reference seawater depths with passive optical remote sensing imagery, efficient bathymetric mapping becomes feasible. In recent years, machine learning models have frequently been used to describe the nonlinear relationship between remote sensing spectral data and water depth and thereby produce bathymetric maps. A salient model among these is the convolutional neural network (CNN), which effectively integrates contextual information around bathymetric points. However, current CNN models and other machine learning approaches mainly concentrate on recognizing mathematical relationships in the data to determine a function between water depth and remote sensing spectral data, while often disregarding the physical process of light propagation through seawater before it reaches the seafloor. This study presents a physics-informed CNN (PI-CNN) model that incorporates radiative transfer-based information into the CNN structure. By including the shallow water double-band radiative transfer physical term (swdrtt), the model enhances seawater spectral features while also considering the context surrounding bathymetric pixels. The effectiveness and reliability of the proposed PI-CNN model are verified using in situ data from St. Croix and St. Thomas, with experimental R2 values exceeding 95% and errors below 1.6 m. These results suggest that our PI-CNN model surpasses conventional methodologies.

1. Introduction

Nearshore areas hold immense importance for ecological systems, providing valuable resources and serving as critical zones [1]. Accurate bathymetric maps, which detail the depths of underwater terrain, play a significant role in supporting activities such as coastal research, environmental management, and marine spatial planning [2]. To date, less than 15% of the world's oceans have been mapped using high-resolution sounding technology [3]. Both active and passive spaceborne/airborne platforms can be effectively used for bathymetry [4]. Traditional active bathymetric methods, such as multibeam echosounders (MBES) and airborne or shipborne LiDAR, are highly accurate and provide detailed marine mapping data [5,6]. However, these methods encounter limitations in nearshore waters characterized by harsh environments, complex topography, and the inaccessibility of shallow or disputed areas [7,8]. Studies have demonstrated that spaceborne remote sensing techniques can acquire bathymetric information with adequate accuracy, while also offering advantages in economic viability, global coverage, and timeliness [9,10,11,12]. As a result, satellite-derived bathymetry (SDB) is emerging as a promising alternative to in situ measurements for obtaining bathymetric information [13].
The rapid development of optical remote sensing techniques has attracted researchers' attention, and multispectral technology is now widely used in bathymetric mapping. This technology permits the use of high-resolution satellite images to determine depth ranges based on the wavelengths of the spectral bands of the image. Passive satellite-derived bathymetry processes multispectral remote sensing images, such as those captured by the Landsat and Sentinel-2 satellites [14,15], and employs radiative transfer theory for optical depth inversion, ushering in a new era of land and ocean monitoring. The Copernicus Sentinel-2 dual-satellite mission offers resolutions of 10 to 60 m and a revisit interval of five days. More importantly, it has an open and free data access policy, and its imagery constitutes an excellent spectral database for bathymetry. In shallow inland waters and in enclosed or semi-enclosed offshore bays, Sentinel-2-based spectral image inversion of bathymetry has achieved good accuracy [16,17,18]. These Sentinel-2 data, together with advanced computational techniques for bathymetric estimation, represent important advances in the field [19,20].
Based on multispectral remote sensing, SDB approaches can be categorized into two general classes: physical and empirical models [21,22,23]. Physics-based inversion approaches aim to estimate bathymetry, even in the absence of known bathymetric points, by applying a radiative transfer model to multispectral imagery [24,25]. However, these models often overlook important influences and lack comprehensive mathematical statistics [26], and it proves challenging to comprehensively estimate site-specific inherent optical properties [27]. Empirical models, on the other hand, leverage the relationship between image-derived spectral features and water depth data [28,29], but they require calibration with known water depths to determine their empirical parameters. Some models rely on specific band combinations that are too simplistic to handle the complexities of the nearshore environment [30,31,32]. Current physical and empirical bathymetric models each have their advantages, but both typically fail to consider the spatial correlation between bathymetric points and the surrounding pixels.
Some passive empirical bathymetric models still rely on in situ depth data, which greatly limits their widespread use. Active satellite-based bathymetric methods, such as spaceborne Light Detection and Ranging (LiDAR), can detect large-scale spatial information of underwater features and are gradually gaining traction as a substitute for in situ measurements in obtaining bathymetric information [33,34,35]. The National Aeronautics and Space Administration (NASA) launched the new Ice, Cloud, and Land Elevation Satellite 2 (ICESat-2), carrying the Advanced Topographic Laser Altimeter System (ATLAS), on 15 September 2018 [36,37]. Owing to laser penetration of the water column, ICESat-2/ATLAS data products have great potential for mapping seafloor terrain and yielding bathymetric points [33,38]. Density-based spatial clustering of applications with noise (DBSCAN) has proven to be an effective method for photon signal processing [39,40,41]. Nonetheless, challenges remain in capturing accurate short-scale changes in the sea surface height distribution. Furthermore, the original ATLAS photon data contain many noise signals, including solar background photons and instrument noise arising from the optical characteristics of photon-counting detectors, so determining suitable values for the DBSCAN parameters is always a challenge. In our previous research, we proposed an adaptive ellipse DBSCAN (AE-DBSCAN) for ICESat-2 photon data processing [42], which provides greater potential for obtaining large-scale nearshore bathymetric data.
As mentioned earlier, establishing the relationship between water depth and remote sensing reflectance using traditional linear or band ratio empirical models is too simplistic, because these relationships are often nonlinear [28,29]. Machine learning (ML) methods have therefore garnered considerable attention in nearshore bathymetry [43]. They can effectively extract high-dimensional features to construct a nonlinear function and do not require rigorous atmospheric correction or assumptions about the water's optical properties. Various machine learning algorithms, including neural networks (NN) [44,45], support vector machines (SVM) [46], random forest (RF) [43], multi-layer perceptrons (MLP) [47], and convolutional neural networks (CNNs) [48,49], have been applied to bathymetry estimation. Compared with other machine learning models, CNNs can exploit the spatial correlation between adjacent pixels with similar reflectance and corresponding water depth, showing promise for enhancing the accuracy of bathymetric inversion [50]. However, existing machine learning methods mainly focus on extracting high-dimensional features from training data to increase model robustness. Previous studies have shown that multi-band spectral feature information can help improve the accuracy of water depth inversion [51,52], yet current machine learning approaches mainly concentrate on recognizing general characteristics of the data to determine a function between water depth and remote sensing spectral data, while often disregarding the physical process of light propagation through seawater before it reaches the seafloor.
This study presents a physics-informed CNN (PI-CNN) model that incorporates radiative transfer-based data into a CNN structure. By including the shallow water double-band radiative transfer physical term (swdrtt), the model enhances seawater spectral features and also takes the surroundings of bathymetric pixels into account. We apply the adaptive ellipse DBSCAN (AE-DBSCAN) method to obtain ICESat-2 reference bathymetric points as prior training data. Moreover, we generate training and test samples from Sentinel-2 spectral reflectance features and radiative transfer features, and establish the relationship between these features and the ICESat-2 reference bathymetric points using the CNN model. The next section introduces the research areas and data sources. We present the proposed methodology in Section 3, then illustrate the obtained results, validate the proposed methodology, and comprehensively discuss the effects of the CNN structure, inversion models, and band combinations on SDB performance. The final section offers a concise summary.

2. Data Sources

2.1. ICESat-2 Data

The ICESat-2 satellite, launched in September 2018, is equipped with ATLAS [53]. With a revisit period of 91 days, this system emits green (532 nm) laser pulses that produce footprints approximately every 0.7 m along-track, each around 13 m in diameter, along the satellite's ground orbit [54]. To enhance surface elevation detection, the laser is split into pairs of strong and weak beams with a weak-to-strong energy ratio of 1:4. The separation between beam pairs is 3.3 km, while within each pair the strong and weak beams are spaced 90 m apart on the ground. Operating at a wavelength of 532 nm, the laser has a repetition frequency of 10 kHz. The standard ATL03 product provides geolocated ellipsoid heights based on WGS84 [55]. We downloaded all experimental data from the National Snow and Ice Data Center (NSIDC).

2.2. Sentinel-2 Data

The Sentinel-2A and 2B satellites, both carrying a multispectral instrument (MSI), were launched by the European Space Agency (ESA) in June 2015 and March 2017, respectively, and together provide a revisit period of 5 days [56]. The MSI provides the multispectral images required for this study, offering 13 spectral bands at varying resolutions. We mainly use the blue (B2, 490 nm), green (B3, 560 nm), red (B4, 665 nm), and near-infrared (B8, 842 nm) channels, which have a resolution of 10 m. Additionally, the red edge (B5, 705 nm), near-infrared (B6, 740 nm; B7, 783 nm), and short-wave infrared (B11, 1610 nm; B12, 2190 nm) bands have a resolution of 20 m [57]. We download Level-1C (L1C) images with less than 10% cloud cover from the U.S. Geological Survey (USGS) website and use the Sen2Cor plug-in to obtain atmospherically corrected Level-2A (L2A) images.

2.3. Validation Data

The continuously updated digital elevation model (CUDEM) dataset, developed by NOAA's National Centers for Environmental Information (NCEI), is generated from water depth data spanning 2009 to the present [58]. It represents the highest-resolution, seamless depiction of the entire U.S. Atlantic and Gulf Coasts available in the public domain. The coastal topographic-bathymetric DEMs offer a spatial resolution of 1/9th arc-second (~3 m), while the offshore bathymetric DEMs provide a coarser resolution of 1/3rd arc-second (~10 m). In this study, we use CUDEM data as validation data around the St. Thomas and St. Croix areas. The vertical datum of the CUDEM is VIVD09, and we use NOAA's VDatum software to convert the CUDEM from VIVD09 to WGS84.

2.4. Study Regions

Four regions are selected for nearshore bathymetry in this study, considering water clarity. The four selected islands are located in the eastern Caribbean, as shown in Figure 1. St. Croix is bounded by land to the west and coral reefs to the east and is surrounded by deep water. Since the region contains distinct deep-water and shallow-water areas, it spans a wide range of water depths, making it ideal for validating the presented model. The Sentinel-2 image of St. Croix covers an area of ~1100 km2. St. Thomas, the second study area, offers diverse coastal features: the island is traversed by a mountain range, the surrounding waters are clear with extensive offshore coral reefs, and it has a number of natural bays and harbors. The Sentinel-2 image of St. Thomas covers ~560 km2. Anegada comprises a combination of peripheral reefs, a central lagoon, and a variety of substrate compositions; its Sentinel-2 image covers ~780 km2. Barbuda is a flat island with an image area of ~900 km2, dominated by Codrington Lagoon in the west and the Barbuda Highlands in the east. The first two regions are used to verify the accuracy of the method, while the last two are used to test its applicability in different regions. To ensure the bathymetric analysis concentrates solely on the underwater areas of interest, land areas within the study regions are masked using the Shuttle Radar Topography Mission (SRTM) 5' digital elevation model. All data utilized for each region are listed in Table 1.

3. Method

3.1. PI-CNN Method Overview

A concise flowchart in Figure 2 outlines the process. We use the adaptive ellipse DBSCAN algorithm to compute precise ICESat-2 reference bathymetric points for creating the CNN training labels. The physics information (PI) mainly comprises the water reflectance $R$, the radiative transfer term $swdrtt$, and the attenuation coefficient $K_d(\lambda)$. The processed Sentinel-2 spectral image contains reflectance data in different bands, and the radiative transfer features refer to the radiative transfer term $swdrtt$ and the attenuation coefficient $K_d(\lambda)$. We generate the PI-CNN training datasets from the Sentinel-2-processed spectral images and the radiative transfer features. Subsequently, the trained CNN model establishes the link between the remote sensing spectral features and the bathymetric points. Finally, inputting the complete Sentinel-2 spectral feature data into the trained PI-CNN model produces the full bathymetric maps.

3.2. Data Preparation

3.2.1. Bathymetric Data from ICESat-2

The radius $\varepsilon$ and the threshold $MinPts$ are the key parameters of DBSCAN: points in a cluster are classified as signal when the density of adjacent points within radius $\varepsilon$ exceeds the threshold $MinPts$ [59]. We extract the reference bathymetric points from the ICESat-2 ATL03 data using the adaptive ellipse DBSCAN algorithm, which adaptively computes these key parameters [42]. First, we crop the input ICESat-2 ATL03 data along the vertical elevation direction, ensuring that the retained range encompasses both the sea surface and the bottom topography. Next, the data are segmented, and each segment is treated as a photon signal dataset, denoted $D$. Within each segment, we calculate the instantaneous sea surface elevation $s_s$ and remove the photon data above the surface. We then compute the Euclidean distance matrix for all points in $D$ and sort each row in ascending order. For dataset $D$, the candidate radius set $D_{\varepsilon}$ is calculated as follows:
$$D_{\varepsilon} = \left\{ \varepsilon_k = \overline{D_k} \right\}, \quad 1 \le k \le N \tag{1}$$
where $k = 1, 2, \dots, N$, $N$ is the length of $D$, and $\overline{D_k}$ is the mean of the $k$-th column of the row-sorted Euclidean distance matrix of $D$.
The candidate threshold dataset $D_{mpts}$ is calculated as follows:
$$D_{mpts} = \left\{ minpts_k = \frac{(N_{sn} - N_{no}) + \ln M}{\ln\left(N_{sn}/N_{no}\right)} \right\}, \quad 1 \le k \le N \tag{2}$$
where $N_{sn}$ is the mean count of seafloor signal and noise photons, $N_{no}$ is the mean count of noise photons, and $M$ is the number of frames into which the photon data are divided in the vertical direction.
We determine the optimal candidate radius $\varepsilon_o$ and candidate threshold $minpts_o$ by iteration: for each $k$, we run DBSCAN with the corresponding $\varepsilon_k$ and $minpts_k$ and record the clusters generated under the different $k$ values. When the same clusters are generated three consecutive times, that clustering is recorded as the optimal one. Iteration continues until the number of generated clusters no longer equals the optimal number, and the maximum $k$ for which the cluster count equals the optimal count is chosen as the optimal $k$; the radius and minimum clustering threshold corresponding to this optimal $k$ are the optimal DBSCAN parameters for $D$. Running DBSCAN with these optimal parameters yields the preliminary ICESat-2 bathymetric photon points, after which we eliminate outliers along both the horizontal and vertical axes. A minimal sketch of this parameter search is given below.
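For illustration, the following Python sketch shows the adaptive parameter search built on scikit-learn's DBSCAN. It is a simplified reading of the procedure, not the authors' code: the "same clusters three consecutive times" test is approximated by comparing cluster counts, the threshold $minpts$ (from Equation (2)) is passed in as a scalar, and the function names are hypothetical.

import numpy as np
from sklearn.cluster import DBSCAN

def adaptive_dbscan(D, minpts):
    """D: (N, 2) array of (along-track distance, elevation) photon coordinates."""
    # Candidate radii (Equation (1)): mean of each column of the row-sorted
    # Euclidean distance matrix, i.e., the mean k-th nearest-neighbor distance.
    dist = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)
    dist.sort(axis=1)
    eps_candidates = dist.mean(axis=0)[1:]  # drop k = 1 (self-distance, ~0)

    counts = []
    for eps in eps_candidates:
        labels = DBSCAN(eps=eps, min_samples=minpts).fit_predict(D)
        counts.append(labels.max() + 1)     # cluster count (noise is labeled -1)
    counts = np.array(counts)

    # "Optimal" cluster count: the first value repeated three consecutive times
    # (assumes such a run exists in the segment).
    opt = next(counts[i] for i in range(len(counts) - 2)
               if counts[i] == counts[i + 1] == counts[i + 2])
    # The largest k still yielding the optimal count gives the optimal radius.
    k_best = np.where(counts == opt)[0].max()
    return DBSCAN(eps=eps_candidates[k_best], min_samples=minpts).fit_predict(D)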
Subsequently, to account for displacements caused by refraction at the air–water interface, the ICESat-2 bathymetric photon points undergo refraction correction [60]. In bathymetric measurements, achieving temporal alignment between in situ data, ICESat-2 data, and Sentinel-2 images is challenging. It should be noted that sea currents here are not strong and are usually considered to change little over time; in ocean color remote sensing, data matching is typically only considered when the time difference between satellite and in situ measurements is within three (±3) hours [4]. Therefore, we mainly eliminate the effect of tides and retain bathymetry data referenced to mean sea level. We remove the influence of tides using the tide model GOT4.8 [36], whose parameters include diurnal, semidiurnal, and longer-period tides and were chosen to remove the instantaneous effects of tides during the ICESat-2 transit.
The water depth data H based on mean sea level can be expressed as follows:
$$H = d_e - \Delta H_{ref} - \Delta H_{tide} \tag{3}$$
where $d_e$ is the uncorrected depth, $\Delta H_{ref}$ is the refraction correction variability simulated by Parrish's method, and $\Delta H_{tide}$ is the tide variability simulated by the global ocean tide model GOT4.8.
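As a minimal sketch of Equation (3): Parrish's full refraction correction uses the laser pointing geometry, but for ICESat-2's near-nadir beams it is commonly approximated by scaling the apparent depth by the ratio of refractive indices. This simplification, and the assumption that the tide height is supplied externally (e.g., from GOT4.8), are ours, not the paper's.

N_AIR, N_WATER = 1.00029, 1.34116   # refractive indices of air and seawater at 532 nm

def depth_msl(d_e, tide_m):
    """Equation (3), simplified: depth relative to mean sea level.
    d_e: uncorrected (apparent) photon depth; tide_m: instantaneous tide height."""
    d_refraction_corrected = d_e * (N_AIR / N_WATER)  # near-nadir approximation
    return d_refraction_corrected - tide_m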

3.2.2. Multispectral Imagery Data from Sentinel-2

After downloading the original Sentinel-2 L1C images from the ESA website, we atmospherically correct them to L2A images using the Sen2Cor tool. Resampling is then applied to standardize all band resolutions to 10 m, and the L2A images are clipped to the region of interest. Land within each Sentinel-2 sub-image is filtered using the SRTM 5' data, and land pixel values are substituted with 0. All procedures are carried out with the SNAP v9.0 software developed by ESA.

3.3. PI-CNN Model

3.3.1. Optical Physical Data Composition

The single-band model proposed earlier for empirical bathymetric inversion offers simplicity and ease of use but relies on prerequisites such as high water reflectivity, clear water quality, and a single substrate type, limiting its practicality. In contrast, the double-band model, derived from the single-band model, overcomes the influence of water and substrate types through band ratio calculations [29]. This band ratio approach mitigates the impact of factors such as solar altitude angle, sensor orientation, and wave conditions on bathymetric accuracy. Assuming that backscattering in deep water is negligible compared to bottom reflectance, the double-band ratio radiative transfer model for optically shallow water (OSW) is expressed as follows [61]:
$$\ln\!\left(\frac{R_1 - R_{\infty 1}}{R_2 - R_{\infty 2}}\right) \approx (c_2 - c_1)\,z + \ln\!\left(\frac{A_{d1} - R_{\infty 1}}{A_{d2} - R_{\infty 2}}\right) \tag{4}$$
where $R$ is the remote sensing reflectance, $z$ is the water depth, $R_{\infty}$ is the remote sensing reflectance of optically deep water (ODW), $A_d$ is the bottom reflectance, and $c$ is the effective attenuation coefficient. Subscripts 1 and 2 denote two different bands.
Lai [62] introduced a method to distinguish optically shallow water (OSW) from optically deep water (ODW) using an independent neural network (NN), achieving a general accuracy exceeding 94% with Landsat-8 satellite images. In this study, we likewise utilize a two-layer NN for OSW/ODW classification. Since the maximum penetration depth of ICESat-2 is 40 m [60] and our NN training data are based on ICESat-2 detections, OSW is defined as water shallower than 40 m and ODW as water deeper than 60 m. We train the NN using processed Sentinel-2 L2A images, with OSW datasets derived from the remote sensing reflectance near the ICESat-2 reference bathymetric points, and ODW datasets obtained from established databases and visual inspection. We randomly select the same number of data points from the ODW dataset as there are in the OSW dataset to create a balanced ODW training dataset. Both OSW and ODW training datasets are used for training in MATLAB 2023a. After model training, we input the reflectance vectors of the entire Sentinel-2 image to obtain the deep- and shallow-water classification results.
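The classifier itself was trained in MATLAB; as a hedged Python stand-in, a small scikit-learn MLP reproduces the same workflow. The arrays X_osw and X_odw (per-pixel reflectance vectors, with at least as many ODW as OSW samples), the array reflectance of shape (H, W, n_bands), and the hidden-layer width of 16 are assumptions for illustration only.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Balance the classes: draw as many ODW samples as there are OSW samples.
idx = rng.choice(len(X_odw), size=len(X_osw), replace=False)
X = np.vstack([X_osw, X_odw[idx]])
y = np.concatenate([np.ones(len(X_osw)), np.zeros(len(X_osw))])  # 1 = OSW, 0 = ODW

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# Classify every pixel of the scene to obtain the OSW/ODW map.
flat = reflectance.reshape(-1, reflectance.shape[-1])
osw_mask = clf.predict(flat).reshape(reflectance.shape[:2]).astype(bool)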
Although the reflectance of deep water is extremely low, it still affects the reflectance ratio of the two bands. Based on the classification results, we therefore calculate the average deep-water reflectance $\overline{R_{ODW}}$ and use it, as shown in Equation (5), to highlight the remote sensing reflectance ratio in shallow water:
$$swdrtt(R_1, R_2) = \ln\!\left(\frac{R_1 - R_{\infty 1}}{R_2 - R_{\infty 2}}\right) \approx \ln\!\left(\frac{R_1 - \overline{R_{ODW1}}}{R_2 - \overline{R_{ODW2}}}\right) \tag{5}$$
where $\overline{R_{ODW}}$ is the average reflectance of the ODW in the region, and subscripts 1 and 2 denote two different bands.
The diffuse attenuation coefficient $K_d(\lambda)$ (where $\lambda$ is the light wavelength in free space) is an apparent optical property. Because $K_d(\lambda)$ can characterize the turbidity of oceanic and coastal waters [63], we also use it as input data for training the depth inversion network models. $K_d$ is estimated from the relationship between the attenuation coefficient and the blue-to-green ratio of water-leaving radiance. An empirical model with the original fitting coefficients is expressed as follows [64]:
$$K_d(490) = K_w(490) + A \left[ \frac{L_w(\lambda_{blue})}{L_w(\lambda_{green})} \right]^{B} \tag{6}$$
where $K_w(490)$ is the attenuation coefficient of pure seawater at $\lambda = 490\ \mathrm{nm}$, and $L_w(\lambda_{blue})$ and $L_w(\lambda_{green})$ are the normalized water-leaving radiances of the blue and green bands, respectively.
Based on Equation (6), we adopt the parameterization from Lee to estimate $K_d$ [65]:
$$K_d(490) = 0.016 + 0.15645 \left[ \frac{1.05\, R(\lambda_{blue})}{R(\lambda_{green})} \right]^{-1.5401} \tag{7}$$
where $R(\lambda_{blue})$ and $R(\lambda_{green})$ are the water reflectances of the blue and green bands, respectively.
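A minimal sketch of these two physics-informed features follows, assuming R1, R2, R_blue, and R_green are 2-D reflectance arrays and odw_mask is the ODW pixel mask from the classifier above; the clipping constant is our own addition to keep the logarithm defined.

import numpy as np

def swdrtt(R1, R2, odw_mask, eps=1e-6):
    """Shallow-water double-band radiative transfer term, Equation (5)."""
    R1_odw = R1[odw_mask].mean()            # average ODW reflectance, band 1
    R2_odw = R2[odw_mask].mean()            # average ODW reflectance, band 2
    num = np.clip(R1 - R1_odw, eps, None)   # clip so the log stays defined
    den = np.clip(R2 - R2_odw, eps, None)
    return np.log(num / den)

def kd490(R_blue, R_green):
    """Diffuse attenuation coefficient, Equation (7)."""
    return 0.016 + 0.15645 * (1.05 * R_blue / R_green) ** (-1.5401)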
We design a physics-informed CNN architecture for bathymetry inversion. Alongside the remotely sensed reflectance data, we incorporate the prior knowledge from Equations (5) and (7) to build band ratio radiative transfer information as part of the training model inputs. We also utilize the a priori reference bathymetric points extracted from ICESat-2 ATL03 as training labels. To effectively train the model, it is essential to align the ICESat-2 bathymetric points with the pixel values in the corresponding Sentinel-2 image. We extract sub-images of size $n \times n \times m_d$, centered on the prior bathymetric points, as input feature tensors, where $m_d$ is the number of input bands and $n \times n$ is the input window size. Simultaneously, the corresponding prior bathymetric points serve as labels for model training.
Before training, data normalization is an important step; here, we use the z-score normalization as follows:
$$x = \frac{X - \mu}{\sigma} \tag{8}$$
where $X$ is the dataset, $\mu$ is the global mean of the dataset, and $\sigma$ is the global standard deviation.
Because the red, green, and blue wavelengths are more responsive to changes in water depth, the primary focus with increasing depth is on enhancing the band ratio radiative transfer information for these bands. In our study, we investigate the impact of different band combinations on the prediction models, as detailed in Table 2. Unless otherwise specified, the PI-CNN results presented in this paper are generated by default using the Bs_3 band combination.

3.3.2. PI-CNN Structure

The structure of the PI-CNN model is shown in Figure 3. The first layer is the input layer, with a depth of $m_d$, where $m_d$ is the dimension of the input training data, i.e., the number of input bands. The second layer is a convolutional layer with 32 kernels of size 2 × 2, and the third convolutional layer uses 64 kernels of the same size. The fourth and fifth convolutional layers use 128 and 32 kernels, respectively, with a kernel size of 3 × 3 in both layers. Batch normalization is added for regularization to avoid overfitting; it reduces the internal covariate shift caused by randomness in the input data and in the parameter initialization. After the last convolutional layer, the output is fed into the fully connected layers. The flatten and dense layers convert the high-dimensional feature representations into one-dimensional vectors, and this dense layer applies the nonlinear ReLU activation function. A dropout layer then randomly omits a percentage of neurons before the data are fed into a final dense layer, which uses a linear activation function for regression. All layers except the final fully connected layer use the ReLU activation function to model the nonlinear relationship between the reflectance feature vectors and the bathymetry vectors and to improve network efficiency.
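A hedged Keras sketch of this architecture follows. The kernel counts and sizes are taken from the text; the (valid) padding, the width of the hidden dense layer, and the placement of the batch normalization layer are assumptions, since they are not specified in the paper.

import tensorflow as tf
from tensorflow.keras import layers

def build_pi_cnn(n=7, m_d=6):
    """PI-CNN sketch: n x n input patches with m_d spectral/physics channels."""
    return tf.keras.Sequential([
        layers.Input(shape=(n, n, m_d)),
        layers.Conv2D(32, 2, activation="relu"),    # 2nd layer: 32 kernels, 2 x 2
        layers.Conv2D(64, 2, activation="relu"),    # 3rd layer: 64 kernels, 2 x 2
        layers.Conv2D(128, 3, activation="relu"),   # 4th layer: 128 kernels, 3 x 3
        layers.Conv2D(32, 3, activation="relu"),    # 5th layer: 32 kernels, 3 x 3
        layers.BatchNormalization(),                # regularization (placement assumed)
        layers.Flatten(),
        layers.Dense(64, activation="relu"),        # hidden width assumed
        layers.Dropout(0.3),                        # dropout rate from the text
        layers.Dense(1, activation="linear"),       # regression output: water depth
    ])

With a 7 × 7 input and valid padding, the spatial size shrinks to 6, 5, 3, and finally 1 through the four convolutional layers, so the flatten step produces a 32-element vector.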
During training, the network is optimized using mean squared error (MSE) as the default loss function. Training is conducted over 300 epochs, with a maximum of 1000 epochs, an initial learning rate of 0.0004, and a dropout rate of 0.3 to prevent overfitting. The training data are divided into 70% for training, 15% for validation, and 15% for testing during the model training process. A GPU is employed to run the model, and all simulations are performed using Python 3.9.
The input feature tensors are generated from sub-images of size $n \times n \times m_d$ centered on the prior bathymetric points, where $m_d$ is the number of input bands and $n \times n$ is the input window size, and the corresponding prior bathymetric points serve as training labels. After training, the physics-informed data vectors of the whole Sentinel-2 image are fed into the trained model to obtain the bathymetric map of the entire scene.
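A minimal sketch of this sample-extraction and training step, reusing build_pi_cnn and tf from the sketch above. Here stack is assumed to be the z-score-normalized (Equation (8)) feature tensor of shape (H, W, m_d), points a list of (row, col, depth) ICESat-2 reference bathymetric points, and the optimizer an assumption (the text specifies only the learning rate and loss).

import numpy as np
import tensorflow as tf

def extract_patches(stack, points, n=7):
    """Cut n x n training patches from the feature tensor at each reference point."""
    r = n // 2
    X, y = [], []
    for row, col, depth in points:
        patch = stack[row - r:row + r + 1, col - r:col + r + 1, :]
        if patch.shape[:2] == (n, n):        # skip points too close to the image edge
            X.append(patch)
            y.append(depth)
    return np.asarray(X), np.asarray(y)

X, y = extract_patches(stack, points, n=7)
model = build_pi_cnn(n=7, m_d=stack.shape[-1])
model.compile(optimizer=tf.keras.optimizers.Adam(4e-4), loss="mse")  # optimizer assumed
model.fit(X, y, epochs=300, validation_split=0.15)  # a further 15% is held out for testing
# Inference: apply the trained model to an n x n window around every OSW pixel.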

3.4. Accuracy Assessment

In this study, we compare the results obtained from our model with those of existing bathymetric models and with in situ CUDEM data. We evaluate the performance of the proposed model using the coefficient of determination ($R^2$) and the root mean squared error (RMSE), defined as follows:
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(d_{ri} - d_{pi}\right)^2} \tag{9}$$
$$R^2 = 1 - \frac{\sum_{i=1}^{N} \left(d_{ri} - d_{pi}\right)^2}{\sum_{i=1}^{N} \left(d_{ri} - \overline{d_p}\right)^2} \tag{10}$$
where $d_r$ is the measured depth, $d_p$ is the estimated depth, $\overline{d_p}$ is the mean of the predicted depths, and $N$ is the length of the dataset.
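The two metrics written out in NumPy exactly as defined above (note that Equation (10), as stated in the text, uses the mean of the predicted depths in its denominator):

import numpy as np

def rmse(d_r, d_p):
    """Root mean squared error, Equation (9)."""
    return np.sqrt(np.mean((d_r - d_p) ** 2))

def r_squared(d_r, d_p):
    """Coefficient of determination as defined in Equation (10)."""
    return 1.0 - np.sum((d_r - d_p) ** 2) / np.sum((d_r - np.mean(d_p)) ** 2)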

4. Model Results

4.1. Lidar-Derived Bathymetric Results

The reference bathymetric points from lidar, extracted by the adaptive ellipse DBSCAN algorithm, play a vital role in our experiments; an example is shown in Figure 4. The ATL03 example data correspond to the ICESat-2 gt2r trajectory passing over Barbuda on 5 March 2019, within a latitude range of 17.51–17.63°N. Visually, standard DBSCAN can detect underwater terrain return signals, but it retains a significant number of noise points around them. In contrast, AE-DBSCAN not only ensures extraction accuracy but also leaves very few noise points, illustrating how it significantly improves the efficiency and accuracy of reference bathymetric point extraction.

4.2. Optical Water Classification Results

Figure 5 presents true-color images of St. Croix, St. Thomas, Anegada, and Barbuda, with yellow lines representing the ICESat-2 laser trajectories passing through these areas. The true-color images present the diverse water environments used for training and evaluating the optical water classification, whose results are shown in Figure 6. Figure 5 reveals distinct variations in the remote sensing spectra of land, deep water, and shallow water across different bands. In the Sentinel-2 true-color images, the shoals and coral reefs belong to the shallow-water region and appear light blue, while the deep-water region appears dark blue. As can be seen in Figure 5, there are large areas of shallow water in eastern St. Croix, southeastern Anegada, and western Barbuda that are clearly separated from the deep-water areas. St. Thomas is surrounded by deep-water harbors, so shallow water is found mainly around the land, while waters farther from land are deep.
In Figure 6, light and dark blue distinguish OSW from ODW. St. Thomas and Anegada exhibit OSW distributions in waters farther from land, while the other two regions predominantly feature OSW areas near the land, aligning with the regional bathymetric patterns. Table 3 reports classification accuracies, using the pixel values around the ICESat-2 reference bathymetric points and the training deep-water regions as reference validation data. The method achieves an average accuracy of 95.45% in distinguishing OSW across the whole images (see Table 3), with only a few neighboring pixels misclassified due to environmental complexities. These results underscore the feasibility and effectiveness of our classification method.

4.3. CNN Architectures Verification

The size of the convolutional window can significantly impact the efficiency and accuracy of CNN training. In regions like St. Croix and St. Thomas, where CUDEM data are available for comparison, we present accuracy assessment plots for different convolutional window sizes in Figure 7. At both sites, the PI-CNN model with a 7 × 7 window demonstrates the highest inversion accuracy, closely followed by the 9 × 9 window, while the 3 × 3 window yields the lowest accuracy. It is important to note that we focus on inversion results for water depths shallower than 40 m due to the limits of ICESat-2's detection capability. Comparing the absolute errors in St. Croix, shown in Figure 8, it is evident that the absolute errors do not decrease with larger window sizes; these errors are mainly concentrated in the northeastern area. Notably, the bathymetric maps with 7 × 7 and 9 × 9 windows deviate less from the in situ depths. Therefore, considering both efficiency and accuracy, we opt for a 7 × 7 convolutional window.

4.4. Bathymetric Estimates Validation

To validate the entire bathymetric inversion process, we initially compare the depth of ICESat-2 reference bathymetric points with in situ depth measurements in St. Croix and St. Thomas, as shown in Figure 9a,b, respectively. The fitting results of the AE-DBSCAN extraction exhibit some errors in water depths less than 5 m in St. Croix, potentially due to water quality variations in the region. However, the overall error in bathymetric points derived from AE-DBSCAN does not exceed 1 m in both sites, indicating the promising potential of ICESat-2 bathymetric points as reliable references for bathymetric inversion.
Figure 10a,c display the in situ maps of St. Croix and St. Thomas, and Figure 10b,d show the PI-CNN-derived bathymetric maps for the two sites. The PI-CNN-derived bathymetric maps for St. Croix and St. Thomas closely align with the actual measurements. However, in the southeast corner of St. Croix and the northwest corner of St. Thomas, the PI-CNN-derived results appear shallower. Figure 5a shows that very few trajectories passed through the northeast corner of St. Croix, resulting in fewer bathymetric points detected by ICESat-2 in this region, and Figure 5b shows that no trajectories passed through the southwestern corner of St. Thomas, leaving no bathymetric points there. Without bathymetric points for calibration, the retrieved bathymetry may therefore appear shallower, which illustrates the importance of integrating ICESat-2 bathymetric points in the bathymetric inversion. The error scatter plots of PI-CNN-estimated depth versus in situ depth in Figure 10e,f show that the error spread in very shallow waters (<5 m) is smaller than that in deeper waters (>15 m), suggesting a slight decrease in the accuracy of the derived depth with increasing depth. Overall, the RMSE of the PI-CNN-estimated depths amounts to 1.39 m in St. Croix and 1.56 m in St. Thomas, with R-squared values exceeding 95%, indicating a robust correlation between the PI-CNN-derived SDB results and the in situ data.
For the areas without in situ data, we use the ICESat-2 reference bathymetric points as ground truth to verify the PI-CNN estimates. The error scatter plots for St. Croix, St. Thomas, Anegada, and Barbuda are shown in Figure 11a–d. The spread of points widens with depth in Figure 11, meaning the error trend increases slightly with depth. St. Thomas has the best fit, with R2 = 0.97 and RMSE = 1.61 m, followed by St. Croix. Figure 12a,b present the PI-CNN-derived bathymetric maps of Barbuda and Anegada. Barbuda and Anegada have larger areas of shallow water with more fitting points, and their deep-water areas are located in the southeast corner (see Figure 12a) and the eastern part (see Figure 12b), respectively. The error analysis shows that the R2 of these two areas also exceeds 95% and that their variance is within 10% of the maximum water depth.

5. Discussion

5.1. Comparison with Different Band Combinations

Selecting the right band combination significantly enhances the accuracy and efficiency of the trained model. To evaluate the results of various band combinations more specifically in St. Croix, we calculate RMSE and R2 values per depth range, as presented in Figure 13. The results show that the Bs_3 combination outperforms the other configurations, with Bs_6 a close second. Using Bs_3, the overall RMSE in St. Croix is 0.07 m lower than with the B_3 combination, and R2 improves by up to 3%. All combinations yield similar R2 values in the 0–11 m depth range and beyond 20 m, with RMSE values being close in the 0–5 m range. Accuracy does not notably improve with more bands, possibly due to training bias resulting from an excessive number of bands. Figure 14 depicts partial absolute error maps of St. Croix for different band combinations. Band combinations containing the physical information swdrtt exhibit smaller errors in both near-land and near-deep-water regions, suggesting that including swdrtt contributes to maintaining high accuracy.
Considering that the blue and green wavelength bands can penetrate deeper waters, our radiative transfer features center on the green, blue, and red bands. Figure 15a,d,g display scatterplots of the raw green, blue, and red band reflectances in St. Croix, respectively. These scatterplots exhibit an exponential trend, with points of different water depths clustering together and thus being difficult to distinguish. In contrast, Figure 15b,e,h show scatterplots of ln(green), ln(blue), and ln(red) in St. Croix, respectively; these reveal a more diffuse trend for points at different water depths compared to the raw reflectance scatterplots. Figure 15c,f,i depict scatterplots of $\ln(R_{green} - \overline{R_{ODW,green}})$, $\ln(R_{red} - \overline{R_{ODW,red}})$, and $\ln(R_{blue} - \overline{R_{ODW,blue}})$. Here, points at different water depths display a flared diffusion trend, emphasizing the characteristics of the deeper points. Consequently, including the swdrtt information contributes to a more effective construction of the spectra as a function of water depth.

5.2. Comparison with Other Bathymetric Retrieval Models

A neural network is an extremely effective tool for establishing the relationship between a priori water depths and remote sensing reflectances. For comparison, we apply a two-layer feedforward regression network with a sigmoid transfer function in the hidden layer and a linear transfer function in the output layer to estimate the bathymetric map [66]. The hidden layer size is set to 50. The reference bathymetric points are imported as the prediction vector, and the remote sensing reflectances at the positions of the reference bathymetric points in the Sentinel-2 image are imported as the response vector for training the network. During model construction and simulation, the dataset is randomly divided into a training set (70%) and a test set (30%) to obtain a robust network. Finally, we input the reflectance vectors of the entire Sentinel-2 image into the trained network to obtain the full bathymetric map.
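A hedged Python stand-in for this MATLAB baseline (one hidden layer of 50 sigmoid units, linear output): the arrays X_pts (reflectance vectors at the reference points), y_pts (their depths), and reflectance (the full (H, W, n_bands) scene) are assumed to be prepared as described.

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X_train, X_test, y_train, y_test = train_test_split(X_pts, y_pts, test_size=0.3)
nn = MLPRegressor(hidden_layer_sizes=(50,), activation="logistic",  # sigmoid hidden layer
                  max_iter=2000).fit(X_train, y_train)              # linear output by default
depth_map = nn.predict(reflectance.reshape(-1, X_pts.shape[1])) \
              .reshape(reflectance.shape[:2])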
In addition, the linear model [67] and the band ratio model [29] are also used to obtain bathymetric maps from Sentinel-2 imagery. The ratio and linear models are expressed as follows:
$$H_{band} = m_1 \frac{\ln\left(n R(\lambda_{green})\right)}{\ln\left(n R(\lambda_{blue})\right)} - m_0 \tag{11}$$
$$H_{linear} = g_0 + g_1 R(\lambda_{blue}) + g_2 R(\lambda_{green}) + g_3 R(\lambda_{red}) \tag{12}$$
where $m_1$ is a tunable constant that scales the ratio to depth, $n$ is a constant, $R$ is the water reflectance of the corresponding band, and $m_0$ is the offset for a depth of zero at sea level. The remaining parameters are empirical coefficients determined by regression.
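A minimal sketch of calibrating Equations (11) and (12) by least squares against the reference depths. The value n = 1000 is our assumption (a constant commonly chosen so both logarithms stay positive), and the *_pts arrays holding per-band reflectances at the reference bathymetric points are illustrative names.

import numpy as np

n = 1000.0
# Band ratio model, Equation (11): fit slope m1 and intercept (-m0) to the depths.
pSDB = np.log(n * Rg_pts) / np.log(n * Rb_pts)
m1, intercept = np.polyfit(pSDB, depths, 1)        # H_band = m1 * pSDB + intercept
H_band = m1 * np.log(n * R_green) / np.log(n * R_blue) + intercept

# Linear model, Equation (12): solve for g0..g3 by ordinary least squares.
A = np.column_stack([np.ones_like(depths), Rb_pts, Rg_pts, Rr_pts])
g = np.linalg.lstsq(A, depths, rcond=None)[0]
H_linear = g[0] + g[1] * R_blue + g[2] * R_green + g[3] * R_red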
Figure 16 compares the accuracy assessments of the different methods in St. Croix and St. Thomas. At both sites, the PI-CNN model achieves the best accuracy, with R2 of 95% in St. Croix and 97% in St. Thomas and RMSEs below 1.6 m, while the linear model yields the worst accuracy. Compared to the PI-CNN results, the RMSE of the linear model increases by 0.7 m in St. Croix and 0.6 m in St. Thomas, and its R2 decreases by 30% at both sites. Although the R2 values of the NN and PI-CNN methods are similar in St. Croix and St. Thomas, the PI-CNN model exceeds 95% at both reference sites and outperforms the NN-based results by 1% and 3% in R2 in St. Croix and St. Thomas, respectively. The RMSE for PI-CNN amounts to 1.39 m in St. Croix and 1.56 m in St. Thomas, an improvement over the NN-derived RMSEs of 1.43 m and 1.69 m, corresponding to estimated reductions of 2.9% and 8.3%. Overall, PI-CNN outperforms the other models because it more fully captures the spectral features, which leads to improved accuracy; the NN closely follows PI-CNN. Similar conclusions can be drawn from the bathymetric maps produced by these models, shown in Figure 17. Among them, the NN-derived bathymetric maps of St. Croix and St. Thomas are the most similar to the PI-CNN results, but the NN's deep-water areas are not smooth compared to those from PI-CNN, demonstrating the advantage of the CNN in exploiting the surroundings of the water depth pixels.

5.3. Atmospheric Correction

Atmospheric scattering contributes over 90% of the total measured radiation [68]. Atmospheric correction programs convert the digital number (DN) values in Sentinel-2 images to actual surface reflectance, mitigating part of the atmospheric effects [69,70]. The Sen2Cor algorithm requires enough dark pixels in the scene to allow the assumption of constant bottom-of-atmosphere (BOA) reflectance across bands; where dense dark vegetation pixels are absent, deep-water pixels can substitute for them [71]. Figure 18a illustrates the bathymetric map of St. Croix generated from an image without atmospheric correction, with the corresponding error plot in Figure 18b. The water depth trend generally aligns closely with the bathymetric map generated from the corrected image (see Figure 10b). However, in the uncorrected image of St. Croix, the bathymetry in the southwest corner of the island appears shallower than the in situ data. The SDB error assessment for the uncorrected image yields an RMSE of 1.61 m with an R2 of 0.94, similar to the result obtained using the corrected image.
In addition to the bathymetric map and accuracy assessment, we also present the absolute error map for the uncorrected image and a cross-profile comparison, as shown in Figure 19. Compared with Figure 8b, the absolute errors based on the corrected and uncorrected images closely resemble each other. However, the cross-section depth plot makes it evident that the retrievals from the uncorrected image slightly overestimate water depths below 10 m and underestimate water depths above 15 m. Overall, bathymetric inversion using the corrected image is more accurate in this area.

5.4. Analysis of Model Portability

Portability analysis of a machine learning model is essential for its subsequent application in various regions. In this study, we apply the PI-CNN model trained in one region to Sentinel-2 images of another region to obtain bathymetric maps. We conducted cross-training in St. Croix and St. Thomas, and the cross-trained bathymetric maps are depicted in Figure 20. The results obtained from models pretrained on other regions show that the water depth in the optically deep-water areas appears much shallower and that there are no gradients in the nearshore water depth. This suggests that the PI-CNN model is less portable and may not be suitable for direct cross-application. Realistic bottom turbidity and water column conditions affect the reflectance in remote sensing images, and optical remote bathymetry represents reflectance as a function of depth [72]. Figure 21 provides the average reflectance values within various depth ranges for St. Croix and St. Thomas. Due to the large area of deep water in St. Croix, its average reflectance values are considerably higher than those of St. Thomas, which may explain the poor portability of the pretrained model. In conclusion, using a pretrained model in another region may require additional coastal bathymetric feature information for further training; combining the model with ICESat-2 bathymetric point data for transfer learning may be a more effective approach in such cases.

6. Conclusions

In this study, we propose a novel PI-CNN model for bathymetric mapping that integrates radiative transfer-based information into a CNN model to enhance the correlation between spectral information and water depth. The incorporation of swdrtt data highlights spectral features in the shallow-water region, while the CNN architecture effectively incorporates peripheral information from bathymetric measurement points.
  • Our results demonstrate that the AE-DBSCAN method accurately tracks underwater topographic data. The accuracy and robustness of the bathymetric maps generated with the PI-CNN model are validated using CUDEM data from St. Croix and St. Thomas, with all experiments achieving errors below 1.6 m.
  • The PI-CNN model exhibits higher accuracy: its RMSE is 8.3% lower than that of the NN model in St. Thomas.
  • With neural-network-based methods, skipping atmospheric correction can yield more data products by avoiding the data loss caused by atmospheric correction failures.
  • When assessing SDB errors using uncorrected images, our proposed PI-CNN method achieves an RMSE of 1.61 m with an R2 of 0.94, similar to the results obtained using corrected images, indicating the minimal impact of atmospheric conditions on our approach's performance.
The potential significance of spatial profiling lidar in obtaining high-precision bathymetric profile information is underscored by our proposed PI-CNN method, contributing towards its widespread adoption and application in future SDB research.

Author Contributions

Conceptualization, C.X. and P.C.; methodology, C.X.; software, C.X.; validation, C.X. and S.Z.; writing—original draft preparation, C.X.; writing—review and editing, C.X. and P.C.; funding acquisition, P.C. and H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (42322606; 42276180; 61991453), the National Key Research and Development Program of China (2022YFB3901703; 2022YFB3902603), the Key Special Project for Introduced Talents Team of Southern Marine Science and Engineering Guangdong Laboratory (GML2021GD0809), the Donghai Laboratory Preresearch Project (DH2022ZY0003), and the Key Research and Development Program of Zhejiang Province (grant No. 2020C03100).

Data Availability Statement

The ICESat-2 ATL03 data are available at https://nsidc.org/data/ATL03/versions/5 (accessed on 21 January 2024). The Sentinel-2 Level-1C (L1C) imagery products are available at https://earthexplorer.usgs.gov/ (accessed on 21 January 2024). The Continuously Updated Digital Elevation Model (CUDEM) data are available at https://chs.coast.noaa.gov/htdata/raster2/elevation/ (accessed on 21 January 2024).

Acknowledgments

We thank the anonymous reviewers for their suggestions, which significantly improved the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Benveniste, J.; Cazenave, A.; Vignudelli, S.; Fenoglio-Marc, L.; Shah, R.; Almar, R.; Andersen, O.; Birol, F.; Bonnefond, P.; Bouffard, J. Requirements for a coastal hazards observing system. Front. Mar. Sci. 2019, 6, 348. [Google Scholar] [CrossRef]
  2. Pacheco, A.; Horta, J.; Loureiro, C.; Ferreira, Ó. Retrieval of nearshore bathymetry from Landsat 8 images: A tool for coastal monitoring in shallow waters. Remote Sens. Environ. 2015, 159, 102–116. [Google Scholar] [CrossRef]
  3. Mayer, L.; Jakobsson, M.; Allen, G.; Dorschel, B.; Falconer, R.; Ferrini, V.; Lamarche, G.; Snaith, H.; Weatherall, P. The Nippon Foundation—GEBCO Seabed 2030 Project: The Quest to See the World’s Oceans Completely Mapped by 2030. Geosciences 2018, 8, 63. [Google Scholar] [CrossRef]
  4. Jawak, S.D.; Vadlamani, S.S.; Luis, A.J. A Synoptic Review on Deriving Bathymetry Information Using Remote Sensing Technologies: Models, Methods and Comparisons. Adv. Remote Sens. 2015, 4, 16. [Google Scholar] [CrossRef]
  5. Diesing, M.; Coggan, R.; Vanstaen, K. Widespread rocky reef occurrence in the central English Channel and the implications for predictive habitat mapping. Estuar. Coast. Shelf Sci. 2009, 83, 647–658. [Google Scholar] [CrossRef]
  6. Porskamp, P.; Rattray, A.; Young, M.; Ierodiaconou, D. Multiscale and hierarchical classification for benthic habitat mapping. Geosciences 2018, 8, 119. [Google Scholar] [CrossRef]
  7. Choi, C.; Kim, D.-J. Optimum baseline of a single-pass In-SAR system to generate the best DEM in tidal flats. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 919–929. [Google Scholar] [CrossRef]
  8. Harris, P.T.; Baker, E.K. Why map benthic habitats? In Seafloor Geomorphology as Benthic Habitat; Elsevier: Amsterdam, The Netherlands, 2012; pp. 3–22. [Google Scholar]
  9. Almeida, L.P.; Almar, R.; Bergsma, E.W.; Berthier, E.; Baptista, P.; Garel, E.; Dada, O.A.; Alves, B. Deriving high spatial-resolution coastal topography from sub-meter satellite stereo imagery. Remote Sens. 2019, 11, 590. [Google Scholar] [CrossRef]
  10. Salameh, E.; Frappart, F.; Marieu, V.; Spodar, A.; Parisot, J.-P.; Hanquiez, V.; Turki, I.; Laignel, B. Monitoring sea level and topography of coastal lagoons using satellite radar altimetry: The example of the Arcachon Bay in the Bay of Biscay. Remote Sens. 2018, 10, 297. [Google Scholar] [CrossRef]
  11. Rahman, M.S.; Di, L. The state of the art of spaceborne remote sensing in flood management. Nat. Hazards 2017, 85, 1223–1248. [Google Scholar] [CrossRef]
  12. Casal, G.; Monteys, X.; Hedley, J.; Harris, P.; Cahalane, C.; McCarthy, T. Assessment of empirical algorithms for bathymetry extraction using Sentinel-2 data. Int. J. Remote Sens. 2018, 40, 2855–2879. [Google Scholar] [CrossRef]
  13. Han, W.; Zhang, X.; Wang, Y.; Wang, L.; Huang, X.; Li, J.; Wang, S.; Chen, W.; Li, X.; Feng, R.; et al. A survey of machine learning and deep learning in remote sensing of geological environment: Challenges, advances, and opportunities. ISPRS J. Photogramm. Remote Sens. 2023, 202, 87–113. [Google Scholar] [CrossRef]
  14. Caballero, I.; Stumpf, R.P. Retrieval of nearshore bathymetry from Sentinel-2A and 2B satellites in South Florida coastal waters. Estuar. Coast. Shelf Sci. 2019, 226, 106277. [Google Scholar] [CrossRef]
  15. Simpson, C.E.; Arp, C.D.; Sheng, Y.; Carroll, M.L.; Jones, B.M.; Smith, L.C. Landsat-derived bathymetry of lakes on the Arctic Coastal Plain of northern Alaska. Earth Syst. Sci. Data 2021, 13, 1135–1150. [Google Scholar] [CrossRef]
  16. Dörnhöfer, K.; Göritz, A.; Gege, P.; Pflug, B.; Oppelt, N. Water constituents and water depth retrieval from Sentinel-2A—A first evaluation in an oligotrophic lake. Remote Sens. 2016, 8, 941. [Google Scholar] [CrossRef]
  17. Chybicki, A. Mapping south baltic near-shore bathymetry using Sentinel-2 observations. Pol. Marit. Res. 2017, 24, 15–25. [Google Scholar] [CrossRef]
  18. Chybicki, A. Three-dimensional geographically weighted inverse regression (3GWR) model for satellite derived bathymetry using Sentinel-2 observations. Mar. Geod. 2018, 41, 1–23. [Google Scholar] [CrossRef]
  19. Almar, R.; Kestenare, E.; Reyns, J.; Jouanno, J.; Anthony, E.; Laibi, R.; Hemer, M.; Du Penhoat, Y.; Ranasinghe, R. Response of the Bight of Benin (Gulf of Guinea, West Africa) coastline to anthropogenic and natural forcing, Part1: Wave climate variability and impacts on the longshore sediment transport. Cont. Shelf Res. 2015, 110, 48–59. [Google Scholar] [CrossRef]
  20. Caballero, I.; Stumpf, R.P.; Meredith, A. Preliminary assessment of turbidity and chlorophyll impact on bathymetry derived from Sentinel-2A and Sentinel-3A satellites in South Florida. Remote Sens. 2019, 11, 645. [Google Scholar] [CrossRef]
  21. Dekker, A.G.; Phinn, S.R.; Anstee, J.; Bissett, P.; Brando, V.E.; Casey, B.; Fearns, P.; Hedley, J.; Klonowski, W.; Lee, Z.P. Intercomparison of shallow water bathymetry, hydro-optics, and benthos mapping techniques in Australian and Caribbean coastal environments. Limnol. Oceanogr. Methods 2011, 9, 396–425. [Google Scholar] [CrossRef]
  22. Hamylton, S.M.; Hedley, J.D.; Beaman, R.J. Derivation of high-resolution bathymetry from multispectral satellite imagery: A comparison of empirical and optimisation methods through geographical error analysis. Remote Sens. 2015, 7, 16257–16273. [Google Scholar] [CrossRef]
  23. Gao, J. Bathymetric mapping by means of remote sensing: Methods, accuracy and limitations. Prog. Phys. Geogr. Earth Environ. 2009, 33, 103–116. [Google Scholar] [CrossRef]
  24. Brando, V.E.; Anstee, J.M.; Wettle, M.; Dekker, A.G.; Phinn, S.R.; Roelfsema, C. A physics based retrieval and quality assessment of bathymetry from suboptimal hyperspectral data. Remote Sens. Environ. 2009, 113, 755–770. [Google Scholar] [CrossRef]
25. Lee, Z.; Carder, K.L.; Mobley, C.D.; Steward, R.G.; Patch, J.S. Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization. Appl. Opt. 1999, 38, 3831–3843.
26. Casal, G.; Harris, P.; Monteys, X.; Hedley, J.; Cahalane, C.; McCarthy, T. Understanding satellite-derived bathymetry using Sentinel 2 imagery and spatial prediction models. GISci. Remote Sens. 2019, 57, 271–286.
27. Niroumand-Jadidi, M.; Bovolo, F.; Bruzzone, L.; Gege, P. Physics-based bathymetry and water quality retrieval using PlanetScope imagery: Impacts of 2020 COVID-19 lockdown and 2019 extreme flood in the Venice Lagoon. Remote Sens. 2020, 12, 2381.
28. Lyzenga, D.R. Passive remote sensing techniques for mapping water depth and bottom features. Appl. Opt. 1978, 17, 379–383.
29. Stumpf, R.P.; Holderied, K.; Sinclair, M. Determination of water depth with high-resolution satellite imagery over variable bottom types. Limnol. Oceanogr. 2003, 48, 547–556.
30. Niroumand-Jadidi, M.; Bovolo, F.; Bruzzone, L. SMART-SDB: Sample-specific multiple band ratio technique for satellite-derived bathymetry. Remote Sens. Environ. 2020, 251, 112091.
31. Traganos, D.; Poursanidis, D.; Aggarwal, B.; Chrysoulakis, N.; Reinartz, P. Estimating Satellite-Derived Bathymetry (SDB) with the Google Earth Engine and Sentinel-2. Remote Sens. 2018, 10, 859.
32. Caballero, I.; Stumpf, R. Towards Routine Mapping of Shallow Bathymetry in Environments with Variable Turbidity: Contribution of Sentinel-2A/B Satellites Mission. Remote Sens. 2020, 12, 451.
33. Forfinski-Sarkozi, N.A.; Parrish, C.E. Analysis of MABEL Bathymetry in Keweenaw Bay and Implications for ICESat-2 ATLAS. Remote Sens. 2016, 8, 772.
34. Forfinski-Sarkozi, N.A.; Parrish, C.E. Active-Passive Spaceborne Data Fusion for Mapping Nearshore Bathymetry. Photogramm. Eng. Remote Sens. 2019, 85, 281–295.
35. Li, Y.; Gao, H.; Jasinski, M.F.; Zhang, S.; Stoll, J.D. Deriving high-resolution reservoir bathymetry from ICESat-2 prototype photon-counting lidar and Landsat imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7883–7893.
36. Neumann, T.A.; Martino, A.J.; Markus, T.; Bae, S.; Bock, M.R.; Brenner, A.C.; Brunt, K.M.; Cavanaugh, J.; Fernandes, S.T.; Hancock, D.W.; et al. The Ice, Cloud, and Land Elevation Satellite-2 mission: A global geolocated photon product derived from the Advanced Topographic Laser Altimeter System. Remote Sens. Environ. 2019, 233, 111325.
37. Neumann, T.; Scott, V.S.; Markus, T.; McGill, M. The Multiple Altimeter Beam Experimental Lidar (MABEL): An Airborne Simulator for the ICESat-2 Mission. J. Atmos. Ocean. Technol. 2013, 30, 345–352.
38. Zhang, W.; Xu, N.; Ma, Y.; Yang, B.; Zhang, Z.; Wang, X.H.; Li, S. A maximum bathymetric depth model to simulate satellite photon-counting lidar performance. ISPRS J. Photogramm. Remote Sens. 2021, 174, 182–197.
39. Ma, Y.; Xu, N.; Liu, Z.; Yang, B.; Yang, F.; Wang, X.H.; Li, S. Satellite-derived bathymetry using the ICESat-2 lidar and Sentinel-2 imagery datasets. Remote Sens. Environ. 2020, 250, 112047.
40. Chen, Y.; Le, Y.; Zhang, D.; Wang, Y.; Qiu, Z.; Wang, L. A photon-counting LiDAR bathymetric method based on adaptive variable ellipse filtering. Remote Sens. Environ. 2021, 256, 112326.
41. Xu, N.; Ma, X.; Ma, Y.; Zhao, P.; Yang, J.; Wang, X.H. Deriving Highly Accurate Shallow Water Bathymetry from Sentinel-2 and ICESat-2 Datasets by a Multitemporal Stacking Method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6677–6685.
42. Xie, C.; Chen, P.; Pan, D.; Zhong, C.; Zhang, Z. Improved Filtering of ICESat-2 Lidar Data for Nearshore Bathymetry Estimation Using Sentinel-2 Imagery. Remote Sens. 2021, 13, 4303.
43. Auret, L.; Aldrich, C. Interpretation of nonlinear relationships between process variables by use of random forests. Miner. Eng. 2012, 35, 27–42.
44. Kaloop, M.R.; El-Diasty, M.; Hu, J.W.; Zarzoura, F. Hybrid Artificial Neural Networks for Modeling Shallow-Water Bathymetry via Satellite Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5403811.
45. Liu, S.; Wang, L.; Liu, H.; Su, H.; Li, X.; Zheng, W. Deriving bathymetry from optical images with a localized neural network algorithm. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5334–5342.
46. Misra, A.; Vojinovic, Z.; Ramakrishnan, B.; Luijendijk, A.; Ranasinghe, R. Shallow water bathymetry mapping using Support Vector Machine (SVM) technique and multispectral imagery. Int. J. Remote Sens. 2018, 39, 4431–4450.
47. Wang, Y.; Zhou, X.; Li, C.; Chen, Y.; Yang, L. Bathymetry Model Based on Spectral and Spatial Multifeatures of Remote Sensing Image. IEEE Geosci. Remote Sens. Lett. 2020, 17, 37–41.
48. Peng, K.; Xie, H.; Xu, Q.; Huang, P.; Liu, Z. A Physics-Assisted Convolutional Neural Network for Bathymetric Mapping Using ICESat-2 and Sentinel-2 Data. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4210513.
49. Ai, B.; Wen, Z.; Wang, Z.; Wang, R.; Su, D.; Li, C.; Yang, F. Convolutional neural network to retrieve water depth in marine shallow water area from remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2888–2898.
50. Chen, H.; Yin, D.; Chen, J.; Chen, X.; Liu, S.; Liu, L. Stacked spectral feature space patch: An advanced spectral representation for precise crop classification based on convolutional neural network. Crop J. 2022, 10, 1460–1469.
51. Legleiter, C.J.; Overstreet, B.T. Mapping gravel bed river bathymetry from space. J. Geophys. Res. Earth Surf. 2012, 117, F04024.
52. Philpot, W.D. Bathymetric mapping with passive multispectral imagery. Appl. Opt. 1989, 28, 1569–1578.
53. Neumann, T.; Brenner, A.; Hancock, D.; Robbins, J.; Saba, J.; Harbeck, K.; Gibbons, A.; Lee, J.; Luthcke, S.; Rebold, T. Algorithm Theoretical Basis Document (ATBD) for Global Geolocated Photons ATL03; Goddard Space Flight Center: Greenbelt, MD, USA, 2018.
54. Altamimi, Z.; Rebischung, P.; Métivier, L.; Collilieux, X. ITRF2014: A new release of the International Terrestrial Reference Frame modeling nonlinear station motions. J. Geophys. Res. Solid Earth 2016, 121, 6109–6131.
55. Markus, T.; Neumann, T.; Martino, A.; Abdalati, W.; Brunt, K.; Csatho, B.; Farrell, S.; Fricker, H.; Gardner, A.; Harding, D.; et al. The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2): Science requirements, concept, and implementation. Remote Sens. Environ. 2017, 190, 260–273.
56. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36.
57. Richter, R.; Wang, X.; Bachmann, M.; Schläpfer, D. Correction of cirrus effects in Sentinel-2 type of imagery. Int. J. Remote Sens. 2011, 32, 2931–2941.
58. Amante, C.J.; Love, M.; Carignan, K.; Sutherland, M.G.; MacFerrin, M.; Lim, E. Continuously Updated Digital Elevation Models (CUDEMs) to Support Coastal Inundation Modeling. Remote Sens. 2023, 15, 1702.
59. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; pp. 226–231.
60. Parrish, C.E.; Magruder, L.A.; Neuenschwander, A.L.; Forfinski-Sarkozi, N.; Alonzo, M.; Jasinski, M. Validation of ICESat-2 ATLAS Bathymetry and Analysis of ATLAS’s Bathymetric Mapping Performance. Remote Sens. 2019, 11, 1634.
61. Legleiter, C.J.; Roberts, D.A.; Lawrence, R.L. Spectrally based remote sensing of river bathymetry. Earth Surf. Process. Landf. 2009, 34, 1039–1059.
62. Lai, W.; Lee, Z.; Wang, J.; Wang, Y.; Garcia, R.; Zhang, H. A Portable Algorithm to Retrieve Bottom Depth of Optically Shallow Waters from Top-of-Atmosphere Measurements. J. Remote Sens. 2022, 2022, 9831947.
63. Kirk, J.T. Light and Photosynthesis in Aquatic Ecosystems; Cambridge University Press: Cambridge, UK, 1994.
64. Mueller, J.L. SeaWiFS algorithm for the diffuse attenuation coefficient, K(490), using water-leaving radiances at 490 and 555 nm. In SeaWiFS Postlaunch Calibration and Validation Analyses, Part 3; Hooker, S.B., Firestone, E.R., Eds.; NASA Goddard Space Flight Center: Greenbelt, MD, USA, 2000; pp. 24–27.
65. Lee, Z.-P.; Darecki, M.; Carder, K.L.; Davis, C.O.; Stramski, D.; Rhea, W.J. Diffuse attenuation coefficient of downwelling irradiance: An evaluation of remote sensing methods. J. Geophys. Res. Ocean. 2005, 110, C02017.
66. Xie, C.; Chen, P.; Zhang, Z.; Pan, D. Satellite-derived bathymetry combined with Sentinel-2 and ICESat-2 datasets using machine learning. Front. Earth Sci. 2023, 11, 1111817.
67. Lyzenga, D.R. Shallow-water bathymetry using combined lidar and passive multispectral scanner data. Int. J. Remote Sens. 1985, 6, 115–125.
68. Robinson, I.S. Discovering the Ocean from Space: The Unique Applications of Satellite Oceanography; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010.
69. Toming, K.; Kutser, T.; Laas, A.; Sepp, M.; Paavel, B.; Nõges, T. First Experiences in Mapping Lake Water Quality Parameters with Sentinel-2 MSI Imagery. Remote Sens. 2016, 8, 640.
70. Platt, U.; Pfeilsticker, K.; Vollmer, M. Radiation and Optics in the Atmosphere. In Springer Handbook of Lasers and Optics; Springer: Berlin/Heidelberg, Germany, 2007; p. 1165.
71. Müller-Wilm, U. Sen2Cor Software Release Note; European Space Agency: Paris, France, 2018.
72. Eugenio, F.; Marcello, J.; Martin, J. High-Resolution Maps of Bathymetry and Benthic Habitats in Shallow-Water Environments Using Multispectral Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3539–3549.
Figure 1. Distribution of the study regions. The basemap is the World Ocean Reference map in ArcMap 10.5.
Figure 2. Flowchart of the proposed PI-CNN model.
Figure 3. The CNN model structure.
Figure 4. Extraction results of seafloor points from ICESat-2 ATL03 data using the standard DBSCAN and AE-DBSCAN: (a) standard DBSCAN with ε = 1.2 m and MinPts = 4; (b) adaptive ellipse DBSCAN (AE-DBSCAN). Raw photons are shown in grey, sea surface photons in blue, detected seafloor photons in green, and refraction-corrected seafloor photons in red.
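For readers who want to reproduce the photon-clustering step illustrated in Figure 4, the following is a minimal sketch: standard DBSCAN from scikit-learn applied to a synthetic photon cloud, followed by a first-order refraction correction. It is not the paper's AE-DBSCAN; the photon data and the simple surface-detection threshold are illustrative assumptions, and the 0.75412 depth-scaling factor follows Parrish et al. [60].

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical photon cloud: a dense sea-surface return near 0 m, a sparser
# seafloor return near -5 m, and uniform background noise.
rng = np.random.default_rng(0)
surface = np.column_stack([rng.uniform(0, 500, 800), rng.normal(0.0, 0.1, 800)])
seafloor = np.column_stack([np.linspace(0, 500, 1000), -5.0 + rng.normal(0.0, 0.2, 1000)])
noise = np.column_stack([rng.uniform(0, 500, 120), rng.uniform(-12.0, 3.0, 120)])
photons = np.vstack([surface, seafloor, noise])  # columns: along-track (m), height (m)

# Standard DBSCAN with the Figure 4a parameters; noise photons get label -1.
labels = DBSCAN(eps=1.2, min_samples=4).fit_predict(photons)
signal = photons[labels != -1]

# Separate surface and subsurface photons with a simple threshold (a stand-in
# for the paper's surface-detection step), then apply a first-order refraction
# correction to the apparent depths (factor from Parrish et al. [60]).
surface_height = np.median(signal[:, 1])
subsurface = signal[signal[:, 1] < surface_height - 0.5]
apparent_depth = surface_height - subsurface[:, 1]
corrected_depth = 0.75412 * apparent_depth
print(f"{len(subsurface)} seafloor photons, mean depth {corrected_depth.mean():.2f} m")
```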
Figure 5. Sentinel-2 L2A true-color images of (a) St. Croix, (b) St. Thomas, (c) Anegada, and (d) Barbuda. ICESat-2 laser trajectories on different dates are shown as yellow lines.
Figure 6. OSW/ODW classification results using the NN in (a) St. Croix, (b) St. Thomas, (c) Anegada, and (d) Barbuda. Land is shown in yellow, ODW areas in dark blue, and OSW areas in light blue.
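The OSW/ODW separation in Figure 6 is a pixel-wise classification. The sketch below shows one plausible setup with a small scikit-learn MLP; it is not the paper's exact network, and the four-band samples and land/OSW/ODW labels are randomly generated placeholders, so the fitted model only demonstrates the pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Hypothetical training samples: 4-band reflectances with placeholder labels
# (0 = land, 1 = OSW, 2 = ODW); real training would use labeled pixels.
X_train = rng.random((300, 4))
y_train = rng.integers(0, 3, 300)

clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Classify new pixels (rows of band reflectances).
new_pixels = rng.random((5, 4))
print(clf.predict(new_pixels))
```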
Figure 7. Accuracy assessment with different window sizes.
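Figures 7 and 8 vary the context window around each bathymetric pixel. The sketch below shows how such N × N windows could be cut from a multi-band feature image for CNN input; the feature stack and pixel locations are hypothetical, and reflect padding at the image edges is an assumption rather than the paper's stated choice.

```python
import numpy as np

def extract_windows(stack: np.ndarray, rows, cols, size: int) -> np.ndarray:
    """Cut size x size patches from a (H, W, C) image around the given pixels."""
    half = size // 2
    # Pad so that windows near the border stay fully inside the array.
    padded = np.pad(stack, ((half, half), (half, half), (0, 0)), mode="reflect")
    return np.stack([padded[r:r + size, c:c + size, :]
                     for r, c in zip(rows, cols)])

stack = np.random.rand(100, 120, 10)        # hypothetical 10-band feature stack
rows, cols = [10, 50, 99], [5, 60, 119]     # hypothetical reference pixel locations
patches = extract_windows(stack, rows, cols, size=9)  # 9 x 9, as in Figure 8c
print(patches.shape)                        # (3, 9, 9, 10)
```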
Figure 8. Absolute error maps in St. Croix with different window sizes: (a) 3 × 3, (b) 7 × 7, (c) 9 × 9, and (d) 11 × 11.
Figure 9. Error plots of (a) ICESat-2 reference bathymetric point depths vs. in situ depths in St. Croix; (b) ICESat-2 reference bathymetric point depths vs. in situ depths in St. Thomas.
Figure 10. (a) In situ map in St. Croix, (b) PI-CNN-derived bathymetric map in St. Croix, (c) in situ map in St. Thomas, (d) PI-CNN-derived bathymetric map in St. Thomas, (e) error plot of PI-CNN-estimated depths vs. in situ depths in St. Croix, and (f) error plot of PI-CNN-estimated depths vs. in situ depths in St. Thomas.
Figure 11. Error plots of PI-CNN-estimated depths vs. ICESat-2-estimated depths at different sites: (a) St. Croix, (b) St. Thomas, (c) Anegada, and (d) Barbuda.
Figure 12. PI-CNN-derived bathymetric maps in (a) Anegada and (b) Barbuda.
Figure 13. Accuracy assessment plots from 0 to 20 m with different band combinations in St. Croix: (a) R2 and (b) RMSE.
Figure 14. Absolute error maps with different band combinations in St. Croix: (a) Bs_3, (b) B_3, (c) Bs_6, (d) B_6, (e) Bs_9, and (f) B_9.
Figure 15. Scatterplots of (a) green vs. blue reflectance, (b) ln(green) vs. ln(blue), (c) $\ln(R_{green} - \overline{R}_{ODW,green})$ vs. $\ln(R_{blue} - \overline{R}_{ODW,blue})$, (d) red vs. blue reflectance, (e) ln(red) vs. ln(blue), (f) $\ln(R_{red} - \overline{R}_{ODW,red})$ vs. $\ln(R_{green} - \overline{R}_{ODW,green})$, (g) blue vs. red reflectance, (h) ln(blue) vs. ln(red), and (i) $\ln(R_{blue} - \overline{R}_{ODW,blue})$ vs. $\ln(R_{red} - \overline{R}_{ODW,red})$. Point color encodes the depth at each pixel, with the corresponding color bar on the right.
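The transform plotted in panels (c), (f), and (i) of Figure 15 subtracts the mean optically-deep-water reflectance from each band before taking the logarithm. A minimal sketch follows; the reflectance images and the ODW mask are hypothetical, and pixels where the difference is non-positive are set to NaN since the logarithm is undefined there.

```python
import numpy as np

def ln_odw_corrected(band: np.ndarray, odw_mask: np.ndarray) -> np.ndarray:
    """ln(R - mean ODW reflectance), NaN where the difference is non-positive."""
    r_odw = band[odw_mask].mean()       # mean deep-water reflectance for this band
    diff = band - r_odw
    out = np.full(band.shape, np.nan)
    valid = diff > 0
    out[valid] = np.log(diff[valid])
    return out

blue = np.random.rand(100, 120) * 0.05 + 0.01   # hypothetical reflectance images
green = np.random.rand(100, 120) * 0.04 + 0.01
odw = np.random.rand(100, 120) > 0.7            # hypothetical ODW mask (e.g., from Figure 6)

x = ln_odw_corrected(blue, odw)
y = ln_odw_corrected(green, odw)                # scatter y vs. x, as in Figure 15c
```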
Figure 16. Accuracy assessment with different bathymetric models.
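The accuracy assessments in Figures 7, 13, and 16 report R2 and RMSE between estimated and reference depths. A minimal sketch of both metrics on hypothetical depth arrays:

```python
import numpy as np

def r2_rmse(estimated: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """Coefficient of determination and root-mean-square error."""
    residual = reference - estimated
    ss_res = float(np.sum(residual ** 2))
    ss_tot = float(np.sum((reference - reference.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, float(np.sqrt(np.mean(residual ** 2)))

reference = np.array([2.1, 5.4, 9.8, 14.2, 18.7])   # hypothetical in situ depths (m)
estimated = np.array([2.4, 5.1, 10.3, 13.8, 18.1])  # hypothetical model depths (m)
r2, rmse = r2_rmse(estimated, reference)
print(f"R2 = {r2:.3f}, RMSE = {rmse:.2f} m")
```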
Figure 17. Bathymetric maps derived by the neural network in (a) St. Croix and (b) St. Thomas, by the band ratio model in (c) St. Croix and (d) St. Thomas, and by the linear model in (e) St. Croix and (f) St. Thomas.
Figure 18. (a) PI-CNN bathymetric map using the uncorrected image in St. Croix; (b) error plot of PI-CNN-estimated depths vs. in situ depths in St. Croix.
Figure 19. (a) Absolute error map with different band combinations in St. Croix; (b) profiles of the in situ depth and of the results using the uncorrected and the corrected images. The cross-section line is shown as the green line in (a).
Figure 20. Bathymetric maps derived by PI-CNN models pretrained at other sites: (a) St. Croix and (b) St. Thomas.
Figure 21. Average reflectance values within different depth ranges at (a) St. Croix and (b) St. Thomas.
Table 1. Information of the study data.

St. Croix: longitude 64.89–64.44°W; latitude 17.63–17.85°N; area ~36.71 km × 31.54 km. ICESat-2 data: 21 December 2018, 04:39:25; 19 January 2019, 03:15:23; 19 April 2019, 22:55:19; 17 June 2019, 08:01:34; 16 September 2019, 03:41:25; 15 September 2021, 04:57:28. Sentinel-2 data: 13 August 2019.

St. Thomas: longitude 65.11–64.80°W; latitude 18.26–18.44°N; area ~32.03 km × 20.64 km. ICESat-2 data: 22 November 2018, 06:03:25; 15 December 2019, 23:21:13; 13 December 2020, 06:00:25; 18 May 2021, 10:41:29. Sentinel-2 data: 21 November 2018.

Anegada: longitude 64.51–64.23°W; latitude 18.57–18.82°N; area ~55.21 km × 54.86 km. ICESat-2 data: 20 October 2018, 07:35:37; 15 May 2019, 09:33:56; 14 August 2019, 05:13:38; 12 September 2019, 03:49:44; 15 December 2019, 23:21:13; 18 June 2020, 02:38:09; 14 August 2020, 23:50:15. Sentinel-2 data: 19 April 2020.

Barbuda: longitude 61.95–61.67°W; latitude 17.50–17.79°N; area ~40.67 km × 40.41 km. ICESat-2 data: 3 May 2019, 09:58:56; 2 August 2019, 05:38:37; 31 August 2019, 04:14:43; 1 November 2019, 01:18:32; 29 November 2019, 23:54:33. Sentinel-2 data: 23 October 2021.
Table 2. Sentinel-2 band combinations.

Bs_3 (B2 Blue 490 nm, B3 Green 560 nm, B4 Red 665 nm): B2, B3, B4, swdrtt(B2, B3), swdrtt(B3, B4), swdrtt(B4, B2), Kd(B3).
B_3 (same bands): B2, B3, B4, Kd(490).
Bs_6 (B2 Blue 490 nm, B3 Green 560 nm, B4 Red 665 nm, B8 NIR 842 nm, B11 SWIR1 1610 nm, B12 SWIR2 2190 nm): B2, B3, B4, B8, B11, B12, swdrtt(B2, B3), swdrtt(B3, B4), swdrtt(B4, B2), Kd(B3).
B_6 (same bands): B2, B3, B4, B8, B11, B12, Kd(490).
Bs_9 (B2 Blue 490 nm, B3 Green 560 nm, B4 Red 665 nm, B5 VRE1 705 nm, B6 VRE2 740 nm, B7 VRE3 783 nm, B8 NIR 842 nm, B11 SWIR1 1610 nm, B12 SWIR2 2190 nm): B2, B3, B4, B5, B6, B7, B8, B11, B12, swdrtt(B2, B3), swdrtt(B3, B4), swdrtt(B4, B2), Kd(B3).
B_9 (same bands): B2, B3, B4, B5, B6, B7, B8, B11, B12, Kd(490).
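A minimal sketch of assembling one of the Table 2 input stacks (Bs_3) follows. The swdrtt helper below is a placeholder with an assumed log-ratio form, since the paper's shallow water double-band radiative transfer term is defined in the main text and not reproduced here; the band images and the Kd(B3) layer are hypothetical.

```python
import numpy as np

H, W = 100, 120
# Hypothetical reflectance images for the Sentinel-2 bands used in Table 2.
bands = {name: np.random.rand(H, W) * 0.05 + 0.01
         for name in ["B2", "B3", "B4", "B5", "B6", "B7", "B8", "B11", "B12"]}
kd_b3 = np.random.rand(H, W) * 0.2             # hypothetical Kd(B3) image

def swdrtt(bi: np.ndarray, bj: np.ndarray) -> np.ndarray:
    """Placeholder for the paper's swdrtt(Bi, Bj) term (assumed log-ratio form)."""
    return np.log(bi) / np.log(bj)

pairs = [("B2", "B3"), ("B3", "B4"), ("B4", "B2")]
bs_3 = np.dstack([bands["B2"], bands["B3"], bands["B4"]]
                 + [swdrtt(bands[i], bands[j]) for i, j in pairs]
                 + [kd_b3])                    # 7-channel Bs_3 feature stack
print(bs_3.shape)                              # (100, 120, 7)
```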
Table 3. Accuracy assessment of the OSW classification.

Region: St. Croix, St. Thomas, Anegada, Barbuda.
Accuracy (%): 95.6, 95.1, 96.3, 94.8.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
