Article

Seismic Image Identification and Detection Based on Tchebichef Moment Invariant

by Andong Lu and Barmak Honarvar Shakibaei Asli *

Centre for Life-Cycle Engineering and Management, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, Bedfordshire MK43 0AL, UK

* Author to whom correspondence should be addressed.
Electronics 2023, 12(17), 3692; https://doi.org/10.3390/electronics12173692
Submission received: 3 August 2023 / Revised: 26 August 2023 / Accepted: 29 August 2023 / Published: 31 August 2023

Abstract:
The research focuses on the analysis of seismic data, specifically targeting the detection, edge segmentation, and classification of seismic images. These processes are fundamental in image processing and are crucial in understanding the stratigraphic structure and identifying oil and natural gas resources. However, there is a lack of sufficient resources in the field of seismic image detection, and interpreting 2D seismic image slices based on 3D seismic data sets can be challenging. In this research, image segmentation involves image preprocessing and the use of a U-net network. Preprocessing techniques, such as the Gaussian filter and anisotropic diffusion, are employed to reduce blur and noise in seismic images. The U-net network, based on the Canny descriptor, is used for segmentation. For image classification, the ResNet-50 and Inception-v3 models are applied to classify different types of seismic images. In image detection, Tchebichef invariants are computed using the Tchebichef polynomials' recurrence relation. These invariants are then used in an optimized multi-class SVM network for detecting and classifying various types of seismic images. The promising results of the SVM model based on Tchebichef invariants suggest its potential to replace Hu moment invariants (HMIs) and Zernike moment invariants (ZMIs) for seismic image detection. This approach offers a more efficient and dependable solution for seismic image analysis in the future.

1. Introduction

Seismic techniques, initially pioneered by the oil exploration sector in the 1930s, have witnessed significant advancements over time [1]. Seismic image detection involves identifying two-dimensional boundaries that encapsulate seismic velocity structures within 2D images, using methodologies like full-waveform inversion. This advanced approach aims to unveil subsurface characteristics to an extent nearing half the wavelength of seismic waves [2], thereby enabling the recovery of P-wave velocities [3] derived from seismic reflection surveys.
The implementation of two-dimensional multi-fold surveys, specifically utilizing the common-mid-point technique, along with substantial advancements in instrumentation, computer technology, and data processing methodologies, has been instrumental in significantly improving the resolution of seismic data and the accuracy of subsurface imaging. However, it was not until the 1980s, when 3D reflection [4] came into existence, that seismic surveys started to unveil intricate details regarding subsurface structural and stratigraphic conditions. Channel boundaries, depicted as curved and curvilinear events on time slices, are called edges in image processing, where amplitudes vary intensively. The seismic method now has three principal applications: engineering seismology, exploration seismology, and earthquake seismology. Engineering seismology investigates near-surface geology less than 1 km below the earth's surface. Exploration seismology is utilized in oil and gas exploration less than 10 km below the earth's surface. Earthquake seismology investigates the earth's crustal structure more than 10 km below the surface [5].
Robert E. Sheriff [6] defines a seismic survey as "a program for mapping geologic structures by observing seismic waves, especially by creating seismic waves with artificial sources and observing the arrival time of the waves reflected from acoustic-impedance contrasts or refracted through high-velocity members."
The investigation of seismic data in the uppermost stratigraphic sections (specifically, within the upper 0.5 to 1.5 s of data) can provide a valuable understanding of preserved deposition elements in shallow and deep deposition environments [7].
To illustrate seismic geomorphological and stratigraphic expression, the horizon slice is usually selected in the seismic image, while the dip-magnitude map, derived from the seismic horizon slice, is more suitable for illustrating dip [8]. Horizon slices are shown in Figure 1. A horizon slice is a cross-sectional view taken along or parallel to an interpreted horizon, approximating the depositional surface for a specific stratigraphic level.
Channel identification is an additional area that should be taken into account in seismic image processing. Steffens et al. [10] found that a broad spectrum of channel morphology is encountered in the deep-water setting by utilizing near-seafloor 3D seismic data. The channel probably experienced changes in its gradient profile as it crossed each fault, resulting in dramatic changes in sinuosity and associated fill pattern. Also, a large erosional base is detected in the seismic facies-fill patterns. Morgan [11] found that the complexity of the near-surface channel-levee is related to aggradation and lateral migration during the early part of levee growth. Long et al. [12] utilized SEG-Y data sets to produce a high-resolution sea-bed image and identify the chaotically stacked packages in the debris-flow seismic section.
SEG-Y data are widely recognized as the standard format for 3D seismic image datasets. In the oil and gas industry, the software platform described by An et al. [13] is widely acknowledged for seismic data interpretation, facilitating the opening of 3D seismic image datasets and the creation of horizontal slices from them. Some 3D seismic image datasets, such as the Poseidon 3D seismic dataset from Australia, can be chosen from SEG Wiki [14]. These datasets serve as valuable resources for seismic interpretation purposes. The interpretation window is shown in Figure 2.
Moments have been widely applied in the field of image processing to extract global features. In 1962, Hu [15] proposed a collection of moment invariants derived from the principles of algebraic invariants. These invariants are independent of translation, scale, and rotation. However, traditional moments lack orthogonality, making the reconstruction of visual information from these non-orthogonal moments challenging [16]. To address this issue, Teague [16] introduced Zernike moments (ZM) and Legendre moments (LM) based on the theory of continuous orthogonal polynomials.
However, ZMs and LMs have several disadvantages in capturing seismic images [17,18]. LMs encounter challenges in effectively describing small-sized images. This can be attributed to the asymptotic zero distribution of Legendre polynomials, as demonstrated by previous studies [19]. Specifically, an interval near the edges of the range $[-1, 1]$ contains a greater number of zeros compared to an interval near the origin. The main drawback of ZMs, especially radial Zernike moments, is that scale and translation invariance can only be achieved by relating them to the central regular moments [20,21]. Mukundan et al. [22] and Yap et al. [23] introduced a set of discrete orthogonal moments based on the discrete Tchebichef polynomials and Krawtchouk polynomials, respectively. The development of these discrete orthogonal moments has several applications in image detection, classification, and fast computational methods [24,25,26].
In the field of image processing, one important aspect is image quality assessment (IQA), which aims to quantify and evaluate the quality of an image. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) are two widely recognized full-reference standards, assessing quality between a pair of images. The blind image spatial quality evaluator (BRISQUE), the perception-based image quality evaluator (PIQE), and the naturalness image quality evaluator (NIQE) are no-reference measures for identifying image quality [27].
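As a concrete illustration of the full-reference metrics mentioned above, PSNR follows directly from the mean squared error between two images. The sketch below is a minimal pure-Python version; the 8-bit peak value of 255 is an assumption of the sketch:

```python
import math

def psnr(img_a, img_b, peak=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized grayscale images.

    img_a, img_b: 2D lists of pixel intensities; peak: maximum possible value.
    """
    mse = 0.0
    n = 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            mse += (a - b) ** 2
            n += 1
    mse /= n
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10.0 * math.log10(peak ** 2 / mse)

# A uniform difference of 10 grey levels gives MSE = 100
reference = [[0] * 4 for _ in range(4)]
distorted = [[10] * 4 for _ in range(4)]
print(round(psnr(reference, distorted), 2))  # 28.13
```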
Numerous studies have been conducted on seismic image processing to estimate the earth’s rock parameters from seismic data. These properties primarily encompass the P-wave velocity, S-wave velocity, porosity, density, and anisotropic parameters [28]. The process of seismic image processing involves the comprehensive analysis, enhancement, interpretation, and extraction of valuable information from three-dimensional seismic datasets. Seismic image processing comprises several steps including seismic image segmentation, seismic image classification, and seismic image detection.
Seismic image segmentation involves two methods: one is segmenting the wavelet and the other is the measurement of structural discontinuities to identify subsurface faults, channels, and fracture features. Abdelwahhab et al. [29] applied wavelet-based segmentation to delineate wavelet features and introduced the concept of a depositional unit for enhanced classification of sedimentary facies attributes. Luo et al. [30] utilized the amplitude gradient as a discontinuity attribute to facilitate the interpretation of faults and stratigraphic boundaries. The process of edge detection plays a crucial role in the characterization of seismic discontinuities [31]. The edge detection mechanism comprises several edge detectors, including the full Sobel operator, the Roberts operator [32], the Prewitt operator [33], and the Canny detector [34]. The Canny detector, in particular, merits attention as it involves evaluating the partial derivatives of the Gaussian filter along the x- and y-directions, respectively. Nahain introduced the U-Net network [35], which incorporates pixel-level segmentation and benefits from its inherent structural characteristics. This network architecture enables enhanced segmentation accuracy. Additionally, during the U-Net network training process, stochastic gradient descent with a high momentum value of 0.99 [36] can be utilized, facilitating a self-adjustment process for the learning rate of the training model. Weight partitioning is employed to assign weights to specific pixels, emphasizing their representativeness and distinctive features.
The seismic image classification methods considered include the Support Vector Machine (SVM), Random Forest (RF), Fast Decision Trees (FDT), Naive Bayes (NB), and 1D-basin modelling. Abdelwahhab et al. [37] constructed a one-dimensional basin model and assessed organic maturity levels in visual images to classify distinct subsurface types. Chevitarese et al. [38] trained a convolutional neural network (CNN) to classify seismic facies. In the processing phase, image resolution and amplitude quantization are considered. The seismic data are then split into training and test sets, and CNNs are utilized to classify lines in 2D seismic images from the Netherlands Offshore F3 block and Penobscot datasets. Chevitarese achieves 99% accuracy using less than 10% of the available data for training. Geng et al. [39] proposed a highly efficient and resource-saving CNN architecture, called SeismicPatchNet, with topological modules and multi-feature fusion units for classifying seismic data. SeismicPatchNet works 50 times faster than ResNet-50 and has an advantage in identifying Bottom Simulating Reflections (BSR). Zhao [40] proposes an encoder–decoder CNN model to classify seismic facies and demonstrates its flexibility in training data by comparing it to a patch-based model. Encoder–decoder models extract a high-level abstraction of the input images, then recover sample-level class labels by way of deconvolution operations. All samples in the middle of the seismic amplitude line are classified as salt, which is marked in red; other samples are classified as non-salt, which is marked in white. Souza et al. [41] proposed approaches for the automatic classification of subsurface hydrocarbon-bearing regions from seismic images driven by multi-layer perceptron (MLP) neural networks and CNNs. Accuracy, recall, precision, F-measure, and loss are computed for both the MLP and CNN configurations, with high agreement in blind testing.
In the domain of seismic image detection, a specific supervised learning method has exhibited remarkable success in identifying geological features within seismic data. These features include the detection of salt bodies [42], the delineation of faults [43], and the classification of different types of seismic facies [44]. In addition to supervised learning approaches, several studies have also explored the utilization of unsupervised learning methods for detecting geological features. For instance, principal component analysis [45], K-means clustering [46], a self-organizing map (SOM) [47], and a convolutional autoencoder [48] have been employed in these studies.
Nevertheless, the present study remains constrained by a number of limitations. Firstly, the availability of open-source SEG-Y seismic datasets is restricted, thereby precluding the incorporation of all conceivable forms of seismic images in the experiment. Additionally, certain seismic images exhibit diminished image quality, necessitating a more protracted experimentation process and an expanded search for suitable images. Lastly, it is worth noting that not all moment transforms are ideally suited for the purpose of seismic image detection and identification. However, it is imperative to underscore the robust efficacy of the Tchebichef moment transforms demonstrated throughout the research.
This paper introduces a seismic image processing workflow for segmenting, classifying, and detecting seismic images. The paper's structure is as follows: firstly, seismic image segmentation is discussed, using preprocessing techniques and U-net networks. ResNet-50, VGG-16, VGG-19, and Inception-v3 networks are then presented for seismic image classification. Additionally, this study investigates the application of Tchebichef moment (TM) invariants with an optimized multi-class support vector machine (SVM) for seismic image detection. The research involves a comprehensive comparative analysis of TM invariants, ZM invariants, and Hu moment invariants to assess their individual detection rates in the context of seismic image detection. In Section 5, experiments are conducted to validate the algorithm's performance in seismic image segmentation, classification, and detection. Finally, concluding remarks are given in the last section. Given the limited existing literature on seismic image analysis utilizing moment transforms and discerning diverse categories of seismic images, the entirety of the seismic image identification and detection undertaken through the Tchebichef invariant can be regarded as a novel contribution to the field.

2. Seismic Image Segmentation

Seismic image segmentation aims to identify the edges of a 2D seismic image while reducing noise from the background. Two common approaches are utilized: classical methods such as Canny or Sobel for edge detection, and CNNs such as U-net for improved image segmentation through training.

2.1. Classical Seismic Image Segmentation

Classical seismic image segmentation comprises the operations of the Gaussian filter, the anisotropic diffusion filter, and the adaptive Canny detector, which are shown in Figure 3.
The Gaussian filter reduces image noise and enhances details by applying a Gaussian-based kernel to each pixel, averaging it with neighbouring pixels. This blurring effect is achieved through convolution, with the blur level determined by the kernel size. A commonly used Gaussian kernel size is 5 × 5, effectively reducing high-frequency noise such as Gaussian and salt-and-pepper noise. The anisotropic diffusion filter, also known as the Perona–Malik filter, is primarily used for image smoothing. It overcomes the shortcomings of Gaussian blurring by simultaneously smoothing the image and preserving its edges. In addition, the Perona–Malik filter has found applications in geospatial image processing, such as smoothing Digital Elevation Model (DEM) data and extracting river channels. The filter operates on the image by considering it as a heat field, with each pixel representing a flow of heat. By analyzing the relationships between a pixel and its neighbouring pixels, the filter determines if diffusion should occur towards the neighbours. When a noticeable disparity exists between an adjacent pixel and the current pixel, indicating the presence of an edge, diffusion in that direction is prohibited. This preservation of boundaries or edges helps maintain the integrity of the image's features. The main iteration formula is represented as follows:
$$I_{t+1} = I_t + \lambda \left[ c_N(x, y)\, \nabla_N I_t + c_S(x, y)\, \nabla_S I_t + c_E(x, y)\, \nabla_E I_t + c_W(x, y)\, \nabla_W I_t \right],$$
where I is the image, and t is the iteration time. The four divergence formulas are used to calculate partial derivatives of the current pixel in four different directions, as formulated in the following expression:
$$\nabla_N I_{x,y} = I_{x,y-1} - I_{x,y}$$
$$\nabla_S I_{x,y} = I_{x,y+1} - I_{x,y}$$
$$\nabla_E I_{x,y} = I_{x-1,y} - I_{x,y}$$
$$\nabla_W I_{x,y} = I_{x+1,y} - I_{x,y}.$$
The overarching equation encompasses three key parameters that require configuration: the number of iterations t , which should be tailored to the specific context; the diffusion coefficient k associated with thermal conductivity, wherein a higher value yields smoother outcomes at the potential expense of edge preservation; and λ , where a larger value similarly promotes smoother outcomes.
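A single iteration of the scheme above can be sketched in pure Python. The exponential conductance $c(g) = \exp(-(g/k)^2)$ is one of the two options originally proposed by Perona and Malik, and the replicated-border handling is an assumption of this sketch:

```python
import math

def perona_malik_step(img, lam=0.25, k=15.0):
    """One iteration of Perona-Malik anisotropic diffusion on a 2D list."""
    h, w = len(img), len(img[0])

    def px(y, x):
        # Replicate-border access (an assumption of this sketch).
        y = min(max(y, 0), h - 1)
        x = min(max(x, 0), w - 1)
        return img[y][x]

    def c(g):
        # Exponential conductance: close to zero across strong edges.
        return math.exp(-(g / k) ** 2)

    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gN = px(y - 1, x) - img[y][x]
            gS = px(y + 1, x) - img[y][x]
            gE = px(y, x - 1) - img[y][x]
            gW = px(y, x + 1) - img[y][x]
            out[y][x] = img[y][x] + lam * (
                c(gN) * gN + c(gS) * gS + c(gE) * gE + c(gW) * gW)
    return out
```

On a uniform region all four gradients vanish, so the pixel is left untouched; across a strong edge the conductance is close to zero, which is exactly the edge-preserving behaviour described above.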
The adaptive Canny algorithm demonstrates robustness. To implement it, accumulate a 256-bin histogram over all pixels $(x, y)$ in the image and determine the median intensity value, denoted as m. Next, choose the δ value to define the Canny low and high thresholds as follows:
$$\text{low threshold} = (1 - \delta) \times m$$
$$\text{high threshold} = (1 + \delta) \times m,$$
where $\delta$ is limited to the range $(0, 1)$.
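The threshold selection above can be sketched as follows; reading the "middle image value" m as the median intensity is an interpretation made by this sketch:

```python
import statistics

def adaptive_canny_thresholds(img, delta=0.33):
    """Derive the low/high Canny thresholds from the median intensity m."""
    pixels = [v for row in img for v in row]
    m = statistics.median(pixels)          # the "middle image value"
    low = (1 - delta) * m
    high = (1 + delta) * m
    return low, high

img = [[10, 20, 30],
       [20, 20, 40],
       [30, 40, 20]]
print(adaptive_canny_thresholds(img, delta=0.5))  # (10.0, 30.0)
```

The pair (low, high) would then be passed to a standard Canny hysteresis stage.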

2.2. U-net for Image Segmentation

The U-net network architecture, as depicted in Figure 4, comprises two main components: the contracting path on the left side and the expansive path on the right side. The contracting path follows a CNN structure. It involves iteratively applying two 3 × 3 convolutions, each followed by a rectified linear unit (ReLU) activation function, and a 2 × 2 max pooling operation with a stride of 2 for down-sampling. At each down-sampling step, the number of feature channels is doubled.
The expansive path encompasses multiple stages, wherein each stage entails the up-sampling of the feature map followed by a 2 × 2 convolution, referred to as up-convolution, that halves the number of feature channels. The up-sampled feature map is subsequently concatenated with the corresponding cropped feature map from the contracting path. Following this concatenation, two 3 × 3 convolutions are applied to the merged feature map, with each convolution followed by a rectified linear unit (ReLU) activation. The cropping operation is essential to compensate for the loss of border pixels that occurs during the convolutions. In the final layer of the architecture, a 1 × 1 convolution is utilized to transform each feature vector, which consists of 64 components, into the target number of classes. In total, the network consists of 23 convolutional layers. Indeed, by leveraging the architecture of a U-net network, the seismic image segmentation process can yield excellent results, particularly when based on the Canny edge detection output [49].
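Because the 3 × 3 convolutions are unpadded, each one trims two pixels per spatial dimension, which is why the cropping step is needed before concatenation. The sketch below traces the feature-map sizes through a five-level contracting path for the classic 572 × 572 input of the original U-net paper (both numbers are assumptions for illustration, not taken from this study's seismic data):

```python
def contracting_path_sizes(input_size, levels=5):
    """Spatial size after each level's two unpadded 3x3 convs in U-net."""
    sizes = []
    s = input_size
    for level in range(levels):
        s -= 4            # two 3x3 valid convolutions: 2 pixels lost by each
        sizes.append(s)
        if level < levels - 1:
            s //= 2       # 2x2 max pooling with stride 2
    return sizes

print(contracting_path_sizes(572))  # [568, 280, 136, 64, 28]
```

The shrinking sizes make the contracting-path feature maps larger than their expansive-path counterparts, hence the cropping before each skip concatenation.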

3. Seismic Image Classification

CNNs are popular for seismic image classification due to their ability to handle complex image data. Several well-known CNN architectures, such as ResNet-50, VGG-16, VGG-19, and Inception-v3, are widely used in image classification tasks, including seismic image analysis.
ResNet-50 is a widely used deep CNN architecture for image classification. It addresses the degradation problem in deeper networks by introducing residual learning. With 50 layers and skip connections, it effectively captures both shallow and deep features. ResNet-50 has demonstrated excellent performance in various computer vision tasks. Figure 5 shows the Resnet-50 model architecture that is used in our experiments later.
Inception-v3 is a CNN architecture developed by Google for image classification. It incorporates inception modules to capture features at different scales and achieves high accuracy while maintaining a lightweight design. Inception-v3 utilizes techniques such as batch normalization and factorized convolutions for improved performance. It has been successful in various image classification challenges and is known for its efficiency and effectiveness in handling complex tasks.
VGG-16 and VGG-19 are deep CNN architectures developed by the Visual Geometry Group (VGG). With 16 and 19 layers respectively, these networks utilize 3 × 3 convolutional filters to learn intricate image features across various scales. They are highly regarded for their ability to achieve impressive accuracy in image classification tasks.

4. Seismic Image Detection

Moment invariants in moment transforms are a set of features extracted from image or pattern moments. These invariants exhibit unique properties that make them invariant under specific transformations such as translation, rotation, and scaling. They play a crucial role in pattern recognition and image analysis tasks, providing valuable tools for robust comparisons and identifications. Moment invariants capture essential shape and intensity distribution characteristics, enabling reliable analysis even in the presence of transformations.

4.1. Hu Moment Invariants

Invariant moments are highly compact image features that possess properties of translation, scaling, and rotation invariance. M.K. Hu [15] first introduced the concept of invariant moments in 1962. The two-dimensional $(p+q)$th order geometric moment is defined as follows:
$$m_{pq} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^p y^q f(x, y)\, dx\, dy, \quad p, q = 0, 1, 2, \ldots$$
The two-dimensional $(p+q)$th order central moment is expressed as follows:
$$\mu_{pq} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - x_0)^p (y - y_0)^q f(x, y)\, dx\, dy,$$
where the moment centre is given by
$$x_0 = \frac{m_{10}}{m_{00}}, \quad y_0 = \frac{m_{01}}{m_{00}}.$$
Since digital images are discrete data, the $(p+q)$th order geometric moments and central moments of an image $f(x, y)$ with a size of $N \times M$ in the discrete plane are defined as follows:
$$m_{pq} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} x^p y^q f(x, y); \quad p, q = 0, 1, 2, \ldots$$
$$\mu_{pq} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} (x - x_0)^p (y - y_0)^q f(x, y); \quad p, q = 0, 1, 2, \ldots$$
When an image undergoes transformations such as translation or rotation, the value of the moment $m_{pq}$ changes. The central moment $\mu_{pq}$ is invariant to translation but still sensitive to rotation and scale. To address this issue, the concept of normalized central moments is introduced as follows:
$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{r}}; \quad r = \frac{p + q + 2}{2}, \quad p + q = 2, 3, \ldots$$
Hu proposed the construction of seven invariant moments, denoted as $I_1$ to $I_7$, using the second- and third-order central moments ($p + q = 2, 3$). These seven invariant moments form a set of features. Hu proved that, under continuous image conditions, these moments remain invariant to translation, scale, and rotation. The specific definitions of these moments are expressed in Appendix A.
The difference between two seismic images based on the Hu moment invariants can be expressed as follows [15]:
$$d = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2},$$
where $(X_1, Y_1)$ represents the first image and $(X_2, Y_2)$ represents the second image. This representative point is computed for each image as follows [15]:
$$X = \eta_{20} + \eta_{02}, \quad Y = \sqrt{(\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2}.$$
This point ( X , Y ) in a two-dimensional space is used as a representation of the pattern.
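The representative point and the distance between two images can be sketched in pure Python from the normalized central moments defined above; the 2D-list image format is an assumption of this sketch:

```python
import math

def central_moment(img, p, q):
    """mu_pq of a grayscale image stored as a 2D list (rows = y)."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    x0, y0 = m10 / m00, m01 / m00          # moment centre (centroid)
    mu = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu += (x - x0) ** p * (y - y0) ** q * v
    return mu

def eta(img, p, q):
    # Normalized central moment with r = (p + q + 2) / 2.
    r = (p + q + 2) / 2
    return central_moment(img, p, q) / central_moment(img, 0, 0) ** r

def hu_point(img):
    X = eta(img, 2, 0) + eta(img, 0, 2)
    Y = math.sqrt((eta(img, 2, 0) - eta(img, 0, 2)) ** 2
                  + 4 * eta(img, 1, 1) ** 2)
    return X, Y

def distance(img1, img2):
    (x1, y1), (x2, y2) = hu_point(img1), hu_point(img2)
    return math.hypot(x1 - x2, y1 - y2)
```

Because the moments are taken about the centroid and normalized by $\mu_{00}$, translating a pattern inside the frame leaves (X, Y) unchanged, so the distance d between an image and its shifted copy is numerically zero.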

4.2. 2D Discrete Tchebichef Moment Transform in p Direction

Camacho-Bello et al. [50] proposed a discrete Tchebichef transform algorithm. Assume the image size is N × N . For  x = 0 , 1 , , N 1 in the discrete domain, the recurrence relation of Tchebichef polynomials, t p ( x ) , of order p is given by [50]
$$w_p\, t_p(x) = w\, t_{p-1}(x) - w_{p-1}\, t_{p-2}(x), \quad p = 2, \ldots, N-1,$$
where
$$w = 2x - N + 1, \quad w_p = p \sqrt{\frac{N^2 - p^2}{(2p + 1)(2p - 1)}}.$$
For the initial numerical calculation, the Tchebichef polynomials of zero-order and first-order are expressed as follows [50]:
$$t_0(x) = \frac{1}{\sqrt{N}}$$
$$t_1(x) = (2x + 1 - N) \sqrt{\frac{3}{N(N^2 - 1)}}.$$
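The recurrence above can be implemented directly. The sketch below generates the orthonormal Tchebichef polynomial values from the zero- and first-order seeds, so that $\sum_x t_p(x)\, t_q(x) = \delta_{pq}$ holds numerically for moderate orders:

```python
import math

def tchebichef_polys(N, max_order):
    """Values t_p(x), x = 0..N-1, for p = 0..max_order, via the recurrence."""
    t = [[1.0 / math.sqrt(N)] * N]                                   # t_0
    t.append([(2 * x + 1 - N) * math.sqrt(3.0 / (N * (N * N - 1)))
              for x in range(N)])                                    # t_1
    for p in range(2, max_order + 1):
        w_p = p * math.sqrt((N * N - p * p)
                            / ((2 * p + 1) * (2 * p - 1.0)))
        w_p1 = (p - 1) * math.sqrt((N * N - (p - 1) ** 2)
                                   / ((2 * p - 1) * (2 * p - 3.0)))
        # w = 2x - N + 1 drives the three-term recurrence
        t.append([((2 * x - N + 1) * t[p - 1][x] - w_p1 * t[p - 2][x]) / w_p
                  for x in range(N)])
    return t

polys = tchebichef_polys(8, 4)  # orders 0..4 on an 8-point support
```

For small supports and low orders the rounding error is negligible; the Gram–Schmidt correction of Algorithm 1 only becomes necessary at high orders and large N.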
The calculation of the kernel for TMs involves the use of recursive relations. However, this approach can lead to the propagation and accumulation of rounding-off errors, particularly when dealing with higher-order moments and large images. In the field of optics, the Gram–Schmidt process is a widely employed technique to address and correct errors that arise during wavefront expansion when utilizing Zernike polynomials [51]. In this paper, a similar approach is adopted to address the numerical instability that arises in the computation of high-order TMs. The kernel orthonormalization procedure for TMs is presented in Algorithm 1, offering a method to rectify the numerical instability issues. This algorithm aims to enhance the accuracy and reliability of the calculations for high-order TMs by effectively addressing the challenges posed by numerical instability.
Algorithm 1 Orthonormalization of the Tchebichef polynomials with the Gram–Schmidt process.
1: $w \leftarrow 2x - N + 1, \quad x = 0, 1, 2, \ldots, N-1$
2: $w_1 \leftarrow \sqrt{(N^2 - 1)/3}$
3: $t_0(x; N) \leftarrow 1/\sqrt{N}$
4: $t_1(x; N) \leftarrow (w / w_1)\, t_0(x; N)$
5: for $n = 2$ to $N - 1$ do
6:  $w_2 \leftarrow n \sqrt{(N^2 - n^2) / \left[ (2n + 1)(2n - 1) \right]}$
7:  $t_n(x; N) \leftarrow (w / w_2)\, t_{n-1}(x; N) - (w_1 / w_2)\, t_{n-2}(x; N)$
8:  $w_1 \leftarrow w_2$
9:  $T(x; N) \leftarrow t_n(x; N)$
10: for $k = 0$ to $n - 1$ do
11:   $t_n(x; N) \leftarrow t_n(x; N) - \left[ \sum_{x=0}^{N-1} T(x; N)\, t_k(x; N) \right] t_k(x; N)$
12: end for
13: $h \leftarrow \sqrt{\sum_{x=0}^{N-1} t_n(x; N)^2}$
14: $t_n(x; N) \leftarrow t_n(x; N) / h$
15: end for
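A direct transcription of Algorithm 1 might look like the following Python sketch: each polynomial produced by the recurrence is re-orthogonalized against all lower orders and then renormalized, which suppresses the rounding error that accumulates at high orders:

```python
import math

def orthonormal_tchebichef(N):
    """All N Tchebichef polynomials of length N, Gram-Schmidt corrected."""
    w = [2 * x - N + 1 for x in range(N)]
    w1 = math.sqrt((N * N - 1) / 3.0)
    t = [[1.0 / math.sqrt(N)] * N]                 # t_0
    t.append([w[x] / w1 * t[0][x] for x in range(N)])   # t_1
    for n in range(2, N):
        w2 = n * math.sqrt((N * N - n * n) / ((2 * n + 1) * (2 * n - 1.0)))
        tn = [(w[x] / w2) * t[n - 1][x] - (w1 / w2) * t[n - 2][x]
              for x in range(N)]
        w1 = w2
        # Gram-Schmidt correction against all lower-order polynomials.
        T = tn[:]
        for k in range(n):
            proj = sum(T[x] * t[k][x] for x in range(N))
            tn = [tn[x] - proj * t[k][x] for x in range(N)]
        h = math.sqrt(sum(v * v for v in tn))
        t.append([v / h for v in tn])
    return t
```

After the correction, the full set of N polynomials stays orthonormal to machine precision, which is the property the moment computation in the next subsection relies on.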
With the polynomials defined above, for an $N \times M$ image represented as $f(x, y)$, the TMs, $T_{pq}$, of order $(p + q)$ are defined as follows:
$$T_{pq} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} t_p(x)\, t_q(y)\, f(x, y).$$
The inverse Tchebichef transformation can be expressed as
$$f(x, y) = \sum_{p=0}^{N-1} \sum_{q=0}^{M-1} t_p(x)\, t_q(y)\, T_{pq},$$
where $p$ and $q$ are the orders of the Tchebichef orthogonal polynomials ($p = 0, 1, 2, \ldots, N-1$ and $q = 0, 1, 2, \ldots, M-1$), $f(x, y)$ is the original image, and $N$ and $M$ are the height and width of the image.
To assess the performance of the proposed method, the normalized image reconstruction error (NIRE) is employed. It is defined as the normalized mean square error between the original image $f(x, y)$ and its reconstructed version, $\tilde{f}(x, y)$, which can be expressed as follows [50]:
$$NIRE = \frac{\sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \left[ f(x, y) - \tilde{f}(x, y) \right]^2}{\sum_{x=0}^{N-1} \sum_{y=0}^{M-1} f^2(x, y)}.$$
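Putting the pieces together, the forward moments, the inverse reconstruction, and the NIRE can be sketched as below for a square image; the polynomial generator mirrors the recurrence of Section 4.2 and is repeated here only so the sketch stays self-contained:

```python
import math

def tcheb(N):
    # Orthonormal Tchebichef polynomials t_p(x), p, x = 0..N-1.
    t = [[1.0 / math.sqrt(N)] * N,
         [(2 * x + 1 - N) * math.sqrt(3.0 / (N * (N * N - 1)))
          for x in range(N)]]
    for p in range(2, N):
        wp = p * math.sqrt((N * N - p * p) / ((2 * p + 1) * (2 * p - 1.0)))
        wq = (p - 1) * math.sqrt((N * N - (p - 1) ** 2)
                                 / ((2 * p - 1) * (2 * p - 3.0)))
        t.append([((2 * x - N + 1) * t[p - 1][x] - wq * t[p - 2][x]) / wp
                  for x in range(N)])
    return t

def tm_forward(f):
    """All N*N Tchebichef moments T_pq of a square image f."""
    N = len(f)
    t = tcheb(N)
    return [[sum(t[p][x] * t[q][y] * f[x][y]
                 for x in range(N) for y in range(N))
             for q in range(N)] for p in range(N)]

def tm_inverse(T):
    """Reconstruct the image from the full moment matrix T."""
    N = len(T)
    t = tcheb(N)
    return [[sum(t[p][x] * t[q][y] * T[p][q]
                 for p in range(N) for q in range(N))
             for y in range(N)] for x in range(N)]

def nire(f, f_rec):
    num = sum((f[x][y] - f_rec[x][y]) ** 2
              for x in range(len(f)) for y in range(len(f)))
    den = sum(v * v for row in f for v in row)
    return num / den
```

When all N² moments are kept the transform is exactly invertible, so the NIRE of the reconstruction is zero up to rounding; truncating to low orders trades reconstruction error for compactness.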

4.3. Tchebichef Polynomials Expansion

The discrete orthogonal Tchebichef polynomials, denoted as $t_p(x)$, with order $p$ and length $N$, are defined according to [52]
$$t_p(x) = p! \sum_{k=0}^{p} (-1)^{p-k} \binom{N - 1 - k}{p - k} \binom{p + k}{p} \binom{x}{k}.$$
To enhance the computational efficiency of factorial, the Tchebichef polynomial in Equation (22) is replaced with the alternative expression proposed in Equation (16). Taking into account the numerical stability of high-order functions, a scaled version of the Tchebichef polynomial can be formulated as follows [22]:
$$\tilde{t}_p(x) = \frac{t_p(x)}{\sqrt{\rho(p, N)}},$$
where the squared-norm $\rho(p, N)$ is defined as:
$$\rho(p, N) = (2p)! \binom{N + p}{2p + 1}.$$
The normalization of the Tchebichef polynomial in Equation (23) can be alternatively replaced by applying the orthogonalization of Tchebichef polynomials using the Gram–Schmidt process, as described in Algorithm 1. By utilizing Equations (22) and (23), it is possible to expand the scaled discrete Tchebichef polynomials t ˜ p x into a power series form as follows [22]:
$$\tilde{t}_p(x) = \sum_{k=0}^{p} C(p, k)\, (x)_k,$$
where the coefficient $C(p, k)$ can be expressed as
$$C(p, k) = \frac{(-1)^{p-k}}{\sqrt{\rho(p, N)}} \cdot \frac{(p + k)!}{(p - k)!\, (k!)^2} \cdot \frac{(N - k - 1)!}{(N - p - 1)!}.$$
Furthermore, by using the Stirling numbers of the first kind $s_1(k, i)$, the expression for the Pochhammer symbol $(x)_k$ is given by:
$$(x)_k = \sum_{i=0}^{k} (-1)^{k-i}\, s_1(k, i)\, x^i,$$
where $s_1(k, i)$ is expressed as:
$$s_1(0, 0) = 1, \quad s_1(0, i) = s_1(k, 0) = 0, \quad s_1(k, i) = s_1(k - 1, i - 1) + (k - 1)\, s_1(k - 1, i),$$
with $k \geq 1$, $i \geq 1$.
By substituting Equation (27) into Equation (25), the scaled Tchebichef polynomial $\tilde{t}_p(x)$ can be represented as follows [53]:
$$\tilde{t}_p(x) = \sum_{k=0}^{p} \sum_{i=0}^{k} (-1)^{k-i}\, C(p, k)\, s_1(k, i)\, x^i = \sum_{i=0}^{p} B(i, p)\, x^i,$$
where $B(i, p) = \sum_{k=i}^{p} (-1)^{k-i}\, C(p, k)\, s_1(k, i)$.

4.4. Tchebichef Moment Invariants

As we derived the normalised Tchebichef polynomials in terms of monomials in the previous subsection, using the geometric moments’ definition, it is possible to link the translation and rotation invariants of TMs as follows [53]:
$$T_{pq} = \sum_{i=0}^{p} \sum_{j=0}^{q} B(i, p)\, B(j, q)\, v_{ij},$$
where $v_{ij}$ is the normalized central geometric moment of order $(i + j)$ introduced in [53]. Moreover, it is observed that when an image is scaled, the coefficients $B(i, p)$ and $B(j, q)$ are scaled proportionally according to the scaling factor. Hence, normalizing the $T_{pq}$ using the image size is deemed appropriate. For an $N \times M$ image, the zero-order $T_{pq}$ is derived using Equation (30) as follows:
$$T_{00} = \sum_{i=0}^{0} \sum_{j=0}^{0} B(i, p)\, B(j, q)\, v_{00} = \frac{1}{\sqrt{N}} \cdot \frac{1}{\sqrt{M}}.$$
From Equation (31), it can be observed that $T_{00}$ depends only on the image size; since scaling the image changes $T_{00}$, it affects all the TM invariants, making them sensitive to the image's scaling changes, and dividing by $T_{00}$ removes this dependence. In conclusion, the proposed translation, rotation, and scale invariants of TMs can be presented as follows [53]:
$$\tilde{T}_{pq} = \frac{T_{pq}}{T_{00}}.$$

4.5. Feature Extraction

TM invariants of seismic images of any order can be computed using Equation (32). During feature extraction, the TM invariants $\tilde{T}_{01}$ and $\tilde{T}_{10}$ do not contain meaningful image content. Therefore, feature extraction and detection are conducted starting from the second order of the TM invariants, which constitutes the feature vector as follows:
$$V = \left[ \tilde{T}_{11}, \tilde{T}_{02}, \tilde{T}_{20}, \ldots, \tilde{T}_{0p}, \tilde{T}_{q0} \right].$$

4.6. Optimize SVM for Multi-Class Detection

Support Vector Machine (SVM) is a supervised machine learning algorithm extensively employed for classification and regression tasks. It surpasses neural network learning by effectively handling concerns like over-fitting and local optima. SVM excels in learning from high-dimensional feature spaces and exhibits remarkable performance even with limited training samples. These attributes make SVM particularly suitable for seismic image detection.
Optimized multi-class SVM refers to an enhanced variant of the SVM algorithm designed explicitly for tackling multi-class classification tasks. While the traditional SVM is designed for binary classification, various techniques have been developed to extend it to handle multiple classes.
The approach employed in this study is the one-vs-all (OvA) method, which entails training a separate binary SVM model for each class against the rest of the classes. This methodology is commonly utilized for multi-class classification tasks, allowing for the effective and efficient classification of multiple classes in a given dataset. During the prediction phase, each SVM model generates a confidence score for its corresponding class, and the class with the highest score is considered the predicted class. This technique requires training N SVM models, where N represents the total number of classes in the classification problem. The flow chart of the optimized SVM detection is shown in Figure 6.
Constructing the multi-class OvA-SVM models requires building the LIBSVM library through the following steps. First, the LIBSVM package is downloaded as a compressed zip file. Next, LIBSVM is added to the toolbox path within the MATLAB 2023a environment. The package is then compiled using Visual Studio 2019, after which LIBSVM is seamlessly integrated into MATLAB 2023a.
For SVM training, the OvA approach is employed. In each iteration, the labels of one specific image type are set to $+1$, while the labels of all other image types are set to $-1$, allowing the SVM to be trained on binary classification tasks. Afterwards, the alpha values, bias terms, and support vectors are extracted from each trained SVM model. During prediction, the decision value for the validation data is computed for every model from the obtained alpha values, bias terms, support vectors, and kernel values, and the model with the highest decision value determines the predicted class. The accuracy of the SVM detection can then be determined.
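The OvA scheme itself is independent of the underlying binary learner. The following Python sketch illustrates the relabelling-to-$\pm 1$ and highest-decision-value steps described above, with a simple ridge-regularized linear classifier standing in for the LIBSVM models (all names are illustrative; this is not the paper's MATLAB/LIBSVM code):

```python
import numpy as np

def train_ova(X, y, classes, lam=1e-3):
    """One-vs-all training: one binary linear model per class.

    For each class c the labels are remapped to +1 (class c) and -1
    (all other classes), mirroring the relabelling used for the SVMs.
    A ridge-regularized least-squares classifier stands in for LIBSVM.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    models = {}
    for c in classes:
        t = np.where(y == c, 1.0, -1.0)            # +1 vs -1 relabelling
        w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ t)
        models[c] = w
    return models

def predict_ova(models, X):
    """Pick the class whose binary model yields the highest decision value."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    classes = list(models)
    scores = np.column_stack([Xb @ models[c] for c in classes])
    return np.array([classes[i] for i in scores.argmax(axis=1)])

# Three well-separated 2D clusters, one per seismic image type.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, size=(20, 2)) for m in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 20)
models = train_ova(X, y, classes=[0, 1, 2])
acc = (predict_ova(models, X) == y).mean()
```

The argmax over per-class decision values is exactly the prediction rule used with the LIBSVM models in this study; only the binary learner differs.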

5. Results and Discussion

The results cover seismic image segmentation, classification, and detection. Three experiments are conducted on the candidate seismic image dataset using the proposed Tchebichef invariants, the optimized multi-class SVM detector, and, for comparison, the Zernike moment invariants.

5.1. Experimental Setup

The F3 Netherlands dataset [54], available on opendtect.com, is suitable for interpreting 2D blocky semicontinuous images. Another publicly available dataset suitable for interpreting blocky semicontinuous seismic images is the Poseidon 3D seismic dataset (Australia), which can be downloaded from the SEG Wiki. The dataset used for interpreting chaotic discontinuous seismic images is sourced from Utah FORGE: 2D and 3D Seismic Data [55], available from the Geothermal Data Repository. The research data used are open source and can be downloaded at https://mhkdr.openei.org/submissions/489 (accessed on 10 September 2021). The Stratton field 3D seismic survey and accompanying well-log dataset (Stratton 3D survey), available from the Bureau of Economic Geology, Austin, Texas, can be utilized for interpreting 2D mounded semicontinuous data.
Using appropriate techniques, datasets in SEG format can be transformed into 2D seismic images. The dataset comprised 105 candidate 2D seismic images obtained through interpretation. Among these, the first 35 images (Figure 7a–c) exhibited 2D blocky semicontinuous seismic characteristics; images 36 to 70 (Figure 7d–f) displayed 2D chaotic discontinuous seismic features, while images 71 to 105 (Figure 7g–i) showcased 2D mounded semicontinuous seismic attributes.
Notably, the images demonstrating blocky semicontinuity differed significantly from those displaying chaotic discontinuity and mounded semicontinuity, rendering them easily distinguishable. Subsequent experiments involving seismic image segmentation, classification, and detection were conducted using this candidate seismic image dataset.

5.2. Seismic Image Segmentation Results

In classical image segmentation of the candidate seismic image dataset, the segmentation process involves defining a threshold with specific parameters. The kernel size of the Gaussian filter is set to $5 \times 5$, and the anisotropic diffusion filter is configured with $t = 80$ iterations, diffusion-control strength $\lambda = 1.0$, and diffusion coefficient $k = 5.0$. These parameters produce a well-smoothed seismic image in the preprocessing stage of this experiment. The preprocessing steps consist solely of the Gaussian filter followed by the anisotropic diffusion filter.
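A minimal Perona–Malik-style anisotropic diffusion step can be sketched as follows (Python/NumPy, illustrative only; note that the explicit 4-neighbour scheme is stable only for $\lambda \le 0.25$, so the paper's $\lambda = 1.0$ is assumed to refer to a differently scaled parameter):

```python
import numpy as np

def anisotropic_diffusion(img, iterations=80, lam=0.2, k=5.0):
    """Perona-Malik anisotropic diffusion (4-neighbour explicit scheme).

    The conduction coefficient exp(-(grad/k)^2) suppresses smoothing
    across strong edges while diffusing within homogeneous regions.
    Periodic borders via np.roll keep the sketch short.
    """
    u = img.astype(float).copy()
    for _ in range(iterations):
        # Differences toward the four neighbours.
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        cN, cS = np.exp(-(dN / k) ** 2), np.exp(-(dS / k) ** 2)
        cE, cW = np.exp(-(dE / k) ** 2), np.exp(-(dW / k) ** 2)
        u += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u

rng = np.random.default_rng(1)
noisy = 100.0 + rng.normal(0.0, 2.0, size=(64, 64))   # flat region plus noise
smooth = anisotropic_diffusion(noisy, iterations=20)
```

On a noisy flat region the filter reduces the pixel variance while conserving the mean intensity, which is the behaviour exploited in the preprocessing stage.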
To demonstrate the quality improvement achieved by preprocessing, a no-reference image quality assessment is conducted using the PIQE (Perception-based Image Quality Evaluator), NIQE (Natural Image Quality Evaluator), and BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) scores [56]. Twenty original images are selected at random from the candidate seismic image dataset, along with their preprocessed counterparts, and the PIQE, NIQE, and BRISQUE scores are calculated for both. A comparison of these scores, shown in Figure 8, enables a comprehensive analysis of the quality enhancement attained through preprocessing.
The BRISQUE, PIQE, and NIQE assessments of the original and preprocessed images show that preprocessing has a negligible impact on the BRISQUE score. However, it significantly reduces the PIQE score to below 30, indicating a favourable outcome in terms of image quality. Moreover, the majority of images exhibit NIQE scores below 30, signifying acceptable and satisfactory image quality.
Furthermore, to complement the evaluation, PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index) measurements were performed on the same preprocessed images against their corresponding originals [27]. The results are depicted in Figure 9 and Figure 10.
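For reference, PSNR and a simplified single-window SSIM can be computed as follows (Python/NumPy sketch; the full SSIM index averages the same statistic over local windows, so `global_ssim` here is a coarse stand-in):

```python
import numpy as np

def psnr(ref, img, data_range=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to ref."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=255.0):
    """SSIM statistic over one window spanning the whole image; the full
    SSIM index averages this quantity over local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(2)
ref = rng.uniform(0, 255, size=(64, 64))
noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
```

An identical image pair yields an SSIM of exactly 1, while any perturbation lowers it; PSNR likewise decreases as the noise level rises.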
The adaptive Canny edge detection algorithm employs a value of $\delta$ equal to 0.3. A selection of original images, along with their respective preprocessing results and the outcomes of the Canny edge detection process, is shown in Figure 11.
After preprocessing, the U-Net network is employed for image segmentation using the Canny edge detection results as image labels. The flow diagram of seismic image detection using the U-Net network is presented in Figure 12. The U-Net requires input images of $256 \times 256$ pixels. The images in the candidate dataset should be in three-channel RGB format, while the corresponding labels, i.e., the adaptive Canny results of the seismic images, should be single-channel grey or binary images.
To ensure an unbiased evaluation of the U-Net's performance, the candidate dataset is divided into training and testing subsets: 70% of the seismic images and their corresponding labels are allocated for training and the remaining 30% for testing. The U-Net architecture used for seismic image segmentation has two output classes, and its encoder depth is set to four layers. The training was conducted in MATLAB, and the results are presented in Figure 13. Accuracy in Figure 13 refers to the training accuracy of the U-Net model, i.e., the proportion of training labels correctly predicted by the model, a standard metric for assessing learning capacity during training.
The image segmentation training progressed significantly, achieving an accuracy of 94.63% after 200 iterations. Several seismic image samples from the candidate dataset, together with their Canny edge images and U-Net segmentation results, are depicted in Figure 14. The Canny edge images serve as reference images against which the U-Net segmentation results are compared to estimate the U-Net accuracy.

5.3. Seismic Image Classification

The candidate 2D seismic images can be readily categorized into three distinct classes through manual analysis: blocky semicontinuous high amplitudes, chaotic amplitudes, and mounded semicontinuous high amplitudes. The number of images in each label category of the training set was counted to ascertain classification diversity.
In this study, the candidate seismic image dataset was divided randomly into two subsets: 30% of the images for testing and the remaining 70% for training. For seismic image classification, the ResNet-50, VGG-16, VGG-19, and Inception-v3 networks are considered, as they are known for achieving high accuracy in classification tasks. The training results for the four deep-learning models are presented in Figure 15. For these models, accuracy denotes the proportion of correct predictions made during training, reflecting how well each model's predictions align with the actual labels. Figure 16 illustrates selected original images with their predicted labels, which match the input labels perfectly.
On the training images, ResNet-50, VGG-16, VGG-19, and Inception-v3 all achieve a classification accuracy of 100% after 200 iterations. Nevertheless, ResNet-50 and Inception-v3 reach higher accuracy at lower iteration counts and fluctuate less as the number of iterations increases. For the testing set, the classification accuracy achieved by each network after 200 training iterations is given in Table 1. According to these findings on the candidate seismic image dataset, ResNet-50 and Inception-v3 exhibit superior and more stable performance, rendering them more suitable for seismic image classification than the evaluated VGG-16 and VGG-19 networks.

5.4. Seismic Image Detection

TM invariants have proven useful for image detection and feature extraction [50]. To ensure the accuracy of the Tchebichef invariants, it is essential to verify the recurrence proposed by Camacho [50], since the invariants are derived directly from it. Following Mukundan [22], it is widely recognized that the normalized reconstruction error between the Tchebichef reconstruction and the original image decreases as the maximum orders p and q increase. This observation supports the TM transform's dependability and efficacy in retaining image features and improving seismic image analysis.
Three images were randomly selected from the candidate 2D seismic image dataset for TM reconstruction and error analysis, as illustrated in Figure 17. For the three selected seismic images from the candidate dataset, the same image preprocessing steps were applied as in seismic image segmentation. After image noise reduction, the selected images were reconstructed using the TM with the following steps.
First, following the recurrence of the 2D discrete TM transform in the p direction, the Tchebichef polynomials in x and y were orthonormalized using the Gram–Schmidt process. The TM transform and the Tchebichef reconstruction were then performed to obtain the reconstructed images, which were compared with the corresponding preprocessed images using the normalized image reconstruction error (NIRE). The NIRE values are presented in Figure 18, which shows a consistent trend: the NIRE decreases as the maximum moment order increases [22], despite the presence of image noise. This demonstrates the effectiveness of the TM transform in preserving essential features and enhancing seismic image analysis.
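The orthonormalization-plus-reconstruction pipeline can be sketched as follows (Python/NumPy; function names are illustrative). Here the Gram–Schmidt step is performed via a QR decomposition of the monomial Vandermonde matrix, which yields an orthonormal discrete polynomial basis equivalent, up to signs, to the orthonormal Tchebichef polynomials:

```python
import numpy as np

def discrete_orthonormal_basis(N, max_order):
    """Orthonormal discrete polynomials on {0,...,N-1} via Gram-Schmidt
    (QR of the monomial Vandermonde matrix)."""
    x = np.arange(N, dtype=float)
    V = np.vander(x, max_order + 1, increasing=True)   # columns 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                             # orthonormal columns
    return Q                                           # shape (N, max_order + 1)

def tm_analysis(f, max_order):
    """Forward moment transform T = P^T f P for a square image f."""
    P = discrete_orthonormal_basis(f.shape[0], max_order)
    return P.T @ f @ P

def tm_reconstruct(T, N):
    """Inverse transform: rebuild the image from its moment matrix."""
    P = discrete_orthonormal_basis(N, T.shape[0] - 1)
    return P @ T @ P.T

def nire(f, f_rec):
    """Normalized image reconstruction error."""
    return np.sum((f - f_rec) ** 2) / np.sum(f ** 2)

rng = np.random.default_rng(3)
f = rng.uniform(0, 1, size=(16, 16))
err_low = nire(f, tm_reconstruct(tm_analysis(f, 7), 16))    # truncated order
err_full = nire(f, tm_reconstruct(tm_analysis(f, 15), 16))  # full order
```

As in Figure 18, the NIRE shrinks as the maximum order grows, and with the full-order basis the reconstruction is exact up to floating-point rounding.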
In Tchebichef image reconstruction, the resulting image quality depends on the selected orders p and q: higher orders capture finer details but are more susceptible to noise interference. Reconstructions of the same selected images are shown in Figure 19, where the first row shows the preprocessed images, the second row the reconstructions from TMs up to order 50, and the third row the reconstructions from TMs up to order 100.
Once the efficacy of the Tchebichef recurrence proposed by Camacho [50] has been validated, the TM invariants (TMIs) derived from it become a practical approach to feature extraction. In this study, the invariants $\tilde{T}_{01}$ and $\tilde{T}_{10}$ are intentionally omitted because they carry no relevant image content; the detection process instead centres on the TMIs from the second order upwards, which constitute the feature moment. Based on Equation (33), which extracts features of every candidate image starting from the second-order TMIs, the following feature vector was selected for our simulation [53]:
V = \left[\tilde{T}_{21}, \tilde{T}_{22}, \tilde{T}_{30}, \tilde{T}_{40}, \tilde{T}_{41}, \tilde{T}_{50}\right].
A blocky semicontinuous seismic image is a distinct category characterized by prominent features, such as well-defined blocks or rectangular-shaped patterns, combined with a certain level of continuity in seismic reflectors. Several examples of blocky semicontinuous images from the candidate dataset are depicted in Figure 20.
For the blocky semicontinuous image samples illustrated in Figure 20, the corresponding TMIs are presented in Table 2. The blocky, semicontinuous patterns in Figure 20e,f and Figure 20h,i share a similar textural composition, leading to analogous feature vectors, as documented in Table 2 rows (e) and (f) as well as rows (h) and (i). Conversely, Figure 20e portrays a notable discontinuity with a distinct textural orientation compared with Figure 20a, a discrepancy corroborated by the dissimilar feature vectors in Table 2 rows (a) and (e).
A chaotic discontinuity seismic image is a specific type of seismic image that combines attributes of chaos and discontinuity. It is characterized by irregular and unpredictable patterns, as well as sudden and abrupt variations in seismic reflectors or structures. These images lack consistent and well-defined patterns, often appearing fragmented or scattered. Several examples of chaotic discontinuity images from the candidate seismic image dataset are shown in Figure 21.
The corresponding TMIs for the candidate chaotic discontinuous seismic images are displayed in Table 3. The irregular, fragmented patterns in Figure 21a,c share a comparable textural composition and consequently yield similar feature vectors, as documented in Table 3 rows (a) and (c). However, Figure 21d showcases a pronounced semicontinuous attribute absent from Figure 21a, an incongruity reflected in the distinct feature vectors of Table 3 rows (a) and (d).
A mounded semicontinuous seismic image is a seismic image that exhibits a combination of features found in mounded seismic images, such as elevated structures or mounds, and also displays a certain degree of continuity observed in semicontinuous seismic images. It indicates the presence of distinct and well-defined shapes, suggesting geological formations like anticlines or domes in the subsurface. Several examples of mounded seismic images from the candidate seismic image dataset are depicted in Figure 22.
The corresponding TMIs for the mounded semicontinuous image from the candidate seismic dataset are provided in Table 4. The mounded and partially continuous pattern observed in Figure 22d,e exhibits a similar textural composition, resulting in similar feature vectors as documented in Table 4 rows (d) and (e). However, Figure 22i highlights clastic features that deviate from those in Figure 22a, leading to distinct differences in the feature vectors detailed in Table 4 rows (i) and (a).
This study uses a multi-class SVM for seismic image detection, with the TM invariants obtained from the candidate 2D seismic image dataset as input features. The SVM model classifies the seismic images based on the extracted TM invariants, enabling accurate and efficient detection of various seismic image patterns and features. The initialization parameters, i.e., the penalty parameter, kernel function type, and SVM type, are given in Table 5. The SVM type is epsilon-SVM, which establishes a tolerance margin around the projected value: instances falling within this margin are treated as correctly predicted even when they deviate slightly from the exact target, conferring resilience to noise and minor fluctuations in the dataset and thereby making the model more robust. The kernel function is a Gaussian (RBF) kernel, which measures data-point similarity via the Euclidean distance in feature space. It implicitly projects the original space into a higher-dimensional one, enabling Gaussian-based similarity calculations and allowing the SVM to capture intricate nonlinear relationships when the data are not linearly separable in the original feature space. The penalty parameter C governs the trade-off between minimizing the training error and maximizing the classification margin, thereby controlling the soft-margin behaviour of the SVM classifier.
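A minimal sketch of the RBF kernel and of the decision function assembled from the extracted alphas, bias, and support vectors is given below (Python/NumPy; the parameter values are hand-set for illustration, not the trained values behind Table 5):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=0.5):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = (np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

def svm_decision(alphas, labels, SVs, bias, X, gamma=0.5):
    """Decision values f(x) = sum_i alpha_i y_i K(sv_i, x) + b; this is the
    quantity compared across the one-vs-all models to pick the class."""
    K = rbf_kernel(SVs, X, gamma)
    return (alphas * labels) @ K + bias

# Tiny illustration with hand-set model parameters (not trained values).
SVs = np.array([[0.0, 0.0], [1.0, 1.0]])
alphas = np.array([1.0, 1.0])
labels = np.array([1.0, -1.0])
vals = svm_decision(alphas, labels, SVs, bias=0.0, X=np.array([[0.0, 0.0]]))
```

Note the defining properties of the RBF kernel visible here: every point has unit similarity with itself, and the kernel matrix is symmetric.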
To train the multi-class SVM, training and testing datasets are required. The candidate 2D seismic image dataset was randomly split three ways, with 80%, 70%, and 60% of the data (84, 73, and 63 images, respectively) selected as training sets; the remaining images served as the testing samples. The detection rates of the testing samples for the three experiments are depicted in Figure 23. The output of the optimized SVM comprises the original image together with its predicted label. Figure 24 presents a selection of original images with their predicted labels from the optimized SVM; juxtaposed with the original labels, they show close alignment.
When the training set is small, the training images may be described insufficiently, resulting in lower detection rates. As the training set grows, the class descriptions and classification boundaries improve, and the detection rates trend upwards, reaching their respective maxima in Figure 23. In the first TMI experiment, with only 63 training images, the detection rate of the optimized SVM is 85.71%; as the training set increases to 84 images, the detection rate improves significantly, reaching 90.48% with a higher recognition rate. The recognition rate exhibits a similar upward trajectory with increasing training samples in the second and third TMI experiments.
To assess the performance of the TMIs, a comparison was conducted with algorithms based on Hu moment invariants (HMIs) and Zernike moment invariants (ZMIs). The ZMI seismic image detection method is described in Appendix B. The HMIs, ZMIs, and TMIs are each fed into the multi-class SVM for seismic image detection. A random selection of 70% of the candidate seismic image dataset was used as the training set and the remaining 30% as the testing set. The detection rates based on the HMI, ZMI, and TMI algorithms are presented in Table 6.
The feature extraction results using the Hu, Zernike, and TM invariants are summarized below. For the Hu moment invariants, the optimization completed after 25 iterations with a regularization parameter of 0.675676, an objective function value of −50.000000, and a rho value of 1.000000. All 50 support vectors (nSV) were bounded support vectors (nBSV). The accuracy achieved with the Hu moment invariants was 64.52%.
For the ZMIs, the optimization completed after 37 iterations with a regularization parameter of 0.621622, an objective function value of −42.759237, and a rho value of 1.350771. Of the 48 support vectors (nSV), 44 were bounded support vectors (nBSV). The accuracy achieved with the ZMIs was 87.10%.
For the TM invariants, the optimization completed after 24 iterations with a regularization parameter of 0.567568, an objective function value of −41.999985, and a rho value of 1.003337. Of the 43 support vectors (nSV), 41 were bounded support vectors (nBSV). The accuracy achieved with the TM invariants was 90.32%.
In summary, the proposed algorithm, which combines TM invariants with a multi-class SVM model, achieves accurate seismic image detection and outperforms the HMI- and ZMI-based approaches. Its effectiveness and feasibility have been verified, demonstrating its potential as a robust and reliable approach for seismic image detection.

6. Conclusions

The contribution of this paper lies in the efficient application of TMIs to the analysis of seismic image datasets. We combine image preprocessing techniques (Gaussian filtering and anisotropic diffusion) with six selected TMI features to detect and identify variations in seismic images. Since noise-free and blur-free field seismic data are impossible to acquire, the applied filters help reduce the noise and blur in the original images. Image quality assessment (IQA) provides a strong criterion for evaluating image quality after enhancement; both full-reference (SSIM and PSNR) and no-reference (BRISQUE, PIQE, and NIQE) methods are considered in this study. By focusing on three key aspects of seismic image analysis, i.e., segmentation, classification, and detection, we presented a comprehensive study of seismic image analysis. In seismic image segmentation, effective preprocessing is applied to enhance the quality of the input data before segmentation: a Gaussian filter and anisotropic diffusion address the salt-and-pepper noise in the seismic images, and an image quality assessment evaluates the preprocessing impact on the original images. The U-Net segmentation method, incorporating the Canny template, achieves an impressive accuracy of 94.63% after 200 iterations. For seismic image classification, the ResNet-50 and Inception-v3 models exhibit superior performance on blocky semicontinuous, chaotic discontinuous, and mounded semicontinuous seismic images; remarkably, both remain stable with increasing iterations and reach a classification accuracy of 100% after 200 iterations.
On the other hand, applications of image moment transforms to seismic data are scarce. Moment functions, especially discrete moments such as the Tchebichef and Krawtchouk moments, are primarily used as global features for detecting, reconstructing, and classifying shapes in an image. In seismic image detection, discrete TM invariants are employed to extract features, and the evaluation includes reconstructing the seismic image via the forward and inverse TM transforms to assess the reconstruction error. The optimized multi-class SVM network, integrating the TM invariants, attains a noteworthy accuracy of 90.48%. To comprehensively assess the performance of the TM invariants in seismic image detection, ZMIs and HMIs are also introduced, enabling a comparative analysis of the effectiveness and suitability of TMIs for seismic image detection tasks.

Author Contributions

Conceptualization, A.L. and B.H.S.A.; methodology, A.L. and B.H.S.A.; resources, B.H.S.A. and A.L.; writing—original draft preparation, B.H.S.A. and A.L.; writing—review and editing, B.H.S.A. and A.L.; supervision, B.H.S.A.; visualization, B.H.S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The seven Hu moment invariants, which are invariant to translation, rotation, and scale, can be formulated as follows:
\begin{aligned}
I_1 &= y_{20} + y_{02} \\
I_2 &= (y_{20} - y_{02})^2 + 4y_{11}^2 \\
I_3 &= (y_{30} - 3y_{12})^2 + (3y_{21} - y_{03})^2 \\
I_4 &= (y_{30} + y_{12})^2 + (y_{21} + y_{03})^2 \\
I_5 &= (y_{30} - 3y_{12})(y_{30} + y_{12})\left[(y_{30} + y_{12})^2 - 3(y_{21} + y_{03})^2\right] \\
    &\quad + (3y_{21} - y_{03})(y_{21} + y_{03})\left[3(y_{30} + y_{12})^2 - (y_{21} + y_{03})^2\right] \\
I_6 &= (y_{20} - y_{02})\left[(y_{30} + y_{12})^2 - (y_{21} + y_{03})^2\right] + 4y_{11}(y_{30} + y_{12})(y_{21} + y_{03}) \\
I_7 &= (3y_{21} - y_{03})(y_{30} + y_{12})\left[(y_{30} + y_{12})^2 - 3(y_{21} + y_{03})^2\right] \\
    &\quad - (y_{30} - 3y_{12})(y_{21} + y_{03})\left[3(y_{30} + y_{12})^2 - (y_{21} + y_{03})^2\right]
\end{aligned}
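The invariants can be verified numerically. The sketch below (Python/NumPy, illustrative; only $I_1$–$I_4$ are computed for brevity) derives the normalized central moments $y_{pq}$ from an image and checks that a translation and a 90° rotation leave the invariants unchanged:

```python
import numpy as np

def hu_invariants(img):
    """First four Hu moment invariants from the normalized central
    moments y_pq = mu_pq / m00^(1 + (p + q)/2)."""
    img = img.astype(float)
    h, w = img.shape
    Y, X = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    xc, yc = (X * img).sum() / m00, (Y * img).sum() / m00

    def y_(p, q):
        mu = (((X - xc) ** p) * ((Y - yc) ** q) * img).sum()  # central moment
        return mu / m00 ** (1 + (p + q) / 2)                  # normalization

    y20, y02, y11 = y_(2, 0), y_(0, 2), y_(1, 1)
    y30, y03, y21, y12 = y_(3, 0), y_(0, 3), y_(2, 1), y_(1, 2)
    return np.array([
        y20 + y02,                                        # I1
        (y20 - y02) ** 2 + 4 * y11 ** 2,                  # I2
        (y30 - 3 * y12) ** 2 + (3 * y21 - y03) ** 2,      # I3
        (y30 + y12) ** 2 + (y21 + y03) ** 2,              # I4
    ])

rng = np.random.default_rng(4)
img = rng.uniform(0, 1, size=(32, 32))
shifted = np.zeros((48, 48))
shifted[9:41, 5:37] = img                 # translated copy in a larger frame
inv_a = hu_invariants(img)
inv_b = hu_invariants(np.rot90(img))      # 90-degree rotated copy
inv_c = hu_invariants(shifted)
```

Central moments absorb translation, the normalization by powers of $m_{00}$ handles geometric scale, and the specific combinations above cancel rotation.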

Appendix B

Indeed, ZMIs are widely used for feature extraction and texture detection in various image processing and computer vision applications. In 1934, Zernike introduced a set of complex-valued functions $V_{pq}(x, y)$ defined on the unit disk $x^2 + y^2 \le 1$. These functions possess completeness and orthogonality properties, enabling them to represent any square-integrable function defined within the unit disk. In polar coordinates, they are expressed as
V_{pq}(\rho, \theta) = R_{pq}(\rho)\, e^{jq\theta},
where $\rho = \sqrt{x^2 + y^2}$ is the length of the vector from the origin to the point $(x, y)$, and $\theta = \tan^{-1}(y/x)$ is the angle between this vector and the counterclockwise direction of the x-axis. $R_{pq}(\rho)$ is a real-valued radial polynomial, defined as follows:
R_{pq}(\rho) = \sum_{s=0}^{(p - |q|)/2} (-1)^s \, \frac{(p - s)!}{s!\left(\frac{p + |q|}{2} - s\right)!\left(\frac{p - |q|}{2} - s\right)!} \, \rho^{\,p - 2s}.
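A direct implementation of this sum (Python sketch, illustrative) is straightforward and can be checked against known closed forms such as $R_{20}(\rho) = 2\rho^2 - 1$ and $R_{22}(\rho) = \rho^2$:

```python
from math import factorial

def radial_poly(p, q, rho):
    """Zernike radial polynomial R_pq(rho) evaluated from the factorial sum.

    Requires p >= |q| with p - |q| even, the standard admissibility
    condition for Zernike indices.
    """
    q = abs(q)
    if q > p or (p - q) % 2 != 0:
        raise ValueError("require p >= |q| and p - |q| even")
    return sum(
        (-1) ** s * factorial(p - s)
        / (factorial(s) * factorial((p + q) // 2 - s) * factorial((p - q) // 2 - s))
        * rho ** (p - 2 * s)
        for s in range((p - q) // 2 + 1)
    )
```

A useful sanity check is the boundary property $R_{pq}(1) = 1$, which holds for every admissible index pair.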
Zernike polynomials satisfy orthogonality in the following expression:
\iint_{x^2 + y^2 \le 1} V_{pq}^{*}(x, y)\, V_{nm}(x, y)\, dx\, dy = \frac{\pi}{p + 1}\, \delta_{pn}\, \delta_{qm},
where $\delta_{kl}$ is the Kronecker delta and $V_{pq}^{*}$ is the complex conjugate of $V_{pq}$.
Due to the orthogonality property of Zernike polynomials, any image within the unit circle, f x , y , can be uniquely reconstructed from its ZMs using the following expression:
f(x, y) = \sum_{p=0}^{\infty} \sum_{q} Z_{pq}\, V_{pq}(x, y),
where $Z_{pq}$ are the ZMs of the original image.
When calculating the ZMs of an image, the image must be centred at the origin of the coordinate system and its pixels mapped onto the unit circle. Owing to the rotational invariance of the ZMs, $Z_{pq}$ can be used as invariant features of the image. The image's low-frequency characteristics are captured by $V_{pq}(\rho, \theta)$ with small values of p, while the high-frequency features are extracted with higher values of p. As demonstrated earlier, ZMs can be generated for any desired order, enabling the representation of a broad range of image features.
Since ZMs are rotation-invariant but not translation- or scale-invariant, the image must be normalized before the ZMs are calculated. The method of geometric moments (GMs) is employed for image normalization; GMs are defined in Equation (8), and the moment centre is given in Equation (10). The translation issue is effectively resolved by relocating the image's centre of mass, or centroid, to the centre of the unit circle, which corresponds to the origin of the coordinates. The moment $m_{00}$ represents the area of the image. Scale normalization ensures consistent object sizes within the image rather than a fixed overall image size; it is achieved by transforming the image coordinates to $\left(x/\sqrt{m_{00}},\; y/\sqrt{m_{00}}\right)$, where $m_{00}$ is the zero-order moment, i.e., the total sum of pixel intensities in the image.
The resulting normalized image $g(x, y)$, from which translation-, scale-, and rotation-invariant ZMs of $f(x, y)$ are computed, is given by the following formula:
g(x, y) = f\!\left(\frac{x}{\sqrt{m_{00}}} + x_0,\; \frac{y}{\sqrt{m_{00}}} + y_0\right).
The ZMIs are utilized for feature detection. As ZMIs are complex numbers, the real part of each ZMI is selected to form the feature vector, which is represented as follows:
V = \left[\tilde{Z}_{21}, \tilde{Z}_{22}, \tilde{Z}_{30}, \ldots, \tilde{Z}_{mn}\right],
where m and n denote the order and radial frequency of the Zernike polynomials, respectively. For seismic image detection, the ZMI feature vector is selected analogously to the TMIs as
V = \left[\tilde{Z}_{21}, \tilde{Z}_{22}, \tilde{Z}_{30}, \tilde{Z}_{40}, \tilde{Z}_{41}, \tilde{Z}_{50}\right].
For candidate blocky semicontinuous seismic image samples shown in Figure 20, the real part of ZMIs is presented in Table A1.
Table A1. The real part of ZMIs in candidate blocky semicontinuous image samples.

Image   Z~21     Z~22      Z~30      Z~40       Z~41      Z~50
(a)     0.5016   0.09930   0.3712    0.06974    0.04491   0.3424
(b)     0.5616   0.1678    0.4721    0.003640   0.2485    0.02485
(c)     0.5293   0.1387    0.3984    0.03271    0.1172    0.1954
(d)     0.5312   0.1425    0.3977    0.02355    0.1256    0.1650
(e)     0.5435   0.1636    0.4086    0.003193   0.1592    0.07370
(f)     0.5441   0.1636    0.4112    0.003288   0.1632    0.06876
(g)     0.5447   0.16359   0.4147    0.002457   0.1684    0.06166
(h)     0.5891   0.2354    0.4866    0.1216     0.3437    0.3596
(i)     0.5842   0.2309    0.4742    0.1165     0.3231    0.3250
For the candidate chaotic discontinuity seismic image samples depicted in Figure 21, Table A2 shows the results of ZMIs in their real part format.
Table A2. The real part of ZMIs in candidate chaotic discontinuous image samples.

Image   Z~21     Z~22      Z~30      Z~40      Z~41       Z~50
(a)     0.1054   0.3024    0.6492    0.2617    0.01059    0.9917
(b)     0.1027   0.3014    0.6389    0.2650    0.009042   0.9712
(c)     0.1044   0.3004    0.63289   0.2570    0.01071    0.9312
(d)     0.1201   0.2887    0.6063    0.2441    0.01600    0.8355
(e)     0.1211   0.2967    0.6368    0.2211    0.01343    0.8452
(f)     0.1200   0.2951    0.6310    0.2338    0.01220    0.8936
(g)     0.1216   0.2937    0.6312    0.2382    0.01027    0.9118
(h)     0.1513   0.2755    0.5921    0.1889    0.02524    0.7014
(i)     0.1542   0.2754    0.6006    0.1875    0.02591    0.7301
Finally, Table A3 demonstrates the real part of ZMIs for the candidate mounded semicontinuous seismic image samples depicted in Figure 22.
Table A3. The real part of ZMIs in candidate mounded semicontinuous image samples.

Image   Z~21     Z~22      Z~30      Z~40      Z~41      Z~50
(a)     0.4671   0.03112   0.4185    0.1277    0.02866   0.4232
(b)     0.4720   0.03120   0.4400    0.1293    0.05732   0.3897
(c)     0.4723   0.02829   0.4482    0.1337    0.06229   0.3924
(d)     0.4720   0.02574   0.4538    0.1378    0.06520   0.3954
(e)     0.4705   0.02706   0.44413   0.1342    0.05748   0.3982
(f)     0.4742   0.02822   0.4634    0.1306    0.08503   0.3607
(g)     0.3944   0.06611   0.4481    0.1343    0.03957   0.4086
(h)     0.3978   0.06748   0.4689    0.1379    0.02788   0.3794
(i)     0.4029   0.0743    0.5162    0.1404    0.02953   0.2981

References

  1. Talagapu, K.K. 2D and 3D Land Seismic Data Acquisition and Seismic Data Processing. Master’s Thesis, Department of Geophysics, College of Science and Technology Andhra University, Visakhaptanam, India, 2005. [Google Scholar]
  2. Virieux, J.; Operto, S. An overview of full-waveform inversion in exploration geophysics. Geophysics 2009, 74, WCC1–WCC26. [Google Scholar] [CrossRef]
  3. Gray, M.; Bell, R.E.; Morgan, J.V.; Henrys, S.; Barker, D.H.; IODP Expedition 372 and 375 Science Parties. Imaging the shallow subsurface structure of the North Hikurangi Subduction Zone, New Zealand, using 2-D full-waveform inversion. J. Geophys. Res. Solid Earth 2019, 124, 9049–9074. [Google Scholar] [CrossRef]
  4. Samyn, K.; Travelletti, J.; Bitri, A.; Grandjean, G.; Malet, J.P. Characterization of a landslide geometry using 3D seismic refraction traveltime tomography: The La Valette landslide case history. J. Appl. Geophys. 2012, 86, 120–132. [Google Scholar] [CrossRef]
  5. Ben-Zion, Y.; Lee, W.; Kanamori, H.; Jennings, P.; Kisslinger, C. Key formulas in earthquake seismology. Int. Handb. Earthq. Eng. Seismol. 2003, 81, 1857–1875. [Google Scholar]
  6. Sheriff, R.E. Encyclopedic Dictionary of Applied Geophysics; Society of Exploration Geophysicists: Houston, TX, USA, 2002. [Google Scholar]
  7. Posamentier, H.W.; Morris, W.R. Aspects of the stratal architecture of forced regressive deposits. Geol. Soc. London Spec. Publ. 2000, 172, 19–46. [Google Scholar] [CrossRef]
  8. Posamentier, H.W. Seismic Geomorphology: Imaging Elements of Depositional Systems from Shelf to Deep Basin Using 3D Seismic Data: Implications for Exploration and Development; Geological Society of London: London, UK, 2004. [Google Scholar]
  9. Zeng, H. Stratal Slicing Makes Seismic Imaging of Depositional Systems Easier. Search and Discovery, 2006. Available online: https://www.searchanddiscovery.com/documents/2006/06036zeng_gc/ (accessed on 1 August 2023).
  10. Steffens, G.; Shipp, R.; Prather, B.; Nott, J.; Gibson, J.; Winker, C. The use of near-seafloor 3D seismic data in deepwater exploration and production. Geol. Soc. London Mem. 2004, 29, 35–43. [Google Scholar] [CrossRef]
  11. Morgan, R. Structural controls on the positioning of submarine channels on the lower slopes of the Niger Delta. Geol. Soc. Lond. Mem. 2004, 29, 45–52. [Google Scholar] [CrossRef]
  12. Long, D.; Bulat, J.; Stoker, M. Sea Bed Morphology of the Faroe-Shetland Channel Derived from 3D Seismic Datasets; Geological Society of London: London, UK, 2004. [Google Scholar]
  13. An, Y.; Guo, J.; Ye, Q.; Childs, C.; Walsh, J.; Dong, R. A gigabyte interpreted seismic dataset for automatic fault recognition. Data Brief 2021, 37, 107219. [Google Scholar] [CrossRef]
  14. Li, X.; Li, K.; Xu, Z.; Huang, Z.; Dou, Y. Fault-Seg-Net: A method for seismic fault segmentation based on multi-scale feature fusion with imbalanced classification. Comput. Geotech. 2023, 158, 105412. [Google Scholar] [CrossRef]
  15. Hu, M.K. Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
  16. Teague, M.R. Image analysis via the general theory of moments. J. Opt. Soc. Am. 1980, 70, 920–930. [Google Scholar] [CrossRef]
  17. Papakostas, G.A.; Karakasis, E.G.; Koulouriotis, D.E. Accurate and speedy computation of image Legendre moments for computer vision applications. Image Vis. Comput. 2010, 28, 414–423. [Google Scholar] [CrossRef]
  18. Honarvar, B. New Moment Functions for Signal and Image Analysis. In Proceedings of the 5th International Conference on Advances in Signal Processing and Artificial Intelligence, Tenerife (Canary Islands), Spain, 7–9 June 2023; pp. 109–115. [Google Scholar]
  19. Kuijlaars, A.; Martínez-Finkelshtein, A. Strong asymptotics for Jacobi polynomials with varying nonstandard parameters. J. D’Anal. Math. 2004, 94, 195–234. [Google Scholar] [CrossRef]
  20. Belkasim, S.; Hassan, E.; Obeidi, T. Radial zernike moment invariants. In Proceedings of the Fourth International Conference on Computer and Information Technology, Wuhan, China, 14–16 September 2004; pp. 790–795. [Google Scholar]
  21. Shakibaei, B.H.; Paramesran, R. Recursive formula to compute Zernike radial polynomials. Opt. Lett. 2013, 38, 2487–2489. [Google Scholar] [CrossRef]
  22. Mukundan, R.; Ong, S.; Lee, P.A. Image analysis by Tchebichef moments. IEEE Trans. Image Process. 2001, 10, 1357–1364. [Google Scholar] [CrossRef]
  23. Yap, P.T.; Paramesran, R.; Ong, S.H. Image analysis by Krawtchouk moments. IEEE Trans. Image Process. 2003, 12, 1367–1377. [Google Scholar]
  24. Asli, B.H.S.; Flusser, J. Fast computation of Krawtchouk moments. Inf. Sci. 2014, 288, 73–86. [Google Scholar] [CrossRef]
  25. Asli, B.H.S.; Paramesran, R.; Lim, C.L. The fast recursive computation of Tchebichef moment and its inverse transform based on Z-transform. Digit. Signal Process. 2013, 23, 1738–1746. [Google Scholar] [CrossRef]
  26. Honarvar Shakibaei Asli, B.; Rezaei, M.H. Four-Term Recurrence for Fast Krawtchouk Moments Using Clenshaw Algorithm. Electronics 2023, 12, 1834. [Google Scholar] [CrossRef]
  27. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  28. Schuster, G.T. Seismic imaging, overview. In Encyclopedia of Solid Earth Geophysics; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–13. [Google Scholar]
  29. Abdelwahhab, M.A.; Abdelhafez, N.A.; Embabi, A.M. Machine learning-supported seismic stratigraphy of the Paleozoic Nubia Formation (SW Gulf of Suez-rift): Implications for paleoenvironment- petroleum geology of a lacustrine-fan delta. Petroleum 2023, 9, 301–315. [Google Scholar] [CrossRef]
  30. Luo, Y.; Higgs, W.; Kowalik, W. Edge detection and stratigraphic analysis using 3D seismic data. In SEG Technical Program Expanded Abstracts 1996; Society of Exploration Geophysicists: London, UK, 1996; pp. 324–327. [Google Scholar]
  31. Boersma, Q.; Athmer, W.; Haege, M.; Etchebes, M.; Haukås, J.; Bertotti, G. Natural fault and fracture network characterization for the southern Ekofisk field: A case study integrating seismic attribute analysis with image log interpretation. J. Struct. Geol. 2020, 141, 104197. [Google Scholar] [CrossRef]
  32. Roberts, L.G. Machine Perception of Three-Dimensional Solids. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1963. [Google Scholar]
  33. Prewitt, J.M.S. Object enhancement and extraction. Pict. Process. Psychopictorics 1970, 10, 15–19. [Google Scholar]
  34. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
  35. Siddique, N.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V. U-Net and its variants for medical image segmentation: A review of theory and applications. arXiv 2020, arXiv:2011.01118. [Google Scholar]
  36. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  37. Abdelwahhab, M.A.; Radwan, A.A.; Mahmoud, H.; Mansour, A. Geophysical 3D-static reservoir and basin modeling of a Jurassic estuarine system (JG-Oilfield, Abu Gharadig basin, Egypt). J. Asian Earth Sci. 2022, 225, 105067. [Google Scholar] [CrossRef]
  38. Chevitarese, D.S.; Szwarcman, D.; da Gama e Silva, R.; Vital Brazil, E. Deep learning applied to seismic facies classification: A methodology for training. Eur. Assoc. Geosci. Eng. 2018, 2018, 1–5. [Google Scholar]
  39. Geng, Z.; Wang, Y. Automated design of a convolutional neural network with multi-scale filters for cost-efficient seismic data classification. Nat. Commun. 2020, 11, 3311. [Google Scholar] [CrossRef] [PubMed]
  40. Zhao, T. Seismic facies classification using different deep convolutional neural networks. In Proceedings of the 2018 SEG International Exposition and Annual Meeting, Anaheim, CA, USA, 14–19 October 2018. [Google Scholar]
  41. Souza, J.F.L.; Santos, M.D.; Magalhães, R.M.; Neto, E.; Oliveira, G.P.; Roque, W.L. Automatic classification of hydrocarbon “leads” in seismic images through artificial and convolutional neural networks. Comput. Geosci. 2019, 132, 23–32. [Google Scholar] [CrossRef]
  42. Waldeland, A.U.; Jensen, A.C.; Gelius, L.J.; Solberg, A.H.S. Convolutional neural networks for automated seismic interpretation. Lead. Edge 2018, 37, 529–537. [Google Scholar] [CrossRef]
  43. Wu, X.; Liang, L.; Shi, Y.; Fomel, S. FaultSeg3D: Using synthetic data sets to train an end-to-end convolutional neural network for 3D seismic fault segmentation. Geophysics 2019, 84, IM35–IM45. [Google Scholar] [CrossRef]
  44. Wrona, T.; Pan, I.; Gawthorpe, R.L.; Fossen, H. Seismic facies analysis using machine learning. Geophysics 2018, 83, O83–O95. [Google Scholar] [CrossRef]
  45. Pratama, H.; Latiff, A.H.A. Automated geological features detection in 3D seismic data using semi-supervised learning. Appl. Sci. 2022, 12, 6723. [Google Scholar] [CrossRef]
  46. Troccoli, E.B.; Cerqueira, A.G.; Lemos, J.B.; Holz, M. K-means clustering using principal component analysis to automate label organization in multi-attribute seismic facies analysis. J. Appl. Geophys. 2022, 198, 104555. [Google Scholar] [CrossRef]
  47. Zhao, T.; Zhang, J.; Li, F.; Marfurt, K.J. Characterizing a turbidite system in Canterbury Basin, New Zealand, using seismic attributes and distance-preserving self-organizing maps. Interpretation 2016, 4, SB79–SB89. [Google Scholar] [CrossRef]
  48. Puzyrev, V.; Elders, C. Deep convolutional autoencoder for unsupervised seismic facies classification. In Proceedings of the EAGE/AAPG Digital Subsurface for Asia Pacific Conference, European Association of Geoscientists & Engineers, Kuala Lumpur, Malaysia, 7–10 September 2020; Volume 2020, pp. 1–3. [Google Scholar]
  49. Sivagami, S.; Chitra, P.; Kailash, G.S.R.; Muralidharan, S. Unet architecture based dental panoramic image segmentation. In Proceedings of the 2020 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), Chennai, India, 4–6 August 2020; pp. 187–191. [Google Scholar]
  50. Camacho-Bello, C.; Rivera-Lopez, J.S. Some computational aspects of Tchebichef moments for higher orders. Pattern Recognit. Lett. 2018, 112, 332–339. [Google Scholar] [CrossRef]
  51. Malacara, D. Optical Shop Testing; John Wiley & Sons: Hoboken, NJ, USA, 2007; Volume 59. [Google Scholar]
  52. Bateman, H. Higher Transcendental Functions, Volumes I–III; McGraw-Hill Book Company: New York, NY, USA, 1953; Volume 1. [Google Scholar]
  53. Bian, Y.; Yang, M.; Fan, X.; Liu, Y. A fire detection algorithm based on Tchebichef moment invariants and PSO-SVM. Algorithms 2018, 11, 79. [Google Scholar] [CrossRef]
  54. dGB Earth Sciences, Netherlands Offshore F3 Block Complete. 1987. Available online: https://terranubis.com/datainfo/Netherlands-Offshore-F3-Block-Complete (accessed on 20 January 2019).
  55. Miller, J. Utah FORGE: 2D and 3D Seismic Data; Technical Report, USDOE Geothermal Data Repository (United States), Energy and Geoscience; University of Utah: Salt Lake City, UT, USA, 2018. [Google Scholar]
  56. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
Figure 1. A horizontal slice with small depositional features [9].
Figure 2. Seismic Dataset: All seismic data are provided in SEG-Y format.
Figure 3. The flow diagram of classical seismic image segmentation.
Figure 4. U-net network [36].
Figure 5. ResNet-50 model architecture.
Figure 6. The flow diagram of optimized SVM detection.
Figure 7. (a–i) Candidate images: (a–c) 2D blocky semicontinuous seismic images, (d–f) 2D chaotic discontinuous images, and (g–i) 2D mounded semicontinuous seismic images.
Figure 8. No-reference image quality assessment: (a) BRISQUE, (b) PIQE, and (c) NIQE.
Figure 9. PSNR score for image quality assessment.
Figure 10. SSIM score for image quality assessment.
Figure 11. (a–c) Original images, (d–f) corresponding preprocessed images following the procedure outlined in Figure 3, and (g–i) corresponding Canny edge-detected images.
Figure 12. The flow diagram for seismic image segmentation utilizing the U-Net network.
Figure 13. U-net training result for the candidate dataset; some reference images and the corresponding U-net segmentation results are presented in Figure 14.
Figure 14. Candidate image samples: (a–c) original images, (d–f) corresponding Canny edge-detected images (reference images), and (g–i) corresponding U-net segmentations.
Figure 15. Training results for seismic image classification: (a) ResNet-50, (b) VGG-16, (c) VGG-19, and (d) Inception-v3.
Figure 16. Original image samples along with their corresponding predicted labels, which align precisely with the input labels: (a) chaotic discontinuous seismic image, (b) mounded semicontinuous seismic image, (c) blocky semicontinuous seismic image.
Figure 17. Seismic images selected for Tchebichef reconstruction: (a) Sample A, (b) Sample B, and (c) Sample C.
Figure 18. Analysis of the NIRE with the noisy images.
Figure 19. Candidate image samples: (a–c) preprocessed images following the procedure outlined in Figure 3, (d–f) corresponding TM reconstruction images using a maximum order of 50, and (g–i) corresponding TM reconstruction images using a maximum order of 100.
Figure 20. (a–i) Candidate blocky semicontinuous seismic images.
Figure 21. (a–i) Candidate chaotic discontinuous seismic images.
Figure 22. (a–i) Candidate mounded semicontinuous seismic images.
Figure 23. Effect of training dataset size on the recognition rate.
Figure 24. Original image samples with their corresponding labels predicted by the optimized SVM, which align precisely with the input labels: (a) chaotic discontinuous seismic image, (b) mounded semicontinuous seismic image, (c) blocky semicontinuous seismic image.
Table 1. The classification rate of the testing sample set.
Parameter                 ResNet-50   VGG-16   VGG-19   Inception-v3
Classification accuracy   100%        100%     100%     100%
Table 2. The TMIs of the candidate blocky semicontinuous image samples.
Image   T̃21       T̃22      T̃30       T̃40      T̃41       T̃50
(a)   −3.8519   4.9641   −2.6216   2.9546   −5.1129   −3.2415
(b)   −3.8519   4.9641   −2.6217   2.9546   −5.1131   −3.2417
(c)   −3.8518   4.9632   −2.6218   2.9549   −5.1132   −3.2421
(d)   −3.8517   4.9631   −2.6218   2.9547   −5.1128   −3.2418
(e)   −3.8484   4.9583   −2.6176   2.9470   −5.0992   −3.2291
(f)   −3.8485   4.9583   −2.6176   2.9470   −5.0992   −3.2291
(g)   −3.8485   4.9584   −2.6176   2.9470   −5.0992   −3.2291
(h)   −3.8440   4.9537   −2.6109   2.9345   −5.0780   −3.2084
(i)   −3.8440   4.9538   −2.6109   2.9345   −5.0782   −3.2086
Table 3. The TMIs of the candidate chaotic discontinuous image samples.
Image   T̃21       T̃22       T̃30       T̃40      T̃41       T̃50
(a)   −3.8250   4.8633    −2.6204   2.9522   −5.0744   −3.2376
(b)   −3.8246   4.8616    −2.6204   2.9522   −5.0739   −3.2377
(c)   −3.8251   4.8634    −2.6204   2.9523   −5.0747   −3.2379
(d)   −3.8230   4.8618    −2.6170   2.9458   −5.0640   −3.2271
(e)   −3.8227   4.86091   −2.6169   2.9457   −5.0635   −3.2269
(f)   −3.8222   4.8591    −2.6167   2.9454   −5.0624   −3.2264
(g)   −3.8227   4.8609    −2.6169   2.9457   −5.0635   −3.2269
(h)   −3.8374   4.9061    −2.6226   2.9565   −5.0960   −3.2447
(i)   −3.8376   4.9071    −2.6226   2.9563   −5.0961   −3.2445
Table 4. The TMIs of the candidate mounded semicontinuous image samples.
Image   T̃21       T̃22      T̃30       T̃40      T̃41       T̃50
(a)   −3.8471   4.9514   −2.6185   2.9486   −5.0994   −3.2318
(b)   −3.8473   4.9518   −2.6186   2.9489   −5.0999   −3.2322
(c)   −3.8474   4.9519   −2.6188   2.9492   −5.1004   −3.2327
(d)   −3.8475   4.9521   −2.6189   2.9494   −5.1008   −3.2331
(e)   −3.8474   4.9520   −2.6188   2.9492   −5.1005   −3.2328
(f)   −3.8472   4.9517   −2.6186   2.9489   −5.0999   −3.2322
(g)   −3.8453   4.9441   −2.6187   2.9491   −5.0976   −3.2325
(h)   −3.8455   4.9445   −2.6189   2.9495   −5.0984   −3.2332
(i)   −3.8454   4.9442   −2.6188   2.9492   −5.0978   −3.2328
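The TMIs in Tables 2–4 build on discrete Tchebichef polynomials. A minimal sketch of one common scaled form t_n(x) = t̃_n(x)/Nⁿ (following the scaling of Mukundan et al. [22]), generated by a three-term recurrence in the order n; this is an illustrative implementation, not the paper's exact code:

```python
def tchebichef_polys(N, n_max):
    """Scaled discrete Tchebichef polynomials t_n(x) for n = 0..n_max,
    sampled at x = 0..N-1, built by the recurrence in the order n."""
    t = [[1.0] * N]                                    # t_0(x) = 1
    if n_max >= 1:
        t.append([(2 * x + 1 - N) / N for x in range(N)])   # t_1(x)
    for n in range(2, n_max + 1):
        t.append([
            ((2 * n - 1) * t[1][x] * t[n - 1][x]
             - (n - 1) * (1 - (n - 1) ** 2 / N ** 2) * t[n - 2][x]) / n
            for x in range(N)
        ])
    return t
```

A Tchebichef moment T_nm then weights the image by t_n(x) t_m(y) and divides by the squared norms ρ(n, N) ρ(m, N); the invariants T̃ tabulated above are normalized combinations of these moments.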
Table 5. The initialization parameters of the multi-class SVM (RBF refers to the radial basis function).
Parameter   Value                   Description
s           epsilon-SVM             SVM type
t           Gaussian (RBF) Kernel   Kernel function of SVM
C           0.01                    Penalty parameter of SVM
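Table 5's kernel choice can be made concrete: the Gaussian (RBF) kernel measures the similarity of two moment-invariant feature vectors, while the small penalty C = 0.01 keeps the margin soft. A minimal pure-Python sketch (gamma is an assumed illustrative value, and the feature vectors are rows (a) and (h) of Table 2):

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

# Two TMI feature vectors of blocky semicontinuous samples (Table 2, rows (a) and (h)):
fa = [-3.8519, 4.9641, -2.6216, 2.9546, -5.1129, -3.2415]
fh = [-3.8440, 4.9537, -2.6109, 2.9345, -5.0780, -3.2084]
```

Because the two vectors come from the same facies class, K(fa, fh) stays close to its maximum of K(fa, fa) = 1, which is exactly the behavior the multi-class SVM exploits.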
Table 6. The detection rate of the testing sample set.
Parameter          Hu       Zernike   Tchebichef
Detection amount   20       27        28
Detection rate     64.52%   87.10%    90.32%
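The detection rates in Table 6 are consistent with a 31-image test set (an inference from the percentages, since the set size is not restated in this section):

```python
detected = {"Hu": 20, "Zernike": 27, "Tchebichef": 28}
total = 31  # inferred: 20/31 = 64.52%, 27/31 = 87.10%, 28/31 = 90.32%
rates = {name: round(100 * n / total, 2) for name, n in detected.items()}
print(rates)  # {'Hu': 64.52, 'Zernike': 87.1, 'Tchebichef': 90.32}
```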
Lu, A.; Honarvar Shakibaei Asli, B. Seismic Image Identification and Detection Based on Tchebichef Moment Invariant. Electronics 2023, 12, 3692. https://doi.org/10.3390/electronics12173692