Article

OmniSR-M: A Rock Sheet with a Multi-Branch Structure Image Super-Resolution Lightweight Method

1
Institute of Unconventional Oil & Gas Research, Northeast Petroleum University, Daqing 163318, China
2
Research Institute of Exploration and Development, Daqing Oilfield Company Ltd., Daqing 163453, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(7), 2779; https://doi.org/10.3390/app14072779
Submission received: 5 January 2024 / Revised: 13 February 2024 / Accepted: 14 February 2024 / Published: 26 March 2024
(This article belongs to the Special Issue Applications of Artificial Intelligence in Petroleum Geology)

Abstract

With the rapid development of digital core technology, the acquisition of high-resolution rock thin section images has become crucial. Due to the limitations of optical principles, thin section imaging involves a trade-off between resolution and field of view. To solve this problem, this paper proposes a lightweight, fully aggregated network with a multi-branch structure for the super-resolution of rock thin section images. Experimental results on a rock thin section dataset demonstrate that the improved method, called OmniSR-M, achieves significant enhancement compared to the original OmniSR method and also surpasses other state-of-the-art methods. OmniSR-M effectively recovers image details while maintaining its lightweight nature. Specifically, OmniSR-M reduces the number of parameters by 26.56% and the computation by 27.66% compared to OmniSR. Moreover, this paper quantitatively analyzes both the facies porosity rate and grain size features in the application scenario. The results show that the images generated by OmniSR-M successfully recover key information about the rock thin section.

1. Introduction

Rock thin sections are an important research object in the field of geology. Geologists can determine the type, composition, and structure of rocks and reveal the conditions and processes of rock formation by observing the texture, color, and shape of the thin sections. In addition, the development of rock pores and throats can be obtained via thin sections, which in turn can be used to evaluate the storage capacity of oil and gas reservoirs. A rock thin section is a sample prepared by cutting a rock sample to a thickness of about 30 microns and going through preparatory steps such as grinding and polishing; the thin section is then observed and analyzed using an optical or electron microscope. However, the acquisition of rock thin section images is constrained by imaging equipment, leading to a trade-off between the field of view and resolution. To overcome this limitation, super-resolution reconstruction technology is considered a viable solution.
Traditional interpolation methods generally improve the resolution of an image by interpolating between pixels, but they are better suited to smooth images. When dealing with images containing high-frequency information, blurring and distortion occur because traditional interpolation methods mainly focus on color smoothing between pixels, ignoring high-frequency information such as edges and details. This problem has been effectively mitigated since the first single-image super-resolution method based on deep learning was proposed [1]. High-frequency information can be well preserved by deep learning methods, which are widely used for natural images, medical images, satellite and aerial images, military applications, and robotics [2,3,4,5], as well as in the field of geophysics [6,7,8,9,10]. However, the traditional pixel-wise loss functions used in deep learning methods often result in generated images that look too smooth and lack detail, making them less capable of reconstructing texture in high-frequency images such as thin sections. To solve this problem, some studies have proposed super-resolution methods based on perceptual loss [11,12,13,14,15,16], but since this class of methods focuses more on improving human-perceived image quality, it is more applicable to natural images.
The attention mechanism is better able to capture image details, structure, and texture information [17], focusing more on pixel-level reconstruction. To achieve aggregation in both the spatial and channel dimensions, omnidirectional self-attention was proposed for fully aggregated networks, realizing the full fusion of multi-scale features, including local, mid-range, and global scales [18]. Therefore, this paper proposes a lightweight super-resolution method based on a fully aggregated network with a multi-branch structure, compares it with several state-of-the-art super-resolution methods [16,18,19,20,21], and quantitatively evaluates them in terms of structural similarity, peak signal-to-noise ratio, and perceptual quality. In addition, starting from the application of thin section images, this paper proposes using the large model Segment Anything (SAM) for semantic segmentation to compute grain size information and compare it with the high-definition images, in order to assess the quality of the model's restoration at the edges of the grains [22]. The traditional threshold segmentation method for extracting the facies porosity rate was also used to compare the effectiveness of the different algorithms.
In this work, the main contributions can be summarized as follows:
  • We propose a fully aggregated network with a multi-branch structure, including a spatial attention mechanism and a channel attention mechanism, which learns features at different scales better than the original fully aggregated network structure.
  • The OmniSR-M method reduces model FLOPs and model parameters by 27.66% and 26.56%, respectively, compared to the original method, making it more lightweight.
  • For the first time, the large segmentation model SAM is applied to particle size analysis of rock thin sections, and the super-resolution results are evaluated through two application scenarios, namely, the facies porosity rate and grain size.

2. Materials and Methods

2.1. Dataset

The dataset for this paper was derived from cores from 21 wells in the Xinjiang region, totaling 650 thin section images: 430 with a resolution of 2580 × 1944 and 220 with a resolution of 4080 × 3072. All low-resolution images used for training were obtained by downsampling the HR images by a factor of 4. The dataset was randomly divided into two parts, with 520 images in the training set and 130 images in the test set. All training and testing processes were run on NVIDIA Tesla V100 GPUs, and PyTorch was used to implement the network structure of this paper.
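As a concrete illustration of this preparation step, the sketch below generates the ×4 bicubic LR counterparts and a random 520/130 split; the folder layout and file naming are assumptions for illustration, not the authors' actual pipeline.

```python
import random
from pathlib import Path
from PIL import Image

def make_lr(hr_path: Path, lr_dir: Path, scale: int = 4) -> None:
    """Create a x4 bicubic low-resolution counterpart of one HR thin section image."""
    hr = Image.open(hr_path).convert("RGB")
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    lr_dir.mkdir(parents=True, exist_ok=True)
    lr.save(lr_dir / hr_path.name)

hr_paths = sorted(Path("data/hr").glob("*.png"))       # hypothetical folder layout
random.seed(0)
random.shuffle(hr_paths)
train_set, test_set = hr_paths[:520], hr_paths[520:]   # 520 training / 130 test images
for split, paths in (("train", train_set), ("test", test_set)):
    for p in paths:
        make_lr(p, Path("data/lr_x4") / split)
```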

2.2. Attention Mechanisms

Attention mechanisms include the spatial attention mechanism and the channel attention mechanism [23,24], both of which have been widely used in super-resolution tasks to improve the quality of image reconstruction since they were proposed.
The channel attention mechanism measures the importance of each channel and dynamically adjusts it to highlight the more important ones, thus improving the reconstruction quality. A schematic diagram of the channel attention mechanism is shown in Figure 1. In this process, the model learns a weight vector that assigns different weights to the different channels of the input feature map. These weights can be applied to the feature map of each channel to enhance or suppress the information of a particular channel. The channel attention mechanism is usually implemented using global average pooling or global maximum pooling [25], and the calculation process can be expressed as follows:
Z_c = AvgPool(X_c)

where Z_c is the feature statistic of the c-th channel, AvgPool denotes the global average pooling operation, and X_c denotes the c-th channel of the input feature map.

W_c = σ(F(Z_c))

where W_c is the weight of the c-th channel, F denotes a fully connected layer including the activation function, and σ denotes the activation function.

Y_c = W_c · X_c

where Y_c is the adjusted feature map of the c-th channel, W_c is the weight of the corresponding channel, and X_c is the c-th channel of the input feature map.
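A minimal PyTorch sketch of this channel attention computation is given below (global average pooling, a small fully connected bottleneck, a sigmoid gate, and channel-wise rescaling). The reduction ratio and layer sizes are assumptions for illustration rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # Z_c = AvgPool(X_c)
        self.fc = nn.Sequential(                      # W_c = sigma(F(Z_c))
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # Y_c = W_c * X_c
```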
Spatial attention mechanisms differ from channel attention mechanisms in that they focus more on "where" the informative content lies; they better capture the dependencies between pixels, provide global contextual information, and offer a larger receptive field. This allows the model to focus on different areas of the input image when generating high-resolution images, so as to better capture details and textures. The structure of the spatial attention mechanism is shown in Figure 2. Unlike the channel attention mechanism, the spatial attention mechanism first calculates a positional energy score and then converts the energy scores into weights, usually using the SoftMax function to ensure that the weights sum to 1, thus forming a probability distribution [26]. The computational process of the spatial attention mechanism can be expressed as follows:
E_(i,j) = F(X_(i,j))

where E_(i,j) is the energy score at location (i, j) and F is a learnable function that usually includes a convolutional layer and an activation function.

W_(i,j) = exp(E_(i,j)) / Σ_(k,l) exp(E_(k,l))

where W_(i,j) is the weight at position (i, j), E_(k,l) is the energy at the corresponding position, and the denominator normalizes over the energies at all positions.

Y(i, j) = Σ_(k,l) W(k, l) · X(i + k, j + l)

where Y(i, j) is the generated output, W(k, l) are the weights in the weight matrix, and X(i + k, j + l) is the corresponding pixel or feature in the input feature map.
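As a companion to the equations above, the following sketch computes a per-position energy map with a small convolution, normalizes it over all positions with SoftMax, and uses the resulting weights to reweight the input features. The 3 × 3 energy convolution and single-channel energy map are assumptions for illustration, and the weighted neighborhood aggregation of the last equation is simplified to element-wise reweighting here.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Illustrative spatial attention: energy map -> SoftMax weights -> reweighting."""

    def __init__(self, channels: int):
        super().__init__()
        self.energy = nn.Conv2d(channels, 1, kernel_size=3, padding=1)  # E_(i,j) = F(X_(i,j))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        e = self.energy(x).view(b, 1, h * w)
        weights = torch.softmax(e, dim=-1).view(b, 1, h, w)  # W_(i,j), sums to 1 over all positions
        return x * weights
```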
The channel attention mechanism and the spatial attention mechanism have different focuses, and effectively integrating their respective features helps improve the super-resolution quality of thin section images, as has been demonstrated in the BAM and CBAM studies [5,10,21,23,24].

2.3. Omnidirectional Self-Attention Mechanism Module

In 2023, Wang et al. introduced a novel module called the Omnidirectional Self-Attention (OSA) mechanism [18]. Unlike the BAM and CBAM approaches, the OSA module aims to provide a more comprehensive self-attention mechanism to improve the performance of models. This attention mechanism module differs from existing self-attention mechanisms in that it considers both spatial and channel contextual information. The OSA module is able to transfer spatial and channel information compactly, which better solves the problem that important features will be distributed in different dimensions as the depth of the model deepens, and this mechanism is important for lightweight models.
The OSA module first embeds the input feature X ∈ R^((H×W)×C) into the query matrix Q_s, the key matrix K_s, and the value matrix V_s, and generates a spatial attention map of size (H×W) × (H×W) by calculating the correlation between Q_s and K_s, where H, W, and C are the height, width, and channel number of the input, respectively. At this stage, a windowing strategy can be used to reduce resource overhead. The spatial attention map is then applied to the value matrix V_s to compute the aggregated result for the spatial context; this step typically involves multiplying the spatial attention map with the value matrix and summing the results to generate an intermediate aggregated feature map. Next, the query, key, and value matrices are rotated to generate the transposed query matrix Q_c, the transposed key matrix K_c, and the value matrix V_c, with dimensions changed to C × (H×W), for the channel self-attention operation. A channel attention map of size C × C is generated by calculating the correlation between the query and the key, modeling the relationship between different channels. The channel attention map is then applied to the value matrix V_c to compute the aggregation result of the channel context; this step typically involves multiplying the channel attention map with the value matrix and summing the results to generate the features of the channel context. Finally, the final aggregation result Y_OSA, a tensor of size (H×W) × C containing the output of the entire OSA block, is obtained by reverse-rotating the output of the channel self-attention. This process can be represented as follows:
Q_s = X · W_q
K_s = X · W_k
V_s = X · W_v
Y_s = SoftMax(Q_s · K_s^T) · V_s

where Q_s is the spatial query matrix, K_s is the spatial key matrix, V_s is the spatial value matrix, X ∈ R^((H×W)×C) denotes the input features, T is the transpose operation, Y_s is the spatial aggregation feature, and SoftMax is the activation function.

Q_c = R(Q)
K_c = R(K)
V_c = R(V)
Y_c = SoftMax(K_c · Q_c^T) · V_c
Y_OSA = R^(−1)(Y_c)
where Q_c is the channel query matrix; K_c is the channel key matrix; V_c is the channel value matrix; Q, K, and V are the query, key, and value projections of the input defined above; R is the rotate operation, which is applied to Q, K, and V to obtain Q_c, K_c, and V_c; SoftMax is the activation function; Y_c is the channel aggregation feature; and Y_OSA is the final aggregation result.
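A simplified PyTorch sketch of this two-stage computation is shown below: spatial self-attention over the H × W positions, a "rotation" (transpose) to C × (H × W) for channel self-attention, and a reverse rotation at the end. The windowed and sparse sampling strategies of the real OSA block are omitted, the 1/sqrt(d) scaling is a standard addition, and feeding the spatial output into the channel stage is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class OmniSelfAttentionSketch(nn.Module):
    """Illustrative spatial-then-channel self-attention (not the full OSA block)."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj_s = nn.Linear(channels, 3 * channels)  # Q_s, K_s, V_s = X W_q, X W_k, X W_v
        self.proj_c = nn.Linear(channels, 3 * channels)  # projections for the channel stage

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H*W, C) -- input features flattened over spatial positions
        q_s, k_s, v_s = self.proj_s(x).chunk(3, dim=-1)
        attn_s = torch.softmax(q_s @ k_s.transpose(-2, -1) / q_s.shape[-1] ** 0.5, dim=-1)
        y_s = attn_s @ v_s                                # spatial aggregation, (B, H*W, C)

        q, k, v = self.proj_c(y_s).chunk(3, dim=-1)
        q_c, k_c, v_c = (t.transpose(-2, -1) for t in (q, k, v))  # rotate to (B, C, H*W)
        attn_c = torch.softmax(k_c @ q_c.transpose(-2, -1) / q_c.shape[-1] ** 0.5, dim=-1)
        y_c = attn_c @ v_c                                # channel aggregation, (B, C, H*W)
        return y_c.transpose(-2, -1)                      # reverse rotation back to (B, H*W, C)
```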
OSA is better able to mine deep features than BAM and CBAM, which only model channel-wise and spatial importance without further feature information exchange. To fully utilize the OSA block, a more compact OSAG module is constructed to achieve multi-scale hierarchical feature extraction and obtain semantic information at multiple scales (local, mid-range, and global) for progressive receptive-field feature aggregation. The OSAG module is divided into three blocks representing local-, medium-, and global-level feature extraction. The local convolution block (LCB) extracts local detail information using separable convolutions. In the medium and global stages, Meso-OSA and Global-OSA are introduced, which use local window self-attention and sparse sampling self-attention, respectively, to obtain the corresponding semantic information. The OSAG module thus attends to information at different scales, enriching the features and expanding the receptive field [4].

2.4. Multi-Branch Structure

In order to make the OSAG module more lightweight and efficient while retaining multi-scale feature information, this paper proposes an omni-scale aggregation group (OSAG) with a multi-branch structure. The image features are first extracted by the shallow feature extraction layers Net64, Net32, and Net16, where 64, 32, and 16 are the numbers of feature channels output by each network. The first branch, Net64, mainly retains global semantic information and handles 64-channel feature maps. The second branch, Net32, is responsible for aggregating local region information and processes 32-channel feature maps. The third branch, Net16, learns local details and handles 16-channel feature maps. The three branches perform deep feature extraction in parallel, using residual structures to combine the original shallow features with the deep features extracted by the OSAG module. Inverse convolution usually refers to transposed convolution, also known as fractionally strided convolution or upsampling convolution, a commonly used upsampling method whose principle is shown in Figure 3 [23,24]. In this paper, this operation is used to resize the feature maps so that features of different scales can be fused into a multi-scale output spanning global to local features. This design takes full advantage of extracting multi-scale features in parallel with multiple lightweight sub-networks, retaining the multi-scale modeling capability of OSAG while improving computational efficiency, making the OSAG module better suited to lightweight and speed-critical scenarios.
Specifically, the shallow features of the image are first extracted using a multilayer 3 × 3 convolutional kernel, and the output feature maps are of different sizes, 64 × H × W , 32 × H × W , and 16 × H × W , respectively. Then, the deep features on different scales are extracted separately by the OSAG module, the residual structure fuses the original shallow features, and finally, the features are fused using the inverse convolution operation. This process can be described as follows:
F_64 = Conv_(3×3)(X)
F_32 = Conv_(3×3)(X)
F_16 = Conv_(3×3)(X)

where X is the input image and F_64, F_32, and F_16 denote the 64-channel, 32-channel, and 16-channel feature maps, respectively.

F′_64 = OSAG(F_64) + F_64
F′_32 = OSAG(F_32) + F_32
F′_16 = OSAG(F_16) + F_16
F_fusion = Conv_(1×1)(F′_64, C) + Conv_(1×1)(F′_32, C) + Conv_(1×1)(F′_16, C)

where F_64, F_32, and F_16 are the feature maps extracted by the shallow network; F′_64, F′_32, and F′_16 are the deep features extracted by the OSAG module; Conv_(1×1) is the 1 × 1 convolution, mainly used to adjust the number of channels; and C is the required number of channels.
In summary, the design of OSAG modules with a multi-branch structure proposed in this paper not only improves the richness of feature representations but also makes significant progress in terms of light weight and efficiency.
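To make the data flow concrete, the following condensed sketch implements the branch-and-fuse computation given by the equations above: three 3 × 3 convolutions produce 64-, 32-, and 16-channel shallow features, each branch is refined by its own OSAG block with a residual connection, and 1 × 1 convolutions map all branches to a common channel count before summation. The OSAG blocks are passed in as a factory (identity stand-ins in the usage example), and C = 64 output channels is an assumption.

```python
import torch
import torch.nn as nn

class MultiBranchBody(nn.Module):
    """Illustrative three-branch shallow extraction + OSAG refinement + 1x1 fusion."""

    def __init__(self, osag_factory, out_channels: int = 64):
        super().__init__()
        self.shallow = nn.ModuleList(nn.Conv2d(3, c, 3, padding=1) for c in (64, 32, 16))
        self.osag = nn.ModuleList(osag_factory(c) for c in (64, 32, 16))
        self.fuse = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in (64, 32, 16))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = 0
        for conv, osag, proj in zip(self.shallow, self.osag, self.fuse):
            f = conv(x)              # F_64 / F_32 / F_16
            f = osag(f) + f          # F'_c = OSAG(F_c) + F_c  (residual connection)
            fused = fused + proj(f)  # 1x1 conv to a common channel count, then sum
        return fused

# Usage with identity stand-ins for the OSAG blocks:
body = MultiBranchBody(lambda c: nn.Identity())
out = body(torch.randn(1, 3, 48, 48))   # -> torch.Size([1, 64, 48, 48])
```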

2.5. Network Architecture

The overall network structure combines the multi-branch design with aggregation groups based on the OSA attention mechanism to build a higher-performance lightweight super-resolution model. The network structure is shown in Figure 4. It mainly includes a multi-branch shallow extraction module, a deep extraction module, a transposed-convolution upsampling module, and an image reconstruction module. Given an LR input image, the three-branch structure first extracts shallow features, which then enter their respective OSAG modules for deep feature extraction; the features are fused after the number of channels is adjusted by convolution, and the image is finally reconstructed at super-resolution. OSAG mainly consists of the LCB, Meso-OSA, Global-OSA, and Enhanced Spatial Attention (ESA) modules [18].

2.6. Loss Function

The loss function used in this paper is L1, described by the following formula [27]:
L1(I_HR, Î_HR) = (1/N) Σ_(i=1)^(N) | I_HR(i) − Î_HR(i) |

where I_HR denotes the true high-resolution image and Î_HR denotes the high-resolution image estimated by an algorithm or method.
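This objective corresponds directly to PyTorch's built-in L1 loss; `sr` and `hr` below are hypothetical batches of reconstructed and ground-truth images.

```python
import torch
import torch.nn as nn

criterion = nn.L1Loss()          # mean of |I_HR(i) - estimated I_HR(i)| over all pixels
sr = torch.rand(4, 3, 96, 96)    # super-resolved batch (placeholder)
hr = torch.rand(4, 3, 96, 96)    # ground-truth HR batch (placeholder)
loss = criterion(sr, hr)
```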

3. Experimental Results and Their Analysis

In this section, we provide an exhaustive analysis and evaluation of the super-resolution (SR) results on the validation set. The purpose of this analysis is to gain detailed insight into the effectiveness of the proposed super-resolution technique on actual datasets. The experimental results are explored from multiple perspectives, including comparison with state-of-the-art SR techniques, image quality assessment, pore space segmentation effects, and thin section granularity analysis, thus providing a comprehensive assessment of the lightweight super-resolution approach proposed in this paper.

3.1. Comparative Analysis of Different OSAG Structural Designs

In this paper, a test dataset consisting of 130 rock thin section images was used to quantitatively evaluate different OSAG structures, and the results are shown in Table 1, which compares models with different numbers of OSAG layers as well as different OSAG structural designs. As can be observed from Table 1, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the OmniSR model first increase and then decrease as the number of OSAG layers grows. When the number of layers reaches five, the performance drops dramatically, indicating that too deep an OSAG structure is not conducive to generalization. This means that increasing the number of layers can improve the feature representation of the model, but too many layers may lead to overfitting, which in turn degrades performance. OmniSR obtained a PSNR of 35.715 and an SSIM of 0.8741 with a four-layer OSAG, suggesting that four OSAG layers are a good choice for this task and dataset, enabling the best reconstruction.
The multi-branch structural design of OmniSR-M proposed in this paper achieves performance close to that of the four-layer OmniSR while using only two OSAG layers, which highlights the effectiveness of the structural optimization. In addition, the new model reduces the computation and parameter counts by 27.66% and 26.56%, respectively, relative to the four-layer OmniSR model, an observation that highlights the trade-off between resource efficiency and performance and provides strong support for the lightweight and efficient nature of the model.

3.2. Comparison with the SOTA SR Methodology

In this paper, we evaluate performance by training several state-of-the-art super-resolution models and quantitatively evaluating them using metrics such as PSNR, SSIM, and the natural image quality evaluator (NIQE). PSNR and SSIM are criteria commonly used to evaluate image quality and are particularly suitable for assessing super-resolution algorithms. Based on the data in Table 2, the scores of the different models in terms of PSNR and SSIM can be observed. The OmniSR and OmniSR-M models achieved the highest PSNR and SSIM: OmniSR reached 35.7151 and 0.8741, and OmniSR-M reached 35.7139 and 0.8734, respectively, demonstrating their excellent ability to preserve image detail and structure. In addition, the PSNR and SSIM metrics of OmniSR-M and OmniSR are very close to each other, indicating that the multi-branch design does not degrade the reconstruction quality.
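For reference, PSNR and SSIM figures of this kind can be computed per image pair with a recent version of scikit-image as sketched below; NIQE is not included in scikit-image and would come from a separate implementation, which is an assumption of this sketch.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(hr: np.ndarray, sr: np.ndarray):
    """hr, sr: uint8 RGB arrays of identical shape; returns (PSNR, SSIM)."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
    return psnr, ssim
```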
Using sample 0003 in Figure 5 as an example, the texture reconstruction of the RCAN, OmniSR, and OmniSR-M algorithms is better, which can be seen in the break in the upper left corner; RCAN, OmniSR, and OmniSR-M retained the break features, while the other algorithms almost ignored them. In terms of quantitative metrics, RCAN, OmniSR, and OmniSR-M achieved PSNR values of 26.52, 26.71, and 26.71, respectively.
NIQE is another image quality assessment metric, and lower NIQE scores usually indicate higher image quality. On the NIQE metric, OmniSR-M achieved the lowest value of 7.1823, indicating that it generates images with the highest visual quality. The OmniSR model obtained a score of 7.3200, indicating that it also performs well in generating high-quality images. Compared to the other models, OmniSR and OmniSR-M achieved optimal results on both objective and subjective metrics.
In summary, it can be concluded that the OmniSR and OmniSR-M models achieved significant performance enhancement compared to the existing models ESPCN, IDN, RCAN, and RDN. The multi-branch structure enables OmniSR-M to improve the subjective visual quality to a certain extent. Although it does not surpass OmniSR in the objective quantitative indexes, it achieves similar quantitative results, which indicates that the super-resolution method with the multi-branch structure proposed in this paper is competitive.

3.3. Pore Space Segmentation Analysis

To focus on a specific application scenario involving thin section images, this paper extracts the facies porosity rate of the thin section image using threshold segmentation to evaluate the reconstruction effects of the different methods. Threshold segmentation is a basic image segmentation method whose main idea is to choose a threshold value from the grayscale histogram and then divide the image into different regions so that the pixels within each region have similar attributes or features [28]. When thin section images are used to determine the facies porosity rate, threshold segmentation separates the pore portion of the image from the rest of the region in order to quantify the proportion of pores within the whole image. Since only part of the whole thin section image was cropped for the calculation in the experiment, the resulting facies porosity rate is large, but this does not affect the comparison of the errors between the facies porosity rates of the images reconstructed by the various models and that of the HR image.
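A minimal sketch of this porosity estimate is shown below: an automatically chosen grayscale threshold (Otsu's method, used here as one common choice; the paper does not name the exact thresholding rule) separates pore pixels from the rest, and the facies porosity rate is the pore-pixel fraction of the cropped region. Treating pores as the darker phase is also an assumption.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu

def facies_porosity(rgb_image: np.ndarray) -> float:
    """Return the pore-pixel percentage of a thin section crop (illustrative)."""
    gray = rgb2gray(rgb_image)          # grayscale histogram is the basis for thresholding
    t = threshold_otsu(gray)
    pores = gray < t                    # assumption: pores appear darker than the grains
    return float(pores.mean()) * 100.0  # percentage of pore pixels in the crop
```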
For the evaluation of the facies porosity rate, the rock thin section data of four samples, namely 0003, 0054, 0311, and 0617, were randomly selected for the comparison of facies porosity calculations; the detailed results are shown in Table 3. The HR image is used as the baseline for extracting the facies porosity rate, which is then compared with the porosity of the reconstruction results of each advanced super-resolution method, and the error is calculated to quantify the different methods. Taking sample 0054 as an example, as shown in Figure 6, the HR facies porosity rate is 24.68%, and the LR result shows that downsampling introduced an error in the image porosity occupancy. In contrast, OmniSR and OmniSR-M yield facies porosity rates closest to that of the original HR image (24.78% and 24.72%, respectively).
The pore occupancy results for OmniSR and OmniSR-M are closest overall to the value of HR. As in sample 0617, the pore occupancy error for the OmniSR series of models is very small and accurately recovers the value of HR. The reconstructed pore occupancy ratios of other models ESPCN, IDN, RCAN, etc. are not accurate enough and differ somewhat from the HR values. This once again validates that the OmniSR family of models can recover high-quality pore features in rock thin section image samples. The effect of multi-branch OmniSR-M was close to that of single-branch OmniSR, maintaining the accurate recovery of the pores. Test results on four samples demonstrate that the OmniSR series of models can achieve excellent performance in super-resolution applications of rock thin section pore images.

3.4. Particle Analysis

In order to better evaluate the differences between the models, this paper introduces the large semantic segmentation model SAM and randomly selects four samples to evaluate the super-resolution results of the different models [22]. SAM is mainly used to evaluate how each model reconstructs the grain edges. As shown in Figure 7, grain size statistics of the rock thin sections were first obtained from the input HR images and from the super-resolution (SR) maps of each model using semantic segmentation. The Udden–Wentworth particle size classification criterion (φ) was then used to plot the frequency diagram of the particle size distribution.
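The grain statistics can be sketched with the public segment-anything package as follows: automatic mask generation yields one mask per grain, an equivalent circular diameter is derived from each mask area, and the Udden–Wentworth value is φ = −log2(D in mm). The checkpoint path and the pixel-to-millimetre scale are assumptions for illustration.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_l"](checkpoint="sam_vit_l.pth")   # hypothetical checkpoint path
mask_generator = SamAutomaticMaskGenerator(sam)

def grain_phi_values(image_rgb: np.ndarray, mm_per_pixel: float) -> np.ndarray:
    """Segment grains with SAM and return their Udden-Wentworth phi values."""
    masks = mask_generator.generate(image_rgb)                      # one dict per detected grain
    areas_px = np.array([m["area"] for m in masks], dtype=float)
    diameters_mm = 2.0 * np.sqrt(areas_px / np.pi) * mm_per_pixel   # equivalent circular diameters
    return -np.log2(diameters_mm)                                   # phi = -log2(D in mm)
```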
The detailed calculation results are shown in Table 4. From the comparison of the statistical indexes of the particle size distribution in this table, it can be seen that the OmniSR-M model proposed in this paper shows significant advantages and stability in recovering both the particle size details and the overall distribution characteristics of the different samples. Specifically, the recovery of the average particle size is relatively close across models; however, in recovering the maximum and minimum particle size ranges, OmniSR-M shows only a small deviation from the true HR values, while other methods such as RDN show a large estimation bias that falls outside the true HR range. This indicates that OmniSR-M more accurately and completely recovers the full granularity interval contained in the image, preventing range reduction or expansion. Additionally, for the particle size standard deviation, which indicates the diversity of the particle size distribution, the OmniSR-M results are closest to the HR values, implying that it recovers a degree of particle size variation that more closely approximates the original image, whereas the other methods omit some of the particle size components to varying degrees. It should be noted that different sample images have their own particle size distribution characteristics, such as finer or denser particle sizes, but the OmniSR-M model performs optimally and stably on all samples, which proves that it possesses strong adaptability and can flexibly handle a wide range of particle size distributions. This can also be verified from the particle size distribution frequency plots in Figure 8, where the distribution of the OmniSR-M reconstruction results is essentially the same as that of the HR images, with the highest similarity. Finally, the metrics of OmniSR and its multi-branch version, OmniSR-M, almost completely overlap, which indicates that the introduction of the branch structure does not adversely affect the recovery of critical granularity details while delivering a lighter model.
In addition, the validity of the methods was verified on the 130 test images; specifically, the L version of SAM was used to segment the particles in the super-resolution results of all methods. The number of particles and the particle sizes were then computed, and the detailed results are given in Appendix A. The particle counts are visualized in Figure 9 and the particle size information in Figure 10.
Figure 9a shows the particle counts computed from the super-resolution results of the different methods; the OmniSR-M method proposed in this paper is essentially consistent with the other methods in terms of particle count statistics. However, according to the error statistics of the particle counts of the different super-resolution methods relative to the HR images in Figure 9b, the OmniSR-M method has an average error of only 1.5, while the other advanced methods have average particle-count errors greater than 2. This demonstrates the effectiveness of the proposed method, whose super-resolution results are closer to those of the HR images. In analyzing these details, we observed that the average particle size curve of the proposed OmniSR-M method overlaps most closely with the HR average particle size curve (Figure 10a, also verified in Figure 10b), and the average particle size error of OmniSR-M is the smallest. This indicates that the proposed method is more advanced in edge detail recovery.
In summary, the comprehensive granularity analysis and statistical comparisons show that the OmniSR-M model proposed in this paper achieves the best reconstruction results, successfully recovering high-fidelity information for individual samples in terms of both granularity details and overall distribution features, and demonstrating its advantages in solving the image super-resolution problem.

4. Conclusions

In order to resolve the conflict between field of view and resolution in rock thin section image acquisition, this paper proposes a lightweight image super-resolution network, OmniSR-M, based on omnidirectional feature aggregation with a multi-branch structure. The omni self-attention mechanism is used to achieve synchronous interaction of spatial and channel features and to enhance the capability of contextual modeling. The multi-branch structure allows multi-scale representations to be extracted, thereby enriching the receptive field coverage.
Extensive experimental results show that OmniSR-M obtains significant improvements in the image quality metrics PSNR, SSIM, and NIQE, demonstrating excellent super-resolution reconstruction compared with other state-of-the-art models. Compared with the original single-branch OmniSR model, the computation and parameter counts of OmniSR-M are reduced by 27.66% and 26.56%, respectively, making it more lightweight and efficient; the training process also shows that OmniSR-M converges faster.
In addition, OmniSR-M demonstrated advanced performance in recovering key information in rock thin section images, such as the facies porosity rate and the statistical features of the grain size distribution, and its reconstruction results were highly close to the original images. This proves that OmniSR-M successfully preserves both the details and the overall feature information of rock thin section images. In the experiments, we also found that OmniSR-M had some denoising effect. In this paper, the large model SAM was introduced for the first time to carry out particle size identification; the application of SAM in the digital rock thin section field is relatively novel, but it shows great potential, and the trend in this field is leaning towards developing larger models to improve performance. Work to fine-tune the SAM model for particle size identification is ongoing. In the future, we will continue to explore lightweight model designs for efficient image recovery. We hope that this work will provide valuable insights and inspiration to the rock thin section image super-resolution community.

Author Contributions

All authors contributed to the study conception and design. T.L. independently completed the study design and data analysis, while C.X. coordinated the sample data and computational resources. T.L. took the lead in writing the article, and L.T. assisted in writing it. Y.M., W.X., J.W. and J.X. proofread all drafts, and C.X. provided primary guidance for the paper. All authors commented on previous versions of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 42172163).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The dataset was jointly produced by the team, so the data are not publicly available.

Conflicts of Interest

Author Weijia Xu was employed by the company Daqing Oilfield Company Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A

Sample ID | HR: D, Num Particles | ESPCN: D, Num Particles | IDN: D, Num Particles | RCAN: D, Num Particles | RDN: D, Num Particles | OmniSR-M: D, Num Particles
30.13233340.13333120.12993110.12953000.13022960.1311307
110.27103380.28773340.28603410.27393370.27253340.2872331
120.26223410.27733400.25963390.27633410.26043480.2755338
130.26173620.27123600.26553580.28173600.26633570.2678351
140.24243290.24113450.24003430.24133380.24183370.2405335
150.24833750.26653590.25743570.25753480.25833540.2581353
260.22414970.23064710.22604930.22604880.22804840.2240493
270.19775210.20025220.19985170.19875120.19935110.1998515
330.23433530.24033510.23393670.23693620.23803650.2357364
370.19305660.19535420.19385570.19655610.19555590.1948557
490.12303840.12863810.12513830.12403850.12213880.1248390
500.36451570.35171530.37241490.36331520.36561500.3735148
540.14835850.15145730.14835840.14765830.14795830.1474586
570.14766300.14906400.14666340.14576340.14656320.1456635
600.31092190.32572020.31482160.30592150.30962030.3124212
620.35564230.35534230.35554220.35684260.35804210.3548420
660.46692310.47002380.46502310.46612350.46442300.4719230
670.6253910.6412880.6443890.6519880.6463890.638890
690.61351240.61891220.61941170.61701250.61561160.6059121
720.67131260.67411230.67201250.66811280.67201240.6756119
730.61121090.58991160.61331070.61961040.61091090.6240106
780.64511410.64071410.62951490.63621460.63681440.6252146
820.8107790.7921810.7823790.7757790.7723800.790180
860.6697880.6783870.6613900.6810870.6426890.675786
890.33914020.34593970.34463990.33864030.33843970.3423397
900.35394540.34934620.35184550.35334600.35444580.3504461
1000.17794960.18044980.17674990.17665020.17595020.1756499
1140.7402270.7187280.7042290.7235280.7237280.718927
1160.25563230.25433270.25083460.25193300.25393310.2524332
1300.38341500.38471570.38221510.38441520.38621480.3750152
1340.23093080.23333120.23133070.23183030.22983060.2285306
1350.4148920.4172860.4107910.4135920.4005930.399993
1450.43121070.40221040.41501160.41061130.41011150.4048107
1540.3446540.3554530.3431630.3446550.3474540.369851
1580.16707180.17657090.16727230.16697200.16697230.1670722
1600.19514840.19734920.19694920.19714840.19924940.1975486
1610.22014600.22114720.22274760.22244670.22194660.2201467
1680.41363450.41223520.42223530.41923440.41693550.4049357
1690.46123370.46483340.46353390.45833320.45953300.4625327
1700.46743210.46433250.45793340.46323240.46643230.4668323
1720.41253630.41703640.41213670.41503650.41213630.4105365
1760.43043590.43593590.43683550.43593560.43273550.4349356
1790.43552440.43672440.44402430.43812410.43562470.4391239
1810.42662250.42982190.43422150.42422290.43162290.4308225
1950.19094750.19164550.18794710.18964620.18934610.1889460
1960.23684870.23554880.23364850.23464810.23524800.2334485
2100.8565640.8008680.8258600.8217670.8149630.824362
2150.29694120.30004110.29874120.29894110.29674090.3004410
2180.31105130.31095110.31075120.31235080.31195130.3101505
2210.18815420.19145460.19185520.19055490.19165450.1910546
2320.64241390.66371350.67511310.66591360.65311380.6690133
2330.59701690.60311600.60851620.58471680.60601660.5969164
2360.30292580.29632650.29132680.29392680.29592760.2982268
2410.19646280.19936170.19676350.19626230.19736210.1973627
2430.21034910.21264800.20874780.20874770.20974690.2093478
2460.20384640.20394810.20144820.20164760.20234820.2014480
2610.41173780.40913850.40363850.40553880.40753790.4086380
2660.49891970.50101930.49551990.49492000.49301940.4959195
2770.36351780.40501730.39721720.39711740.39221740.3863177
2790.21713460.22043450.21603400.21733470.21653490.2158346
2870.23003520.22873530.23243430.23093460.22683440.2348349
2950.20554360.20734500.20394420.20434420.20314400.2055445
2960.22235340.22525170.22205330.21985380.21875420.2185534
3040.35621560.35701580.37001500.37361480.37871480.3723150
3050.38231440.39061420.39071520.39811470.38401500.3870151
3070.14425420.14445490.14325440.14365420.14345470.1433542
3080.14416290.14446270.14296250.14376270.14356330.1432630
3110.21255020.21364970.21115040.21215090.21075070.2117503
3170.25063310.24653340.25853350.24683320.26073290.2460334
3250.17365180.17835260.17485180.17385220.17335240.1722520
3340.22233370.22993310.22183370.22533380.22293380.2204344
3390.27783330.28123240.27443290.27503160.27623150.2746316
3510.23364090.23894150.23204070.23134050.23174040.2343403
3600.21943150.22313180.22743210.22183170.22423180.2247320
3640.35991570.37781630.36821690.36561680.36661590.3705163
3660.24124030.24594070.24324110.24324030.24544060.2413404
3860.29442010.28732160.28572150.29122110.29272080.2926210
3870.4167690.4226690.4110750.4231680.4200680.415968
3970.19744720.19754850.19484750.19674740.19454820.1952477
3980.17275500.18635470.18205500.18225510.18245520.1819549
4100.37941280.39921320.39421350.38891330.39451310.3906128
4110.39041370.38281460.39311390.38321360.38661350.3812134
4160.46151110.49081000.47061070.47601070.47671110.4783107
4190.28893110.30013080.28983130.29113120.28743160.2942309
4240.3706600.4048520.3938590.3900560.3951590.384960
4320.26413550.26873610.26733580.26373590.26813500.2653350
4370.29195480.29315470.29355540.29285530.29305480.2928553
4380.29985580.30055520.30085480.30115450.30045460.2996553
4480.7626940.7460980.73091020.7623930.7552950.773892
4550.4726680.4851640.4535690.4569710.4755630.467269
4560.5148660.5680640.5162670.5621620.5633620.533365
4660.19935000.20285050.19945060.20185110.20185030.2020503
4690.4425940.43711000.4575990.4726950.4582970.446499
4720.16634840.17825040.17434830.17604860.17334840.1753485
4740.33471580.32631770.32061790.32421740.32591710.3295171
4770.31121400.36321610.35971600.36811570.36381600.3637154
4830.16766200.17916170.16856200.16746210.16846210.1679620
4890.19224640.19534540.19004670.19304490.19004590.1917456
4910.25233640.25443680.25213680.25263770.25073640.2526363
4980.6718920.6646990.6648950.6703950.6548970.678995
5080.23074030.24024130.24194060.24064120.23914060.2380408
5110.21814560.21874560.21694640.21374600.21504620.2176455
5140.16876140.16986060.16926070.16886040.17066090.1700610
5160.33881920.33581990.33831980.33821930.34261950.3437195
5190.21994040.21814170.22084080.22214030.22194140.2213411
5210.17745400.18825380.18695390.18765370.18695420.1873540
5270.26753180.27543310.26973310.26813330.26753240.2704327
5310.17305310.17435250.17195190.17325130.17225240.1729528
5330.24073160.24173140.24063160.24133140.24023140.2408310
5350.29984980.29894970.29965010.29934960.29905060.2985501
5370.29852760.29932670.29582760.29822750.29782720.2963273
5380.29744880.29554810.29634820.29724820.29744830.2973482
5410.36434210.36454240.36784210.36494220.36644280.3668426
5450.17856210.17776490.17536430.17616250.17556280.1764621
5470.17716350.18126500.18076450.18076550.18196540.1792651
5510.17826270.18046590.17886570.17906600.17786570.1785651
5520.18146090.18476460.18296350.18216370.18296450.1821637
5690.18566150.18706420.18456370.18536450.18566370.1835650
5710.18176360.18316570.18276540.18176600.18226630.1827649
5730.18276340.18206650.17956590.18076600.18076660.1794664
5770.17896320.18646620.18356560.18356480.18276470.1839644
5840.18366340.18186610.17996490.17996550.18066600.1806658
5930.18106180.18266300.18096270.18026270.17956200.1803628
6000.18106240.18366530.18116470.17886600.18036520.1801658
6030.17856480.17706740.17676780.17556780.17706770.1763673
6170.20294990.20804870.20354850.20144870.20164940.2026485
6180.19854360.21104350.21014300.20934410.21154400.2113437
6360.21214190.21454100.21464170.21564150.21474150.2172408
6470.25542910.25362990.25742950.25822990.25272920.2544298
6480.25772100.28162220.27832240.27402180.28012130.2791213

References

  1. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In Computer Vision—ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2014; Volume 8692, pp. 184–199. ISBN 978-3-319-10592-5. [Google Scholar]
  2. Chen, S.; Ogawa, Y.; Zhao, C.; Sekimoto, Y. Large-Scale Individual Building Extraction from Open-Source Satellite Imagery via Super-Resolution-Based Instance Segmentation Approach. ISPRS J. Photogramm. Remote Sens. 2023, 195, 129–152. [Google Scholar] [CrossRef]
  3. Nascimento, V.; Laroca, R.; Lambert, J.D.A.; Schwartz, W.R.; Menotti, D. Super-Resolution of License Plate Images Using Attention Modules and Sub-Pixel Convolution Layers. Comput. Graph. 2023, 113, 69–76. [Google Scholar] [CrossRef]
  4. Sun, J.; Li, Z.-Y.; Li, P.-C.; Li, H.; Pang, X.-W.; Wang, H. Improving the Diagnostic Performance of Computed Tomography Angiography for Intracranial Large Arterial Stenosis by a Novel Super-Resolution Algorithm Based on Multi-Scale Residual Denoising Generative Adversarial Network. Clin. Imaging 2023, 96, 1–8. [Google Scholar] [CrossRef] [PubMed]
  5. Xu, M.; Wang, Z.; Zhu, J.; Jia, X.; Jia, S. Multi-Attention Generative Adversarial Network for Remote Sensing Image Super-Resolution. 2021. [Google Scholar]
  6. Guo, C.; Gao, C.; Liu, C.; Liu, G.; Sun, J.; Chen, Y.; Gao, C. Super-Resolution in Thin Section of Lacustrine Shale Reservoirs and Its Application in Mineral and Pore Segmentation. Appl. Comput. Geosci. 2023, 19, 100133. [Google Scholar] [CrossRef]
  7. Jackson, S.J.; Niu, Y.; Manoorkar, S.; Mostaghimi, P.; Armstrong, R.T. Deep Learning of Multi-Resolution X-Ray Micro-CT Images for Multi-Scale Modelling. Phys. Rev. 2022, 17, 054046. [Google Scholar]
  8. Liu, Y.; Zhang, Q.; Zhang, N.; Lv, J.; Gong, M.; Cao, J. Enhancement of Thin-Section Image Using Super-Resolution Method with Application to the Mineral Segmentation and Classification in Tight Sandstone Reservoir. J. Pet. Sci. Eng. 2022, 216, 110774. [Google Scholar] [CrossRef]
  9. Wang, Y.D.; Armstrong, R.T.; Mostaghimi, P. Enhancing Resolution of Digital Rock Images with Super Resolution Convolutional Neural Networks. J. Pet. Sci. Eng. 2019, 182, 106261. [Google Scholar] [CrossRef]
  10. Yuan, B.; Li, H.; Du, Q. Enhancing Identification of Digital Rock Images Using Super-Resolution Deep Neural Network. Geoenergy Sci. Eng. 2023, 229, 212130. [Google Scholar] [CrossRef]
  11. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  12. Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  13. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  14. Liu, Y.; Guo, C.; Cao, J.; Cheng, Z.; Ding, X.; Lv, L.; Li, F.; Gong, M. A New Resolution Enhancement Method for Sandstone Thin-Section Images Using Perceptual GAN. J. Pet. Sci. Eng. 2020, 195, 107921. [Google Scholar] [CrossRef]
  15. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Computer Vision—ECCV 2018 Workshops; Leal-Taixé, L., Roth, S., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2019; Volume 11133, pp. 63–79. ISBN 978-3-030-11020-8. [Google Scholar]
  16. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  17. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  18. Wang, H.; Chen, X.; Ni, B.; Liu, Y.; Liu, J. Omni Aggregation Networks for Lightweight Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
  19. Hui, Z.; Wang, X.; Gao, X. Fast and Accurate Single Image Super-Resolution via Information Distillation Network. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 723–731. [Google Scholar]
  20. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  21. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
  22. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.-Y.; et al. Segment Anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–3 October 2023. [Google Scholar]
  23. Park, J.; Woo, S.; Lee, J.-Y.; Kweon, I.S. BAM: Bottleneck Attention Module. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  24. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Computer Vision—ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11211, pp. 3–19. ISBN 978-3-030-01233-5. [Google Scholar]
  25. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  26. Piao, Y.; Ji, W.; Li, J.; Zhang, M.; Lu, H. Depth-Induced Multi-Scale Recurrent Attention Network for Saliency Detection. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7253–7262. [Google Scholar]
  27. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307. [Google Scholar] [CrossRef] [PubMed]
  28. Shi, J.; Zhang, H. Adaptive Local Threshold with Shape Information and Its Application to Object Segmentation. In Proceedings of the 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guilin, China, 13–19 December 2009; pp. 1123–1128. [Google Scholar]
Figure 1. Channel attention mechanism.
Figure 2. Spatial attention mechanism.
Figure 3. Schematic diagram of the inverse convolution. Blue is input (5 × 5), gray is the convolution kernel (3 × 3), green is output (5 × 5); padding = 1, strides = 1.
Figure 4. Fully aggregated network structure with multi-branch structure.
Figure 5. Individual model reconstruction results, taking sample 0003 as an example. (a) The original HR image of the rock thin section; (b) the 200 × 200 area cropped from the original HR image, with the red box marking the sampling area; (c) the LR image after 4× downsampling; (d) the super-resolution result of the ESPCN model; (e) the super-resolution result of the IDN model; (f) the super-resolution result of the RCAN model; (g) the super-resolution result of the RDN model; (h) the super-resolution result of the OmniSR model; and (i) the super-resolution result of the OmniSR-M model.
Figure 6. Facies porosity calculation errors of different model reconstructions. (a) The original image; (b) a 200 × 200 sub-image cropped from the red boxed region; (c) the facies porosity rate for HR, LR, and the super-resolution result of each advanced model.
Figure 7. (a) shows the image of input SAM, and (b) shows the result of semantic segmentation of SAM.
Figure 8. Frequency plot of particle size distribution for four samples.
Figure 9. Statistical chart of the number of particles. (a) Number of particles for different super-resolution methods for 130 samples. (b) Error in the number of particles for different methods with HR.
Figure 10. Particle size error statistical diagram. (a) Average particle size of 130 samples with different super-resolution methods. (b) Particle size error of different methods with HR.
Table 1. Comparative experimental results of different structural designs of OSAG modules.
Model | OSAG Number | Model FLOPs | Model Params | PSNR | SSIM
OmniSR | 1 | 846.998 M | 211.632 K | 35.404 | 0.8716
OmniSR | 2 | 1.423 G | 356.848 K | 35.317 | 0.8727
OmniSR | 3 | 1.998 G | 502.064 K | 34.851 | 0.8712
OmniSR | 4 | 2.574 G | 647.280 K | 35.715 | 0.8741
OmniSR | 5 | 3.150 G | 792.496 K | 33.701 | 0.8492
OmniSR-M | 1 | 1.069 G | 271.600 K | 35.625 | 0.8689
OmniSR-M | 2 | 1.861 G | 475.440 K | 35.714 | 0.8734
Table 2. Comparison results between advanced models.
Model | PSNR | SSIM | NIQE
ESPCN | 33.4577 | 0.8540 | 7.4393
IDN | 34.1920 | 0.8632 | 7.5113
RCAN | 34.5803 | 0.8700 | 7.5288
RDN | 35.0415 | 0.8728 | 7.5041
OmniSR | 35.7151 | 0.8741 | 7.3200
OmniSR-M | 35.7139 | 0.8734 | 7.1823
Table 3. Comparison of relative errors of different methods of determining the facies porosity rate.
Model | Sample 0003 Surface Porosity (%) | Relative Error (%) | Sample 0311 Surface Porosity (%) | Relative Error (%) | Sample 0054 Surface Porosity (%) | Relative Error (%) | Sample 0617 Surface Porosity (%) | Relative Error (%)
HR | 40.74 | – | 24.68 | – | 36.23 | – | 32.72 | –
LR | 41.43 | 0.69 | 24.31 | −0.37 | 35.84 | −0.39 | 33.89 | 1.17
ESPCN | 42.25 | 1.51 | 24.77 | 0.09 | 36.07 | −0.16 | 33.76 | 1.04
IDN | 42.17 | −0.08 | 24.82 | 0.14 | 36.28 | 0.05 | 33.7 | 0.98
RCAN | 40.82 | 0.08 | 24.82 | 0.14 | 36.33 | 0.1 | 33.74 | 1.02
RDN | 40.92 | 0.18 | 24.93 | 0.25 | 36.4 | 0.17 | 33.75 | 8.97
OmniSR | 40.81 | 0.07 | 24.78 | 0.1 | 36.23 | 0 | 33.25 | 0.53
OmniSR-M | 40.71 | −0.03 | 24.72 | 0.04 | 36.18 | −0.05 | 33.17 | 0.45
Table 4. Quantitative results of D (particle size) and φ values for different models on four samples.
Sample No. | Model | D avg | D max | D min | D std | φ avg | φ max | φ min | φ std
0003 | HR | 0.1353 | 0.7628 | 0.0025 | 0.0777 | 3.0961 | 8.6712 | 0.3906 | 0.9050
0003 | ESPCN | 0.1343 | 0.7592 | 0.0025 | 0.0798 | 3.1356 | 8.6712 | 0.3974 | 0.9938
0003 | IDN | 0.1318 | 0.7577 | 0.0025 | 0.0780 | 3.1577 | 8.6712 | 0.4004 | 0.9855
0003 | RCAN | 0.1315 | 0.7582 | 0.0017 | 0.0815 | 3.2204 | 9.1712 | 0.3993 | 1.1599
0003 | RDN | 0.1322 | 0.5579 | 0.0017 | 0.0755 | 3.1805 | 9.1712 | 0.8419 | 1.0612
0003 | OmniSR | 0.1298 | 0.7490 | 0.0017 | 0.0778 | 3.2092 | 9.1712 | 0.4170 | 1.0474
0003 | OmniSR-M | 0.1331 | 0.7616 | 0.0025 | 0.0796 | 3.1528 | 8.6712 | 0.3929 | 0.9910
0054 | HR | 0.1487 | 5.4074 | 0.0030 | 0.2249 | 2.9676 | 8.3975 | −2.4349 | 0.7508
0054 | ESPCN | 0.1517 | 5.4045 | 0.0042 | 0.2265 | 2.9099 | 7.8975 | −2.4342 | 0.6340
0054 | IDN | 0.1486 | 5.4057 | 0.0048 | 0.2251 | 2.9561 | 7.6899 | −2.4345 | 0.6921
0054 | RCAN | 0.1480 | 5.4062 | 0.0024 | 0.2252 | 2.9747 | 8.6899 | −2.4346 | 0.7495
0054 | RDN | 0.1482 | 5.4062 | 0.0024 | 0.2253 | 2.9763 | 8.6899 | −2.4346 | 0.7608
0054 | OmniSR | 0.1487 | 5.4065 | 0.0042 | 0.2258 | 2.9645 | 7.8975 | −2.4347 | 0.7345
0054 | OmniSR-M | 0.1477 | 5.4065 | 0.0038 | 0.2247 | 2.9746 | 8.0290 | −2.4347 | 0.7342
0311 | HR | 0.2125 | 5.3939 | 0.0017 | 0.2450 | 2.4379 | 9.1930 | −2.4313 | 0.7815
0311 | ESPCN | 0.2136 | 5.3932 | 0.0017 | 0.2461 | 2.4269 | 9.1930 | −2.4311 | 0.7744
0311 | IDN | 0.2111 | 5.3934 | 0.0034 | 0.2450 | 2.4474 | 8.1930 | −2.4312 | 0.7563
0311 | RCAN | 0.2121 | 5.3934 | 0.0024 | 0.2440 | 2.4385 | 8.6930 | −2.4312 | 0.7526
0311 | RDN | 0.2107 | 5.3932 | 0.0042 | 0.2443 | 2.4535 | 7.9005 | −2.4311 | 0.7710
0311 | OmniSR | 0.2120 | 5.3932 | 0.0048 | 0.2444 | 2.4383 | 7.6930 | −2.4311 | 0.7473
0311 | OmniSR-M | 0.2117 | 5.3932 | 0.0034 | 0.2448 | 2.4363 | 8.1930 | −2.4312 | 0.7345
0617 | HR | 0.2029 | 0.7080 | 0.0017 | 0.0878 | 2.4790 | 9.1930 | 0.4981 | 0.8907
0617 | ESPCN | 0.2080 | 0.8607 | 0.0048 | 0.0894 | 2.3962 | 7.6930 | 0.2164 | 0.6787
0617 | IDN | 0.2035 | 0.8573 | 0.0034 | 0.0900 | 2.4767 | 8.1930 | 0.2221 | 0.8941
0617 | RCAN | 0.2014 | 0.6497 | 0.0042 | 0.0832 | 2.4545 | 7.9005 | 0.6221 | 0.7421
0617 | RDN | 0.2016 | 0.6479 | 0.0066 | 0.0842 | 2.4590 | 7.2396 | 0.6261 | 0.7593
0617 | OmniSR | 0.1993 | 0.6484 | 0.0017 | 0.0827 | 2.4826 | 9.1930 | 0.6250 | 0.8130
0617 | OmniSR-M | 0.2026 | 0.8603 | 0.0034 | 0.0897 | 2.4710 | 8.1930 | 0.2171 | 0.8390
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Liu, T.; Xu, C.; Tang, L.; Meng, Y.; Xu, W.; Wang, J.; Xu, J. OmniSR-M: A Rock Sheet with a Multi-Branch Structure Image Super-Resolution Lightweight Method. Appl. Sci. 2024, 14, 2779. https://doi.org/10.3390/app14072779

AMA Style

Liu T, Xu C, Tang L, Meng Y, Xu W, Wang J, Xu J. OmniSR-M: A Rock Sheet with a Multi-Branch Structure Image Super-Resolution Lightweight Method. Applied Sciences. 2024; 14(7):2779. https://doi.org/10.3390/app14072779

Chicago/Turabian Style

Liu, Tianyong, Chengwu Xu, Lu Tang, Yingjie Meng, Weijia Xu, Jinhuan Wang, and Jian Xu. 2024. "OmniSR-M: A Rock Sheet with a Multi-Branch Structure Image Super-Resolution Lightweight Method" Applied Sciences 14, no. 7: 2779. https://doi.org/10.3390/app14072779

