Article

A Novel Simulation Method for 3D Digital-Image Correlation: Combining Virtual Stereo Vision and Image Super-Resolution Reconstruction

College of Civil Engineering and Architecture, Zhejiang University, Hangzhou 310058, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4031; https://doi.org/10.3390/s24134031
Submission received: 14 May 2024 / Revised: 14 June 2024 / Accepted: 18 June 2024 / Published: 21 June 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

3D digital-image correlation (3D-DIC) is a non-contact optical technique for full-field shape, displacement, and deformation measurement. Given the high experimental hardware costs associated with 3D-DIC, the development of high-fidelity 3D-DIC simulations holds significant value. However, existing research on 3D-DIC simulation has mainly been carried out through the generation of random speckle images. This study proposes a complete 3D-DIC simulation method involving optical simulation and mechanical simulation and integrating 3D-DIC, virtual stereo vision, and image super-resolution reconstruction technology. Virtual stereo vision can reduce hardware costs and eliminate camera-synchronization errors. Image super-resolution reconstruction can compensate for the decrease in precision caused by image-resolution loss. An array of software tools, including ANSYS SPEOS 2024R1, ZEMAX 2024R1, MECHANICAL 2024R1, and MULTIDIC v1.1.0, is used to implement the simulation. Measurement systems based on stereo vision and virtual stereo vision were built and tested for use in 3D-DIC. The results of the simulation experiment show that when the synchronization error of the basic stereo-vision system (BSS) is within $10^{-3}$ time steps, the reconstruction error is within 0.005 mm, and the accuracy of the virtual stereo-vision system lies between that of the BSS at synchronization errors of $10^{-7}$ and $10^{-6}$ time steps. In addition, after image super-resolution reconstruction technology is applied, the reconstruction error is reduced to within 0.002 mm. The simulation method proposed in this study provides a novel research path for existing researchers in the field while also offering researchers without access to costly hardware the opportunity to participate in related research.

1. Introduction

Digital image correlation (DIC) is a non-contact, image-based optical method for full-field shape, displacement, and deformation measurement [1]. DIC techniques acquire digital images of an object at different loadings using digital-imaging devices and then perform image analysis with correlation-based matching (tracking or registration) algorithms and numerical differentiation approaches to quantitatively extract full-field displacement and strain responses of materials, components, structures, or biological tissues [2,3]. DIC approaches can be classified as 2D-DIC or 3D-DIC according to the measurement dimensions. In 2D-DIC, the optical axis of the camera is typically expected to remain perpendicular to the component surface during measurement, but this orientation is often challenging to maintain in practice. Furthermore, 2D-DIC can measure only the in-plane displacement of the specimen and is unable to measure out-of-plane displacement. However, when specimens are under stress, they can exhibit out-of-plane displacement due to the Poisson effect, which at times can cause 2D-DIC measurements to deviate severely from reality. Therefore, researchers combined DIC with stereoscopic vision principles, extending the measurement range of DIC from 2D to 3D. Compared to 2D-DIC, 3D-DIC possesses greater capabilities, a broader application range, and more accurate measurements, and it has therefore been applied more extensively in various fields. Zhao et al. [4] conducted an in-depth study of split-disk testing methods to evaluate the mechanical properties of filament-wound composites; in their study, they used 3D-DIC to measure hoop strain on the outer surface of ring specimens. Huang et al. [5] explored the fracture behavior of sharp V-notched specimens using a combination of 3D-DIC and caustics. Wei et al. [6] proposed a multi-camera 3D-DIC approach to monitor the deformation of a cassette structure subjected to substantial vibrations from a shaking table. This methodology enabled the measurement of full-field, high-resolution displacement fields for the cassette structure under significant shaking conditions, marking a significant milestone in the field. However, in practical applications, traditional 3D-DIC also has some obvious limitations [1,7], as follows: (1) at least two synchronously triggered cameras are required for measurement, which greatly increases the hardware cost; (2) precise synchronization of cameras is often complex and challenging in scenarios demanding high-speed or high-precision measurements; (3) the unavoidable variations in intensity and non-linear geometric distortions between image pairs may make it difficult to accurately match the two images in stereo, hence reducing the measurement accuracy. Therefore, some researchers have applied the principles of virtual stereo vision to 3D-DIC [8,9,10].
Based on the principles of optics such as reflection and refraction, virtual stereo vision, utilizing a single camera coupled with additional optical components, can capture images of an object from at least two different views in a single shot [11,12,13,14]. Recently, virtual stereo-vision techniques have been successfully combined with 3D-DIC. For instance, Genovese et al. [15] established a bi-prism-based virtual stereo-vision system for 3D-DIC and verified its accuracy for measurement of 3D shapes and deformation. Pankow et al. [16] combined a four-mirror adapter-assisted virtual stereo-vision system with DIC and measured the out-of-plane deformations of an aluminum plate under a shock-wave impact. Mohammed et al. [17] established a miniature portable virtual stereo-vision system using a triangular-prism mirror and two mirrors for 3D-DIC. Compared to the traditional 3D-DIC methods, these approaches present at least the following advantages [8,18,19]: (1) complete elimination of camera-synchronization errors; (2) reduction of the hardware cost by the cost of at least one camera; (3) a typically smaller overall volume of the measurement system, with a compact structural design. However, it is undeniable that these methods also lead to the issue of loss of image resolution. In a measurement system with n virtual cameras, each virtual camera experiences an average loss of (n − 1)/n in image resolution, and the loss of image resolution can impact the subsequent measurement accuracy. Image super-resolution reconstruction techniques are a potential solution to this problem. Image super-resolution reconstruction is a classic research subject in the field of computer vision; it is used with the goal of transforming low-resolution images into high-resolution versions and filling in missing information [20,21,22]. A wealth of literature confirms that image super-resolution reconstruction techniques based on various approaches can enhance image quality to a certain degree [23,24,25,26,27,28]. The primary purpose of applying image super-resolution reconstruction to images captured by the virtual stereo-vision system is to compensate for the precision decrease caused by loss of image resolution.
3D-DIC based on the principles of virtual stereo vision effectively reduces hardware costs, making it more economical than traditional 3D-DIC. However, for dynamic measurement scenarios with high-precision demands, high-precision high-speed cameras and finely designed optical components are needed, and the hardware cost remains considerable. In addition, the precision of DIC is influenced by various factors, such as system structural parameters, environmental lighting conditions, surface speckle settings, and pixel-matching algorithms [7,29,30,31,32]. Therefore, the development of high-fidelity simulations of 3D-DIC is of significant value, as such simulations can assist researchers in device selection, system design, algorithm testing, precision assessment, etc., at a low cost. However, existing research on DIC simulation remains limited and is mostly carried out by generating random speckle images [33,34]. For example, Estrada and Franck [35] proposed a DIC random-speckle-image generator based on MATLAB. This generator first produces a reference image with random speckles and then deforms this reference image according to different displacement-field functions to obtain a deformed image. It then uses the reference and deformed images to implement 2D-DIC. Wang et al.'s [36] research went further, integrating the principles of stereo vision and developing a stereo speckle-image generator. The stereo speckle images produced by this generator can be used for 3D-DIC simulation. There are also similar studies that mainly differ in the use of various speckle patterns [37,38,39,40]. These methods consider overly ideal situations, staying at the level of mathematical derivation and basic image processing without considering the potential noise introduced by system structure design, environmental lighting conditions, sample optical attributes, and camera imaging, among other factors.
The core purpose of this research is to propose a completely simulation-based experimental method. It integrates optical simulation, mechanical simulation, a 3D-DIC algorithm, and an image super-resolution reconstruction model and considers many factors that exist in real experiments, which greatly enhances the fidelity of the simulation. Not only does this approach offer a new perspective for existing researchers in this field, but it also enables researchers without access to costly 3D-DIC hardware facilities to participate in related research. The main contributions of this research are as follows:
  • The research innovatively integrates optical and mechanical simulations, establishing a full 3D digital image correlation (3D-DIC) simulation method. This integration offers more comprehensive and accurate analysis.
  • The research combines 3D-DIC, virtual stereo vision, and image super-resolution reconstruction technology. This combination brings together the advantages of these technologies and expands the range of application scenarios.
  • The application of image super-resolution reconstruction in virtual stereo vision is in its infancy, and this research explores the feasibility of this approach, aiming to compensate for the reduction in accuracy caused by loss of image resolution.
  • The novel method amalgamates multiple software tools for different parts of the simulation. It utilizes ANSYS SPEOS 2024R1 [41] and ZEMAX 2024R1 [42,43] for optical simulation and uses ANSYS MECHANICAL 2024R1 [44] for mechanical simulation, with 3D-DIC carried out via MULTIDIC v1.1.0 [45,46].
The rest of this paper is organized as follows. Section 2 provides an overview of related theories, Section 3 introduces the setup of the simulation experiment, Section 4 elaborates on the experimental results, and Section 5 gives the conclusions.

2. Methods

2.1. 3D Digital Image Correlation

2.1.1. Correlation Function

The sample surface is usually filled with speckle patterns, which serve as carriers of deformation information and deform with the surface. As shown in Figure 1, a fraction of the initial speckle pattern is designated as the reference subset, while the corresponding section of the speckle pattern subsequent to deformation is identified as the target subset. The key to digital-image correlation is to establish the matching relationship between these two subsets. Zero-mean normalized cross-correlation [47,48] (ZNCC) is a widely used method for evaluating the similarity between a reference subset and a target subset. The formulation of ZNCC is as follows [49]:
$$C_{ZNCC} = \frac{\sum \left[ F(x, y) - \bar{F} \right] \left[ G(x, y) - \bar{G} \right]}{\sqrt{\sum \left[ F(x, y) - \bar{F} \right]^2 \times \sum \left[ G(x, y) - \bar{G} \right]^2}}$$
where $F(x, y)$ is the gray-scale function of the reference subset and $G(x, y)$ is the gray-scale function of the target subset; $\bar{F}$ and $\bar{G}$, respectively, represent the average gray-scale values of the reference subset and the target subset. The larger the $C_{ZNCC}$, the more similar the reference subset and the target subset are.
Although ZNCC has been widely used in DIC, it has some problems in practical use, the most prominent of which is the influence of the choice of subset size on the final result. Traditional ZNCC is sensitive to the subset size, and an improper choice of subset size can easily reduce measurement accuracy. Therefore, this article adopts an optimization method based on the Gaussian window function, which combines traditional ZNCC with a Gaussian window function, aiming for better robustness. First, a two-dimensional Gaussian window function is introduced, as follows:
$$G_w(x, y; D_0) = e^{-\frac{D(x, y)^2}{2 D_0^2}}$$
where $D(x, y) = \sqrt{(x - x_0)^2 + (y - y_0)^2}$ represents the distance from any location $(x, y)$ in the subset to the center of the subset $(x_0, y_0)$. $D_0$ is directly related to the size of the effective subset, as shown in Figure 2.
The criterion of the optimization method based on the Gaussian window function is calculated as follows [32]:
$$C_{GZNCC} = \frac{\sum \left[ F(x, y) - \bar{F} \right] G_w(x, y; D_0) \left[ G(x, y) - \bar{G} \right] G_w(x, y; D_0)}{\sqrt{\sum \left\{ G_w(x, y; D_0) \left[ F(x, y) - \bar{F} \right] \right\}^2 \times \sum \left\{ G_w(x, y; D_0) \left[ G(x, y) - \bar{G} \right] \right\}^2}}$$
It can be seen that $C_{GZNCC}$ is based on ZNCC, with a two-dimensional Gaussian-window-function weighting term added.
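To make the weighting scheme concrete, the following minimal NumPy sketch evaluates $C_{GZNCC}$ for a pair of equally sized, square gray-scale subsets. The function names and the use of plain (unweighted) subset means are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np

def gaussian_window(size, d0):
    """2D Gaussian window G_w(x, y; D0) centered on a square subset
    (assumes an odd subset size, as is typical in DIC)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.exp(-(x**2 + y**2) / (2.0 * d0**2))

def gzncc(ref_subset, tgt_subset, d0):
    """Gaussian-weighted ZNCC (C_GZNCC) between two equally sized subsets."""
    w = gaussian_window(ref_subset.shape[0], d0)
    f = (ref_subset - ref_subset.mean()) * w   # weighted zero-mean reference
    g = (tgt_subset - tgt_subset.mean()) * w   # weighted zero-mean target
    return np.sum(f * g) / np.sqrt(np.sum(f**2) * np.sum(g**2))
```

A value close to 1 indicates a well-matched subset pair; sweeping $D_0$ shows how the effective subset size influences the correlation value.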

2.1.2. Shape Function

When the shape changes significantly, the correlation between the reference subset and the target subset degrades. The shape function maps the coordinate set of the reference subset before deformation to the coordinate set of the target subset after deformation, with the purpose of restoring the high correlation between the two subsets. When the expansion deformation, shear deformation, and rigid-body rotation of the target subset are considered, this mapping relationship can be expressed as follows [49,50]:
$$\begin{cases} x' = x + u_0 + \dfrac{\partial u_0}{\partial x} dx + \dfrac{\partial u_0}{\partial y} dy \\[2mm] y' = y + v_0 + \dfrac{\partial v_0}{\partial x} dx + \dfrac{\partial v_0}{\partial y} dy \end{cases}$$
where $(x, y)$ represent the coordinates of any point in the reference subset and $(x', y')$ represent the coordinates of the corresponding point in the target subset; $(u_0, v_0)$ represent the rigid-body displacements in the x and y directions, respectively; and $\frac{\partial u_0}{\partial x}$, $\frac{\partial u_0}{\partial y}$, $\frac{\partial v_0}{\partial x}$, and $\frac{\partial v_0}{\partial y}$ denote the displacement gradients of the subset. The mapping relationship of the above formula is also called the first-order shape function. Figure 3 shows the schematic diagram of the geometric model of the first-order shape function.
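As an illustration of this mapping, the following sketch (our own, not code from the paper) applies the first-order shape function to an array of reference-subset coordinates; the parameter ordering and function name are assumptions.

```python
import numpy as np

def first_order_warp(xy, center, p):
    """Map reference-subset coordinates (x, y) to target coordinates
    (x', y') with the first-order shape function.

    xy     : (N, 2) array of reference coordinates
    center : subset center (x0, y0)
    p      : (u0, v0, du/dx, du/dy, dv/dx, dv/dy)
    """
    u0, v0, du_dx, du_dy, dv_dx, dv_dy = p
    dx = xy[:, 0] - center[0]
    dy = xy[:, 1] - center[1]
    x_t = xy[:, 0] + u0 + du_dx * dx + du_dy * dy
    y_t = xy[:, 1] + v0 + dv_dx * dx + dv_dy * dy
    return np.column_stack([x_t, y_t])
```

With all six parameters set to zero the mapping is the identity; non-zero gradients reproduce stretch, shear, and (for small angles) rigid-body rotation of the subset.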

2.2. Virtual Stereo Vision

Virtual stereo vision is typically composed of a camera and structurally designed optical components [51,52]. By utilizing the principles of light reflection and refraction, an image collected at a given time contains views of the same test object from different angles. The advantages of this design include the complete elimination of the synchronization error present in traditional stereo vision, which makes it highly suitable for measuring dynamic objects; a more compact measurement system that is conducive to the development of portable devices; and the saving of at least the hardware cost of one camera and a synchronization-triggering device. This section introduces the basic principles of two types of virtual stereo vision.

2.2.1. Virtual Stereo Vision Based on a Planar Mirror

The geometric-optics model in Figure 4 describes a virtual stereo-vision system composed of a camera and four planar mirrors. Here, the four planar mirrors use optical reflections to guide light rays from the object to the same camera from two different angles. Specifically, the camera generates virtual cameras $C_{1L}$ and $C_{1R}$ through the reflections of planar mirrors $M_{1L}$ and $M_{1R}$. $C_{1L}$ and $C_{1R}$ in turn create virtual cameras $C_{2L}$ and $C_{2R}$ through the reflections of planar mirrors $M_{2L}$ and $M_{2R}$. Eventually, $C_{2L}$ and $C_{2R}$ can capture images from two perspectives in a single shot to achieve stereo vision.
To ensure the appropriate overlap of the fields of view of the left and right virtual cameras, it is necessary to calculate and design the geometric construction of the system. Considering the system to be symmetrical, we take the intersection point $O$ of the planar mirrors $M_{1L}$ and $M_{1R}$ as the origin, with the x-axis in the horizontal direction and the z-axis in the depth direction of the real camera's field of view. The relative positions of the camera and the four planar mirrors in the system can then be uniquely determined by four structural parameters: the angles $\alpha$ and $\beta$ at which the planar mirrors are placed, the distance $d$ between the real camera's apex and the origin, and the coordinate $L$ of the intersection of the line on which the planar mirror lies with the x-axis. The placement angle $\phi$, the baseline distance $B$, and the fields of view $\sigma_z$ and $\sigma_x$ in the z-axis and x-axis directions of the virtual cameras $C_{2L}$ and $C_{2R}$ can be obtained as follows [8]:
$$\phi = 90^\circ + 2\alpha - 2\beta$$
$$B = 2d \sin 2(\beta - \alpha) + 4L \sin 2\beta$$
$$\sigma_z : \quad H_{min} = \frac{B}{2} \tan\phi - (d' + d), \qquad H_{max} = \frac{B}{2} \tan(\phi + \theta) - (d' + d)$$
$$\sigma_x = \begin{cases} 2(H + d' + d)\cot\phi - B, & H \le \dfrac{B}{\cot\phi + \cot(\phi + \theta)} - (d' + d) \\[2mm] B - 2(H + d' + d)\cot(\phi + \theta), & H > \dfrac{B}{\cot\phi + \cot(\phi + \theta)} - (d' + d) \end{cases}$$
where $H_{min}$ and $H_{max}$ represent the coordinate range of the effective field of view in the z-axis direction, $H$ is the coordinate describing the effective field of view in the z-axis direction, $\theta$ is the field-of-view angle of the camera, and $d'$ is the distance between the virtual camera $C_{2L}$ ($C_{2R}$) and the real camera in the z-axis direction.
In theory, the model of the aforementioned virtual stereo vision can be fully determined by the internal parameters of the camera and the four structural parameters $\alpha$, $\beta$, $d$, and $L$. However, in reality, it is difficult to ensure the precise placement of each component in the system, so the above geometric derivation is mainly used for the preliminary determination of the system's structural parameters. To ensure measurement accuracy, it is still necessary to calibrate the system subsequently. The calibration of the virtual stereo-vision system must incorporate the following two considerations:
(1) As shown in Figure 5, first divide all collected calibration images into two parts, use pure black pixels to fill in the lost part of each image, and finally treat the resulting images as images collected by a general stereo-vision system, completing single-camera calibration and stereo calibration (a minimal code sketch follows this list);
(2) The internal parameters of the two virtual cameras should theoretically be identical, but the step above will most likely produce inconsistent calibration results for the internal parameters of the two virtual cameras. In view of this, a single-camera calibration can be performed with all image pairs, and the result can serve as the common internal parameters of the two virtual cameras.
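A minimal sketch of consideration (1), assuming the two virtual views occupy the left and right halves of the frame; the function name and the half-frame split are assumptions of this illustration.

```python
import numpy as np

def split_virtual_pair(image):
    """Split one virtual-stereo frame into two pseudo single-camera
    images, filling the lost half of each with pure black pixels."""
    h, w = image.shape[:2]
    left = image.copy()
    right = image.copy()
    left[:, w // 2:] = 0    # right half is lost for the left virtual camera
    right[:, :w // 2] = 0   # left half is lost for the right virtual camera
    return left, right
```

The two padded images can then be passed to standard single-camera and stereo-calibration routines exactly as if they came from two physical cameras.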

2.2.2. Virtual Stereo Vision Based on a Prism

By placing a prism in front of the camera lens, due to the presence of refraction effects, the camera can also capture images from different perspectives in a single shot [18].
Figure 6 is a schematic diagram of the geometric-optical model of virtual stereo vision based on a bi-prism. For easy differentiation, the red line represents the light path of the left virtual camera, the blue line represents the light path of the right virtual camera, and the thick line represents the bi-prism. Considering the system symmetry, the geometric relationship of the model can be determined by the following three structural parameters: the base angle $\beta$ of the bi-prism, the height $h$, and the distance $D$ between the camera's optical center and the vertex of the bi-prism. If the actual camera's field-of-view angle $\theta$ is known, the effective field-of-view range of the system can be determined by the angles $\alpha$ and $\varphi$ between the light ray and the normal to the base of the bi-prism. According to the law of refraction, the refraction behaviors at the bi-prism's base and oblique edge can be described by the following formulas [53]:
$$\begin{cases} \sin\alpha = k \sin\alpha' \\ \sin\varphi = k \sin\varphi' \\ \sin(\theta + \beta) = k \sin(\alpha' + \beta) \\ \sin\beta = k \sin(\beta - \varphi') \end{cases}$$
where $k$ is the refractive index of the bi-prism material and $\alpha'$ and $\varphi'$ are the corresponding refraction angles inside the prism. Combining the above equations yields $\alpha$ and $\varphi$, as follows [18]:
$$\begin{cases} \alpha = \arcsin\left\{ k \sin\left[ \arcsin\left( \dfrac{1}{k} \sin(\theta + \beta) \right) - \beta \right] \right\} \\[2mm] \varphi = \arcsin\left\{ k \sin\left[ \beta - \arcsin\left( \dfrac{1}{k} \sin\beta \right) \right] \right\} \end{cases}$$
Furthermore, through geometric derivation, the field-of-view widths $PS$ and $QS$ at the distance $g$ from the bottom edge of the bi-prism can be calculated by the following formulas [18,54]:
$$\begin{cases} PS = m + g \tan\alpha \\ QS = h \tan\varphi' + g \tan\varphi \\[1mm] m = \dfrac{D \sin\theta \left( \cos\beta - \sin\beta \tan\alpha' \right)}{\sin(90^\circ - \theta - \beta)} + h \tan\alpha' \end{cases}$$
In addition, changing the geometric shape of the prism, such as by using a triangular prism or a multi-pyramid prism, can increase the number of virtual cameras. Figure 7 shows the schematic of virtual stereo vision based on a triangular prism. With more complex geometric-optics derivations, corresponding system-design schemes can be obtained; these are not discussed further here. However, it should be noted that although a virtual stereo-vision system based on a triangular prism loses at least 2/3 of the field of view and image resolution, it also has a distinct advantage: the virtual camera pairs can correct each other's postures through algorithms [55]. In other words, since any two virtual cameras are sufficient to perform the task of three-dimensional measurement, the redundant information can be used to establish an overdetermined system of equations and thus realize more robust measurements.
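The bi-prism view angles above are straightforward to evaluate numerically. The following sketch solves for $\alpha$ and $\varphi$ from the base angle, the camera field-of-view angle, and the refractive index; the base angle and refractive index in the example are taken from Appendix A ($\beta$ = 22.5°, $k$ = 1.5163), while the field-of-view angle $\theta$ = 20° is an assumed placeholder.

```python
import numpy as np

def biprism_view_angles(theta, beta, k):
    """Solve the bi-prism refraction equations for the angles alpha and
    phi that bound the effective field of view (all angles in radians)."""
    alpha = np.arcsin(k * np.sin(np.arcsin(np.sin(theta + beta) / k) - beta))
    phi = np.arcsin(k * np.sin(beta - np.arcsin(np.sin(beta) / k)))
    return alpha, phi

# Example with the BVSS parameters of Appendix A and an assumed theta
alpha, phi = biprism_view_angles(np.deg2rad(20.0), np.deg2rad(22.5), 1.5163)
print(np.rad2deg(alpha), np.rad2deg(phi))  # view-bounding angles in degrees
```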

2.3. Image Super-Resolution Reconstruction

Even though virtual stereo vision can completely avoid the camera-synchronization problem that exists in traditional stereo vision, it is undeniable that it also leads to the problem of loss of image resolution. For virtual stereo vision with n virtual cameras, each virtual camera loses (n − 1)/n of its image resolution on average. The loss of image resolution will affect the accuracy of subsequent measurements. Image super-resolution reconstruction technology is applied to the images captured by virtual stereo vision with the aim of improving the accuracy of the system's measurements.

2.3.1. Network Architecture

The image super-resolution network used in this research was optimized based on the enhanced super-resolution generative adversarial networks [25] (ESRGAN). The three main improvements made to the network are as follows:
  • The use of an Adaptive Weighted Basic Block with noise inputs allows for more effective residual learning of local noise. This helps the network better understand and mitigate the impact of noise in the image, which enhances the clarity and quality of the final output.
  • The adoption of a multi-scale reconstruction enables the full utilization of both low-frequency and high-frequency residuals. This means that the network can capture both the larger structures and the finer details in the image, leading to a higher-resolution output.
  • The fine-tuning of the loss function balances the perceptual quality and synthetic artifacts of the super-resolution network. This ensures that the output image not only has a high resolution, but also appears natural and realistic.
The improved network was named the multi-scale multi-adaptive-weights residual super-resolution generative adversarial network (MMRSRGAN). The overall framework of MMRSRGAN is shown in Figure 8.
The MMRSRGAN generator starts from a single convolutional layer (3 × 3 kernels, 64 feature maps, stride 1) for initial feature extraction, which can be formulated as follows [26]:
$$x_0 = f_0(I_{LR})$$
where $f_0$ is the initial feature-extraction function applied to the low-resolution image $I_{LR}$ and $x_0$ is the output feature map of the initial convolution layer.
In the MMRSRGAN, a basic residual unit for non-linear feature mapping called the multi-adaptive-weights residual in residual dense block (MRRDB) is adopted. This basic residual unit is based on the residual dense block (RDB) structure and represents an improvement on the residual in residual dense block (RRDB) basic residual unit previously used in ESRGAN. ESRGAN uses a fixed residual scaling of 0.2 in each DB, while this research uses 11 independent weights for each DB instead. These weights can be adaptively learned after the algorithm is given an initial value, thus transmitting information and gradients more effectively. Based on the above improvements, for the $n$-th MRRDB unit, the input $x_n$ and the output $x_{n+1}$ can be expressed as follows:
$$x_{n+1} = f_{MRRDB}(x_n) = \lambda_b^n x_n^6 + \lambda_a^n x_n$$
where $\lambda_a^n$ and $\lambda_b^n$ are the two independent weights of the $n$-th MRRDB unit and $x_n^6$ can be obtained from the following equations:
$$\begin{aligned} x_n^1 &= \lambda_r^{n,0} f_{RRDB}(x_n) + \lambda_x^{n,0} x_n + \lambda_n^{n,0} G_n \\ x_n^2 &= \lambda_r^{n,1} f_{RRDB}(x_n^1) + \lambda_x^{n,1} x_n^1 + \lambda_n^{n,1} G_n \\ x_n^3 &= \lambda_r^{n,2} f_{RRDB}(x_n^2) + \lambda_x^{n,2} x_n^2 + \lambda_n^{n,2} G_n \\ x_n^4 &= \lambda_r^{n,3} f_{RRDB}(x_n^3) + \lambda_x^{n,3} x_n^3 + \lambda_n^{n,3} G_n \\ x_n^5 &= \lambda_r^{n,4} f_{RRDB}(x_n^4) + \lambda_x^{n,4} x_n^4 + \lambda_n^{n,4} G_n \\ x_n^6 &= \lambda_r^{n,5} f_{RRDB}(x_n^5) + \lambda_x^{n,5} x_n^5 + \lambda_n^{n,5} G_n \end{aligned}$$
where $\lambda_r^{n,k}$, $\lambda_x^{n,k}$, and $\lambda_n^{n,k}$ are three independent sets of weights for the MRRDB unit and $G_n$ is the added Gaussian-noise input.
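A PyTorch sketch of one MRRDB unit following the equations above; the `rrdb_factory` argument, the initial weight values, and the per-forward noise sampling are assumptions of this illustration rather than details specified in the paper.

```python
import torch
import torch.nn as nn

class MRRDB(nn.Module):
    """One multi-adaptive-weights residual-in-residual dense block:
    six weighted RRDB stages with Gaussian-noise inputs, plus a
    weighted outer skip connection."""

    def __init__(self, rrdb_factory, n_stages=6, init=0.2):
        super().__init__()
        self.rrdbs = nn.ModuleList(rrdb_factory() for _ in range(n_stages))
        self.lam_r = nn.Parameter(torch.full((n_stages,), init))  # RRDB branch
        self.lam_x = nn.Parameter(torch.ones(n_stages))           # identity branch
        self.lam_n = nn.Parameter(torch.full((n_stages,), init))  # noise branch
        self.lam_a = nn.Parameter(torch.tensor(1.0))              # outer skip
        self.lam_b = nn.Parameter(torch.tensor(init))             # outer residual

    def forward(self, x):
        h = x
        for i, rrdb in enumerate(self.rrdbs):
            noise = torch.randn_like(h)  # Gaussian-noise input G_n
            h = self.lam_r[i] * rrdb(h) + self.lam_x[i] * h + self.lam_n[i] * noise
        return self.lam_b * h + self.lam_a * x
```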
To better utilize the features from the non-linear mapping module, the AWMSR module [57] is referenced. The output $x_{n+1}$ serves as the input to the AWMSR unit, and the super-resolution image $I_{SR}$ can be represented as follows:
$$I_{SR} = f_{rec}\left( \sum_{i=0}^{n} \alpha_i f_{AWMSR}^i (x_{n+1}) \right)$$
where $f_{rec}$ represents the final convolution layer, $f_{AWMSR}^i$ represents the $i$-th scale branch in the AWMSR unit, and $\alpha_i$ is the adaptive weight of the $i$-th scale branch.
For the discriminator, we use the relativistic discriminator, which estimates, from an average perspective, the probability that the given real data are more realistic than the fake data, rather than simply predicting whether data are real or fake. This relativistic discriminator can be expressed as follows [25,58]:
$$\begin{aligned} D_{Ra}(I_r, I_f) &= \sigma\left( C(I_r) - E_{I_f}\left[ C(I_f) \right] \right) \rightarrow 1 \quad \text{(more real than fake)} \\ D_{Ra}(I_f, I_r) &= \sigma\left( C(I_f) - E_{I_r}\left[ C(I_r) \right] \right) \rightarrow 0 \quad \text{(less real than fake)} \end{aligned}$$
where $E_{I_f}[\cdot]$ represents averaging over all fake data in a mini-batch and $E_{I_r}[\cdot]$ represents averaging over all real data.

2.3.2. Loss Functions

Loss functions play crucial roles in deep learning-based image super-resolution. Classic super-resolution models mostly aim to maximize the peak signal-to-noise ratio (PSNR) between the super-resolution images and the ground-truth high-resolution image to achieve convergence. Given the ill-posed nature of the image super-resolution problem, it is usually difficult to match the textural details of super-resolution images with their counterparts in real high-resolution images in a "pixel-to-pixel" manner. As a result, PSNR-driven models tend to produce smoother images. In developing SRGAN [27], the authors replaced the PSNR-driven pixel-space loss with a loss in the feature space defined by the feature maps $\varphi_{i,j}$, which denote the $j$-th convolution (after activation) before the $i$-th max-pooling layer within the pre-trained VGG19; the authors used $\varphi_{2,2}$ and $\varphi_{5,4}$ in their experiments. Building upon SRGAN, ESRGAN uses VGG features before the activation layers to obtain denser feature representations while keeping the brightness of the super-resolution image consistent. Moreover, ESRGAN keeps an $l_1$ norm-based content-loss term (weighted by a factor of 0.01) to balance the perceptual-driven solutions. Due to the very small weight of the content loss, ESRGAN also proposed the "network interpolation" method, a weighted average of two networks trained with perceptual loss and $l_1$ loss, to balance the perceptual-driven and PSNR-driven solutions. Let $l_{SR}^{l_1}$ and $l_{SR}^{MSE}$ represent the content losses based on the $l_1$ norm and the mean squared error (MSE), respectively. They can be calculated using the following equations [25,27]:
$$\begin{aligned} l_{SR}^{l_1} &= \frac{1}{WH} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| I^{HR}(x, y) - I^{SR}(x, y) \right| \\ l_{SR}^{MSE} &= \frac{1}{WH} \sum_{x=1}^{W} \sum_{y=1}^{H} \left[ I^{HR}(x, y) - I^{SR}(x, y) \right]^2 \end{aligned}$$
where $W$ and $H$ represent the width and height of the $I^{HR}$ image, respectively. The VGG-based perceptual loss $l_{SR}^{VGG/\varphi_{i,j}}$ can be expressed as follows [59]:
$$l_{SR}^{VGG/\varphi_{i,j}} = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left[ \varphi_{i,j}\left( I^{HR} \right)(x, y) - \varphi_{i,j}\left( I^{SR} \right)(x, y) \right]^2$$
where $W_{i,j}$ and $H_{i,j}$ represent the dimensions of the feature map $\varphi_{i,j}$ within the VGG network. The adversarial loss for the generator, denoted as $l_{SR}^{Ra}$, can be expressed as follows:
$$l_{SR}^{Ra} = -E_{I_r}\left[ \log\left( 1 - D_{Ra}(I_r, I_f) \right) \right] - E_{I_f}\left[ \log D_{Ra}(I_f, I_r) \right]$$
Considering the specific application scenario of the image super-resolution in this research, we made slight adjustments to the total loss function in ESRGAN. First, we rebalanced the lower-level and higher-level perceptual losses derived from the VGG network to act collectively as the perceptual-loss component. Then, we replaced the l 1 norm-based content loss with the MSE-based content loss, thereby reducing the generation of “hallucinations” or artifacts.
The total loss function in ESRGAN, denoted as $l_{SR}^{ESRGAN}$, can be expressed as follows [25]:
$$l_{SR}^{ESRGAN} = l_{SR}^{VGG/\varphi_{5,4}} + \lambda l_{SR}^{Ra} + \eta l_{SR}^{l_1}$$
After fine-tuning, the total generator loss function in this research, denoted as $l_{SR}^{MMRSRGAN}$, can be expressed as follows:
$$l_{SR}^{MMRSRGAN} = \alpha l_{SR}^{VGG/\varphi_{2,2}} + (1 - \alpha) l_{SR}^{VGG/\varphi_{5,4}} + \lambda l_{SR}^{Ra} + \eta l_{SR}^{MSE}$$
where $\alpha$, $\lambda$, and $\eta$ are hyperparameters used to balance the various loss terms.
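A PyTorch-style sketch of this fine-tuned generator loss; the feature extractors `vgg22`/`vgg54`, the relativistic logits passed in, and the default weight values are placeholders of this illustration, not the paper's trained components or tuned hyperparameters.

```python
import torch
import torch.nn.functional as F

def generator_loss(sr, hr, d_real_rel, d_fake_rel, vgg22, vgg54,
                   alpha=0.5, lam=5e-3, eta=1e-2):
    """Total generator loss l_SR^MMRSRGAN (sketch). d_real_rel and
    d_fake_rel are the relativistic logits C(I_r) - E[C(I_f)] and
    C(I_f) - E[C(I_r)] produced by the discriminator."""
    # blended perceptual loss on the VGG phi_2,2 and phi_5,4 feature maps
    l_percep = alpha * F.mse_loss(vgg22(sr), vgg22(hr)) \
             + (1 - alpha) * F.mse_loss(vgg54(sr), vgg54(hr))
    # relativistic adversarial loss for the generator (l_SR^Ra above)
    eps = 1e-8
    l_ra = -torch.mean(torch.log(1 - torch.sigmoid(d_real_rel) + eps)) \
           - torch.mean(torch.log(torch.sigmoid(d_fake_rel) + eps))
    # MSE content loss replacing the l1 term used in ESRGAN
    l_content = F.mse_loss(sr, hr)
    return l_percep + lam * l_ra + eta * l_content
```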

2.3.3. Evaluating Indicator

The application of image super-resolution reconstruction technology to the images collected by the virtual stereo-vision system aims to improve the accuracy of the system measurement. To evaluate the performance of image super-resolution networks, we proposed an evaluation metric based on visual-measurement accuracy. The accuracy of visual measurements can be determined by the 3D reconstruction error [60], which is the distance between the reconstructed 3D coordinate points and the actual coordinate points, with the average value taken as a statistical measure.
Let us assume that the average 3D reconstruction error before the application of image super-resolution reconstruction is $E_{p1}$ and that the average 3D reconstruction error after image super-resolution reconstruction is $E_{p2}$. The relative value $E_{pr}$ between $E_{p1}$ and $E_{p2}$ is used as a quantitative indicator for assessing the performance of the image super-resolution model and is calculated as follows:
$$E_{pr} = \frac{E_{p1} - E_{p2}}{E_{p1}} \times 100\%$$
where $E_{pr} \le 1$. The closer $E_{pr}$ is to 1, the greater the precision of the visual-measurement results and thus the better the performance of the image super-resolution network. In the ideal scenario of $E_{pr} = 1$, there is no error between the true value and the measured value after image super-resolution reconstruction. When $E_{pr} = 0$, image super-resolution reconstruction has no effect on precision, and when $E_{pr} < 0$, it has a negative effect.
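A minimal sketch of this metric, assuming the true and reconstructed 3D points are given as matched (N, 3) arrays; the function and argument names are illustrative.

```python
import numpy as np

def epr(points_true, points_before, points_after):
    """Relative improvement E_pr of the mean 3D reconstruction error
    after super-resolution; returns a fraction (multiply by 100 for %)."""
    e_p1 = np.linalg.norm(points_before - points_true, axis=1).mean()
    e_p2 = np.linalg.norm(points_after - points_true, axis=1).mean()
    return (e_p1 - e_p2) / e_p1
```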

3. Experimental Setup

All experiments in this research were simulation experiments, and the main software tools involved were ANSYS SPEOS 2024R1 [41] (SPEOS), ANSYS ZEMAX OPTIC STUDIO 2024R1 [42,43] (ZEMAX), ANSYS MECHANICAL 2024R1 [44] (MECHANICAL), and MULTIDIC v1.1.0 [45,46]. The specific experimental process is as follows (the main parameter settings can be found in Appendix A):
1. Simulation Preparation Work
As shown in Figure 9, a tensile test of a sample with one fixed end was simulated in MECHANICAL, and the coordinate and deformation information of all nodes was exported. This information was not only used for geometric-structure import in SPEOS but was also treated as the ground truth, to be compared subsequently with the measurement values obtained from the optical simulation. Then, the camera lens, planar mirrors, and prisms were designed in ZEMAX, and the designed parameters were imported into the simulation camera in SPEOS.
It should be noted that since this part of the content is not the focus of this research, a simplified mechanical simulation and lens design were used in the example. However, considering that MECHANICAL can simulate very complex mechanical behaviors and ZEMAX can design complex lenses and optical components, the use of a simplified example does not affect the innovation, completeness, and generalizability of the overall simulation method.
2. Build the Simulation Model
In SPEOS, the indoor optical environment was simulated, and the necessary optical components, such as simulation cameras, planar mirrors, and prisms, were created. We built four simulation-based measurement systems based on the principle of stereo vision: the Basic Stereo System (BSS), composed of two simulation cameras; the Mirror-Based Virtual Stereo System (MVSS), composed of a simulation camera and four planar mirrors; the Biprism-Based Virtual Stereo System (BVSS), composed of a simulation camera and a bi-prism; and the Virtual Stereo System Based on a Quadrangular Pyramid Prism (QVSS), composed of a simulation camera and a quadrangular pyramid prism. In addition, the morphology of the sample at each time step before and after deformation was imported into SPEOS, and a random speckle texture was applied to its surface. Examples of each measurement system and its collected images are shown in Figure 10.
3. Calibration Simulation Experiment
All measurement systems were directed to simultaneously capture images of a calibration board with a checkerboard pattern, with the position of the calibration board continuously changed until each system had gathered 1000 images. To reduce the uncertainty in the calibration experiment, 40 sets of images were randomly selected for each stereo-vision system calibration. Of these, 20 sets were used for single-camera and stereo calibration of the system, and the remaining 20 sets were used to verify the calibration accuracy. The verification method involved extracting all corner points in the 20 sets of image pairs; based on the principles of stereoscopic vision, the distances between all adjacent corner points were determined. The average of these distances minus the true corner-point distance, denoted $\delta_d$, served as a single verification result reflecting the precision of the measurement system (a computational sketch of $\delta_d$ follows this list). The above extraction, calibration, and verification steps were repeated 1000 times.
4. 3D-DIC Simulation Experiment
All measurement systems were directed to simultaneously capture images of a sample with a random speckle pattern, recording the entire tensile-deformation process of the sample in the optical environment according to the set time steps. The surface morphology of the sample and its deformation process were then reconstructed using MULTIDIC and compared with the true values from step 1. In addition, to reflect the camera-synchronization time difference present in real stereo-vision systems, for BSS this was simulated by setting different exposure times for the simulation cameras. The remaining three measurement systems, being virtual stereo systems, have no camera-synchronization error.
5. Image Super-Resolution Reconstruction Experiment
We collected all images gathered by the measurement systems in step 4. Multiple networks, including the MMRSRGAN proposed in this research, were separately trained and tested on the set composed of images of the sample.
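As referenced in step 3, the following sketch illustrates the $\delta_d$ verification computation, assuming the reconstructed checkerboard corners are supplied in row-major grid order; the function and argument names are our own.

```python
import numpy as np

def delta_d(corners_3d, rows, cols, true_spacing):
    """Single verification result: mean measured distance between
    adjacent checkerboard corners minus the true corner spacing."""
    grid = corners_3d.reshape(rows, cols, 3)
    d_horizontal = np.linalg.norm(np.diff(grid, axis=1), axis=2)
    d_vertical = np.linalg.norm(np.diff(grid, axis=0), axis=2)
    measured = np.concatenate([d_horizontal.ravel(), d_vertical.ravel()])
    return measured.mean() - true_spacing
```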

4. Results

Figure 11 shows the results of the precision verification for each measurement system. Specifically, Figure 11a presents the average reprojection error $\bar{e}$ from single-camera calibration, while Figure 11b shows the difference $\delta_d$ between the measured and true values of the corner-point distances. From the perspective of the average reprojection error, BSS yielded the best precision and stability for single-camera calibration, followed by MVSS and then BVSS, with QVSS being the least precise. On the other hand, from the perspective of $\delta_d$, the order of measurement precision is QVSS > BSS > MVSS > BVSS.
As can be seen, the two evaluation methods yielded quite different results. During camera calibration, to obtain more precise calibration information, it is generally required to capture the calibration board from different angles and distances and to cover the camera's field of view as much as possible. As BSS consists of two cameras, it is the system in which these requirements are easiest to meet, while in the remaining measurement systems, the virtual cameras "occupy" only 1/2 or 1/4 of the field of view and also lose a corresponding proportion of image resolution; hence, BSS has the best single-camera calibration result, and QVSS the worst. Furthermore, prisms change the light path, introducing additional image distortion; this effect makes the single-camera calibration of MVSS, which is based on plane mirrors, better than those of BVSS and QVSS, which are based on prisms. However, from the perspective of $\delta_d$, the measurement precision and stability of QVSS are comparable to those of BSS. This is likely because, although QVSS suffers the greatest loss in field of view and image resolution, it leverages the advantage of multi-viewpoint stereo vision in the measurement process, i.e., the four virtual cameras can correct each other. From the calibration results of 1000 iterations for each measurement system, the result with the median $\delta_d$ was selected as the final calibration for that system.
Figure 12 shows the average relative error between the parameters obtained through calibration by each measurement system and the actual values. It can be seen that the precision ranking of the internal-parameter calibration of the four measurement systems is BSS, MVSS, BVSS, and QVSS, which is generally consistent with the reprojection-error results. It is worth noting that the internal-parameter calibration error of QVSS is significantly higher than those of the other three measurement systems, but its external-parameter calibration error is lower than that of BVSS and close to those of BSS and MVSS. This can once again be attributed to the advantages of multi-view stereo vision.
Figure 13 shows the 3D reconstruction cloud maps of the sample surface, obtained by applying the 3D-DIC principle to the images collected by the various measurement systems and then fitting a surface through the reconstructed three-dimensional scatter points. The color scale bars reflect the distance from the points on the surface to the projection surface.
Figure 14 shows the box plot of reconstruction errors for each node of the sample. In combination with Figure 13, it reveals that BSS has the best reconstruction effect and the smoothest fitted surface, that BVSS has the worst effect, and that MVSS and QVSS have similar effects.
Figure 15a displays the deformation process of the test sample reconstructed by BSS using the 3D-DIC principle (four time steps were uniformly selected without considering camera-synchronization error), with the color scale indicating the degree of deformation of the test sample. Figure 15b, on the other hand, is a log-log plot of the node-average reconstruction error of BSS in relation to the set camera-synchronization error. It can be seen that when the synchronization error is within $10^{-3}$ time steps, the error in the reconstructed deformation can still be controlled within 0.005 mm. However, when the synchronization error exceeds $10^{-3}$ time steps, the deformation error increases in an uncontrolled manner. In contrast, virtual stereo-vision systems can completely eliminate the impact of camera-synchronization error on measurement results during the measurement process. As shown by the three dashed lines in Figure 15b, the measurement accuracies of MVSS, BVSS, and QVSS lie between those of BSS at synchronization errors of $10^{-7}$ and $10^{-6}$ time steps. Therefore, it is clear that when the camera-synchronization error is difficult to control or the cost of achieving it is too high, a measurement system based on virtual stereo vision can not only significantly reduce the system construction cost but also maintain high precision, making it an excellent solution.
Figure 16 shows the scatter-diagram representation of the 3D-reprojection error after application of the MMRSRGAN model proposed in this study to images captured by QVSS. As the 3D-reprojection error is less than 0.002 mm, which is very small relative to the overall size of the sample, it is magnified 1000 times in the three-dimensional scatter diagram for easier observation. It can be seen that after the MMRSRGAN has been applied to QVSS, not only is the density of the scatter points increased, but the three-dimensional reprojection error is also significantly reduced.
Figure 17 displays the $E_{pr}$ metrics calculated after applying the MMRSRGAN model to the four measurement systems. As shown in the figure, the MMRSRGAN model performs worst at improving the accuracy of the BSS system, with $E_{pr}$ even becoming negative at the ×8 scale. One possible reason is that during the training process, the model lacks images of higher resolution than those collected by BSS and cannot learn the pattern of degradation from higher resolutions. Therefore, its accuracy improvement is not obvious at the ×2 and ×4 scales, and at the ×8 scale, an excessive amount of noise is magnified, resulting in a negative $E_{pr}$.
Note that the $E_{pr}$ indices of the systems other than QVSS are all greater at the ×4 scale than at the ×8 scale. A possible reason is that in QVSS, the effective resolution of each virtual camera accounts for only 1/4 of the sensor resolution on average, with the effective resolution of BSS being four times that of QVSS and those of MVSS and BVSS being twice that of QVSS. Consequently, during the training process, QVSS, compared to the other systems, is more likely to learn the degradation pattern of the images captured by its own system.
The above results reveal that the MMRSRGAN model has a better accuracy-improving effect on virtual stereo-vision systems, indirectly indicating that image super-resolution reconstruction technology is particularly beneficial for countering the resolution loss associated with virtual stereo vision.

5. Discussion and Conclusions

The most remarkable contribution and value of this research lie in the innovative proposal of a complete set of simulation-based experimental methods. These methods integrate optical simulation, mechanical simulation, 3D-DIC algorithms, and image super-resolution reconstruction models. This approach not only provides a novel research pathway and means for existing researchers in the field, but also opens up the possibility for those researchers who lack the support of costly 3D-DIC hardware facilities to participate in related research. The remaining primary conclusions are as follows:
The experiments with 3D-DIC were simulated completely and with high quality by combining multiple software tools. Specifically, the tensile behavior of the specimen was simulated using MECHANICAL, and its dynamic geometric-deformation information was imported into the indoor optical environment in SPEOS. ZEMAX was then used to design camera lenses, plane mirrors, and prisms, which were imported into SPEOS. Four measurement systems based on stereo vision and virtual stereo vision were designed: BSS, MVSS, BVSS, and QVSS. Within the optical-simulation environment, each system completed measurement-system calibration and then recorded the stretching process of the specimen. The acquired calibration images were then used for calibration calculation and analysis, and the specimen's images were used for 3D-DIC computation and analysis. In addition, the image super-resolution reconstruction networks were applied to the specimen's images.
The experimental results demonstrate the following points. BSS has the largest field of view, the highest effective resolution, the highest single-camera calibration accuracy, and, if synchronization errors are not considered, the highest deformation-measurement accuracy. However, when the camera-synchronization error increases, its measurement accuracy drops sharply, and the accuracy improvement from image super-resolution reconstruction is relatively poor. When the synchronization error of BSS is within $10^{-3}$ time steps, the reconstruction error is within 0.005 mm, and the accuracy of the virtual stereo-vision systems lies between that of BSS at synchronization errors of $10^{-7}$ and $10^{-6}$ time steps. In addition, after image super-resolution reconstruction technology has been applied, the reconstruction error is reduced to within 0.002 mm. QVSS benefits from multi-camera stereo vision, with the highest stereo-calibration accuracy and a good accuracy improvement from image super-resolution reconstruction, especially when the super-resolution scale reaches ×8. However, QVSS has the smallest field of view and the lowest effective resolution. Neither MVSS nor BVSS performs best under the various simulation-evaluation methods, but they are free of the camera-synchronization errors that affect BSS, and image super-resolution reconstruction improves their accuracy. Compared to QVSS, MVSS and BVSS offer a larger field of view, higher effective resolution, and a less complex structural design.
Due to the limitations of camera hardware, the authors are unable to directly compare the application of 3D-DIC technology between real-world and simulation scenarios. However, with sufficient hardware support, the simulation method proposed in this article can be compared in depth with actual experiments. The initial goal of future research could be to establish certain connections between simulation scenarios and the real world; more ambitious goals could involve generating digital twins of high-precision experimental-measurement environments.

Author Contributions

Conceptualization, H.C.; Methodology, H.C.; Validation, H.C.; Formal analysis, H.C. and H.L.; Investigation, H.C.; Data curation, H.C. and H.L.; Writing—original draft, H.C.; Writing—review & editing, H.C.; Visualization, H.L.; Supervision, G.L.; Project administration, H.C., G.L. and Z.W.; Funding acquisition, G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 51979244.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

1. ANSYS MECHANICAL
Figure A1. Schematic diagram of three-dimensional dimensions of tensile specimen (a) and loading curve (b).
Table A1. Main parameters for simulation of tensile specimens.

Volume: 4.789 × 10⁻⁵ m³
Mass: 0.13266 kg
Nodes: 69,831
Elements: 14,197
Moment of Inertia Ip1: 1.0619 × 10⁻⁴ kg·m²
Moment of Inertia Ip2: 1.6147 × 10⁻⁵ kg·m²
Moment of Inertia Ip3: 1.0619 × 10⁻⁴ kg·m²
Compressive Yield Strength: 280,000,000 Pa
Tensile Yield Strength: 280,000,000 Pa
Tensile Ultimate Strength: 310,000,000 Pa
Young's Modulus: 71,000,000,000 Pa
Poisson's Ratio: 0.33
Bulk Modulus: 69,608,000,000 Pa
Shear Modulus: 26,692,000,000 Pa
2. ANSYS ZEMAX OPTICSTUDIO

Table A2. Lens-design parameters.

Number | Surface Type | Radius | Thickness | Material | Semi-Diameter | Mech Semi-Dia
0 | Standard | Infinity | Infinity | — | 0.000 | 0.000
1 | Standard | Infinity | 25.400 | — | 0.000 | 0.000
2 | Standard | −234.599 | 6.325 | SK4 | 34.036 | 34.036
3 | Standard | 32.756 | 2.337 | — | 24.892 | 34.036
4 | Standard | 35.103 | 13.741 | SF1 | 25.146 | 25.146
5 | Standard | −138.133 | 5.436 | SK4 | 23.033 | 25.146
6 | Standard | 22.672 | 10.490 | — | 16.257 | 25.146
7 | Standard | −46.805 | 3.531 | SF1 | 16.099 | 16.289
8 | Standard | −38.717 | 5.258 | SK4 | 16.210 | 16.289
9 | Standard | −55.461 | 28.854 | — | 16.289 | 16.289
10 | Standard | Infinity | 7.595 | — | 10.790 | 10.790
11 | Standard | −168.262 | 5.740 | SK4 | 14.064 | 15.284
12 | Standard | −50.142 | 0.533 | — | 15.284 | 15.284
13 | Standard | 651.985 | 8.636 | SK4 | 16.230 | 18.335
14 | Standard | −26.373 | 3.708 | SF1 | 16.662 | 18.335
15 | Standard | −106.728 | 19.253 | — | 18.335 | 18.335
16 | Standard | 209.596 | 5.258 | SF1 | 23.476 | 24.220
17 | Standard | 61.450 | 0.533 | — | 24.220 | 24.220
18 | Standard | 62.276 | 11.227 | SK4 | 24.583 | 24.883
19 | Standard | −87.028 | 1.803 | — | 24.883 | 24.883
20 | Standard | 55.115 | 8.052 | SK4 | 24.996 | 24.996
21 | Standard | 855.096 | 39.624 | — | 24.558 | 24.996
22 | Image | Infinity | — | — | 13.540 | 13.540
3. ANSYS SPEOS

Table A3. Main parameters for the simulation cameras, light sources, and optical elements.

Camera
Resolution (pixel): 5120 × 4320
Sensor size (mm): 9.6 × 8.1
Focal length (mm): 18.6
Temporal dark noise (e−): 7.65
Saturation capacity (e−): 6365
Dynamic range (dB): 75.85
Gain range (dB): [0, 24]
Gamma correction: 2.2
Transmittance (%): 95
RGB sensitivity: see Figure A2

Luminaire 1
Measured flux (lm): 7124.6
Light output ratio luminaire (%): 56.69
Downward flux fraction (%): 47.66
Spectrum type: white LED
UTE C71-121 photometric: 0.27 J + 0.30 T

Luminaire 2
Measured flux (lm): 6180.9
Light output ratio luminaire (%): 52.19
Downward flux fraction (%): 40.98
Spectrum type: daylight fluorescent
UTE C71-121 photometric: 0.27 J + 0.30 T

Optical element 1 (mirror)
Surface type: mirror
Reflectance (%): 90

Optical element 2 (bi-prism)
Volume type: optic
Surface type: optical polished
Refractive index: 1.5163
Constringence (Abbe number): 63.96

Optical element 3 (quadrangular pyramid prism)
Volume type: optic
Surface type: optical polished
Refractive index: 1.5168
Constringence (Abbe number): 64.06
Figure A2. Simulated camera spectral curve. (a) Red sensitivity curve; (b) green sensitivity curve; (c) blue sensitivity curve.
Figure A3. Surface optical parameters of simulated calibration plates (a) and tensile specimens (b).
Figure A4. Schematic diagram of the structures of BSS, MVSS, BVSS, and QVSS. See Table A4 for structural-parameter values. (a) BSS; (b) BVSS; (c) MVSS; (d) QVSS.
Table A4. Structural parameters for BSS, MVSS, BVSS, and QVSS.

BSS
B (mm): 55
Objective distance (mm): 100

MVSS
α (°): 45.5
β (°): 50.2
d (mm): 15.5
L (mm): 15.4
l1 (mm): 20.5
l2 (mm): 35.2
Objective distance (mm): 150

BVSS
D (mm): 10
β (°): 22.5
h (mm): 8.5
Objective distance (mm): 160

QVSS
D (mm): 10
b (mm): 25.6
c (mm): 20.2
Objective distance (mm): 200
4. MULTIDIC
Figure A5. DIC parameter settings. (a) BSS; (b) BVSS; (c) MVSS; (d) QVSS.

References

  1. Pan, B. Digital Image Correlation for Surface Deformation Measurement: Historical Developments, Recent Advances and Future Goals. Meas. Sci. Technol. 2018, 29, 082001. [Google Scholar] [CrossRef]
  2. Chouhan, K.; Chavda, J.T. A Review on Digital Image Correlation in Experimental Geotechnics. Indian Geotech. J. 2024, 54, 639–656. [Google Scholar] [CrossRef]
  3. Yoon, S.; Jung, H.-J.; Knowles, J.C.; Lee, H.-H. Digital Image Correlation in Dental Materials and Related Research: A Review. Dent. Mater. 2021, 37, 758–771. [Google Scholar] [CrossRef] [PubMed]
  4. Zhao, Y.; Druzhinin, P.; Ivens, J.; Vandepitte, D.; Lomov, S.V. Split-Disk Test with 3D Digital Image Correlation Strain Measurement for Filament Wound Composites. Compos. Struct. 2021, 263, 113686. [Google Scholar] [CrossRef]
  5. Huang, C.; Song, Y.; Yang, L.; Fan, P.; Wang, Q.; Liu, H. Study on Fracture Behavior for Sharp V-Notched Specimens Combined with 3D-DIC and Caustics. Eng. Fract. Mech. 2022, 260, 108190. [Google Scholar] [CrossRef]
  6. Wei, K.; Yuan, F.; Shao, X.; Chen, Z.; Wu, G.; He, X. High-Speed Multi-Camera 3D DIC Measurement of the Deformation of Cassette Structure with Large Shaking Table. Mech. Syst. Signal Process. 2022, 177, 109273. [Google Scholar] [CrossRef]
  7. Beberniss, T.J.; Ehrhardt, D.A. High-Speed 3D Digital Image Correlation Vibration Measurement: Recent Advancements and Noted Limitations. Mech. Syst. Signal Process. 2017, 86, 35–48. [Google Scholar] [CrossRef]
8. Wang, R.; Li, X.; Zhang, Y. Analysis and Optimization of the Stereo-System with a Four-Mirror Adapter. J. Eur. Opt. Soc. Rapid Publ. 2008, 3, 08033.
9. Jailin, C. Full Field Modal Measurement with a Single Standard Camera. Opt. Lasers Eng. 2018, 107, 265–272.
10. Barone, S.; Neri, P.; Paoli, A.; Razionale, A.V. 3D Vibration Measurements by a Virtual-Stereo-Camera System Based on a Single Low Frame Rate Camera. Procedia Struct. Integr. 2018, 12, 122–129.
11. Chi, Y.; Yu, L.; Pan, B. Low-Cost, Portable, Robust and High-Resolution Single-Camera Stereo-DIC System and Its Application in High-Temperature Deformation Measurements. Opt. Lasers Eng. 2018, 104, 141–148.
12. Durand-Texte, T.; Simonetto, E.; Durand, S.; Melon, M.; Moulet, M.-H. Vibration Measurement Using a Pseudo-Stereo System, Target Tracking and Vision Methods. Mech. Syst. Signal Process. 2019, 118, 30–40.
13. Mouaddib, E.M.; Sagawa, R.; Echigo, T.; Yagi, Y. Stereovision with a Single Camera and Multiple Mirrors. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 800–805.
14. Zhang, K.; Qu, Z.; Zhong, X.; Chen, Q.; Zhang, X. Design of Binocular Stereo Vision Optical System Based on a Single Lens and a Single Sensor. Appl. Opt. 2022, 61, 6690.
15. Genovese, K.; Casaletto, L.; Rayas, J.A.; Flores, V.; Martinez, A. Stereo-Digital Image Correlation (DIC) Measurements with a Single Camera Using a Biprism. Opt. Lasers Eng. 2013, 51, 278–285.
16. Pankow, M.; Justusson, B.; Waas, A.M. Three-Dimensional Digital Image Correlation Technique Using Single High-Speed Camera for Measuring Large Out-of-Plane Displacements at High Framing Rates. Appl. Opt. 2010, 49, 3418.
17. Mohammed, M.E.; Shao, X.; He, X. Portable Device for the Local Three-Dimensional Deformation Measurement Using a Single Camera. Sci. China Technol. Sci. 2018, 61, 51–60.
18. Cui, X.; Zheng, J. Biprism-Based Monocular Stereovision System Parameter Optimization. J. Electron. Imaging 2018, 27, 033020.
19. Shao, X.; Qu, J.; Chen, W. Single-Camera Three-Dimensional Digital Image Correlation with Enhanced Accuracy Based on Four-View Imaging. Materials 2023, 16, 2726.
20. Bashir, S.M.A.; Wang, Y.; Khan, M.; Niu, Y. A Comprehensive Review of Deep Learning-Based Single Image Super-Resolution. PeerJ Comput. Sci. 2021, 7, e621.
21. Wang, X.; Yi, J.; Guo, J.; Song, Y.; Lyu, J.; Xu, J.; Yan, W.; Zhao, J.; Cai, Q.; Min, H. A Review of Image Super-Resolution Approaches Based on Deep Learning and Applications in Remote Sensing. Remote Sens. 2022, 14, 5423.
22. Ye, S.; Zhao, S.; Hu, Y.; Xie, C. Single-Image Super-Resolution Challenges: A Brief Review. Electronics 2023, 12, 2975.
23. Angarano, S.; Salvetti, F.; Martini, M.; Chiaberge, M. Generative Adversarial Super-Resolution at the Edge with Knowledge Distillation. Eng. Appl. Artif. Intell. 2023, 123, 106407.
24. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307.
25. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Computer Vision—ECCV 2018 Workshops; Leal-Taixé, L., Roth, S., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2019; Volume 11133, pp. 63–79. ISBN 978-3-030-11020-8.
26. Li, K.; Yang, S.; Dong, R.; Wang, X.; Huang, J. Survey of Single Image Super-Resolution Reconstruction. IET Image Process. 2020, 14, 2273–2290.
27. Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 105–114.
28. Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 2366–2369.
29. He, X.; Zhou, R.; Liu, Z.; Yang, S.; Chen, K.; Li, L. Review of Research Progress and Development Trend of Digital Image Correlation. Multidiscip. Model. Mater. Struct. 2024, 20, 81–114.
30. Pan, B.; Yu, L.; Zhang, Q. Review of Single-Camera Stereo-Digital Image Correlation Techniques for Full-Field 3D Shape and Deformation Measurement. Sci. China Technol. Sci. 2018, 61, 2–20.
31. Yang, R.; Li, Y.; Zeng, D.; Guo, P. Deep DIC: Deep Learning-Based Digital Image Correlation for End-to-End Displacement and Strain Measurement. J. Mater. Process. Technol. 2022, 302, 117474.
32. Yuan, Y.; Huang, J.; Peng, X.; Xiong, C.; Fang, J.; Yuan, F. Accurate Displacement Measurement via a Self-Adaptive Digital Image Correlation Method Based on a Weighted ZNSSD Criterion. Opt. Lasers Eng. 2014, 52, 75–85.
33. Gu, G. A Comparative Study of Random Speckle Pattern Simulation Models in Digital Image Correlation. Optik 2015, 126, 3713–3716.
34. Lavatelli, A.; Balcaen, R.; Zappa, E.; Debruyne, D. Closed-Loop Optimization of DIC Speckle Patterns Based on Simulated Experiments. IEEE Trans. Instrum. Meas. 2019, 68, 4376–4386.
35. Estrada, J.B.; Franck, C. Intuitive Interface for the Quantitative Evaluation of Speckle Patterns for Use in Digital Image and Volume Correlation Techniques. J. Appl. Mech. 2015, 82, 095001.
36. Wang, G.; Zhang, L.; Yao, X. StrainNet-3D: Real-Time and Robust 3-Dimensional Speckle Image Correlation Using Deep Learning. Opt. Lasers Eng. 2022, 158, 107184.
37. Song, J.; Liu, Z.; Luo, C. Studies on the Influence of Speckle Density on the Accuracy of Digital Image Correlation Method Based on Numerical Simulation. J. Phys. Conf. Ser. 2021, 1971, 012072.
38. Thai, T.Q.; Ruesch, J.; Gradl, P.R.; Truscott, T.T.; Berke, R.B. Speckle Pattern Inversion in High Temperature DIC Measurement. Exp. Tech. 2022, 46, 239–247.
39. Crammond, G.; Boyd, S.W.; Dulieu-Barton, J.M. Speckle Pattern Quality Assessment for Digital Image Correlation. Opt. Lasers Eng. 2013, 51, 1368–1378.
40. Dai, M.; Wei, K.; Gao, B.; Zhou, B.; Shao, X. Experimental and Simulation Investigation of Stereo-DIC via a Deep Learning Algorithm Based on Initial Speckle Positioning Technology. Appl. Opt. 2024, 63, 1895.
41. Ansys Speos | Optical System Design Software. Available online: https://www.ansys.com/products/optics/ansys-speos (accessed on 13 April 2024).
42. ZEMAX Tutorials & Applications—Knowledgebase. Available online: https://support.zemax.com/hc/en-us/categories/1500000770122 (accessed on 5 April 2024).
43. Ansys Zemax OpticStudio | Optical Design and Analysis Software. Available online: https://www.ansys.com/products/optics/ansys-zemax-opticstudio (accessed on 13 April 2024).
44. Ansys Mechanical | Structural FEA Analysis Software. Available online: https://www.ansys.com/products/structures/ansys-mechanical (accessed on 13 April 2024).
45. Solav, D.; Moerman, K.M.; Jaeger, A.M.; Genovese, K.; Herr, H.M. MultiDIC: An Open-Source Toolbox for Multi-View 3D Digital Image Correlation. IEEE Access 2018, 6, 30520–30535.
46. MultiDIC/MultiDIC: Matlab 3D Digital Image Correlation Toolbox. Available online: https://github.com/MultiDIC/MultiDIC?tab=readme-ov-file (accessed on 13 April 2024).
47. Dong, Y.L.; Pan, B. A Review of Speckle Pattern Fabrication and Assessment for Digital Image Correlation. Exp. Mech. 2017, 57, 1161–1181.
48. Su, Y.; Zhang, Q.; Xu, X.; Gao, Z. Quality Assessment of Speckle Patterns for DIC by Consideration of Both Systematic Errors and Random Errors. Opt. Lasers Eng. 2016, 86, 132–142.
49. Orteu, J.-J. 3-D Computer Vision in Experimental Mechanics. Opt. Lasers Eng. 2009, 47, 282–291.
50. Schreier, H.; Orteu, J.-J.; Sutton, M.A. Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications; Springer: Boston, MA, USA, 2009; ISBN 978-0-387-78746-6.
51. Aggarwal, R.; Vohra, A.; Namboodiri, A.M. Panoramic Stereo Videos with a Single Camera. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 3755–3763.
52. Liu, X.; Li, A. Multiview Three-Dimensional Imaging Using a Risley-Prism-Based Spatially Adaptive Virtual Camera Field. Appl. Opt. 2022, 61, 3619–3629.
53. Wu, L.F.; Zhu, J.G.; Xie, H.M.; Zhang, Q. An Accurate Method for Shape Retrieval and Displacement Measurement Using Bi-Prism-Based Single Lens 3D Digital Image Correlation. Exp. Mech. 2016, 56, 1611–1624.
54. Lim, K.B.; Kee, W.L.; Wang, D. Virtual Camera Calibration and Stereo Correspondence of Single-Lens Bi-Prism Stereovision System Using Geometrical Approach. Signal Process. Image Commun. 2013, 28, 1059–1071.
55. Xiao, Y.; Lim, K.B. A Prism-Based Single-Lens Stereovision System: From Trinocular to Multi-Ocular. Image Vis. Comput. 2007, 25, 1725–1736.
56. Chen, H.; Liu, G.; Wang, Z. A Stereovision-Based Efficient Measurement Approach for Surface Flatness of Concrete Members. Structures 2024, 63, 106374.
57. Wang, C.; Li, Z.; Shi, J. Lightweight Image Super-Resolution with Adaptive Weighted Learning Network. arXiv 2019, arXiv:1904.02358.
58. Rakotonirina, N.C.; Rasoanaivo, A. ESRGAN+: Further Improving Enhanced Super-Resolution Generative Adversarial Network. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 3637–3641.
59. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1646–1654.
60. Xing, H.Z.; Zhang, Q.B.; Braithwaite, C.H.; Pan, B.; Zhao, J. High-Speed Photography and Digital Optical Measurement Techniques for Geomaterials: Fundamentals and Applications. Rock Mech. Rock Eng. 2017, 50, 1611–1659.
Figure 1. Schematic diagram of reference subset and target subset.
Figure 2. The influence of the $D_0$ value on the subset.
Figure 3. Geometric diagram of first-order shape function.
Figure 4. Geometric optical model of virtual stereo vision based on four planar mirrors.
Figure 5. Schematic diagram of image preprocessing for a virtual stereo-vision system.
Figure 6. Geometric optical model of virtual stereo vision based on a bi-prism.
Figure 7. Virtual stereo vision diagram based on a triangular prism [56].
Figure 8. MMRSRGAN architecture.
Figure 9. Schematic diagram of the sample tensile simulation experiment (a) and simulated lens design (b).
Figure 10. Schematic diagram of the measurement-systems simulation model. (a) BSS; (b) MVSS; (c) BVSS; (d) QVSS.
Figure 11. Accuracy-verification results for four measurement systems. (a) $\bar{e}$; (b) $\delta_d$.
Figure 12. Bar chart of relative error between camera calibration results and true values.
Figure 13. 3D-reconstruction cloud map of the sample surface.
Figure 14. Sample node-reconstruction error.
Figure 15. Reconstruction (a) and error evaluation (b) of sample deformation process by BSS.
Figure 16. The effect of the application of the MMRSRGAN model in QVSS. (a) ×2 scale; (b) ×4 scale; (c) ×8 scale.
Figure 17. $E_{pr}$ index of four measurement systems after the application of the MMRSRGAN model.