Article

Comparison of Deep Transfer Learning Models for the Quantification of Photoelastic Images

Department of Civil Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(2), 758; https://doi.org/10.3390/app14020758
Submission received: 22 November 2023 / Revised: 8 January 2024 / Accepted: 10 January 2024 / Published: 16 January 2024
(This article belongs to the Special Issue Advanced Studies in Optical Imaging and Sensing)

Featured Application

This research has pivotal applications in geotechnical and civil engineering fields, particularly in improving the reliability and precision of stress and strain analysis in granular materials, which can lead to more accurate predictions of soil behavior and further optimize the design and safety of infrastructure.

Abstract

In the realm of geotechnical engineering, understanding the mechanical behavior of soil particles under external forces is paramount. This study investigates how deep learning image analysis techniques, particularly transfer learning models such as VGG, ResNet, and DenseNet, can be used to analyze response images from reflective photoelastic soil particle models. We applied a total of six transfer learning models to the photoelastic response images and compared their validation results with those of existing quantitative evaluation techniques. The most suitable transfer learning model was identified using performance metrics such as the coefficient of determination, mean absolute error, and root mean square error.

1. Introduction

Photoelasticity is an experimental method that uses the optical phenomenon of birefringence to measure stress and strain within a material [1]. The technique capitalizes on the property of birefringence, whereby transparent substances exhibit double refraction of light under stress. By employing transparent materials and polarized light, the photoelastic method enables researchers to visualize stress distribution patterns in a material. The approach's popularity has grown due to its capability to offer comprehensive visual insights into the stress behavior of materials.
Initially, engineers employed the photoelastic technique to analyze stress in various infrastructures, such as dams and bridges. However, over time, the scope of photoelastic experiments expanded beyond structural analysis to include particle-based materials, leading to their relevance in geotechnical engineering experiments. Dantu [2] and Wakabayashi [3] demonstrated the visualization of force transmission in packed particle assemblies using photoelasticity. This opened doors for simulating granular materials as two-dimensional assemblies of disks, allowing researchers like Drescher and De Jong [4] to use photoelastic techniques to determine contact forces between disks.
Researchers such as Dyer [5] and Allersma [6] used photoelasticity to study problems involving large displacements, including shear box testing, cone penetration testing, and reinforcement pullout experiments. As a result, photoelasticity became a valuable tool for understanding the behavior of granular materials under various conditions. Since then, many more studies have addressed complex topics such as force-chain networks in granular materials [7,8,9], the effect of shearing rate on behavior [10], slip behavior at different packing densities [11], and jamming [12].
Despite the success of photoelasticity, a challenge emerged due to the requirement of using transparent materials for modeling. This limitation arose because the mechanical properties of transparent model grains differed from those of actual mineral grains found in soil particles. Mesnager [13] introduced the concept of reflective photoelastic coating to address this issue. This technique involves applying a photoelastic material with reflective properties to the model grains, enabling the use of non-transparent materials that better mimic the mechanical behavior of actual soil particles.
Zandman and others [14] made further advancements by exploring issues related to the reinforcing effect of the reflective photoelastic coating method. Ramesh [15] examined many aspects of the reflective technique, including the optical setup using reflection polariscopes, the stress- and strain-optic relations in the context of a reflective photoelastic coating, and the correction factors needed to apply the method correctly. Researchers discovered that the choice of adhesive for attaching the reflective photoelastic material to the samples significantly influenced the photoelastic response. Furthermore, the wavelength band of the light source and the setup of the measurement camera were identified as crucial considerations [16].
Recent research has seen progress with the emergence of digital photoelasticity, which enables the observation and analysis of RGB color information using polychromatic light. This replaces conventional monochromatic light and provides additional data for computing fringe orders accurately. Ajovalasit et al. [17,18,19] observed the photoelastic response using white light and analyzed it in terms of RGB colors. In addition to this, various methods have been employed in digital photoelasticity studies [20,21,22].
One notable line of recent work by Park and her colleagues [23,24,25] focused on the contact force network within granular particles, primarily utilizing RGB color data obtained with the reflective photoelastic coating method. Their research sheds light on how the contact force network evolves during loading, providing valuable insights into granular material behavior. However, limitations remain in quantitatively measuring contact forces for each particle due to fluctuations in the accuracy and precision of the reflective photoelastic coating measurements. This highlights the ongoing challenges and the need for further refinement in this area of research.
By employing the photoelastic technique, researchers analyze the changes in photoelastic response in relation to applied load levels and subsequently create calibration data from this analysis. This calibration data later serves as a reference for interpreting the photoelastic responses arising in complex problems. When particle models with a reflective photoelastic coating are used for experiments, as in Park's research, the photoelastic responses of individual particles may vary depending on how well the coating is attached. Considering the shear strength and viscosity of the adhesive used and checking the photoelastic responses of numerous particle models enhance the consistency of responses across individual particle models. However, achieving perfect consistency in the production environment of photoelastic particle models, including factors such as the cutting method for the photoelastic sheet, the quantity of adhesive used, the pressure applied during attachment, and the curing time, poses inherent challenges. Unless these factors are perfectly controlled, inconsistent photoelastic responses are likely to arise.
In addition to phase-shifting methods for extracting phase variation information [26,27], Fourier transform methods [28,29,30], and twelve-fringe photoelasticity [31,32], various other approaches have been proposed, including recent attempts at analysis using deep learning methods. Researchers in the field of image analysis have recognized the effectiveness of deep learning in extracting meaningful information from image data for purposes such as classification, recognition, and detection. Feng and his research team [33] proposed a method for analyzing fringe patterns based on deep learning. Subsequently, Sergazinov and Kramár [34] introduced a new approach to secure the large, labeled dataset required for training. Briñez-de León and his research team [35] proposed a dynamic photoelasticity testing method and analyzed patterns changing over time using artificial neural networks; in 2022, the same team developed a neural network model for analyzing digital photoelastic images and presented PhotoelastNet [36]. Around the same time, Bo Tao's research team [37] designed and presented a deep convolutional neural network model with an encoder-decoder structure.
In this study, we utilized transfer learning models to address the challenge posed by subtle differences in images obtained through reflective photoelastic experiments that are difficult to confirm visually. Factors such as the cutting method of the photoelastic material, the amount of adhesive used, the applied pressure during attachment, and the curing time contribute to these differences. Transfer learning has the advantage of utilizing knowledge from pre-trained models even when the amount of training data is limited. Because the performance of transfer learning depends strongly on the similarity between the problem to be solved and the pre-trained model, we trained various transfer learning models on the same photoelastic response images and compared the results to identify the most suitable model for the problem addressed in this research. Specifically, we used VGG [38], Inception [39], ResNet [40], DenseNet [41], and EfficientNet [42] to learn from photoelastic response images taken at different load levels. Subsequently, we compared the validation results of these models and evaluated their performance using metrics such as the coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE). Table 1 lists the abbreviations commonly used in this paper.

2. Materials and Method

2.1. Theoretical Background

Photoelastic materials change from optically isotropic to anisotropic when subjected to external forces. When light passes through a stressed photoelastic material, the refractive index changes and the incident beam splits into two refracted beams, even if the incident light has a single wavelength. This phenomenon is known as birefringence, or double refraction. Due to birefringence, the split beams of light propagate at different velocities, resulting in a relative time difference, often referred to as relative retardation. Relative retardation is expressed by the following equation:
$\delta = 2\pi f \left( \frac{h}{v_1} - \frac{h}{v_2} \right) = \frac{2\pi h c}{\lambda} \left( \frac{1}{v_1} - \frac{1}{v_2} \right) = \frac{2\pi h}{\lambda} \left( n_1 - n_2 \right)$ (1)
where:
  • $f$ = the frequency of the incident light.
  • $h$ = the thickness of the photoelastic material.
  • $v_1$, $v_2$ = the velocities of the two refracted light waves.
  • $c$ = the velocity of light.
  • $\lambda$ = the wavelength of the incident light.
  • $n_1$, $n_2$ = the refractive indices of the two light waves.
Maxwell [43] established the following definition for the relationship between refractive index and stress:
$n_1 - n_2 = C \left( \sigma_1 - \sigma_2 \right)$ (2)
where:
  • $\sigma_1 - \sigma_2$ = the principal stress difference.
  • $C$ = the relative stress-optic coefficient.
The two equations above can be summarized as follows:
$\delta = \frac{2\pi h}{\lambda} C \left( \sigma_1 - \sigma_2 \right)$ (3)
On the other hand, the light intensity, $I$, can be calculated using the following formula:
$I = I_a \sin^2 \frac{\delta}{2}$ (4)
where:
  • $I_a$ = the brightness of the incident light.
The light intensity, I, varies in a repeating pattern of brightness and darkness according to the sine function. “Fringes” refers to the stripes that appear during this repetition, and the “fringe order” counts the number of fringes. For the light intensity, I, to become 0, the relative retardation, δ, must be an integer multiple of 2π. The fringe order, N, expresses this relationship in the following manner:
$I = 0; \quad \delta = 2N\pi \quad (N = 0, 1, 2, \ldots)$ (5)
Ultimately, when Equation (5) is substituted into the previously mentioned Equation (3), we can summarize the relationship between the principal stress difference and the fringe order as follows:
$\sigma_1 - \sigma_2 = \frac{N \lambda}{h C}$ (6)
In other words, to calculate the changes in stress within a photoelastic material, the fringe order must be evaluated. Previously, observers quantified the photoelastic response by counting fringes through direct visual inspection. With the advancement of digital image analysis, however, it has become possible to acquire and quantify data at the pixel level of photoelastic images. The following sections introduce Wang’s method and Park and Jung’s method.
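As a quick numerical illustration of Equation (6), the short Python sketch below converts an observed fringe order into a principal stress difference. The material constants are illustrative placeholders, not values measured in this study.

```python
# A minimal numerical sketch of Equation (6): recovering the principal
# stress difference from an observed fringe order. All constants below
# are illustrative placeholders, not values from this study.

def principal_stress_difference(N, wavelength, thickness, C):
    """Return sigma_1 - sigma_2 (Pa) from the fringe order N via Equation (6)."""
    return N * wavelength / (thickness * C)

# Example: fringe order 2, green light (546 nm), a 1 mm thick photoelastic
# sheet, and an assumed stress-optic coefficient of 4e-11 m^2/N.
delta_sigma = principal_stress_difference(N=2, wavelength=546e-9,
                                          thickness=1e-3, C=4e-11)
print(f"sigma_1 - sigma_2 = {delta_sigma / 1e6:.1f} MPa")  # -> 27.3 MPa
```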

2.2. Wang’s Method

Wang [44] conducted experiments using disks of various sizes and shapes made from photoelastic material to analyze the response to shear and calculated the inter-particle forces using the squared gradient method, also known as the G2 method. The G2 method determines the discrete squared gradient of light intensity at a pixel with coordinates (x, y), using its neighboring pixels as shown in Figure 1:
$G^2(x, y) = \frac{1}{4} \left[ I(x+1, y) - I(x-1, y) \right]^2 + \frac{1}{4} \left[ I(x, y+1) - I(x, y-1) \right]^2 + \frac{1}{8} \left[ I(x+1, y+1) - I(x-1, y-1) \right]^2 + \frac{1}{8} \left[ I(x+1, y-1) - I(x-1, y+1) \right]^2$ (7)
where:
  • $I(x, y)$ = the light intensity at the pixel position $(x, y)$.
Figure 1. Pixel point illustration for G2 calculation in a photoelastic image.
The results of applying the G2 method to the photoelastic response images obtained in this study are shown in Figure 2. Figure 2 (right) displays ring patterns of uniform color, showing that large G2 values are computed in regions where the difference in light intensity becomes pronounced due to color changes. The fringe order, N, increases periodically each time the light intensity, I, passes through zero, and the ring patterns reflect this periodicity.
The G2 method is applied only to monochrome images in accordance with traditional photoelastic methods, so experiments must be conducted using monochromatic light, or color images must be converted into monochrome images.
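As a concrete sketch of Equation (7), the following vectorized NumPy implementation computes G2 over a grayscale image. The axis convention and variable names are our own assumptions, and border pixels are simply left at zero:

```python
import numpy as np

def g_squared(image):
    """Discrete squared gradient (G2) of a grayscale image, per Equation (7).
    `image` is a 2-D array of light intensities; border pixels are left at 0."""
    I = image.astype(float)
    G2 = np.zeros_like(I)
    G2[1:-1, 1:-1] = (
        0.25 * (I[2:, 1:-1] - I[:-2, 1:-1]) ** 2    # I(x+1, y) - I(x-1, y)
        + 0.25 * (I[1:-1, 2:] - I[1:-1, :-2]) ** 2  # I(x, y+1) - I(x, y-1)
        + 0.125 * (I[2:, 2:] - I[:-2, :-2]) ** 2    # main diagonal pair
        + 0.125 * (I[2:, :-2] - I[:-2, 2:]) ** 2    # anti-diagonal pair
    )
    return G2
```

Roughly speaking, the G2 values aggregated over a particle are then calibrated against the applied force in Wang’s method.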

2.3. Park and Jung’s Method

Park and Jung [24] proposed using digital RGB photoelastic measurements to quantify the strength of contact force chains within a particle assembly. By tracking the changes in R, G, and B under axial load, periodic fluctuations in intensity can be observed. Park and Jung divided the load range into three sections to obtain linear equations; applying each linear equation requires the mean and standard deviation of the R, G, and B values. Equations (8)–(10) show the linear equations derived from the photoelastic experiment images obtained in this study, following the method of Park and Jung.
$F_P = 1.676 \mu_R - 175.9, \quad \text{if } \mu_G - \mu_R > 23 \text{ and } \sigma_B < 57$ (8)
$F_P = -2.892 \mu_R + 813.7, \quad \text{if } \mu_G - \mu_R < 23 \text{ and } \sigma_B > 57$ (9)
$F_P = 0.1143 \mu_R - 1256.5, \quad \text{if } \mu_G - \mu_R < 23 \text{ and } \sigma_B < 57$ (10)
where:
  • $F_P$ = the particle force.
  • $\mu_R$ = the mean of the red intensities.
  • $\mu_G$ = the mean of the green intensities.
  • $\sigma_B$ = the standard deviation of the blue intensities.
Because it utilizes all of the RGB color information, Park and Jung’s method calculates contact forces accurately, but a new set of linear equations must be established whenever the experimental method changes. A further limitation is that the three linear equations alone cannot track changes if the axial load exceeds the third section and new patterns of RGB value change appear.
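The piecewise rule in Equations (8)–(10) can be expressed compactly in code. The sketch below is a minimal Python rendering under the sign reconstruction above; the coefficients are specific to this study’s calibration, and the input format of the helper is an assumption:

```python
import numpy as np

def particle_force(rgb_pixels):
    """Estimate the particle force F_P from the RGB pixels of one particle
    using the piecewise-linear rules of Equations (8)-(10).
    `rgb_pixels`: array of shape (n_pixels, 3) with R, G, B values in 0-255."""
    mu_r = rgb_pixels[:, 0].mean()
    mu_g = rgb_pixels[:, 1].mean()
    sigma_b = rgb_pixels[:, 2].std()

    if mu_g - mu_r > 23 and sigma_b < 57:   # first load section, Eq. (8)
        return 1.676 * mu_r - 175.9
    if mu_g - mu_r < 23 and sigma_b > 57:   # second load section, Eq. (9)
        return -2.892 * mu_r + 813.7
    if mu_g - mu_r < 23 and sigma_b < 57:   # third load section, Eq. (10)
        return 0.1143 * mu_r - 1256.5
    return None  # outside the calibrated sections; no equation applies
```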

2.4. Deep Transfer Learning Model Method

In transfer learning, a neural network for a similar or new domain can achieve high accuracy by copying some of the weights trained on a large dataset such as ImageNet [45] and then retraining from that state. While deep learning methods generally require a large amount of data for training, transfer learning has the advantage of achieving high accuracy even when data are limited. When the number of photoelastic images that can be obtained experimentally is limited, transfer learning is therefore a suitable learning method.
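As a minimal sketch of this idea (the paper does not specify its framework, so Keras is assumed here), a pre-trained backbone can be frozen and topped with a small regression head that predicts the load:

```python
# A minimal transfer learning sketch: a VGG16 backbone keeps its ImageNet
# weights, and only a small regression head is trained on the new images.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # single output: the predicted axial load (N)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```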
Table 2 shows the representative transfer learning models applied in this study to solve the regression problem based on photoelastic images. A research team from the University of Oxford first introduced the VGG model [38] in 2014. The model is utilized for various computer vision tasks, including image classification, object detection, and image segmentation. It is characterized by its simple structure, primarily consisting of 3 × 3 convolutional layers and 2 × 2 max-pooling layers. VGG-16 and VGG-19 indicate the number of weight layers, with 16 and 19 layers, respectively. However, the model requires a large amount of computational power and memory, making it challenging to run in resource-limited environments.
Google developed Inception [39], a convolutional neural network model, in 2015. It is also known as GoogLeNet. It uses a parallel convolutional structure to apply convolutional filters of various sizes in parallel, extracting features at different scales. While it is computationally efficient, its complex structure makes it difficult to interpret. It also requires a significant amount of memory and computational power, necessitating appropriate hyperparameter tuning. InceptionV3 is an improved version of previous models, breaking down the 5 × 5 convolutional filters into smaller 3 × 3 filters to increase computational efficiency.
Microsoft Research developed ResNet (Residual Network) [40], a convolutional neural network architecture, in 2016. ResNet addresses the learning problems in deep networks by adding the input to the output of each layer using a structure called residual connections, making it easier to backpropagate gradients and mitigating the vanishing gradient problem. However, it may require significant computational power and memory for deep models. An improved model, ResNet152V2, introduced in 2016, features a deep network structure comprising a total of 152 layers. The pre-activation structure of ResNet152V2 applies the activation function before the convolutional operation, further stabilizing the learning process.
In 2017, researchers introduced DenseNet [41], a convolutional neural network architecture characterized by dense connections where each layer is connected to all its preceding layers. This allows for higher performance with fewer parameters, mitigates the vanishing gradient problem, and enables efficient learning in deeper networks. However, the dense connections can make the model complex and difficult to interpret. DenseNet201 is a variant of the DenseNet architecture, consisting of a total of 201 layers. Its deeper network structure allows for the learning of more complex features.
Introduced in 2019, EfficientNet [42] is designed to prioritize model size and computational efficiency. It uses compound scaling to simultaneously adjust the width, depth, and image resolution, achieving high performance with relatively fewer parameters and computational resources. This gives it the advantage of excellent computational efficiency. EfficientNetB7 is one of the most complex versions of the EfficientNet architecture, having the highest number of layers, nodes, and input image sizes. It is the model in the EfficientNet series that can be expected to deliver the highest performance.

2.5. Method for Applying the Transfer Learning Model

Figure 3 shows the setup of the photoelastic experiment. We prepared the specimens by attaching a photoelastic sheet (Vishay, Malvern, PA, USA) with a diameter of 8 mm and a thickness of 1 mm to the surface of a toroidal model made of brass with a diameter of 10 mm and a height of 15 mm. We positioned the photoelastic toroidal model in a loading frame under two-point contact conditions and conducted uniaxial compression tests. A digital camera (Canon EOS 650D; Tokyo, Japan) captured photoelastic responses each time the load increased by 100 N, from 0 N to 500 N. We tested a total of 10 model particles under the same conditions and obtained photoelastic images. Therefore, we acquired a total of 60 photoelastic images from 10 model particles under six different load conditions.
Figure 4 shows the process of applying the deep transfer learning models. The 60 photoelastic images obtained from the uniaxial compression tests underwent data augmentation for training: each photoelastic image was slightly altered in size and angle to generate 20 augmented variants, yielding 200 data points per load stage and a total of 1200 images. We preprocessed the prepared images according to each transfer learning model and then divided them into training and validation datasets in an 8:2 ratio. We trained the six transfer learning models listed in Table 2 on these datasets, limiting the number of training epochs to 10 and setting the batch size to 5 in order to identify the superior transfer learning models within a short training time. After training on the 1200 images, each model predicted the stress level for an input photoelastic response image. The training results were compared across models using key evaluation metrics for prediction models: R2, MAE, and RMSE.
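The following sketch hedges a plausible version of this pipeline, reusing the Keras `model` defined in Section 2.4. The arrays `images` (the 60 photoelastic photographs, assumed already resized to the model’s 224 × 224 × 3 input) and `loads` (their axial loads in N) are hypothetical names, not artifacts from the paper:

```python
# A hedged sketch of the Figure 4 pipeline: augmentation, preprocessing,
# an 8:2 split, and a short training run (10 epochs, batch size 5).
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Slight random changes in size and angle, as in the augmentation step.
augmenter = ImageDataGenerator(rotation_range=5, zoom_range=0.05)

aug_images, aug_loads = [], []
for img, load in zip(images, loads):  # 20 variants each: 60 -> 1200 samples
    for _ in range(20):
        aug_images.append(augmenter.random_transform(img))
        aug_loads.append(load)

X = preprocess_input(np.array(aug_images, dtype="float32"))
y = np.array(aug_loads, dtype="float32")

# 8:2 split into training and validation data, then a short training run.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                    epochs=10, batch_size=5)
```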

3. Results and Discussion

Figure 5 illustrates the changes in photoelastic responses observed at different load stages. While visual observation might suggest that the photoelastic responses are similar among specimens at the same load level, quantitative evaluation based on pixel-level light intensity and RGB data reveals distinct differences.

3.1. Results of Previously Presented Quantitative Evaluation Methods

Both Wang’s G2 method and Park and Jung’s method were used to analyze the results of the photoelastic experiment conducted in this study, as shown in Figure 6. The large deviation between the actual axial load and the evaluated load significantly contributes to the low coefficient of determination. When using reflective photoelastic technology, the process of attaching the photoelastic material to the particle model is inevitable. Many factors, such as the shear strength, viscosity, amount, and curing time of the adhesive used, as well as the pressure level during attachment and the roughness of the attachment surface, can affect the quality of this attachment. Therefore, manually made reflective photoelastic toroidal models can inevitably show uneven attachment quality.
Previous research used photoelastic materials precisely manufactured by machine and cut directly into disk-shaped particle models, and thus did not encounter inconsistent photoelastic responses. As a result, such studies can achieve high accuracy when evaluated using the G2 method or Park and Jung’s method. However, these methods proved entirely inadequate for evaluating the new form of photoelastic toroidal model used in this study.

3.2. Results of Transfer Learning Models

Figure 7 shows the validation results of the predicted load against the applied load for the six transfer learning models tested in this study. The graph also includes the R2 value, which indicates how well the model explains the variability in the data and is calculated according to Equation (11). The coefficient of determination ranges between 0 and 1, with values closer to 1 indicating a better fit.
$R^2 = \frac{\sum_{i=1}^{n} \left( \hat{y}_i - \bar{y} \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}$ (11)
where:
  • $n$ = the total number of observations.
  • $\hat{y}_i$ = the predicted value.
  • $\bar{y}$ = the mean of the observed values.
  • $y_i$ = the actual observed value.
Figure 7. Evaluation results of validation data for transfer learning models: (a) VGG16; (b) VGG19; (c) InceptionV3; (d) ResNet152V2; (e) DenseNet201; (f) EfficientNetB7.
All six transfer learning models displayed coefficients of determination of approximately 0.9 or higher, demonstrating significantly greater accuracy than the results from Wang’s method and Park and Jung’s method shown in Figure 6. Among them, the VGG16 and VGG19 models showed exceptional performance, with coefficients of determination of 0.984 and 0.988, respectively.
InceptionV3, ResNet152V2, and DenseNet201 also showed high coefficients of determination, with values of 0.932, 0.957, and 0.946, respectively. However, we observed a noticeable increase in the deviation of the predicted data when the axial load levels ranged between 100 and 400 N. The validation data in this study are structured in the same six steps as the training data, increasing by 100 N from a minimum of 0 N to a maximum of 500 N. Applying a photoelastic response image at a load level between 200 and 300 N, such as 270 N, to the InceptionV3 model as validation data would therefore produce a significantly larger error than with the VGG models. The coefficient of determination for EfficientNetB7 was the lowest among the six models tested, at 0.886, and even its predictions at 0 N axial load showed a much larger deviation than those of the other models.
Figure 8 visualizes the learning process, showing the loss function values for the training and validation data against epochs, where one epoch represents a complete pass over the entire dataset. The loss function used here is the MSE, commonly used for regression problems, which is calculated according to Equation (12).
$MSE = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$ (12)
By observing the trend of loss values against the number of epochs, one can assess the learning status, such as the presence of overfitting or underfitting and the stability of the learning. Except for EfficientNetB7, the models generally showed a decreasing trend in both training and validation loss as the number of epochs increased, indicating a favorable learning status. In contrast, EfficientNetB7 exhibited significant fluctuations in validation loss, suggesting the possibility of overfitting and an unstable learning state.
Table 3 summarizes the performance metrics of the transfer learning models, allowing for a clearer comparison. The MAE and RMSE are calculated according to Equations (13) and (14). The mean absolute error intuitively represents the average deviation of the model’s predictions, while the root mean square error is more sensitive to larger errors because it squares the deviations. Both metrics have the same units as the predicted quantity and are simple to compute, making them widely used performance metrics for regression models.
$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$ (13)
$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$ (14)
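A minimal sketch of these metrics in Python follows; note that Equation (11) uses the explained-variance form of R2, and the variable names (`y_val`, `X_val`) are the assumed ones from Section 2.5:

```python
# Evaluation metrics from Equations (11), (13), and (14).
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (R2, MAE, RMSE) for 1-D arrays of actual/predicted loads."""
    y_bar = y_true.mean()
    r2 = np.sum((y_pred - y_bar) ** 2) / np.sum((y_true - y_bar) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return r2, mae, rmse

# Example with the validation predictions (names assumed):
# r2, mae, rmse = regression_metrics(y_val, model.predict(X_val).ravel())
```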
As expected, a higher coefficient of determination corresponds to smaller MAE and RMSE values, reflecting smaller deviations in the predictions. VGG19’s MAE was 14.127 N and its RMSE was 18.834 N, making it the best-performing model among those tested. Based on the performance metrics, the models rank in the following order: VGG19, VGG16, ResNet152V2, DenseNet201, InceptionV3, and EfficientNetB7. The R2, MAE, and RMSE calculated for the training/calibration dataset rank the models in the same order as those calculated for the test/validation dataset.
For the VGG model, which was evaluated as the most superior in performance, both the MAE and the RMSE were calculated to be over 10 N. This makes it challenging to accurately predict the load level for photoelastic images that change in increments of 10 N. This inherent limitation can be attributed to the fact that only photoelastic images with load increments of 100 N were used for training. Therefore, to obtain a more precise quantitative evaluation model for photoelastic responses, training data based on more refined load increments must be secured.
The VGG model, the first transfer learning model introduced in this study, has the simplest network architecture. Its superior performance compared with more recently introduced transfer learning models with more complex network structures can be examined from various perspectives. The simplicity of the VGG architecture, composed of 3 × 3 convolutional filters and pooling layers, can sometimes result in a more robust and easier-to-train model on specific datasets. Furthermore, when the dataset to be trained on shares similarities with the pre-trained data, a pre-trained model can greatly enhance feature extraction in transfer learning. The performance of deep learning models therefore depends heavily on the characteristics of the dataset, and in this particular study, the photoelastic response images used were well suited to training with the VGG model, which is why it exhibited the best performance. Notably, Briñez-de León et al. [46] proposed the StressNet model, a convolutional autoencoder architecture inspired by VGG16, for analyzing photoelastic images.
Several limitations of this study need to be acknowledged. First, the proposed transfer learning approach quantifies the photoelastic response to estimate the overall force applied to the particles, but it provides no information about the direction and magnitude of individual stress components. Researchers [47,48,49] have proposed stress separation techniques in the literature to obtain individual stress components; incorporating individual stress components into the training data for transfer learning models is similarly worth exploring. Secondly, this study limited the number of training iterations for the transfer learning models to focus on the accuracy of results obtainable within a short time. As mentioned in Section 2.4, models like VGG and ResNet demand significant computational resources, so a comprehensive analysis of model performance should consider a wider range of complex problems and settings that consume varying amounts of computational resources. Lastly, the method proposed in this study is nonparametric and relies on limited data; further investigation should determine whether obtaining a sufficient number of samples and employing parametric methods yield different results. These limitations must be taken into account when interpreting the results and considering the applicability of the proposed method in practical scenarios.

4. Conclusions

In this study, we applied six deep transfer learning models to quantitatively analyze images obtained from photoelastic experiments on particle models. Among the models tested, the VGG19 model exhibited the most promising results, with an R2 of 0.988, an MAE of 14.127 N, and an RMSE of 18.834 N on the validation data.
Previously established quantitative evaluation methods for photoelastic responses, such as Wang’s method and Park and Jung’s method, can predict load levels with high accuracy under specific conditions. However, their accuracy significantly dropped when applied to the reflective photoelastic particle models used in this study. The inherent variability in the attachment quality of the reflective photoelastic material to each particle model is responsible for this discrepancy.
The deep transfer learning models demonstrated their capability to quantitatively evaluate load levels even under these challenging conditions. All six models (VGG16, VGG19, InceptionV3, ResNet152V2, DenseNet201, and EfficientNetB7) achieved R2 values of approximately 0.9 or higher, outperforming the previously established methods for evaluating photoelastic responses.
The VGG model outperformed other, more complex transfer learning models in this study. VGG’s simplicity in architecture, consisting of 3 × 3 convolutional filters and pooling layers, can make it more robust and easier to train on specific datasets. Additionally, when a dataset is similar to pre-trained data, using a pre-trained model for transfer learning is more effective. The photoelastic response images in this study were well suited for training with the VGG model, explaining its superior performance. Similarly, another study used a VGG16-inspired architecture called StressNet for analyzing photoelastic images.
The exploration of deep learning models in the analysis of photoelastic particle models has shown promising results. Although this study focused on applying learning models to a limited dataset, we anticipate that a more diverse and extensive dataset will enable the development of models that provide even higher accuracy in the future. As geotechnical engineering continues to evolve, the integration of advanced computational methods like deep learning will undoubtedly play a pivotal role in future research and applications.

Author Contributions

Conceptualization, B.H.N. and Y.-H.J.; Data curation, B.H.N.; Formal analysis, Y.-H.J.; Funding acquisition, Y.-H.J.; Investigation, B.H.N.; Methodology, S.K.; Project administration, Y.-H.J.; Resources, Y.-H.J.; Software, S.K.; Supervision, B.H.N. and Y.-H.J.; Validation, S.K., B.H.N. and Y.-H.J.; Visualization, S.K. and B.H.N.; Writing—original draft, S.K.; Writing—review and editing, B.H.N. and Y.-H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (RS-2023-00254093).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, Y.-H.J., upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Frocht, M.M. Photoelasticity: The Selected Scientific Papers of MM Frocht; Pergamon: Oxford, UK, 1969. [Google Scholar]
  2. Dantu, P. Contribution à l’étude mécanique et géométrique des milieux pulvérulents. In Proceedings of the 4th International Conference on Soil Mechanics and Foundation Engineering, London, UK, 12–24 August 1957; Volume 1, pp. 144–148. [Google Scholar]
  3. Wakabayashi, T. Photoelastic method for determination of stress in powdered mass. In Proceedings of the 7th Japan National Congress for Applied Mechanics; JNCTAM: Tokyo, Japan, 1957; Volume I-34, pp. 153–158. [Google Scholar]
  4. Drescher, A.; De Jong, G.D.J. Photoelastic verification of a mechanical model for the flow of a granular material. J. Mech. Phys. Solids 1972, 20, 337–340. [Google Scholar] [CrossRef]
  5. Dyer, M. Observation of the Stress Distribution in Crushed Glass with Applications to Soil Reinforcement. Ph.D. Thesis, University of Oxford, Oxford, UK, 1985. [Google Scholar]
  6. Allersma, H.G.B. Optical Analysis of Stress and Strain in Photoelastic Particle Assemblies. Ph.D. Thesis, Delft University of Technology, Delft, The Netherlands, 1987. [Google Scholar]
  7. Daniels, K.E.; Kollmer, J.E.; Puckett, J.G. Photoelastic force measurements in granular materials. Rev. Sci. Instrum. 2017, 88, 051808. [Google Scholar] [CrossRef] [PubMed]
  8. Hariprasad, M.P.; Ramesh, K. Analysis of contact zones from whole field isochromatics using reflection photoelasticity. Opt. Lasers Eng. 2018, 105, 86–92. [Google Scholar] [CrossRef]
  9. Abed Zadeh, A.; Bares, J.; Brzinski, T.A.; Daniels, K.E.; Dijksman, J.; Docquier, N.; Everitt, H.O.; Kollmer, J.E.; Lantsoght, O.; Wang, D.; et al. Enlightening force chains: A review of photoelasticimetry in granular matter. Granul. Matter 2019, 21, 83. [Google Scholar] [CrossRef]
  10. Hartley, R.R.; Behringer, R.P. Logarithmic rate dependence of force networks in sheared granular materials. Nature 2003, 421, 928–931. [Google Scholar] [CrossRef]
  11. Hayman, N.W.; Ducloue, L.; Foco, K.L.; Daniels, K.E. Granular controls on periodicity of stick-slip events: Kinematics and force-chains in an experimental fault. Pure Appl. Geophys. 2011, 168, 2239–2257. [Google Scholar] [CrossRef]
  12. Behringer, R.P.; Chakraborty, B. The physics of jamming for granular materials: A review. Rep. Prog. Phys. 2018, 82, 012601. [Google Scholar] [CrossRef] [PubMed]
  13. Mesnager, M. Sur la determination optique des tensions interieures dans les solides a trois dimensions. Comptes Rendus 1930, 190, 1249. [Google Scholar]
  14. Zandman, F.; Redner, S.S.; Riegner, E.I. Reinforcing effect of birefringent coatings. Exp. Mech. 1962, 2, 55–64. [Google Scholar] [CrossRef]
  15. Ramesh, K. Digital Photoelasticity: Advanced Techniques and Applications, 1st ed.; Springer-Verlag: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  16. Kim, S.M.; Nam, B.H.; Jung, Y.H. Evaluating variability in reflective photoelasticity: Focus on adhesives, light sources, and camera setup. Appl. Sci. 2023, 13, 10628. [Google Scholar] [CrossRef]
  17. Ajovalasit, A.; Barone, S.; Petrucci, G. Towards RGB photoelasticity: Full-field automated photoelasticity in white light. Exp. Mech. 1995, 35, 193–200. [Google Scholar] [CrossRef]
  18. Ramesh, K.; Deshmukh, S.S. Three fringe photoelasticity-use of colour image processing hardware to automate ordering of isochromatics. Strain 1996, 32, 79–86. [Google Scholar] [CrossRef]
  19. Ajovalasit, A.; Petrucci, G.; Scafidi, M. Photoelastic analysis of edge residual stresses in glass by automated “test fringes” methods. Exp. Mech. 2012, 52, 1057–1066. [Google Scholar] [CrossRef]
  20. Ajovalasit, A.; Petrucci, G.; Scafidi, M. Review of RGB photoelasticity. Opt. Lasers Eng. 2015, 68, 58–73. [Google Scholar] [CrossRef]
  21. Briñez-de León, J.C.; Restrepo, M.A.; Branch, B.J.W. Computational analysis of Bayer colour filter arrays and demosaicking algorithms in digital photoelasticity. Opt. Lasers Eng. 2019, 122, 195–208. [Google Scholar] [CrossRef]
  22. Ramesh, K.; Sasikumar, S. Digital photoelasticity: Recent developments and diverse applications. Opt. Lasers Eng. 2020, 135, 106186. [Google Scholar] [CrossRef]
  23. Park, K.H.; Jung, Y.H.; Kwak, T.Y. Effect of initial granular structure on the evolution of contact force chains. Appl. Sci. 2019, 9, 4735. [Google Scholar] [CrossRef]
  24. Park, K.H.; Jung, Y.H. Quantitative detection of contact force chains in a model particle assembly using digital RGB photoelastic measurements. KSCE J. Civ. Eng. 2020, 24, 63–72. [Google Scholar] [CrossRef]
  25. Park, K.H.; Baek, S.H.; Jung, Y.H. Investigation of arch structure of granular assembly in the trapdoor test using digital RGB photoelastic analysis. Powder Technol. 2020, 366, 560–570. [Google Scholar] [CrossRef]
  26. Hecker, F.W.; Abeln, H. Digital phase-shifting photoelasticity. In Proceedings of the 14th Congress of the International Commission for Optics, Quebec, QC, Canada, 24–28 August 1987; Volume 813, pp. 97–98. [Google Scholar] [CrossRef]
  27. Briñez, J.C.; Martínez, A.R.; Branch, J.W. Computational hybrid phase shifting technique applied to digital photoelasticity. Optik 2018, 157, 287–297. [Google Scholar] [CrossRef]
  28. Kemao, Q. Two-dimensional windowed Fourier transform for fringe pattern analysis: Principles, applications and implementations. Opt. Lasers Eng. 2007, 45, 304–317. [Google Scholar] [CrossRef]
  29. Huang, L.; Kemao, Q.; Pan, B.; Asundi, A.K. Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry. Opt. Lasers Eng. 2010, 48, 141–148. [Google Scholar] [CrossRef]
  30. Zhang, Z.; Jing, Z.; Wang, Z.; Kuang, D. Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase calculation at discontinuities in fringe projection profilometry. Opt. Lasers Eng. 2012, 50, 1152–1160. [Google Scholar] [CrossRef]
  31. Pandey, A.; Ramesh, K. Development of a new normalization technique for twelve fringe photoelasticity (TFP). In Advancement of Optical Methods & Digital Image Correlation in Experimental Mechanics, Volume 3: Proceedings of the 2018 Annual Conference on Experimental and Applied Mechanics and Lasers in Engineering; Springer: Cham, Switzerland, 2018; Volume 3, pp. 177–180. [Google Scholar] [CrossRef]
  32. Sasikumar, S.; Ramesh, K. Applicability of colour transfer techniques in Twelve fringe photoelasticity (TFP). Opt. Lasers Eng. 2020, 127, 105963. [Google Scholar] [CrossRef]
  33. Feng, S.; Chen, Q.; Gu, G.; Tao, T.; Zhang, L.; Hu, Y.; Yin, W.; Zuo, C. Fringe pattern analysis using deep learning. Adv. Photonics 2019, 1, 025001. [Google Scholar] [CrossRef]
  34. Sergazinov, R.; Kramár, M. Machine learning approach to force reconstruction in photoelastic materials. Mach. Learn. Sci. Technol. 2021, 2, 045030. [Google Scholar] [CrossRef]
  35. Briñez-de León, J.C.; Rico, G.M.; Branch, J.W.; Restrepo, M.A. Pattern recognition based strategy to evaluate the stress field from dynamic photoelasticity experiments. Opt. Photonics Inf. Process. XIV 2020, 11509, 112–126. [Google Scholar] [CrossRef]
  36. Briñez-de León, J.C.; Rico, G.M.; Restrepo, M.A. PhotoelastNet: A deep convolutional neural network for evaluating the stress field by using a single color photoelasticity image. Appl. Opt. 2022, 61, D50–D62. [Google Scholar] [CrossRef]
  37. Tao, B.; Wang, Y.; Qian, X.; Tong, X.; He, F.; Yao, W.; Chen, B.; Chen, B. Photoelastic stress field recovery using deep convolutional neural network. Front. Bioeng. Biotechnol. 2022, 10, 818112. [Google Scholar] [CrossRef]
  38. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014. [Google Scholar] [CrossRef]
  39. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  41. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  42. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  43. Maxwell, J.C. On the equilibrium of elastic solids. Proc. R. Soc. Edinb. 1851, 2, 294–296. [Google Scholar] [CrossRef]
  44. Wang, D. Response of Granular Materials to Shear: Origins of Shear Jamming, Particle Dynamics, and Effects of Particle Properties. Ph.D. Thesis, Duke University, Durham, NC, USA, 2018. [Google Scholar]
  45. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef]
  46. De León, J.C.B.; Rico-Garcia, M.; Branch, J.W.; Restrepo, M.A. StressNet: A deep convolutional neural network for recovering the stress field from isochromatic images. Appl. Digit. Image Process. XLIII 2020, 11510, 126–137. [Google Scholar] [CrossRef]
  47. Murakami, Y. StressNet: Development of system resolving all stress components in thermoelastic stress analysis. Trans. Jpn. Soc. Mech. Eng. 1995, 61, 2482. [Google Scholar] [CrossRef]
  48. Sakagami, T.; Kubo, S.; Fujinami, Y.; Kojima, Y. StressNet: Experimental stress separation technique using thermoelasticity and photoelasticity and its application to fracture mechanics. JSME Int. J. Ser. A Solid Mech. Mater. Eng. 2004, 47, 298–304. [Google Scholar] [CrossRef]
  49. Rodríguez, J. Comparison of stress separation procedures. experiments versus theoretical formulation. Eng. Solid Mech. 2022, 10, 153–164. [Google Scholar] [CrossRef]
Figure 2. Results of applying the G2 method to the photoelastic response image.
Figure 3. The setup of the photoelastic experiment.
Figure 4. The process of applying the deep transfer learning model.
Figure 5. Variations in photoelastic responses observed for each specimen in different load stages.
Figure 6. Evaluation results of photoelastic responses: (a) G2 method; (b) Park and Jung’s method.
Figure 8. Training and validation loss versus epoch: (a) VGG16; (b) VGG19; (c) InceptionV3; (d) ResNet152V2; (e) DenseNet201; (f) EfficientNetB7.
Table 1. List of abbreviations used in the paper.

Abbreviation | Definition
RGB | Red, green, and blue
R2 | Coefficient of determination
MSE | Mean square error
MAE | Mean absolute error
RMSE | Root mean square error
Table 2. Deep transfer learning models used in the study.

Model | Year of Introduction | Key Features | Advantages | Disadvantages
VGG-16 | 2014 | 3 × 3 convolutions, 2 × 2 max pooling, ReLU activation | Simple and easy-to-understand structure | High computational and memory requirements; risk of overfitting
VGG-19 | 2014 | Similar to VGG-16 but deeper | Slightly better performance than VGG-16 due to depth | High computational and memory requirements; risk of overfitting
InceptionV3 | 2015 | Parallel convolutional architecture, multiple filter sizes | Effectively learns features at multiple scales | Complex structure can be hard to interpret
ResNet152V2 | 2016 | Residual connections; V2 is an improved version | Efficient learning in deep networks; mitigates the vanishing gradient problem | May require significant computational resources
DenseNet201 | 2017 | Dense connections; each layer is connected to all previous layers | High parameter efficiency; less prone to the vanishing gradient problem | Complex structure; may have high memory usage
EfficientNetB7 | 2019 | Compound scaling method; high performance with smaller model size | High performance and efficiency with fewer parameters | Complex structure; proper scaling required for optimal performance
Table 3. Comparison results of performance metrics for transfer learning models.

Model | Training/Calibration R2 | Training MAE (N) | Training RMSE (N) | Test/Validation R2 | Validation MAE (N) | Validation RMSE (N)
VGG-16 | 0.986 | 15.176 | 20.458 | 0.984 | 15.792 | 21.520
VGG-19 | 0.986 | 14.738 | 19.844 | 0.988 | 14.127 | 18.834
InceptionV3 | 0.938 | 35.263 | 42.501 | 0.932 | 36.506 | 44.321
ResNet152V2 | 0.958 | 25.804 | 35.278 | 0.957 | 25.685 | 34.412
DenseNet201 | 0.946 | 31.899 | 39.586 | 0.946 | 31.920 | 39.212
EfficientNetB7 | 0.896 | 44.375 | 54.967 | 0.886 | 48.624 | 58.081
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
