Article

Imaging of Insect Hole in Living Tree Trunk Based on Joint Driven Algorithm of Electromagnetic Inverse Scattering

Department of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9840; https://doi.org/10.3390/s22249840
Submission received: 29 October 2022 / Revised: 7 December 2022 / Accepted: 8 December 2022 / Published: 14 December 2022

Abstract

Trunk-boring pests are among the most damaging tree pests. Trees eroded by them are blocked in their transport of nutrients and water and either wither and die or are broken by strong winds. Most of these pests are social and distributed in communities inside trees, yet it is difficult to tell from the outside whether a tree is infested. A new method for the non-invasive detection of tree interiors, based on electromagnetic inverse scattering, is proposed to identify trees eroded by trunk pests. Scattered field data are obtained by an electromagnetic wave receiver, and a Joint-Driven algorithm is proposed to image these data and determine the extent and location of pest erosion in the trunk. This imaging method effectively solves the problem of unclear imaging in the xylem of living trees caused by the small area of the pest community. The proposed Joint-Driven algorithm achieves accurate imaging at a ratio of pest-community radius to living-tree radius of 1:60 under noise-doped conditions, reduces the time cost and computational complexity of detecting internal tree defects, and improves the clarity and accuracy of the inverted images.

1. Introduction

According to the Global Forest Resources Assessment report released by the FAO in 2020, there are 4.06 billion hectares of forest in the world, and since 2015 an average of 10 million hectares of forestland has disappeared every year. Annual outbreaks of forest pests damage about 35 million hectares of forest. Some insects eat leaves and branches, but trunk-boring pests such as the red turpentine beetle live inside the trunk and cannot be found in a timely manner, causing tree deaths and economic losses. It is therefore necessary to detect pest communities in the xylem of living trees in order to prevent and control pests and diseases promptly and protect forest resources.
Mainstream methods for detecting internal defects in living tree trunks include stress wave, ultrasonic, and computed tomography (CT) scanning [1,2,3]. However, most of these methods have drawbacks [4,5]. The stress wave method, by the nature of its measurement, requires nailing sensors into the trunk [6]; this damages the tree and so is not non-invasive. The ultrasonic inspection process is susceptible to interference from the external environment, and coupling agents may cause environmental pollution [7]. CT equipment is costly and can expose researchers to radiation hazards [8,9]. Although nuclear magnetic resonance imaging has proven to be one of the most accurate and powerful characterization techniques in applications such as the diagnosis of wounded branch structures in beech trees [10], its high cost makes it unfeasible in most applications.
At present, electromagnetic inverse scattering is a good method for non-invasive testing. Electromagnetic inverse scattering (EMIS) [11] refers to the use of measured field data to invert the electrical property parameters of a scatterer in the detection region. A large body of research has accumulated on the EMIS problem, which can be broadly classified into two categories. The first linearizes the nonlinear problem and is represented by the Born and Rytov approximations [12,13], which give very good imaging results for scatterers with low dielectric constants. The second category comprises nonlinear methods [14], which improve inversion accuracy through continuous iteration; the main methods are Contrast Source Inversion (CSI) and the Subspace Optimization Method (SOM) [15,16]. Although these nonlinear iterative methods improve accuracy compared to linear approximation methods, they have disadvantages such as sensitivity to initial values and slow convergence [17]. See Table 1 for details.
Although electromagnetic inverse scattering has a wide range of applications in engineering measurement and medical fields [18,19], when it is used to detect the interior of trees we find that, for larger trees, the relatively smaller proportion of the pest-community area makes inversion imaging more difficult. This is because electromagnetic inverse scattering is nonlinear, so the solver easily falls into an erroneous solution far from the accurate one. Moreover, the scattering equation is highly ill-conditioned; that is, a small error in the input data may cause a great change in the output. Traditional methods, whether iterative or non-iterative, lack the data-processing capacity to retrieve a model in which the pest community occupies only a small proportion of the cross-section.
In this paper, a Deep Convolutional network is improved by studying the Contrast Source Inversion algorithm, and a Super-Resolution network is subsequently introduced. A Joint-Driven Super-Resolution algorithm with low computational load and high resolution is proposed to handle the small proportion of pest communities in living trees.

2. Electromagnetic Inverse Scattering Formulation

The algorithm is tested from a two-dimensional (2-D) perspective [20]; Figure 1 shows the model legend of our EMIS setup, constructed as a living tree–pest model. Starting from Maxwell's equations [21], we construct the scattering equations to obtain the parameter-optimization framework. To solve the integral equations, we investigate the two major types of inversion algorithms, model-driven and data-driven, and add a Super-Resolution network to address issues such as noise clutter during imaging, forming a model-driven, deep learning, super-resolution-based inversion algorithm [22,23,24,25].
In this paper, we use a two-dimensional plane wave as the incident field, denoted $E^i$, to probe the unknown domain; the scattered field is denoted $E^s$. The forward propagation of electromagnetic waves is described by two equations known together as the Lippmann-Schwinger equations [26]. The first is the equation of state, which describes the interaction of the wave with the scatterer:
$$E^t(\mathbf{r}) = E^i(\mathbf{r}) + i\omega\mu_0 \int_D G(\mathbf{r},\mathbf{r}')\left[i\omega\varepsilon_0\left(\varepsilon_r(\mathbf{r}')-1\right)E^t(\mathbf{r}')\right]d\mathbf{r}', \quad (1)$$
where $\omega = 2\pi f$ is the angular frequency, $\mu_0$ is the magnetic permeability of air, and $G(\mathbf{r},\mathbf{r}')$ denotes the Green's function. For a two-dimensional plane wave, $G(\mathbf{r},\mathbf{r}')$ is defined as
$$G(\mathbf{r},\mathbf{r}') = \frac{j}{4} H_0^{(2)}\left(k_0\left|\mathbf{r}-\mathbf{r}'\right|\right), \quad (2)$$
where $H_0^{(2)}$ denotes the Hankel function of the second kind [27], $k_0$ denotes the wavenumber in free space, $\mathbf{r}' = (x', y')$ is a source point in the object domain $D$, $\mathbf{r} = (x, y)$ denotes the position vector at the receiver, $E^t$ denotes the total field, expressed as $E^t = E^s + E^i$, and $\varepsilon_r(\mathbf{r})$ is the relative permittivity of the target scatterer.
The second equation, known as the data equation, describes the equivalent current radiation process for the scattered field:
$$E^s(\mathbf{r}) = i\omega\mu_0 \int_D G(\mathbf{r},\mathbf{r}')\left[i\omega\varepsilon_0\left(\varepsilon_r(\mathbf{r}')-1\right)E^t(\mathbf{r}')\right]d\mathbf{r}'. \quad (3)$$
In (3), $i\omega\varepsilon_0(\varepsilon_r(\mathbf{r})-1)E^t(\mathbf{r})$ has the physical meaning of the induced contrast current density. We define the standardized contrast current density as $J(\mathbf{r}) = (\varepsilon_r(\mathbf{r})-1)E^t(\mathbf{r})$, let $k_0 = \omega\sqrt{\mu_0\varepsilon_0}$, express $\varepsilon_r(\mathbf{r})-1$ as the contrast $\chi(\mathbf{r})$, and introduce the operators $G_S(\cdot)$ and $G_D(\cdot)$:
$$k_0^2 \int_D G(\mathbf{r},\mathbf{r}')\,J(\mathbf{r}')\,d\mathbf{r}' = \begin{cases} G_S(J), & \mathbf{r}\in S \\ G_D(J), & \mathbf{r}\in D \end{cases} \quad (4)$$
The governing equations can then be written in two equivalent forms.
The first is called the field-type formulation, which involves two equations in the electric field:
$$E^t(\mathbf{r}) = E^i(\mathbf{r}) + G_D(\chi E^t), \quad \mathbf{r}\in D, \quad (5)$$
$$E^s(\mathbf{r}) = G_S(\chi E^t), \quad \mathbf{r}\in S. \quad (6)$$
The second is called the source-type formulation, which involves two equations in the contrast current:
$$J(\mathbf{r}) = \chi(\mathbf{r})\left[E^i(\mathbf{r}) + G_D(J)\right], \quad \mathbf{r}\in D, \quad (7)$$
$$E^s(\mathbf{r}) = G_S(J), \quad \mathbf{r}\in S. \quad (8)$$
The electromagnetic inverse scattering imaging algorithm is based on the above two types of equations to solve for the target cross-section parameters.
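To make the discretized form of these operators concrete, the following sketch (ours, not taken from the paper) applies a method-of-moments style approximation of the data equation (3): the domain $D$ is split into square cells, and $G_S$ maps a known contrast current $J$ to the scattered field at the receivers. The grid size, operating frequency, and receiver-ring radius are illustrative assumptions.

```python
# Minimal sketch of the discretized data equation (assumed parameters).
import numpy as np
from scipy.special import hankel2

f = 400e6                            # frequency within the 200-700 MHz band
c0 = 3e8
k0 = 2 * np.pi * f / c0              # free-space wavenumber

# Discretize a 2 m x 2 m domain D into a 32 x 32 grid of cell centers.
n = 32
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
cell_area = (xs[1] - xs[0]) ** 2

# 32 receivers on a circle of radius 1.5 m around the trunk (assumed geometry).
phi = np.linspace(0, 2 * np.pi, 32, endpoint=False)
rx = np.stack([1.5 * np.cos(phi), 1.5 * np.sin(phi)], axis=1)

def scattered_field(J):
    """Apply the discretized operator G_S to a contrast current J (n x n)."""
    src = np.stack([X.ravel(), Y.ravel()], axis=1)              # cell centers r'
    dist = np.linalg.norm(rx[:, None, :] - src[None, :, :], axis=2)
    G = (1j / 4) * hankel2(0, k0 * dist)                        # samples of Eq. (2)
    return k0**2 * cell_area * (G @ J.ravel())                  # E^s at the receivers
```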

3. Joint-Driven Algorithm

To overcome the nonlinearity and severe ill-posedness of inverse scattering, and to solve the problem that the pest community accounts for only a small proportion of the cross-sectional area of a living tree, a Joint-Driven algorithm is proposed that combines a data-driven algorithm with a model-driven one. The advantage of the model-driven algorithm is accurate mathematical modeling based on the mechanism information of the problem itself; its disadvantage is excessive dependence on the initial value. The data-driven algorithm relies on extracting feature information from data, giving full play to the advantages of big data, but it can only solve problems by learning the relationships within the data. This paper therefore introduces the model-driven mechanism into the data-driven framework, integrating the advantages of both.
Because the non-destructive testing of trees using electromagnetic waves is almost always carried out in the field, the acquired data contain noise. To improve resolution, increase clarity, and optimize the inversion results, we add a super-resolution network after the inversion and train it in a targeted way so that it matches the optimized inversion results. We call this combination the Joint-Driven Super-Resolution algorithm.
Neural networks are typical data-driven algorithms, and Deep Convolutional neural networks are among the most capable of extracting and classifying data features [28]. The convolutional neural network used in this paper is shown in Figure 2; it is a typical ConvNets architecture consisting of four types of layers: input, convolutional, pooling, and fully connected.
$E^s$ is fed into the convolutional network as a 32 × 32 matrix. The receptive field in the network is 5 × 5, the feature data are downsampled in the pooling layer with a 2 × 2 pooling operation, and the fully connected layer outputs a 100 × 100 array of regression values used for imaging.
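A minimal PyTorch sketch of this architecture follows. The layer counts and channel widths are our assumptions; only the 32 × 32 input, 5 × 5 receptive field, 2 × 2 pooling, and 100 × 100 regression output follow the text. The complex $E^s$ could be split into real and imaginary channels; a single real channel is used here for brevity.

```python
# Sketch of the inversion ConvNet (assumed widths/depths, not the authors' code).
import torch
import torch.nn as nn

class InversionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),   # 5x5 receptive field
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 2x2 pooling -> 16x16
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 8x8
        )
        self.head = nn.Linear(32 * 8 * 8, 100 * 100)      # 100x100 regression map

    def forward(self, e_s):                               # e_s: (batch, 1, 32, 32)
        z = self.features(e_s).flatten(1)
        return self.head(z).view(-1, 100, 100)

net = InversionNet()
out = net(torch.randn(4, 1, 32, 32))                      # -> (4, 100, 100)
```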

3.1. Unite the Contrast Source Inversion with a Deep Convolutional Network

To prevent overfitting during neural network training—that is, the situation where the network achieves good detection accuracy on the training data but much lower accuracy on the test set—it is necessary to regularize the cost function. In this paper, L2 regularization is adopted to improve the cost function:
$$C = -\frac{1}{n}\sum_x \sum_j \left[y_j \ln a_j^L + \left(1-y_j\right)\ln\left(1-a_j^L\right)\right] + \frac{\lambda}{2n}\sum_\omega \omega^2, \quad (9)$$
where $\frac{\lambda}{2n}\sum_\omega \omega^2$ is the regularization term. If the cost function before improvement is denoted $C_0$, the regularized cost function can be written as
$$C = C_0 + \frac{\lambda}{2n}\sum_\omega \omega^2. \quad (10)$$
The partial derivatives of $C$ with respect to the weights and biases in the network are
$$\frac{\partial C}{\partial \omega} = \frac{\partial C_0}{\partial \omega} + \frac{\lambda}{n}\omega, \quad (11)$$
$$\frac{\partial C}{\partial b} = \frac{\partial C_0}{\partial b}. \quad (12)$$
Equation (12) shows that the partial derivative of the cost function with respect to the bias is unchanged by L2 regularization, so the gradient-descent rule for the bias does not change, while the weight-learning rule becomes
$$\omega \rightarrow \left(1 - \frac{\eta\lambda}{n}\right)\omega - \frac{\eta}{m}\sum_x \frac{\partial C_x}{\partial \omega}. \quad (13)$$
In the learning process, each weight is thus rescaled by the factor $\left(1 - \frac{\eta\lambda}{n}\right)$.
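A small numeric sketch of the update rule (13) follows; the values of $\eta$, $\lambda$, $n$, and $m$ are illustrative only.

```python
# Sketch of one L2-regularized gradient step per Eq. (13) (assumed values).
import numpy as np

eta, lam, n, m = 0.5, 5.0, 50000, 10   # learning rate, L2 coeff, dataset size, batch
w = np.random.randn(100)               # current weights
grad = np.random.randn(m, 100)         # per-example cost gradients dC_x/dw

decay = 1 - eta * lam / n              # weight-decay factor, here 0.99995
w = decay * w - (eta / m) * grad.sum(axis=0)
```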
In the cost function, the learning rate $\eta$ and the parameter $\lambda$ are hyperparameters. Although they are not directly related to the structure of the Deep Convolutional neural network, the choice of $\lambda$ and $\eta$ strongly affects the training speed and the imaging quality.
To speed up the training of the Deep Convolutional neural network and optimize the selection of hyperparameters, the Contrast Source Inversion is combined with the network's cost function. The core of the Contrast Source Inversion is minimizing an objective function [29,30] of the form
$$F\left(J_j, \chi, \beta\right) = \beta\,\frac{\sum_j \left\|\chi E_j^{inc} - J_j + \chi G_D J_j\right\|_D^2}{\sum_j \left\|\chi E_j^{inc}\right\|_D^2} + \frac{\sum_j \left\|E_j^s - G_S J_j\right\|_S^2}{\sum_j \left\|E_j^s\right\|_S^2}. \quad (14)$$
The aim of the Deep Convolutional neural network is to acquire appropriate weights and biases by minimizing the cost function, so the normalization used in the Contrast Source Inversion is introduced into the network, and the value of the hyperparameter $\lambda$ is determined from the mechanism information of the Contrast Source Inversion. Formally, $\lambda$ changes from a constant to a dynamic real value determined by the Contrast Source Inversion, forming a Joint-Driven deep learning network. The cost function of the model-driven deep learning network is therefore:
$$C = -\frac{1}{n}\,\frac{\sum_x \sum_j \left[y_j \ln a_j^L + \left(1-y_j\right)\ln\left(1-a_j^L\right)\right]}{\sum_j \sum_k \left\|E_k^s\right\|_S^2} + \frac{\lambda_0}{2n}\left(\sum_j \sum_k \left\|E_k^s\right\|_S^2 + \sum_\omega \omega^2\right). \quad (15)$$
In (15), $\sum_j \sum_k \left\|E_k^s\right\|_S^2$ is the sum, over the $j$ groups of training data, of the squared norms of the scattered field data measured by the $k$ receivers in each group. Although $E_k^s$ is known, it differs between training groups, so $\sum_j \sum_k \left\|E_k^s\right\|_S^2$ is a dynamic real value. $\lambda_0$ is a constant weighting coefficient; in addition to preventing the Deep Convolutional neural network from overfitting, it prevents abnormal training inputs from causing sudden changes in $\sum_j \sum_k \left\|E_k^s\right\|_S^2$, which would degrade the training of the network.
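Under this section's reading of Equation (15), the dynamic regularization weight can be computed directly from the measured scattered fields. The sketch below is hedged accordingly: the array shapes and the value of $\lambda_0$ are illustrative assumptions.

```python
# Sketch of the data-driven regularization weight (one reading of Eq. (15)).
import numpy as np

lambda_0 = 0.1                                                 # assumed constant coefficient
E_s = np.random.randn(16, 32) + 1j * np.random.randn(16, 32)   # j groups x k receivers

field_energy = np.sum(np.abs(E_s) ** 2)   # sum_j sum_k ||E_k^s||_S^2, changes per batch
lam = lambda_0 * field_energy             # dynamic real-valued regularization weight
```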

3.2. Analysis and Optimization of Weights

To speed up training, the weights and biases are optimized. In traditional convolutional networks they are initialized with Gaussian random variables drawn from the standard normal distribution, with mean 0 and standard deviation 1 [31]. Since $z = \omega x + b$, $z$ itself obeys a Gaussian distribution after $\omega$ and $b$ are initialized this way, as shown in Figure 3.
As can be seen in Figure 3, the slope of the curve is small, and overly large input values drive the output $\sigma(z)$ into near saturation. This characteristic makes the weight-updating process very slow. To improve the learning speed of the network, this paper sets the mean of the Gaussian distribution to 0 and the standard deviation to $1/\sqrt{n_{in}}$, where $n_{in}$ is the number of input neurons. The improved distribution of $z$ is shown in Figure 4.
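A minimal sketch of this scaled initialization follows, showing how the standard deviation $1/\sqrt{n_{in}}$ keeps $z$ narrow enough to avoid saturating $\sigma(z)$; the layer sizes are illustrative.

```python
# Sketch of scaled Gaussian initialization (assumed layer sizes).
import numpy as np

def init_weights(n_in, n_out):
    w = np.random.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
    b = np.random.normal(0.0, 1.0, size=n_out)   # biases keep standard deviation 1
    return w, b

w, b = init_weights(n_in=1000, n_out=30)
z = w @ np.random.randn(1000) + b   # std of z is ~sqrt(2) instead of ~32 unscaled
```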

3.3. Super-Resolution Network-Assisted Imaging

When the proportion of the pest community in the cross-section of the tree is less than 10%, the inversion results are noisy, and the smaller this proportion becomes, the denser the noise in the inversion image. To solve this problem, an image optimization and reconstruction technique is introduced: Real-ESRGAN [32]. The Super-Resolution algorithm not only improves the resolution of the inversion image but also optimizes image quality and denoises the image. Real-ESRGAN uses a U-Net discriminator with spectral normalization [33,34,35], which can identify where the noise is and where the pest community is located. Its network structure is shown in Figure 5.
The U-Net discriminator focuses not only on the image as a whole but also on the characteristics of each part; the most important of these for inverse imaging are the shape and position of the scatterer.
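The following sketch illustrates the idea of a spectral-normalized U-Net discriminator in the spirit of Real-ESRGAN, not its actual implementation; the layer widths and depth are assumptions, and `torch.nn.utils.spectral_norm` provides the spectral normalization.

```python
# Toy spectral-normalized U-Net discriminator block (assumed architecture).
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class TinyUNetD(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.down = spectral_norm(nn.Conv2d(3, ch, 4, stride=2, padding=1))
        self.up = spectral_norm(nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1))
        self.skip = spectral_norm(nn.Conv2d(3, ch, 1))
        self.out = spectral_norm(nn.Conv2d(ch, 1, 3, padding=1))  # per-pixel realness map
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        d = self.act(self.down(x))             # encode: half resolution
        u = self.act(self.up(d)) + self.skip(x)  # decode + U-Net skip connection
        return self.out(u)                     # (batch, 1, H, W) per-pixel scores

d = TinyUNetD()
score_map = d(torch.randn(2, 3, 64, 64))       # -> (2, 1, 64, 64)
```

The per-pixel output map is what lets the discriminator judge each region separately, which matches the text's point about distinguishing noise from the pest community location.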

3.4. Evaluation Indicators

The results are evaluated using the Mean Square Error (MSE) and the Intersection Over Union (IOU) as criteria for the accuracy of a single inversion image [36], where the IOU is defined as
$$IOU = \frac{S_i \cap S_f}{S_i \cup S_f}. \quad (16)$$
$S_i$ is the pest-community area inside the scatterer in the inversion image, and $S_f$ is the pest-community area set in the living tree–borer model. A single pest detection is judged accurate when $IOU > 0.87$; ideally, $IOU = 1$. To test the performance of the algorithm, multiple groups of experimental data were set up and the accuracy rate was calculated in different detection environments. The accuracy rate formula is as follows:
$$Acc = \frac{N_{tp}}{N_t}\times 100\%. \quad (17)$$
$Acc$ represents the detection accuracy of the algorithm over all test data, $N_{tp}$ is the number of test results judged accurate, and $N_t$ is the total number of test data.
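A minimal sketch of these two indicators, computed from boolean masks on the imaging grid, follows.

```python
# Sketch of the evaluation indicators of Eqs. (16) and (17).
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over union of two boolean pest-region masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union

def accuracy(ious, threshold=0.87):
    """Percentage of detections judged accurate (IOU above the 0.87 threshold)."""
    ious = np.asarray(ious)
    return 100.0 * np.sum(ious > threshold) / len(ious)
```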

4. Experiments and Results Analysis

4.1. Experimental Conditions

Considering that the detection of trees is mostly carried out outdoors, a method of artificially adding interference factors is used to better simulate the noise encountered in field detection. The noise construction equation is shown below:
$$E_n = \frac{1}{\sqrt{2}}\,\frac{1}{\sqrt{N_s N_i}}\,\left\|E_0^s\right\|\,n_0, \quad (18)$$
where $E_n$ is the constructed noise, $E_0^s$ is the scattered field data received by the receivers, $N_i$ is the number of transmitters, $N_s$ is the number of receivers, and $n_0$ is the noise matrix, structured as follows:
$$n_0 = n_l \begin{bmatrix} 1+i & \cdots & 1+i \\ \vdots & \ddots & \vdots \\ 1+i & \cdots & 1+i \end{bmatrix}, \quad (19)$$
where $n_l$ is the noise factor; the noise level is changed by changing the value of $n_l$, and $n_0$ is an $N_i \times N_s$ matrix. The final scattered field data $E^s$ used as input for training is
$$E^s = E_0^s + E_n. \quad (20)$$
To better describe the ratio of the pest-community size to the cross-section of the wood, we define the Inversion Ratio:
$$Inversion\ Ratio = \frac{r_p}{r_t}, \quad (21)$$
where $r_p$ is the radius of the pest community in meters and $r_t$ is the radius of the living tree in meters.
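A sketch of the noise model of Equations (18)–(20) as reconstructed above follows; the Frobenius-norm scaling of $E_0^s$ is an assumption of this reconstruction.

```python
# Sketch of the additive noise construction, Eqs. (18)-(20) (assumed scaling).
import numpy as np

def add_noise(E0s, nl=0.2):
    Ni, Ns = E0s.shape                        # transmitters x receivers
    n0 = nl * np.full((Ni, Ns), 1 + 1j)       # noise matrix of Eq. (19)
    En = (1 / np.sqrt(2)) * np.linalg.norm(E0s) / np.sqrt(Ni * Ns) * n0  # Eq. (18)
    return E0s + En                           # Eq. (20)

E0s = np.random.randn(32, 32) + 1j * np.random.randn(32, 32)  # clean scattered data
Es = add_noise(E0s, nl=0.2)                   # noise factor from Table 2
```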
This section uses simulated electromagnetic inverse scattering data to validate the proposed method, with the simulation parameters set as shown in Table 2 below.
Our group used finite element simulations to obtain 18,000 sets of scattering data in each of the two states. To ensure generalization, 15,000 sets were randomly sampled from the dataset to train the proposed combination of model-driven and data-driven methods, and another 500 sets were randomly selected from the remaining 3000 for testing.
Within the 2 m × 2 m square domain, white indicates air; brown is the two-dimensional cross-section of the xylem, a circle of radius 0.6 m with a relative permittivity of 7; and black indicates an internal trunk pest community with a relative permittivity of 60.
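A minimal sketch of how such a cross-section can be rasterized into a relative-permittivity map follows; the pest-community radius and position are illustrative (the radius shown corresponds to an Inversion Ratio of 1:10).

```python
# Sketch of the simulated living tree-pest cross-section (assumed pest placement).
import numpy as np

n = 100
xs = np.linspace(-1.0, 1.0, n)                 # 2 m x 2 m solution domain
X, Y = np.meshgrid(xs, xs)

eps = np.ones((n, n))                          # air background, eps_r = 1
eps[X**2 + Y**2 <= 0.6**2] = 7.0               # xylem disc, radius 0.6 m, eps_r = 7
r_p, cx, cy = 0.06, 0.2, 0.1                   # pest radius and center (illustrative)
eps[(X - cx)**2 + (Y - cy)**2 <= r_p**2] = 60.0  # pest community, eps_r = 60
```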

4.2. Imaging Experiment of Insect Hole in Living Trees

4.2.1. Relationship between Accuracy and Inversion Ratio

Using the Deep Convolutional algorithm, experiments were carried out on four models with different sizes but the same Inversion Ratio. The results are shown in Figure 6.
The accuracy of the inversion results was evaluated according to Formula (16). The results, shown in Table 3, demonstrate that when the Inversion Ratio is held constant, changing the radius does not affect the accuracy value.

4.2.2. Detection of Pest Communities with Different Inversion Ratio Values

The Contrast Source Inversion, Deep Convolutional algorithm, Joint-Driven algorithm, and Joint-Driven Super-Resolution algorithm were used to detect pest communities with different Inversion Ratios; the results are shown in Figure 7, Figure 8 and Figure 9. The IOU of the inversion results was calculated for six different Inversion Ratios with each of the four algorithms according to Formula (16), and the values are listed in Table 4.

4.3. Discussion of Experimental Results

When Inversion Ratio = 1:60, the Contrast Source Inversion cannot invert a regular shape, so the IOU cannot be calculated and is reported as "error". For Inversion Ratios from 1:30 to 1:50, the pest-community size retrieved by the Contrast Source Inversion algorithm remained the same. When Inversion Ratio = 1:10 and 1:20, the Contrast Source Inversion imaging produces a certain "halo"; with a threshold of 80% of the highest relative dielectric constant in the image, only the red part of the image can be observed.
Regarding the Deep Convolutional algorithm: when Inversion Ratio = 1:20, 1:30, and 1:40, the inverted pest-community size is unchanged; the algorithm is insensitive to size changes, which makes it prone to error.
When the Joint-Driven algorithm inverts the models with Inversion Ratio = 1:10 and 1:20, the inversion results are clear and accurate, and the algorithm correctly recovers the size of the pest community. Although there are interference points in the inversion results, the location of the pest community can still be distinguished.
The Joint-Driven Super-Resolution algorithm not only denoises the inverted images but also further highlights the position of the pest community, improving the inversion result and, by further optimizing the image, making it easier to judge the internal condition of trees.
Figure 10 demonstrates the single-pest detection process at Inversion Ratio = 1:30, chosen because when the Inversion Ratio is between 1:30 and 1:60 the Contrast Source Inversion cannot correctly recover the shape of the scatterer. The Contrast Source Inversion, Deep Convolutional, and Joint-Driven methods are compared to analyze the stability of the model-driven deep learning network during iteration.
Figure 10a shows that the Contrast Source Inversion stabilizes only after around 500 iterations, requiring many iterations and considerable time. Figure 10b shows that the Deep Convolutional algorithm needs about 350 iterations to reach the stable range, and its training process is not smooth. Figure 10c shows that the Joint-Driven algorithm needs only about 60 iterations to reach the stable state, which greatly reduces the number of iterations, shortens the imaging time, and smooths the training process.
According to Equation (17), detection accuracy statistics were compiled for each inversion algorithm separately for each of the three radii; Figure 11 shows the detection accuracy of each algorithm over 300 sets of single-pest inversion tests.
As can be seen from Figure 11, the Contrast Source Inversion can only invert living-tree burrow scatterers with an Inversion Ratio greater than 1:30, so its range of application is limited. Although the Deep Convolutional network algorithm can invert living-tree burrow scatterers with an Inversion Ratio of 1:60, its accuracy is only about 70%, which cannot meet practical requirements. The Joint-Driven algorithm proposed in this paper not only solves the problem of imaging a wood cross-section with a small insect-colony proportion, but also reaches an inversion accuracy of up to 90% after optimization by a specially trained super-resolution network.
To sum up, the four methods are compared in Table 5 below.
When Inversion Ratio = 1:10 and 1:20, the Contrast Source Inversion algorithm can invert meaningful results. As the Inversion Ratio decreases further, the results generated by the Contrast Source Inversion lose their reference value; such cases are therefore marked "Non" in the table.

5. Conclusions

In this paper, a model-driven deep learning Super-Resolution inversion algorithm is proposed to solve the problems of high noise and poor imaging in the electromagnetic wave detection of tree pest communities. By studying the propagation of electromagnetic waves in scatterers and combining the advantages of the Contrast Source Inversion, the Deep Convolutional network, and the Super-Resolution network, a Joint-Driven Super-Resolution algorithm is proposed. By introducing the Contrast Source Inversion, this algorithm overcomes the excessive dependence of network structure selection and parameter optimization on the experimenter's experience, so that the neural network can better fit the nonlinear problem and overcome its ill-posedness by learning from a large amount of data. Experiments were carried out by continuously reducing the radius of the model pest community and applying the Contrast Source Inversion, the Deep Convolutional network algorithm, the Joint-Driven algorithm, and the Joint-Driven Super-Resolution algorithm. This provides a solution to the problem of imaging tiny high-contrast scatterers.

Author Contributions

Conceptualization, J.S. (Jiayin Song), J.S. (Jie Shi), H.Z. (Hongwei Zhou), W.S., H.Z. (Hongju Zhou) and Y.Z.; methodology, J.S. (Jie Shi) and H.Z. (Hongju Zhou); software, J.S. (Jie Shi); validation, J.S. (Jiayin Song), J.S. (Jie Shi), H.Z. (Hongwei Zhou), W.S., H.Z. (Hongju Zhou) and Y.Z.; formal analysis, J.S. (Jie Shi); investigation, H.Z. (Hongwei Zhou), W.S. and Y.Z.; resources, J.S. (Jiayin Song); data curation, H.Z. (Hongwei Zhou) and W.S.; writing—original draft preparation, J.S. (Jie Shi); writing—review and editing, J.S. (Jiayin Song) and H.Z. (Hongwei Zhou); visualization, J.S. (Jie Shi) and H.Z. (Hongwei Zhou); supervision, J.S. (Jiayin Song) and H.Z. (Hongwei Zhou); project administration, H.Z. (Hongwei Zhou); funding acquisition, H.Z. (Hongwei Zhou). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Forestry Science and Technology Promotion Demonstration Project of the Central Government, Grant Number Hei[2022]TG21, and the Fundamental Research Funds for the Central Universities, Grant Number 2572022DP04.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Yin, Q.; Liu, H.-H. Drying stress and strain of wood: A review. Appl. Sci. 2021, 11, 5023.
2. Ji, B.; Zhang, Q.; Cao, J.; Zhang, B.; Zhang, L. Delamination detection in bimetallic composite using laser ultrasonic bulk waves. Appl. Sci. 2021, 11, 636.
3. Ansari, M.S.; Bartos, V.; Lee, B. Shallow and deep learning approaches for network intrusion alert prediction. Procedia Comput. Sci. 2020, 171, 644–653.
4. Du, X.; Li, S.; Li, G.; Feng, H.; Chen, S. Stress wave tomography of wood internal defects using ellipse-based spatial interpolation and velocity compensation. BioResources 2015, 10, 3948–3962.
5. Taskhiri, M.S.; Hafezi, M.H.; Harle, R.; Williams, D.; Kundu, T.; Turner, P. Ultrasonic and thermal testing to non-destructively identify internal defects in plantation eucalypts. Comput. Electron. Agric. 2020, 173, 105396.
6. Du, X.; Feng, H.; Hu, M.; Fang, Y.; Chen, S. Three-dimensional stress wave imaging of wood internal defects using TKriging method. Comput. Electron. Agric. 2018, 148, 63–71.
7. Mousavi, M.; Taskhiri, M.S.; Holloway, D.; Olivier, J.; Turner, P. Feature extraction of wood-hole defects using empirical mode decomposition of ultrasonic signals. NDT E Int. 2020, 114, 102282.
8. Ligong, P.; Rodion, R.; Sergey, K. Artificial neural network for defect detection in CT images of wood. Comput. Electron. Agric. 2021, 187, 106312.
9. Kaczmarek, R.G.; Bednarek, D.R.; Wong, R.; Kaczmarek, R.V.; Rudin, S.; Alker, G. Potential radiation hazards to personnel during dynamic CT. Radiology 1986, 161, 853.
10. Merela, M.; Oven, P.; Sepe, A.; Serša, I. Three-dimensional in vivo magnetic resonance microscopy of beech (Fagus sylvatica L.) wood. Magn. Reson. Mater. Phys. Biol. Med. 2005, 18, 171–174.
11. Winters, D.W.; Van Veen, B.D.; Hagness, S.C. A sparsity regularization approach to the electromagnetic inverse scattering problem. IEEE Trans. Antennas Propag. 2009, 58, 145–154.
12. Kaipio, J.P.; Huttunen, T.; Luostari, T.; Lähivaara, T.; Monk, P.B. A Bayesian approach to improving the Born approximation for inverse scattering with high-contrast materials. Inverse Probl. 2019, 35, 084001.
13. Tajik, D.; Kazemivala, R.; Nikolova, N.K. Real-time imaging with simultaneous use of Born and Rytov approximations in quantitative microwave holography. IEEE Trans. Microw. Theory Tech. 2021, 70, 1896–1909.
14. Shah, P.; Chen, G.; Moghaddam, M. Learning nonlinearity of microwave imaging through deep learning. In Proceedings of the 2018 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting, Boston, MA, USA, 8–13 July 2018; pp. 699–700.
15. Leijsen, R.; Fuchs, P.; Brink, W.; Webb, A.; Remis, R. Developments in electrical-property tomography based on the contrast-source inversion method. J. Imaging 2019, 5, 25.
16. Pasha, M.; Kupis, S.; Ahmad, S.; Khan, T. A Krylov subspace type method for Electrical Impedance Tomography. ESAIM Math. Model. Numer. Anal. 2021, 55, 2827–2847.
17. Vrahatis, M.; Magoulas, G.; Plagianakos, V. From linear to nonlinear iterative methods. Appl. Numer. Math. 2003, 45, 59–77.
18. Rekanos, I.T. Neural-network-based inverse-scattering technique for online microwave medical imaging. IEEE Trans. Magn. 2002, 38, 1061–1064.
19. Anaraki, A.K.; Ayati, M.; Kazemi, F. Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms. Biocybern. Biomed. Eng. 2019, 39, 63–74.
20. Desmal, A.; Bağcı, H. Shrinkage-thresholding enhanced Born iterative method for solving 2D inverse electromagnetic scattering problem. IEEE Trans. Antennas Propag. 2014, 62, 3878–3884.
21. Lim, J.; Psaltis, D. MaxwellNet: Physics-driven deep neural network training based on Maxwell's equations. APL Photonics 2022, 7, 011301.
22. Park, S.C.; Park, M.K.; Kang, M.G. Super-resolution image reconstruction: A technical overview. IEEE Signal Process. Mag. 2003, 20, 21–36.
23. Svendsen, D.H.; Morales-Álvarez, P.; Ruescas, A.B.; Molina, R.; Camps-Valls, G. Deep Gaussian processes for biogeophysical parameter retrieval and model inversion. ISPRS J. Photogramm. Remote Sens. 2020, 166, 68–81.
24. Sun, K.; Simon, S. Bilateral spectrum weighted total variation for noisy-image super-resolution and image denoising. IEEE Trans. Signal Process. 2021, 69, 6329–6341.
25. Schneider, M. Lippmann-Schwinger solvers for the computational homogenization of materials with pores. Int. J. Numer. Methods Eng. 2020, 121, 5017–5041.
26. Caratelli, D.; Cicchetti, R.; Cicchetti, V.; Testa, O.; Faraone, A. Electromagnetic scattering from truncated thin cylinders: An approach based on the incomplete Hankel functions and surface impedance boundary conditions. In Proceedings of the 2019 PhotonIcs & Electromagnetics Research Symposium-Spring (PIERS-Spring), Rome, Italy, 17–20 June 2019; pp. 1739–1742.
27. Peters, G.; Wilkinson, J.H. Inverse iteration, ill-conditioned equations and Newton's method. SIAM Rev. 1979, 21, 339–360.
28. Ran, P.; Qin, Y.; Lesselier, D. Electromagnetic imaging of a dielectric micro-structure via convolutional neural networks. In Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO), A Coruna, Spain, 2–6 September 2019; pp. 1–5.
29. Buehlmann, U.; Thomas, R.E. Impact of human error on lumber yield in rough mills. Robot. Comput. Integr. Manuf. 2002, 18, 197–203.
30. Hashim, U.R.; Hashim, S.Z.; Muda, A.K. Automated vision inspection of timber surface defect: A review. J. Teknol. 2015, 77.
31. Golilarz, N.A.; Demirel, H.; Gao, H. Adaptive generalized Gaussian distribution oriented thresholding function for image de-noising. Int. J. Adv. Comput. Sci. Appl. 2019, 10.
32. Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 1905–1914.
33. Schonfeld, E.; Schiele, B.; Khoreva, A. A U-Net based discriminator for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8207–8216.
34. Yan, Y.; Liu, C.; Chen, C.; Sun, X.; Jin, L.; Peng, X.; Zhou, X. Fine-grained attention and feature-sharing generative adversarial networks for single image super-resolution. IEEE Trans. Multimed. 2021, 24, 1473–1487.
35. Wiratama, W.; Lee, J.; Sim, D. Change detection on multi-spectral images based on feature-level U-Net. IEEE Access 2020, 8, 12279–12289.
36. Zhang, G.; Pan, Y.; Zhang, L. Semi-supervised learning with GAN for automatic defect detection from images. Autom. Constr. 2021, 128, 103764.
Figure 1. Electromagnetic inverse scattering model.
Figure 2. Structure of the convolutional neural network in this paper.
Figure 3. Gaussian distribution of $z$.
Figure 4. Gaussian distribution of $z$ after improvement.
Figure 5. The architecture of the U-Net discriminator with spectral normalization.
Figure 6. Inversion images with the same Inversion Ratio and different sizes: (a) 0.04 m pest-community radius, 0.4 m living-tree radius; (b) 0.05 m pest-community radius, 0.5 m living-tree radius; (c) 0.07 m pest-community radius, 0.7 m living-tree radius; (d) 0.08 m pest-community radius, 0.8 m living-tree radius.
Figure 7. Inversion results for Inversion Ratios 1:10 and 1:20: (a,b) model diagrams for 1:10 and 1:20; (c,d) Contrast Source Inversion results; (e,f) Deep Convolutional inversion results; (g,h) Joint-Driven inversion results; (i,j) Joint-Driven Super-Resolution inversion results.
Figure 8. Inversion results for Inversion Ratios 1:30 and 1:40: (a,b) model diagrams for 1:30 and 1:40; (c,d) Contrast Source Inversion results; (e,f) Deep Convolutional inversion results; (g,h) Joint-Driven inversion results; (i,j) Joint-Driven Super-Resolution inversion results.
Figure 9. Inversion results for Inversion Ratios 1:50 and 1:60: (a,b) model diagrams for 1:50 and 1:60; (c,d) Contrast Source Inversion results; (e,f) Deep Convolutional inversion results; (g,h) Joint-Driven inversion results; (i,j) Joint-Driven Super-Resolution inversion results.
Figure 10. Number of algorithm iterations versus MSE: (a) Contrast Source Inversion iteration process; (b) Deep Convolutional network training process; (c) Joint-Driven algorithm training process.
Figure 11. Accuracy of the four inversion algorithms for detection.
Table 1. Summary of the advantages and disadvantages of algorithms.

| Algorithm | Algorithm Defects | Algorithm Advantages |
|---|---|---|
| CSI | Sensitive to initial value; slow convergence speed; unable to process large-scale data | Iterative solution; does not involve solving the forward problem |
| SOM | Sensitive to initial value; unable to process large-scale data; large amount of calculation | Reduces the CSI solution dimension; improves solving speed and success probability |
| CNN | Needs a large amount of training data; shifts the computational burden to the learning stage | No physical modeling required |
Table 2. Simulation parameter settings.

| Parameter Name | Parameter Value |
|---|---|
| Domain of solution | 2 m × 2 m |
| Radius of the living tree | 0.6 m |
| Inversion Ratio | 1:10~1:60 |
| Electromagnetic frequency | 200 MHz~700 MHz |
| Relative permittivity of air | 1 |
| Relative permittivity of the tree interior | 7 |
| Relative permittivity of the pest community | 60 |
| Wave impedance of air | 120π Ω |
| Number of electromagnetic wave emitters | 32 |
| Number of electromagnetic wave receivers | 32 |
| Noise factor $n_l$ | 0.2 |
Table 3. IOU of each inversion image when Inversion Ratio = 1:10.

| | Figure 6a | Figure 6b | Figure 6c | Figure 6d |
|---|---|---|---|---|
| IOU | 0.976 | 0.975 | 0.977 | 0.979 |
Table 4. IOU of each algorithm.

| Inversion Ratio | Contrast Source Inversion | Deep Convolutional Inversion | Joint-Driven Inversion | Joint-Driven Super-Resolution Inversion |
|---|---|---|---|---|
| 1:10 | 0.885 | 0.954 | 1 | 1 |
| 1:20 | 0.757 | 0.775 | 1 | 1 |
| 1:30 | 0.541 | 0.851 | 0.955 | 0.984 |
| 1:40 | 0.432 | 0.653 | 0.953 | 0.982 |
| 1:50 | 0.325 | 0.773 | 0.958 | 0.988 |
| 1:60 | error | 0.763 | 0.954 | 0.986 |
Table 5. Comparison of the four methods.

| | Contrast Source Inversion | Deep Convolutional Inversion | Joint-Driven Inversion | Joint-Driven Super-Resolution Inversion |
|---|---|---|---|---|
| Iteration stability times | 500 | 350 | 60 | 60 |
| 1:10 maximum error | 11.1% | 6.6% | 3.3% | 2.5% |
| 1:10 minimum error | 7.3% | 3.6% | 1.5% | 1.3% |
| 1:20 maximum error | 23.3% | 15.5% | 7.2% | 3.3% |
| 1:20 minimum error | 15.4% | 10.2% | 4.5% | 1.5% |
| 1:30 maximum error | Non | 17.5% | 8% | 4.2% |
| 1:30 minimum error | Non | 14.3% | 4.8% | 2.3% |
| 1:40 maximum error | Non | 25.8% | 8.6% | 4.9% |
| 1:40 minimum error | Non | 22.3% | 5.1% | 2.8% |
| 1:50 maximum error | Non | 36.4% | 9.2% | 5.6% |
| 1:50 minimum error | Non | 28.5% | 5.5% | 3.7% |
| 1:60 maximum error | Non | 41.2% | 10.1% | 6.5% |
| 1:60 minimum error | Non | 33.3% | 6.5% | 4.3% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
