Article

Real-Time Nonlinear Image Reconstruction in Electrical Capacitance Tomography Using the Generative Adversarial Network

Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
* Author to whom correspondence should be addressed.
Information 2024, 15(10), 617; https://doi.org/10.3390/info15100617
Submission received: 4 September 2024 / Revised: 1 October 2024 / Accepted: 8 October 2024 / Published: 9 October 2024
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)

Abstract

This study investigated the potential of conditional generative adversarial network (cGAN) image reconstruction in industrial electrical capacitance tomography (ECT). The image reconstruction quality was examined using image patterns typical of a two-phase flow. The training dataset was prepared by generating images of random test objects and simulating the corresponding capacitance measurements. Numerical simulations were performed using the ECTsim toolkit for MATLAB. A cylindrical sixteen-electrode ECT sensor was used in the experiments. Real measurements were obtained using the EVT4 data acquisition system. The reconstructed images were evaluated using selected image quality metrics. The results obtained using the cGAN are better than those obtained using the Landweber iteration and the simplified Levenberg–Marquardt algorithm. The suggested method offers a promising solution for a fast reconstruction algorithm suitable for real-time monitoring and control of two-phase flow using ECT.

Graphical Abstract

1. Introduction

The primary use of electrical capacitance tomography (ECT) is in monitoring two-phase flow, particularly when a cylindrical tomography sensor is attached to a horizontal pipeline section in a chemical or process engineering plant [1,2]. Typical structures observed in multiphase flow include annular, stratified, and slug patterns, in which one or two circular cross-section elements may appear in the resulting image [3,4].
There is a critical need to develop and evaluate the measurement and image reconstruction methods to keep pace with the high-speed flow of multiphase materials in industrial installations [2,5,6]. In such environments, it is crucial that the imaging system not only captures accurate data but also processes it rapidly to provide real-time monitoring and control [7]. Though effective, nonlinear approaches such as the Levenberg–Marquardt algorithm (LM) are computationally intensive [8,9,10]. For example, the LM algorithm requires repeated updates of the sensitivity matrix calculated by forward problem solving, i.e., complex calculations of the electric field distributions. On the other hand, fast, traditional reconstruction techniques based on linear approximation, such as the linear back projection (LBP) or Landweber iterations, do not assure sufficient image quality [11,12].
Given these challenges, we explored the potential of deep neural network-based methods, which have shown promising results in recent years for solving nonlinear problems [13,14,15]. The essential advantage of once-trained neural networks is their ability to perform rapid reconstructions, making them a viable option for real-time applications [16,17,18]. In this study, we focused on evaluating a conditional generative adversarial network (cGAN) due to its increasingly recognized effectiveness in reconstructing high-quality images at a speed that is necessary for industrial flow monitoring and control [19,20,21].
This paper presents experimental verification of adversarial network image reconstruction in electrical capacitance tomography (ECT) of two-phase flows. The experiments were conducted numerically using a computer simulation of tomographic measurements. The numerically generated dataset for supervised learning was divided into training, testing, and validation parts. The validation of the cGAN reconstruction was also performed using real measurements of two-phase flow phantoms. For this purpose, the data for four phantoms mimicking typical flow patterns were acquired using a 16-electrode sensor.
The study aimed to verify whether image reconstruction based on an artificial neural network (cGAN) allows the reconstruction of acceptable ECT images and the detection of flow regions. The results were compared with those obtained using the Landweber algorithm and the simplified Levenberg–Marquardt (sLM) algorithm (without a sensitivity matrix update).

2. Materials and Methods

2.1. Numerical Simulation of the Training Dataset

In supervised learning, the training dataset consists of elements represented as ordered pairs of model inputs and their corresponding outputs. For a computed tomography model, each data pair includes an image of the tomographic cross-section of the object being tested and the tomographic projections recorded for that object. However, when addressing the inverse problem, the order of the data pairs is reversed. In electrical capacitance tomography, the input data comprises inter-electrode capacitance measurements, while the output data corresponds to the pixels representing the permittivity distribution image within the examined space.
A large training dataset was required due to the size of the inverse problem considered in ECT. Performing measurements for several thousand known objects does not seem to be achievable in a reasonable time. Only numerical simulation allowed us to collect the required several thousand cases in the dataset. The numerical simulations were performed using the ECTsim toolbox for MATLAB R2024a [22,23]. The software, developed by our team, is available under an open-source license on the GitHub platform (GitHub Inc., San Francisco, CA, USA) [24]. ECTsim enables the highly efficient modeling of electrical tomography problems, simulating electric field distributions and sensitivity matrices, which makes it well suited for preparing large datasets. The software includes basic image reconstruction methods and intuitive image display functions [24].
In ECTsim, numerical simulations are executed on a dense, non-uniform mesh, whereas reconstructions are carried out on a simplified mesh. In our study, the reconstructed image matrix had a resolution of 32 × 32 pixels. Since only the pixels within the sensor’s field of view were reconstructed, the total number of pixels was reduced to 608 for the chosen model. The number of reconstructed pixels in the image matrix corresponded to the number of output nodes of the neural network. The number of the input nodes of the neural network corresponded to the number of measured inter-electrode capacitances, which was equal to 120 for a 16-electrode sensor.
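To make this data organization concrete, the following Python sketch shows how such input–output pairs could be stored and how the 608 in-view pixels map back onto the 32 × 32 image grid. The file names, array layout, and the PyTorch Dataset wrapper are illustrative assumptions rather than the actual ECTsim output format.

```python
# Illustrative sketch only: 120 normalized capacitances per sample as the network input,
# 608 in-view pixel values as the target. File names and array layout are assumptions.
import numpy as np
import torch
from torch.utils.data import Dataset

class ECTPairs(Dataset):
    def __init__(self, meas_path="meas.npy", imgs_path="imgs.npy"):
        self.meas = np.load(meas_path).astype(np.float32)   # shape (N, 120)
        self.imgs = np.load(imgs_path).astype(np.float32)   # shape (N, 608)

    def __len__(self):
        return len(self.meas)

    def __getitem__(self, i):
        # inverse problem: capacitances are the input, permittivity pixels are the target
        return torch.from_numpy(self.meas[i]), torch.from_numpy(self.imgs[i])

def embed_in_grid(pixels_608, fov_mask):
    """Place 608 reconstructed pixel values back into the 32 x 32 image matrix.
    fov_mask is a boolean 32 x 32 array with exactly 608 True entries
    (the pixels inside the sensor's field of view)."""
    img = np.zeros((32, 32), dtype=np.float32)
    img[fov_mask] = pixels_608
    return img
```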

2.2. Training Dataset with the Two-Phase Flow Patterns

Determining the optimal number of samples for the training dataset is challenging, as there is no precise method to estimate the required dataset size. Nevertheless, our experiments indicate that a model with an input size of 120 and an output size of 608, when trained on a dataset of approximately 150,000 samples, yields satisfactory results.
Figure 1 illustrates the classes of the flow patterns. A single element of the training dataset consists of the image of the distribution of electric permittivity inside a cylindrical sensor and the matrix of the inter-electrode capacitances normalized between the minimum (empty sensor) and the maximum (sensor filled with a material with relative permittivity equal to 3). The capacitance measurements are shown in Figure 1 in the form of a 2D color map. The 16 × 15 pixel map corresponds to the number of exciting and sensing electrodes in the tomographic scan. During scanning, one of the electrodes acts as a voltage-forcing electrode, and the others (here, 15) act as current-measuring electrodes. In the tomographic scan, voltage forcing is carried out successively for each of the 16 electrodes. Since the order of the electrodes in a capacitance measurement does not change the measured value, there are only 120 independent measurements for a 16-electrode sensor. In our experiments, the two measurements made with each pair of electrodes were averaged, giving a 120-element input vector.
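A minimal sketch of this normalization and pairing is given below; the raw 16 × 16 measurement layout and the function interface are assumptions made for illustration, not the ECTsim code.

```python
# Sketch of the measurement preprocessing described above (assumed data layout):
# capacitances are normalized between the empty-sensor (C_min) and full-sensor (C_max)
# references, and the two reciprocal measurements of each electrode pair are averaged.
import numpy as np

def normalize_and_average(C, C_min, C_max):
    """C, C_min, C_max: 16 x 16 inter-electrode capacitance matrices (diagonal unused)."""
    Cn = (C - C_min) / (C_max - C_min)               # normalized to [0, 1]
    vec = []
    for i in range(16):
        for j in range(i + 1, 16):
            vec.append(0.5 * (Cn[i, j] + Cn[j, i]))  # average the reciprocal pair
    return np.array(vec, dtype=np.float32)           # 16 * 15 / 2 = 120 values
```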
In the experiments, the training dataset, which includes four flow patterns (75,000 elements), was extended by a class of random circles (75,000 elements), giving five classes of images in total. This approach ensures that the flow patterns are well represented while preventing the model from overfitting to them. Since the imaging was limited to a two-phase flow, the electric permittivity values were restricted to two specific values (ε_r1 = 1 and ε_r2 = 3) corresponding to the gas and oil present in the flow. A randomly selected part (25%) of the training set was reserved as a testing subset used in the learning process.

2.3. ECT Sensor and Test Objects for Real Measurements

The EVT4 data acquisition system, developed by our team, was used for the real measurements [25]. The system uses the single-shot high-voltage (SSHV) technique to measure capacitances [26]. It achieves a high signal-to-noise ratio (SNR), ranging from 30 to 65 dB for capacitance measurements between 1 fF and 1 pF. The EVT4 supports a frame rate of 1000 frames per second with a 16-electrode sensor [27].
Figure 2 illustrates the view (2a) and the sketch (2b) of the 2D sensor used in our experiment. The sensor’s wall, made of PVC, features an inner diameter of 152 mm. Sixteen thin copper foil electrodes (200 µm) are mounted on the inner side of the sensor wall. These electrodes are insulated with a layer of PVC foil (80 µm) adhered to the inner surface. With the excellent SNR provided by the EVT4, it was possible to use electrodes with a height of only 50 mm, which is one-third of the sensor’s diameter, thereby achieving good resolution along the sensor’s Z-axis. The SNR values achieved in measurements conducted using the developed sensor are presented in Figure 3.
We designed four static test objects to simulate the cross-sections of two-phase flows within a cylindrical pipeline. For this purpose, we used polypropylene (PP) pellets, which have an electrical permittivity of approximately 2.2, similar to that of crude oil. The images of these test objects are shown in Section 3.

2.4. Neural Network Architecture and Training Procedure

During our previous research [28,29], we found conditional generative adversarial networks (cGANs) with a U-Net-based generator [30] to be very promising in electrical impedance tomography image reconstruction. They provide satisfactory image quality and sufficient resistance to overtraining. The main component of the cGAN is the generator shown in Figure 4. The U-Net-like network structure implies having an image at the input and an image at the output. Because capacitance measurements in electrical tomography are represented by a 1D vector, it is necessary to add a linear layer with an input size corresponding to the number of measurements (120 in our case) and an output size corresponding to the total number of pixels in the image (1024). As the network layers are deterministic functions, they provide the same output for a given input. To successfully train the network, it is necessary to allow for output variability. In the traditional GAN approach, this is achieved by providing Gaussian noise at the generator network input [31]. This works well in the case of a small number of measurements [32]; however, when the size of the input vector and the output image increases, the network requires an additional source of entropy to maintain its stability. We found that the Pix2Pix approach, originally proposed for large image translation [33], allows us to achieve satisfactory network stability and generalization ability. It uses dropout layers as a source of additional entropy. Such an approach allows us to eliminate latent vectors when a sufficient number of dropout layers is used.
The U-Net network consists of encoder and decoder parts. In our case, the encoder consists of five convolutional layers with Leaky ReLU (rectified linear unit) activation. All layers except the first one are also stacked with 2D batch normalization layers [10], which dramatically decreases the training time. The decoder comprises five deconvolutional layers with Leaky ReLU activation stacked with 2D batch normalization layers. The first two decoder layers are also stacked with dropout layers with a factor of 0.2, meaning 20% of the pixels are randomly zeroed to provide enough randomness. The decoder output is a 32 × 32 × 32 tensor, which is processed into a 32 × 32 image by a final convolutional layer.
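The following PyTorch sketch reflects this structure: a linear layer maps the 120 measurements onto a 32 × 32 input image, five strided convolutions encode it, and five transposed convolutions with skip connections decode it back to a 32 × 32 × 32 tensor that a final convolution reduces to the output image. The channel counts, kernel sizes, and the sigmoid output activation are assumptions made for illustration and are not taken from the original implementation, whose exact layout is given in Figure 4.

```python
# Illustrative U-Net-like generator (not the authors' code); channel counts, kernel
# sizes, and the sigmoid output are assumptions consistent with the description above.
import torch
import torch.nn as nn

class Down(nn.Module):
    def __init__(self, c_in, c_out, norm=True):
        super().__init__()
        layers = [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1)]
        if norm:
            layers.append(nn.BatchNorm2d(c_out))      # all encoder layers except the first
        layers.append(nn.LeakyReLU(0.2))
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class Up(nn.Module):
    def __init__(self, c_in, c_out, dropout=False):
        super().__init__()
        layers = [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                  nn.BatchNorm2d(c_out),
                  nn.LeakyReLU(0.2)]
        if dropout:
            layers.append(nn.Dropout(0.2))            # 20% of the pixels randomly zeroed
        self.block = nn.Sequential(*layers)

    def forward(self, x, skip):
        return torch.cat([self.block(x), skip], dim=1)  # U-Net skip connection

class Generator(nn.Module):
    def __init__(self, n_meas=120):
        super().__init__()
        self.fc = nn.Linear(n_meas, 32 * 32)          # 120 capacitances -> 32 x 32 "image"
        self.d1 = Down(1, 32, norm=False)             # 32 -> 16
        self.d2 = Down(32, 64)                        # 16 -> 8
        self.d3 = Down(64, 128)                       # 8 -> 4
        self.d4 = Down(128, 256)                      # 4 -> 2
        self.d5 = Down(256, 256)                      # 2 -> 1
        self.u1 = Up(256, 256, dropout=True)          # first two decoder layers use dropout
        self.u2 = Up(512, 128, dropout=True)
        self.u3 = Up(256, 64)
        self.u4 = Up(128, 32)
        self.u5 = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)  # -> 32 x 32 x 32
        self.out = nn.Conv2d(32, 1, 3, padding=1)     # final conv: 32 x 32 x 32 -> 32 x 32

    def forward(self, c):
        x0 = self.fc(c).view(-1, 1, 32, 32)
        e1 = self.d1(x0); e2 = self.d2(e1); e3 = self.d3(e2)
        e4 = self.d4(e3); e5 = self.d5(e4)
        x = self.u1(e5, e4)
        x = self.u2(x, e3)
        x = self.u3(x, e2)
        x = self.u4(x, e1)
        x = self.u5(x)
        return torch.sigmoid(self.out(x))             # reconstructed 32 x 32 image
```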
The GAN approach to training an artificial neural network implies using a binary classifier, called a discriminator, aimed at distinguishing the generated images from the ground-truth images. The conditional GAN method, shown in Figure 5, also requires adding a condition to the input of the binary classifier. In our case, we use a linear layer to combine the 1D measurement vector with the 2D image, in the same way as was done for the generator's input. The traditional approach uses binary cross-entropy as the loss function. However, it becomes difficult to prevent discriminator overtraining when the image size increases. It is possible to overcome this challenge by modifying the loss function using the already mentioned Pix2Pix approach. The modification includes the L1 and L2 components in the loss function, comparing the obtained image with the ground-truth image in the following way:
$$ f(l_r, l_p, y_r, y_p) = \mathrm{BCE}(l_p, l_r) + 100\,\mathrm{MSE}(y_r, y_p) + 100\,\lVert y_r - y_p \rVert_1 $$
where l_r is the reference value of the probability that the image is true, and l_p is the probability of the image being true determined by the discriminator. y_r and y_p are the reference and predicted images, respectively. BCE is the binary cross-entropy, and MSE is the mean square error.
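As an illustration, the composite loss above can be written in PyTorch as follows; the mean reductions of the L1 and MSE terms and the assumption that the discriminator already outputs a probability are implementation choices made for this sketch, not details taken from the original code.

```python
# Sketch of the composite generator loss described above; the weights follow the formula,
# while the mean reductions and the probability-valued discriminator output are assumptions.
import torch.nn.functional as F

def generator_loss(l_p, l_r, y_p, y_r, w=100.0):
    """l_p: discriminator output for the generated image, l_r: target label (ones),
    y_p: generated image, y_r: ground-truth image."""
    adversarial = F.binary_cross_entropy(l_p, l_r)   # BCE term
    return adversarial + w * F.mse_loss(y_p, y_r) + w * F.l1_loss(y_p, y_r)
```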
The network was implemented using the Python programming language (version 3) with the PyTorch library (version 2.4). The network was trained, and the results were analyzed using an additionally prepared validation dataset (10,000 elements). Tuning the initial values of the generator and discriminator learning rates and using a cosine annealing learning rate schedule allowed us to obtain the discriminator and generator loss curves for 50 training cycles (epochs) shown in Figure 6a,b. The mean relative image error was calculated in each epoch, as shown in Figure 6c. The total training time was 40 min using a Tesla P100 GPU with 16 GB of memory (NVIDIA Corp., Santa Clara, CA, USA). However, it is important to mention that network training has to be conducted only once for a given ECT sensor configuration and the expected set of materials inside the sensor.
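A compact sketch of one possible training loop is shown below, reusing the Generator and generator_loss sketches above. The minimal conditional discriminator, the Adam optimizer settings, the batch size, and the synthetic data loader are assumptions made so the example runs on its own; they do not reproduce the actual training script.

```python
# Illustrative cGAN training loop with cosine-annealed learning rates (assumed settings).
import torch

class Discriminator(torch.nn.Module):
    """Minimal conditional discriminator sketch: the 120 measurements are embedded as a
    second 32 x 32 channel and stacked with the image, as described in the text."""
    def __init__(self, n_meas=120):
        super().__init__()
        self.fc = torch.nn.Linear(n_meas, 32 * 32)
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(2, 32, 4, stride=2, padding=1), torch.nn.LeakyReLU(0.2),
            torch.nn.Conv2d(32, 64, 4, stride=2, padding=1), torch.nn.LeakyReLU(0.2),
            torch.nn.Flatten(),
            torch.nn.Linear(64 * 8 * 8, 1), torch.nn.Sigmoid())

    def forward(self, c, img):
        cond = self.fc(c).view(-1, 1, 32, 32)
        return self.net(torch.cat([cond, img], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
sched_g = torch.optim.lr_scheduler.CosineAnnealingLR(opt_g, T_max=50)
sched_d = torch.optim.lr_scheduler.CosineAnnealingLR(opt_d, T_max=50)
bce = torch.nn.BCELoss()

# A synthetic loader stands in for the real training data here (random tensors, for
# illustration only); each batch is (120 capacitances, 1 x 32 x 32 reference image).
train_loader = [(torch.rand(64, 120), torch.rand(64, 1, 32, 32)) for _ in range(10)]

for epoch in range(50):
    for c, y_r in train_loader:
        y_p = G(c)
        # discriminator step: real pairs labeled 1, generated pairs labeled 0
        opt_d.zero_grad()
        d_real, d_fake = D(c, y_r), D(c, y_p.detach())
        loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        loss_d.backward()
        opt_d.step()
        # generator step: fool the discriminator while staying close to the reference image
        opt_g.zero_grad()
        l_p = D(c, y_p)
        loss_g = generator_loss(l_p, torch.ones_like(l_p), y_p, y_r)
        loss_g.backward()
        opt_g.step()
    sched_g.step()
    sched_d.step()
```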

2.5. Reference Image Reconstruction Algorithms

The results generated by the cGAN network were compared with those obtained using two iterative image reconstruction methods commonly used in electrical capacitance tomography:
I.
The Landweber method, which is given by the formula:
$$ \varepsilon^{(k+1)} = \varepsilon^{(k)} - \alpha_k S^{T} \left( S \varepsilon^{(k)} - C \right), $$
where α_k is the self-relaxing step length, S is the sensitivity matrix calculated for a uniform distribution inside the sensor, C is the vector of measured capacitances, and k is the iteration number. A discrepancy norm, given by the formula
$$ \mathrm{dis}(k) = \left\lVert S \varepsilon^{(k)} - C \right\rVert_2 $$
is calculated in each step of the algorithm. If the error fails to decrease from one iteration to the next, the algorithm may reduce the step size to maintain progress. The calculations are stopped if the step length falls below the assumed minimal level. A sketch of this procedure, together with the sLM update introduced below, follows this list.
II.
The simplified Levenberg–Marquardt (sLM) algorithm (also with a self-relaxing step length) is given by the formula:
$$ \varepsilon^{(k+1)} = \varepsilon^{(k)} - \alpha_k \left( S^{T} S + \lambda I \right)^{-1} S^{T} \left( S \varepsilon^{(k)} - C \right), $$
where λ is a regularization parameter that balances the trade-off between the solution's stability and accuracy. To optimize the performance of the sLM method, we conducted several reconstructions on the entire validation dataset and determined that a λ value of 5 × 10⁻³ yielded the best results. The sensitivity matrix S was not updated to reduce computation time, making this a linearized version of the sLM method, which is more comparable in execution time to the neural network-based reconstruction. In the literature, this approach is also called iterative Tikhonov regularization [34]. The stopping criterion is consistent with that used in the Landweber method.
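For illustration, both reference iterations can be sketched in a few lines of Python; the zero initial guess, the step-halving factor, and the iteration limits are assumptions made for the sketch and do not reproduce the ECTsim implementation.

```python
# Sketches of the two reference iterations with self-relaxing step length.
# S: sensitivity matrix (120 x 608), C: measured capacitance vector (120 elements).
import numpy as np

def landweber(S, C, alpha0=1.0, alpha_min=1e-4, max_iter=200):
    eps = np.zeros(S.shape[1])                        # assumed initial guess
    alpha, prev_dis = alpha0, np.linalg.norm(S @ eps - C)
    for _ in range(max_iter):
        candidate = eps - alpha * (S.T @ (S @ eps - C))   # Landweber update
        dis = np.linalg.norm(S @ candidate - C)           # discrepancy norm
        if dis < prev_dis:
            eps, prev_dis = candidate, dis                # accept the step
        else:
            alpha *= 0.5                                  # relax the step length
            if alpha < alpha_min:                         # stop: step length too small
                break
    return eps

def slm(S, C, lam=5e-3, alpha0=1.0, alpha_min=1e-4, max_iter=200):
    # S is never updated, so the regularized normal matrix is factorized only once.
    A_inv_ST = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T)
    eps = np.zeros(S.shape[1])
    alpha, prev_dis = alpha0, np.linalg.norm(S @ eps - C)
    for _ in range(max_iter):
        candidate = eps - alpha * (A_inv_ST @ (S @ eps - C))   # simplified LM update
        dis = np.linalg.norm(S @ candidate - C)
        if dis < prev_dis:
            eps, prev_dis = candidate, dis
        else:
            alpha *= 0.5
            if alpha < alpha_min:
                break
    return eps
```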
Validation was also conducted using real measurements. For this, a tomographic sensor was prepared, corresponding to the numerical model used in the simulations. Also, several test objects were designed to simulate a two-phase flow. These physical setups allowed us to compare the results obtained from the neural network and traditional methods, validating the effectiveness of our approach in practice.

3. Results

This experiment aimed to compare the reconstruction results obtained using the cGAN network with those from the Landweber method and sLM algorithm. The experiment consisted of two parts: in the first part, a quantitative comparison was performed on the additionally generated validation set, and in the second part, real measurements were conducted.

3.1. Analysis of the Simulated Dataset

The validation dataset consisted of two subsets, each containing 5000 elements: one with flow patterns and the other with random circles. Noise was added to the measurements to achieve an SNR of 30 dB for the opposing electrodes, which reflects the SNR of our actual measurement system. Figure 7 shows the reference patterns and the images reconstructed using the Landweber method, the sLM algorithm (λ = 5 × 10⁻³), and the cGAN neural network.
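The noise model can be sketched as follows; scaling the noise to the opposing-electrode capacitance and applying the same level to all channels is our reading of this setup and is stated here as an assumption.

```python
# Sketch of the additive measurement noise used for validation (assumed convention):
# the noise standard deviation is chosen so that the weakest signal, i.e., the capacitance
# between opposing electrodes, has an SNR of 30 dB.
import numpy as np

def add_noise(C, c_opposing, snr_db=30.0, rng=np.random.default_rng(0)):
    sigma = c_opposing / (10 ** (snr_db / 20))        # std giving 30 dB on the weakest channel
    return C + rng.normal(0.0, sigma, size=C.shape)
```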
The performance of the cGAN was evaluated using a commonly adopted and recognized approach to measuring the quality of ECT images [35]. This approach relies on straightforward yet objective pixel-by-pixel comparisons to gauge the difference between the reconstructed and original ground-truth images. The quality of the reconstructed images was assessed using several metrics: mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and the correlation coefficient. Each metric is computed against the reference cross-sectional image that was the basis for generating the dataset. The mean, median, and standard deviation of these metrics were calculated for the validation dataset (Table 1).
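These per-image metrics can be computed as in the short sketch below; the use of scikit-image for SSIM and the data-range conventions are implementation assumptions, not details of the original evaluation code.

```python
# Per-image quality metrics used in the evaluation (illustrative implementation).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def image_metrics(ref, rec):
    """ref, rec: 2D arrays holding the reference and reconstructed permittivity images."""
    mse = np.mean((ref - rec) ** 2)
    psnr = 10.0 * np.log10(ref.max() ** 2 / mse) if mse > 0 else np.inf
    corr = np.corrcoef(ref.ravel(), rec.ravel())[0, 1]
    s = ssim(ref, rec, data_range=ref.max() - ref.min())
    return {"MSE": mse, "PSNR": psnr, "SSIM": s, "correlation": corr}
```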
The average value helps us evaluate the overall reconstruction quality by quantifying the discrepancy between the real and reconstructed image datasets. However, this does not capture the issues related to specific elements, known as outliers. Therefore, the distributions of the MSE and correlation coefficient across the testing dataset were also analyzed and presented as histograms in Figure 8.
The distributions of the time required for reconstruction were also calculated and presented in Figure 9 for all three reconstruction methods.

3.2. Reconstruction from Real Measurements

For the real measurements, we used polypropylene (PP) pellets to create test objects designed to simulate a two-phase flow. The versatility of forming PP pellets into various shapes enabled us to construct detailed but static phantoms that could replicate the characteristics of two-phase flow patterns (annular, stratified, and slug) without needing an industrial setup. PVC pipes were used to shape the PP pellets into the desired forms. For measurements involving random circles, the test object was constructed using polyamide rods (ε_r = 4) of varying diameters (10 mm, 20 mm, and 30 mm) placed vertically in the sensor. The results are shown in Figure 10. In addition to the photos of the object setup inside the sensor, we included their numerical models to better illustrate the expected reconstruction results. Real measurements were conducted using the EVT4 data acquisition system. Normalized capacitance values are presented as a two-dimensional set of tomographic projections (sinograms). We compared the images obtained using the cGAN-based neural network to those reconstructed using the Landweber method (with step size relaxation) and those obtained using the sLM algorithm (without a sensitivity matrix update). The results demonstrate the superiority of the cGAN-based neural network over the iterative algorithms.

4. Conclusions

Implementing ECT in industrial settings requires a rapid and computationally efficient reconstruction method. Our research indicates that this can be achieved using an artificial neural network. Through numerical simulations and real measurements, we have demonstrated that a cGAN-based method produces promising results. The reconstruction speed is comparable to iterative linear algorithms, and the quality of the reconstructions with the cGAN-based approach, evaluated using image distance metrics, is superior.
Utilizing neural networks like cGAN for image reconstruction necessitates the availability of large training datasets. In our study, we employed a dataset containing 150,000 elements. Generating such a large dataset through real-world measurements would have been impractical; thus, it was only achievable through precise numerical simulations. We used our custom software, ECTsim, which allows for a rapid and accurate simulation of measurements in electrical capacitance tomography. We have decided to make ECTsim available under an open-source license, enabling other researchers and practitioners interested in preparing large training datasets to benefit from our tool.
We were particularly focused on validating the effectiveness of our network (trained on the simulated dataset) in real-world measurements. To achieve this, we prepared a measurement sensor and various test objects. The measurements were carried out using our custom-built tomography system, EVT4, which can capture 1000 images per second with a 16-electrode sensor. The reconstructions were performed on individual measurements. The average time for reconstruction using the C++ implementation of the cGAN neural network was 2.7 ms on our laboratory computer, indicating that we could easily achieve up to 200 reconstructed images per second using this method. Furthermore, the quality of the reconstructions in the case of real measurements was superior to that of the linear iterative algorithms with comparable computation times.

Author Contributions

Conceptualization, W.T.S. and M.M.; methodology, M.I., W.T.S., D.W. and M.M.; software, M.I., D.W. and P.W.; validation, M.I., D.W. and M.M.; formal analysis, W.T.S.; investigation, M.I., D.W. and P.W.; resources, P.W.; data curation, P.W.; writing—original draft preparation, M.I. and W.T.S.; writing—review and editing, W.T.S. and M.M.; visualization, M.I. and D.W.; supervision, W.T.S. and D.W.; project administration, W.T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the internal grant for employees of the Warsaw University of Technology supporting scientific activity in the discipline of Biomedical Engineering, grant number 3/2023/RNDIB/WEITI.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Plaskowski, A.; Beck, M.S.; Thorn, R.; Dyakowski, T. Imaging Industrial Flows: Applications of Electrical Process Tomography; CRC Press: Boca Raton, FL, USA, 1995; ISBN 9780750302968. [Google Scholar]
  2. Rymarczyk, T.; Sikora, J. Applying industrial tomography to control and optimization flow systems. Open Phys. 2018, 16, 332–345. [Google Scholar] [CrossRef]
  3. Zheng, J.; Peng, L. An autoencoder-based image reconstruction for electrical capacitance tomography. IEEE Sens. J. 2018, 18, 5464–5474. [Google Scholar] [CrossRef]
  4. Rouhani, S.Z.; Sohal, M.S. Two-phase flow patterns: A review of research results. Prog. Nucl. Energy 1983, 11, 219–259. [Google Scholar] [CrossRef]
  5. Yan, Y.; Mohanarangam, K.; Yang, W.; Tu, J. Experimental measuring techniques for industrial-scale multiphase flow problems. Exp. Comput. Multiph. Flow 2024, 6, 1–13. [Google Scholar] [CrossRef]
  6. Darnajou, M. A Novel Approach to High-Speed Electrical Impedance Tomography with Frequency Division Multiplexing. Ph.D. Dissertation, Ecole Centrale Marseille, Marseille, France, 2023. [Google Scholar]
  7. Hampel, U.; Babout, L.; Banasiak, R.; Schleicher, E.; Soleimani, M.; Wondrak, T.; Vauhkonen, M.; Lähivaara, T.; Tan, C.; Hoyle, B.; et al. A Review on Fast Tomographic Imaging Techniques and Their Potential Application in Industrial Process Control. Sensors 2022, 22, 2309. [Google Scholar] [CrossRef]
  8. Bergou, E.H.; Diouane, Y.; Kungurtsev, V. Convergence and Complexity Analysis of a Levenberg–Marquardt Algorithm for Inverse Problems. J. Optim. Theory Appl. 2020, 185, 927–944. [Google Scholar] [CrossRef]
  9. Husain, Z.; Liatsis, P. A neural network-based local decomposition approach for image reconstruction in Electrical Impedance Tomography. In Proceedings of the IST 2019—IEEE International Conference on Imaging Systems and Techniques, Abu Dhabi, United Arab Emirates, 9–10 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  10. Pinheiro, P.A.T.; Loh, W.W.; Dickin, F.J. Three-dimensional reconstruction algorithm for electrical resistance tomography. IEE Proc. Sci. Meas. Technol. 1998, 145, 85–93. [Google Scholar] [CrossRef]
  11. Wang, F.; Marashdeh, Q.; Fan, L.S.; Warsito, W. Electrical capacitance volume tomography: Design and applications. Sensors 2010, 10, 1890–1917. [Google Scholar] [CrossRef]
  12. Nombo, J.; Mwambela, A.; Kisngiri, M. Analysis and Performance Evaluation of Entropic Thresholding Image Processing Techniques for Electrical Capacitance Tomography Measurement System. Tanzan. J. Sci. 2021, 47, 928–942. [Google Scholar] [CrossRef]
  13. Li, F.; Tan, C.; Dong, F.; Jia, J. V-Net Deep Imaging Method for Electrical Resistance Tomography. IEEE Sens. J. 2020, 20, 6460–6469. [Google Scholar] [CrossRef]
  14. Wu, Y.; Chen, B.; Liu, K.; Zhu, C.; Pan, H.; Jia, J.; Wu, H.; Yao, J. Shape Reconstruction with Multiphase Conductivity for Electrical Impedance Tomography Using Improved Convolutional Neural Network Method. IEEE Sens. J. 2021, 21, 9277–9287. [Google Scholar] [CrossRef]
  15. Hamilton, S.J.; Hauptmann, A. Deep D-Bar: Real-Time Electrical Impedance Tomography Imaging with Deep Neural Networks. IEEE Trans. Med. Imaging 2018, 37, 2367–2377. [Google Scholar] [CrossRef] [PubMed]
  16. Schlemper, J.; Caballero, J.; Hajnal, J.V.; Price, A.N.; Rueckert, D. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 491–503. [Google Scholar] [CrossRef] [PubMed]
  17. Pelt, D.M.; Batenburg, K.J.; Sethian, J.A. Improving tomographic reconstruction from limited data using mixed-scale dense convolutional neural networks. J. Imaging 2018, 4, 128. [Google Scholar] [CrossRef]
  18. Hsieh, J.; Liu, E.; Nett, B.; Tang, J.; Thibault, J.-B.; Sahney, S. A New Era of Image Reconstruction: TrueFidelityTM Technical White Paper on Deep Learning Image Reconstruction; General Electric Company: Boston, MA, USA, 2019. [Google Scholar]
  19. Deabes, W.; Abdel-Hakim, A.E.; Bouazza, K.E.; Althobaiti, H. Adversarial Resolution Enhancement for Electrical Capacitance Tomography Image Reconstruction. Sensors 2022, 22, 3142. [Google Scholar] [CrossRef]
  20. Zhu, Q.X.; Xu, T.X.; Xu, Y.; He, Y.L. Improved Virtual Sample Generation Method Using Enhanced Conditional Generative Adversarial Networks with Cycle Structures for Soft Sensors with Limited Data. Ind. Eng. Chem. Res. 2022, 61, 530–540. [Google Scholar] [CrossRef]
  21. Chen, Z.; Chen, M.; Wang, J.; Liu, R.; Shao, Y.; Liu, B.; Tang, Y.; Liu, M.; Yang, W. Imaging irregular structures using electrical capacitance tomography. Meas. Sci. Technol. 2021, 32, 075006. [Google Scholar] [CrossRef]
  22. Kryszyn, J.; Smolik, W. Toolbox for 3D Modelling and Image Reconstruction in Electrical Capacitance Tomography. Inform. Control Meas. Econ. Environ. Prot. 2017, 7, 137–145. [Google Scholar] [CrossRef]
  23. Wanta, D.; Smolik, W.T.; Kryszyn, J.; Wróblewski, P.; Midura, M. A Finite Volume Method using a Quadtree Non-Uniform Structured Mesh for Modeling in Electrical Capacitance Tomography. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2021, 92, 443–452. [Google Scholar] [CrossRef]
  24. Wanta, D.; Smolik, W.T.; Kryszyn, J. ECTsim. Zenodo 2024. [Google Scholar] [CrossRef]
  25. Kryszyn, J.; Wróblewski, P.; Stosio, M.; Wanta, D.; Olszewski, T.; Smolik, W.T. Architecture of EVT4 data acquisition system for electrical capacitance tomography. Meas. J. Int. Meas. Confed. 2017, 101, 28–39. [Google Scholar] [CrossRef]
  26. Smolik, W.T.; Kryszyn, J.; Radzik, B.; Stosio, M.; Wróblewski, P.; Wanta, D.; Dańko, L.; Olszewski, T.; Szabatin, R. Single-shot high-voltage circuit for electrical capacitance tomography. Meas. Sci. Technol. 2017, 28, 025902. [Google Scholar] [CrossRef]
  27. Kryszyn, J.; Wanta, D.M.; Smolik, W.T. Gain Adjustment for Signal-to-Noise Ratio Improvement in Electrical Capacitance Tomography System EVT4. IEEE Sens. J. 2017, 17, 8107–8116. [Google Scholar] [CrossRef]
  28. Ivanenko, M.; Wanta, D.; Smolik, W.T.; Wróblewski, P.; Midura, M. Generative-Adversarial-Network-Based Image Reconstruction for the Capacitively Coupled Electrical Impedance Tomography of Stroke. Life 2024, 14, 419. [Google Scholar] [CrossRef] [PubMed]
  29. Ivanenko, M.; Smolik, W.T.; Wanta, D.; Midura, M.; Wróblewski, P.; Hou, X.; Yan, X. Image Reconstruction Using Supervised Learning in Wearable Electrical Impedance Tomography of the Thorax. Sensors 2023, 23, 7774. [Google Scholar] [CrossRef]
  30. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Lect. Notes Comput. Sci.; Springer International Publishing: New York, NY, USA, 2015; Volume 9351, pp. 12–20. [Google Scholar] [CrossRef]
  31. Cohen, G.; Giryes, R. Generative Adversarial Networks. In Machine Learning for Data Science Handbook, Data Mining and Knowledge Discovery Handbook, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2023; pp. 375–400. [Google Scholar] [CrossRef]
  32. Deabes, W.; Abdel-Hakim, A.E. CGAN-ECT: Tomography Image Reconstruction from Electrical Capacitance Measurements Using CGANs. arXiv 2022, arXiv:2209.03737. [Google Scholar]
  33. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar] [CrossRef]
  34. Yang, W.Q.; Peng, L. Image reconstruction algorithms for electrical capacitance tomography. Meas. Sci. Technol. 2003, 14, R1–R13. [Google Scholar] [CrossRef]
  35. Xia, Z.; Cui, Z.; Chen, Y.; Hu, Y.; Wang, H. Generative adversarial networks for dual-modality electrical tomography in multi-phase flow measurement. Meas. J. Int. Meas. Confed. 2021, 173, 108608. [Google Scholar] [CrossRef]
Figure 1. The classes of training datasets. (a) Typical flow patterns: annular flow, stratified flow, and slug flow (one and two rods). (b) Images containing random circular objects. Two relative permittivity values, 1 and 3, were used. For each map of permittivity distribution, the corresponding capacitance measurements are shown in the form of a 2D 16 × 15 pixels color map.
Figure 2. (a) ECT sensor filled with PP pellets, and (b) sketch of the sensor.
Figure 3. The signal-to-noise ratio of capacitance measurements obtained using the prepared sensor and EVT4 data acquisition system: ‘min’—measurement with an empty sensor; ‘max’—measurement conducted with the sensor filled with PP pellets. The graph shows data for the first electrode in the pair.
Figure 4. The architecture of the generator network in the cGAN model featuring a U-Net-style structure.
Figure 5. Neural network training procedure.
Figure 6. Neural network training over 50 epochs: (a) discriminator loss, (b) generator loss, and (c) relative image error.
Figure 7. The modeled true distribution of relative permittivity and the images reconstructed using the Landweber method, the sLM algorithm (λ = 5 × 10⁻³), and the cGAN neural network, based on examples from the test set with added noise (set to achieve an SNR of 30 dB for the opposing electrodes in the measurement).
Figure 8. Histograms of mean square error (a,b) and correlation distribution (c,d) for the elements of the testing dataset with flow patterns (a,c) and with random circles (b,d).
Figure 9. Histogram of the computation time required for each reconstruction algorithm.
Figure 10. Four test objects mimicking the two-phase flow patterns (annular and stratified flows, slug, and circles) and the results of the image reconstructions. From left to right: view of test objects inside the sensor, numerical representations of the test objects, normalized sinograms of measured capacitances, and images reconstructed using the Landweber algorithm, the sLM algorithm, and cGAN.
Table 1. The mean (µ), median (mdn), and standard deviation (σ) of image quality metrics for the testing datasets, including MSE, SSIM, correlation, PSNR, and reconstruction time. The method with the best performance on each metric is highlighted in bold.

                        Flow Patterns                    Random Circles
                  Landweber    sLM      cGAN       Landweber    sLM      cGAN
MSE          µ       0.762     0.859    0.034         0.372     0.381    0.018
             mdn     0.409     0.516    0.010         0.361     0.358    0.012
             σ       0.605     0.713    0.048         0.083     0.115    0.018
SSIM         µ       0.342     0.321    0.867         0.318     0.315    0.945
             mdn     0.320     0.298    0.879         0.315     0.312    0.951
             σ       0.087     0.099    0.075         0.055     0.056    0.027
correlation  µ       0.845     0.829    0.965         0.780     0.779    0.985
             mdn     0.862     0.858    0.994         0.788     0.787    0.990
             σ       0.063     0.077    0.064         0.056     0.057    0.017
PSNR         µ       2.524     2.166    20.36         4.388     4.345    19.21
             mdn     3.886     2.869    20.16         4.430     4.463    19.08
             σ       3.381     3.628    7.812         0.906     1.101    4.008
time [ms]    µ       3.1       5.3      2.7           3.1       5.8      2.7
             mdn     3.0       5.8      2.7           3.0       6.0      2.7
             σ       0.4       1.3      0.1           0.6       1.4      0.2

