
Inpainted Image Reconstruction Using an Extended Hopfield Neural Network Based Machine Learning System

Department of Electrical Engineering, Gdynia Maritime University, Morska 81-87, 81-225 Gdynia, Poland
* Author to whom correspondence should be addressed.
Sensors 2022, 22(3), 813; https://doi.org/10.3390/s22030813
Submission received: 26 November 2021 / Revised: 7 January 2022 / Accepted: 17 January 2022 / Published: 21 January 2022
(This article belongs to the Section Sensing and Imaging)

Abstract

This paper considers the use of a machine learning system for the reconstruction and recognition of distorted or damaged patterns, in particular, images of faces partially covered with masks. The most up-to-date image reconstruction structures are based on constrained optimization algorithms and suitable regularizers. In contrast with the above-mentioned image processing methods, the machine learning system presented in this paper employs the superposition of system vectors setting up asymptotic centers of attraction. The structure of the system is implemented using Hopfield-type neural network-based biorthogonal transformations. The reconstruction property gives rise to a superposition processor and reversible computations. Moreover, this paper’s distorted image reconstruction sets up associative memories where images stored in memory are retrieved by distorted/inpainted key images.

1. Introduction

Machine learning, a sub-field of artificial intelligence, deals with algorithms that build mathematical models to automatically make decisions or predictions based on sample data called training sets. The concept of learning is the key to understanding intelligence in both biological brain structures and machines. The aim of machine learning is to create mappings $y = F(x)$, $y \in \mathbb{R}^m$, $x \in \mathbb{R}^n$, generated by training sets $S = \{(x_i, y_i)\}_{i=1}^{N}$, the vectors of which are approximation nodes. Hence, the training points satisfy:
$y_i = F(x_i), \quad i = 1, \dots, N$ (1)
The machine learning model described in a previous paper [1] was derived from an extended Hopfield neural network and is based on spectral analysis that uses biorthogonal and orthogonal transformations. It should be emphasized that this system has a universal character that enables the implementation of basic functions of learning systems, such as pattern association, pattern recognition, and inverse modeling. One of the aforementioned properties of this model is the recognition and reconstruction of image patterns. In [1], we presented an example where the object of the reconstruction was an incomplete, inpainted image of a subject named Lena. Such examples of reconstruction allow for the development of a system based on the above-mentioned model of machine learning that can recognize people wearing masks. It is worth noting that the above-mentioned model of machine learning represents an alternative to classical image reconstruction/restoration systems, which make use of such processing tools as inverse modeling, deconvolution, Wiener filters, and PCA (Principal Component Analysis) [2,3,4,5].
Classical image reconstruction systems are currently being intensively supplemented and replaced by those using neural, neuro-fuzzy architecture, and algorithms, especially in medical applications [6,7,8,9,10]. A comprehensive review of recent advances in image reconstruction can be found in [11]. Current research is focused on sparsity, low-rankness, and machine learning [12,13]. It is worth noting that the most up-to-date image reconstruction structures are based on constrained optimization algorithms and adequate regularizers [14].
Recently, deep learning algorithms have been driving a renaissance of interest in neural network research and applications (e.g., image processing). Most of the known deep learning algorithms are implemented in the form of ANNs (DLNNs) learned from training set data by minimizing loss functions. Thus, the deep learning approach can be seen as a special topic in optimization theory. Standard types of deep learning neural networks include multilayer perceptrons (MLP), convolutional neural networks (CNN), recurrent neural networks (RNN), and generative adversarial networks (GAN) [15,16,17,18]. However, an optimal network topology and implementation technology have not yet been selected (the generalizability of networks is not well understood, and there is a lack of explanation for the relationship between network topology and performance [19]). Nevertheless, we claim that ANNs should constitute both universal algorithmic and physical models used in computational intelligence. It is clear that Hopfield-type neural networks are both physical and algorithmic models suitable for neural computations. Hence, we considered an extended Hopfield-type model of the neural network defined by the following equation:
$\dot{x} = \eta\,(W - w_0 \mathbf{1} + \varepsilon W_s)\,\theta(x) + I_d$ (2)
where
$W$—skew-symmetric orthogonal matrix;
$W_s$—real symmetric matrix;
$\mathbf{1}$—identity matrix;
$\theta(x)$—vector of activation functions;
$I_d$—input vector; and
$\varepsilon, w_0, \eta$—parameters.
The equilibrium equation of the neural network (2), i.e.,
$\eta\,(W - w_0 \mathbf{1} + \varepsilon W_s)\,\theta(x) + I_d = 0$ (3)
gives rise to universal models of machine learning based on biorthogonal transformations, enabling the realization of common learning system functions. One of these functions is the implementation of associative memories. Thus, this paper’s inpainted image reconstruction system sets up associative memories where images stored in memory are retrieved by distorted/inpainted key images. To summarize, we propose a machine learning model that uses biorthogonal transformations based on spectral processing as an alternative solution to deep learning based on optimization procedures.
The rest of this paper is structured as follows: Section 2 provides details on the proposed learning algorithm and presents the structure of the machine learning system for image processing. Section 2 also contains some results of computational verification using MATLAB software. Section 3 includes some results of image processing as an inverse problem. Some unique properties of this machine learning system are discussed in Section 4. The conclusions underline the main features of the machine learning system presented in the article.

2. Materials and Methods

2.1. Machine Learning System for Image Processing

We consider a set of N black and white images represented by m rows and n columns, i.e., a set of $m \cdot n$ pixels with different shades of grayness. For vector analysis, each image is transformed by concatenating its m rows to form the column vector $x_i$ $(m \cdot n \times 1)$, $i = 1, \dots, N$. Thus, the set of N images is represented by the following matrix:
$X = [x_1, x_2, \dots, x_N], \quad \dim x_i = m \cdot n = 2^k, \quad k = 3, 4, \dots$ (4)
where
$N < \tfrac{1}{2}\, n \cdot m$. (5)
The set of distorted images is given by the matrix:
$X^s = [x_1^s, x_2^s, \dots, x_N^s]$. (6)
It is straightforward to observe that the training set is as follows:
$S = \{(x_i, x_i^s)\}_{i=1}^{N}$. (7)
S creates a mapping $F(\cdot)$ defined by the following properties:
$x_i = F(x_i)$ (8)
and
$x_i = F(x_i^s), \quad i = 1, 2, \dots, N$. (9)
Thus, the mapping F is implemented as a machine learning system for image reconstruction.
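For concreteness, the vectorization and training-set construction described above can be sketched in NumPy. This is an illustrative sketch only (the paper's experiments used MATLAB); the toy 8 × 8 images, the software mask, and the helper name build_training_set are assumptions made for the example:

```python
import numpy as np

def build_training_set(images, distorted):
    """Stack m x n grayscale images as columns of X and X^s.

    images, distorted: lists of N arrays, each of shape (m, n).
    Returns X, Xs of shape (m*n, N) and the training set S as pairs.
    """
    X = np.column_stack([img.reshape(-1) for img in images])      # x_i = concatenated rows
    Xs = np.column_stack([img.reshape(-1) for img in distorted])  # x_i^s
    mn, N = X.shape
    assert N < 0.5 * mn, "capacity condition N < (1/2) m*n violated"
    S = list(zip(X.T, Xs.T))  # training pairs (x_i, x_i^s)
    return X, Xs, S

# toy example: three 8 x 8 images and their "masked" versions
rng = np.random.default_rng(0)
imgs = [rng.random((8, 8)) for _ in range(3)]
masked = [img.copy() for img in imgs]
for m_img in masked:
    m_img[4:, :] = 0.0  # software mask: zero out the lower half
X, Xs, S = build_training_set(imgs, masked)
print(X.shape, Xs.shape, len(S))  # (64, 3) (64, 3) 3
```

Row-major flattening (`reshape(-1)`) matches the row-concatenation convention used in the text.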
The structure implementing the mapping $F(\cdot)$ defined by Equations (8) and (9) can be obtained from the solutions of the equilibrium Equation (3). Thus, for $w_0 = 2$, $\varepsilon = 1$ in Equation (3), one gets:
$(W_{2^k} - 2 \cdot \mathbf{1} + W_s)\, m_i + x_i^s = 0$ (10)
where $W_{2^k}$ is a skew-symmetric, orthogonal matrix with $W_{2^k}^2 = -\mathbf{1}$.
Hence, the N solutions are as follows:
$m_i = (2 \cdot \mathbf{1} - W_s - W_{2^k})^{-1} x_i^s, \quad i = 1, \dots, N$ (11)
where
$W_s = M (M^T M)^{-1} M^T$ (12)
and
$M = [m_1, m_2, \dots, m_N]$ (13)
is a spectrum matrix of the given vectors $x_i$, i.e.,
$m_i = \tfrac{1}{2} (W_{2^k} + \mathbf{1})\, x_i$ (14)
and
$x_i = (\mathbf{1} - W_{2^k})\, m_i, \quad i = 1, \dots, N$. (15)
Equation (11) can be seen as the determination of a biorthogonal transformation $T_s(\cdot)$:
$m_i = T_s(x_i^s)$ (16)
and Equation (14) can be seen as an orthogonal transformation:
$m_i = T(x_i)$, (17)
$x_i = T^{-1}(m_i)$. (18)
The transformations $T_s(\cdot)$ and $T^{-1}(\cdot)$, arranged as a realization of the mapping $F(\cdot)$, have the block structure shown in Figure 1 [1]. The orthogonal transformation $T(\cdot)$, which makes use of the Hurwitz–Radon matrix family [20], allows for determining the Haar–Fourier spectra of the system vectors $x_i$.
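The transform pair can be checked numerically. The sketch below is an illustrative Python/NumPy construction, not the authors' MATLAB code: it builds a skew-symmetric orthogonal W from 2 × 2 blocks instead of the Hurwitz–Radon family used in the paper, forms the spectra and the matrix W_s, and verifies that stored vectors are exact fixed points of the pair, i.e., that the transform of Equation (11) applied to an undistorted x_i returns its spectrum m_i, and that the inverse of the transform in Equation (14) recovers x_i:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4
dim = 2 ** k          # 2^k = 16
N = 3                 # N < dim/2 stored vectors (capacity condition)

# Skew-symmetric orthogonal W (W^T = -W, W^T W = 1, hence W^2 = -1),
# assembled from 2x2 blocks as a stand-in for the Hurwitz-Radon family.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
W = np.kron(np.eye(dim // 2), J)
I = np.eye(dim)

X = rng.standard_normal((dim, N))        # vectorized "images" x_i (toy data)
M = 0.5 * (W + I) @ X                    # spectra m_i, Eq. (14)
Ws = M @ np.linalg.inv(M.T @ M) @ M.T    # W_s = M (M^T M)^{-1} M^T
Ts = np.linalg.inv(2.0 * I - Ws - W)     # biorthogonal transform, Eq. (11)
T_inv = I - W                            # inverse of T = (1/2)(W + 1)

# Stored patterns are exact fixed points of the transform pair:
assert np.allclose(Ts @ X, M)            # m_i = T_s(x_i)
assert np.allclose(T_inv @ M, X)         # x_i = T^{-1}(m_i)
print("fixed-point identities hold")
```

A distorted input x_i^s produces only an estimate of m_i, which the feedback loop of Figure 1 refines iteratively.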
The structure in Figure 1 serves as the estimator of the spectrum $\hat{m}_i$:
$\hat{m}_i = T_s(x_i^s), \quad i = 1, \dots, N$. (19)
In the system, due to the iterative nature of the feedback loop, the following convergence of vectors is obtained:
$\hat{m}_i \to m_i, \quad \hat{y}_i \to x_i, \quad i = 1, \dots, N$. (20)
The convergence determined by Equation (20) is achieved in K iterations (K depends on the reconstruction problem; note the example shown below). Moreover, it should be noted that for an input image $z \neq x_i$, $i = 1, \dots, N$, the output of the system is given by the superposition of system vectors:
$F(z) = \sum_{i=1}^{N} \alpha_i x_i, \quad \alpha_i \in \mathbb{R}$. (21)
The system vectors $x_i$ set up the centers of attraction.
The structure in Figure 1 can also be represented as the lumped memory model in Figure 2. It is worth noting that this structure gives rise to the realization of an AI analog processor. However, this topic is beyond the scope of this paper. The synthesis algorithm of the system given in Figure 1 can be found in Appendix A.

2.2. Computational Verification of the Learning Algorithm—Examples of Face Image Reconstruction and Person Recognition

A. The machine image processing system described in the previous section was used to reconstruct and classify a set of images. The system task was to reconstruct a complete face image based on a masked photo (mask applied by software) and to assign the reconstructed image to a specific person. In the system, photos of 9 faces (N = 9) were stored in memory in the form of a 64 × 64 matrix defining the degree of grayness of individual image pixels. The saved face images are presented in Figure 3. For vector analysis, each image was transformed by concatenating its 64 lines into the form of a column vector $x_i$ $(64 \cdot 64 \times 1)$, $i = 1, \dots, 9$. After transformation, the set of 9 images was represented by the matrix $X$ $(4096 \times 9)$: $X = [x_1, x_2, \dots, x_9]$, $\dim x_i = 64 \cdot 64 = 4096 = 2^k$, $k = 12$.
In the experiments, the identification numbers 1, 2, …, 9 were assigned to the images. The system vectors $u_i = [x_i; i]$, $i = 1, 2, \dots, 9$, i.e., the image vectors augmented with their identification numbers, were used to construct the machine learning system according to the procedure described in the previous section. Examples of the reconstruction of photos of people wearing masks are shown in Figure 4.
Table 1 shows the nominal values of the Recognition Index, i.e., the assigned numbers and their associated values after 100 iterations. The results presented in Table 1 show that in most cases, the value of the index rounded to the nearest integer corresponds to the nominal value. Thus, the system correctly identifies each person with the exception of Photo Number 3, where the person is incorrectly recognized. Increasing the number of iterations did not change the index, as the process quickly converges to the final value.
The convergence of the iterative process is illustrated in Table 2, which presents the index values obtained after successive iterations. The experiment was carried out for Photo Number 2 with the nominal value of the coefficient of 2.0.
A significant result that confirms the operating principle of the proposed system was obtained by presenting a photo that was not saved in the system. The response shown in Figure 5 is a superposition of the photos stored in the system’s memory (Equation (21)).
The mean squared error (MSE) values calculated for the previously performed image reconstructions are presented in Table 3. The second column of the table shows the MSE describing the difference between the original and masked photos, whereas the third column shows the error describing the difference between the original photo and the photo after reconstruction. Each time an image saved in the system was analyzed, the mean squared error decreased. For the reconstruction attempt shown in Figure 5, which uses an image not saved in the system, the MSE error is 2950.90.
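The MSE values reported in the tables are ordinary pixelwise mean squared errors; a minimal sketch of the computation on made-up 2 × 2 arrays (all values here are illustrative, not taken from the experiments):

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Pixelwise mean squared error between two equally sized images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

original = np.array([[10.0, 20.0], [30.0, 40.0]])
masked = np.array([[10.0, 20.0], [0.0, 0.0]])        # lower row masked out
reconstructed = np.array([[10.0, 21.0], [29.0, 40.0]])

print(mse(original, masked))         # 625.0
print(mse(original, reconstructed))  # 0.5
```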
The fractional value of the index in Table 1 reflects the system operation mechanism, which is a weighted combination of numbers 1, …, 9.
B. In the case of another masking method, as illustrated in Figure 6, a set of distorted images is given, according to relationship (6), by the matrix:
$X^s = [x_1^s, x_2^s, \dots, x_N^s]$ (22)
where $\dim x_i^s = k \cdot n \times 1$, $i = 1, \dots, N$, $k < m$.
The model structure for the reconstruction of such distorted images is shown in Figure 7.
The image reconstruction process in Figure 6 is illustrated in Figure 8, which shows the results obtained after 1, 2, 5, 10, and 100 iterations. After 100 iterations, the reconstruction MSE is 0, and the identification index is 9.0. It is worth comparing the above values with the data for Photo Number 9 presented in Table 1 and Table 3.
It is worth noting that, as mentioned in the Introduction, the reconstruction of Lena’s photo was realized by using the structure presented in Figure 7 [1] as well. For example, one of the distorted images of Lena and its reconstruction is shown in Figure 9.
The potential reconstruction of a distorted image using the structure in Figure 1 is illustrated for Photo Number 9 (Figure 3) by superimposing a noise vector generated by using the RAND function in MATLAB. The measure of this distortion is the signal/noise ratio expressed in decibels. The results of such a reconstruction are presented in Table 4 and Figure 10.
Based on Table 4, the machine learning system correctly and automatically identified the distorted image at S/N > 10 dB. Yet even at S/N = 2.7 dB, the reconstructed image shows significant similarity to the saved original photo.
The numerical data in Table 4, plotted as MSE vs. S/N, form the plots presented in Figure 11.
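The distortion measure used above can be sketched as follows; the S/N in decibels is taken as 10·log10 of the signal-to-noise power ratio (an assumed, standard definition), with pseudo-random Gaussian noise scaled to a target S/N standing in for the MATLAB RAND-based noise of the experiments:

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels: 10*log10(P_signal / P_noise)."""
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(42)
x = rng.standard_normal(4096)          # vectorized image (illustrative)
noise = rng.standard_normal(4096)

# Scale the noise to hit a target S/N of 10 dB before superimposing it.
target_db = 10.0
scale = np.sqrt(np.sum(x ** 2) / (np.sum(noise ** 2) * 10 ** (target_db / 10)))
noisy = x + scale * noise

print(round(snr_db(x, scale * noise), 1))  # 10.0
```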

3. Inpainted Image Recognition and Reconstruction as an Inverse Problem

The image reconstruction models presented in the previous sections are based on the availability of training sets S in Equations (7) and (22) containing original and damaged patterns. Alternatively, a common model of image reconstruction is given by the equation:
$A x = \tilde{y}$ (23)
where
  • $A$—known processing operator; for example, $A$ is a matrix;
  • $x$—original image; and
  • $\tilde{y}$—observed degenerate image.
According to Equation (23), the reconstruction of an image leads to solving an inverse problem. Most of the solutions to Equation (23) in the literature use an optimization formulation [14,21], for example:
$\min_x \|\tilde{y} - A x\|_2^2, \ \text{s.t.}\ x \in K$, or $\min_x \|\tilde{y} - A x\|_2^2 + \beta R(x)$ (24)
where
  • $K$—set of feasible solutions;
  • $R(x)$—regularizer;
  • $\beta$—regularization parameter.
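For reference, the regularized least-squares formulation shown above admits, for the common choice $R(x) = \|x\|_2^2$, the closed-form Tikhonov solution $x^{*} = (A^T A + \beta \mathbf{1})^{-1} A^T \tilde{y}$. The sketch below is a generic illustration of that baseline, not the paper's method; the dimensions and the noise-free setup are assumptions for the example:

```python
import numpy as np

def tikhonov(A: np.ndarray, y: np.ndarray, beta: float) -> np.ndarray:
    """Minimize ||y - A x||_2^2 + beta * ||x||_2^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + beta * np.eye(n), A.T @ y)

rng = np.random.default_rng(7)
A = rng.standard_normal((64, 32))      # known processing operator (m > n)
x_true = rng.standard_normal(32)       # original image (vectorized)
y = A @ x_true                         # observed image (noise-free here)

x_hat = tikhonov(A, y, beta=1e-8)
print(np.allclose(x_hat, x_true, atol=1e-5))  # True
```

In the noise-free, overdetermined case a tiny β reduces this to ordinary least squares; with noisy observations, β trades data fidelity against regularity.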
As mentioned above, different types of neural networks are currently used to solve inverse problems in imaging, including image reconstruction. Many approaches to this problem can be found in recent reviews [22,23] and the novel proposal in [24].
The use of the machine learning model shown in Figure 1 to solve Equation (23) leads to the solution of the following problem:
$F(x): A x = \tilde{y} \ \Rightarrow \ x = F^{-1}(\tilde{y})$ (25)
where
  • $A$—known real $m \times n$ matrix, $m > n$;
  • $\tilde{y}$—real $m \times 1$ vector;
  • $x$—real $n \times 1$ vector;
  • $m + n = 2^k$, $k = 3, 4, \dots$
The case of $m < n$ is still under consideration. The generation of the training set $S = \{(x_i, y_i)\}_{i=1}^{N}$ for Equation (23) is given by:
$A x_i = y_i, \quad i = 1, 2, \dots, N$ (26)
where $x_i$, $i = 1, 2, \dots, N$, is the vector form of the training images, for example, those shown in Figure 3.
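Generating such a training set amounts to projecting each vectorized image through a fixed matrix A, as in Equation (26); a brief sketch with toy dimensions (all sizes and the random data here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, N = 40, 24, 5                    # m + n = 64 = 2^6, m > n
A = rng.standard_normal((m, n))        # known random projection matrix

X = rng.random((n, N))                 # columns x_i: vectorized training images
Y = A @ X                              # columns y_i = A x_i, Eq. (26)

# each pair (x_i, y_i) is one training point of S
S = [(X[:, i], Y[:, i]) for i in range(N)]
print(len(S), S[0][0].shape, S[0][1].shape)  # 5 (24,) (40,)
```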
Assuming that the matrix $A$ $(m \times n)$ in projection (25) is a random matrix, the images $y_i$ of the training set become random vectors. For example, training image number 1 takes the form shown in Figure 12.
Thus, the vector form of the transformation of this image (No. 1) is:
$A x_1 = y_1$ (27)
where $A \in \mathbb{R}^{m \times n}$, $m > n$.
Taking the system vectors $u_i$ of the form
$u_i = [y_i; x_i], \quad i = 1, \dots, N$, (28)
the structure of the inverse mapping system (25), i.e.,
$x_i = F^{-1}(y_i), \quad i = 1, \dots, N$, (29)
is given in Figure 13a,b.
It should be noted that the biorthogonal transformation $T_s(\cdot)$ and orthogonal transformation $T(\cdot)$ in Figure 13 are given by Equations (16) and (17), respectively. Thus,
$m_i = T_s([y_i; 0])$ (30)
$u_i = T^{-1}(m_i)$ (31)
where $u_i$ are the system vectors of Equation (28).
In the system presented in Figure 13, the distorted projections of the images $\tilde{y}_i$, $i = 1, \dots, N$, undergo reconstruction, in contrast with the system in Figure 7, where the distorted images themselves are reconstructed. To illustrate the properties of the reconstruction system presented in Figure 13, a training set S was generated using Equation (26), consisting of nine images $x_i$, $i = 1, \dots, 9$, where $x_i$ were the images from Figure 3, and their projections $y_i$, $i = 1, \dots, 9$, were obtained with the random matrix $A$. An exemplary transformation of Image Number 9 from Figure 3 is shown in Figure 14.
In the system shown in Figure 13b, we obtain:
$\| x_9 - \hat{x}_9 \|_2^2 = 0$. (32)
To conclude, Figure 1, Figure 7, and Figure 11 show image reconstruction systems that essentially implement an associative memory structure for recognizing damaged key patterns. The system in Figure 13, on the other hand, implements an inverse transformation and solves optimization tasks constrained by the images stored in memory. Moreover, this system also enables solving the linear Equation (23) by using a random form of the training vectors $x_i$ in Equation (26) [1].

4. Discussion on Some Features of the Machine Learning System

A. This section focuses on some of the features that underlie the universality of the machine learning system presented in Figure 1 and Figure 7. First of all, it is clear that this machine learning system can be categorized as an iterative scheme. On the other hand, the structure in Figure 1 can be treated as a feedforward block connection constituting a multilayer, deep learning architecture (Figure 15).
The structure in Figure 7 can be similarly treated as a feedforward scheme, as shown in Figure 16.
The blocks $S_i(\cdot)$, $i = 1, \dots, K$ ($L$), are identical in the multilayer structures (Figure 15 and Figure 16).
It is worth noting that the multilayer structures in Figure 15 and Figure 16 can be seen as an implementation of deep learning using recurrent neural networks (RNN) [25]. However, the topology of these structures is not a result of the optimization algorithms typically used to solve inverse problems.
B. An interesting property of the structure in Figure 1 can be demonstrated by the computational experiment illustrated in Figure 17; i.e., when this structure memorizes only one image (e.g., Photo Number 2 in Figure 3), then any input image is mapped onto this memorized image (a global attractor property).
C. Another interesting aspect of this machine learning system can be derived from the so-called Q-inspired neural networks feature [1]. This feature can be determined by the following statement:
Given a set of complex-valued training vectors $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{C}^n$, $y_i \in \mathbb{C}^m$, $n + m = 2^k$, $k = 3, 4, \dots$, a realization of the mapping given by the complex training vectors, i.e., $\mathbb{C}^n \to \mathbb{C}^m$, can be implemented as a complex-valued neural network or as a complex-valued machine learning system with the structure presented in Figure 2, where the memory block is determined by the Hermitian matrix $W_H$ ($W_s \to W_H$ in Equation (3)).
Such a machine learning system can be used as an image processor to reconstruct complex-valued images. It is clear that the computational efficiency of this system is greater than that of the real-valued approximator (due to the processing of two images by only one system). Figure 18 provides an example of complex-valued image reconstruction.
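The efficiency gain comes from packing two real images into one complex vector, as in the caption of Figure 18 ($z_{43} = x_4 + j x_3$). A toy sketch of the packing and unpacking, with the reconstruction machinery itself omitted and random data standing in for the face images:

```python
import numpy as np

rng = np.random.default_rng(5)
x3 = rng.random(4096)                  # vectorized image No. 3 (illustrative data)
x4 = rng.random(4096)                  # vectorized image No. 4 (illustrative data)

z43 = x4 + 1j * x3                     # one complex vector carries both images

# after complex-valued processing, both images come out of a single pass
x4_rec, x3_rec = z43.real, z43.imag
print(np.array_equal(x4_rec, x4) and np.array_equal(x3_rec, x3))  # True
```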
D. This article focuses on image processing by using the recursive machine learning system in Figure 1. It should be clear that this image processing sets up only one aspect of the potential system applicability in the field of signal processing. For example, the same machine learning system could be used for time-series analysis and forecasting. To generalize, the essential function of the machine learning system described in this paper is the implementation of a mapping defined by a training set $S = \{(x_i, y_i)\}_{i=1}^{N}$, where $\dim x_i = n$, $\dim y_i = m$. The recurrence is convergent under the linear independence of the input vectors, and the number of vectors N fulfills (cf. Equation (5)) $N < 0.5\,(n + m)$, $n + m = 2^k$, $k = 3, 4, \dots$ Thus, a large-capacity system (large N) needs a system of large, even dimension $n + m$. This could be considered a disadvantage of this machine learning system.

5. Conclusions

The aim of this article was to illustrate the potential for using the machine learning system shown in Figure 1 to reconstruct and recognize distorted or damaged patterns, in particular, images of people wearing masks. In contrast to image reconstruction methods based on optimization algorithms, this system employs the superposition of system vectors setting up asymptotic centers of attraction. Hence, this system is particularly useful for the implementation of associative memories. Thus, this paper’s inpainted image reconstruction sets up associative memories where images stored in memory are retrieved by distorted/inpainted key images. To conclude, we formulated another image processing tool augmenting the set of known image processing methods. Finally, all the image reconstructions presented in this paper were done using MATLAB (The MathWorks, Inc., version R2021b).

Author Contributions

Conceptualization, W.S. and W.C.; methodology, W.S.; software, W.C.; validation, W.S. and W.C.; formal analysis, W.S.; investigation, W.S.; resources, W.S.; data curation, W.C.; writing—original draft preparation, W.C.; writing—review and editing, W.C. and W.S.; visualization, W.C.; supervision, W.S.; project administration, W.C.; funding acquisition, W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education through the “Regionalna Inicjatywa Doskonalosci” Program in 2019–2022 under Project 006/RID/2018/19, and the total financing is 11 870 000 PLN.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Algorithm A1. Summary of algorithm [1].
1. Declaration:
 Input the set of training points:
  $S = \{(x_i, y_i)\}, \quad i = 1, 2, \dots, N$,
  $x_i \in \mathbb{R}^n$, $y_i \in \mathbb{R}^m$, $n + m = 2^k$, $k = 3, 4, \dots$
2. System design:
 Create the system vectors $u_i$:
  $u_i = [x_i; y_i], \quad \dim u_i = n + m$.
 Calculate the spectrum $m_i$ of the system vectors $u_i$:
  $m_i = \tfrac{1}{2}(W_{2^k} + \mathbf{1})\, u_i$
 Create the spectrum matrix $M$:
  $M = [m_1, m_2, \dots, m_N]$
 Calculate the Hermitian matrix $W_H$:
  $W_H = M (M^T M)^{-1} M^T$
 Calculate the orthogonal transformation $T(\cdot)$:
  $T = \tfrac{1}{2}(W_{2^k} + \mathbf{1})$
 Calculate the biorthogonal transformation $T_s(\cdot)$:
  $T_s = (2 \cdot \mathbf{1} - W_H - W_{2^k})^{-1}$.
3. Recursive procedure:
 for i = 1:N
   $\tilde{x}_i(0) = 0$
   while $\| \tilde{x}_i(l) - \tilde{x}_i(l-1) \| \geq$ eps
     $[\tilde{x}_i; y_i](l) = T^{-1}\left( T_s\left( [0; y_i] + [\tilde{x}_i; 0](l-1) \right) \right)$
   end
 end
 ($l = 1, 2, \dots$ steps of the recurrence)
Final result of the recurrence: $\tilde{x}_i = x_i$.
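Algorithm A1 can be exercised end to end on a small random instance. The sketch below is an illustrative Python/NumPy rendering under stated assumptions: a block-diagonal skew-symmetric orthogonal W stands in for the Hurwitz–Radon construction, the training data are random toy vectors with n = m = 8, and the while-loop is capped by a maximum iteration count. The script also reports the spectral radius of the linear iteration map on the x-part, which governs whether the recursion contracts toward the stored solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 8, 8, 3                       # n + m = 16 = 2^4; N < (n + m)/2
dim = n + m
I = np.eye(dim)

# Skew-symmetric orthogonal W (W^T = -W, W^T W = 1, so W^2 = -1),
# built from 2x2 blocks as a stand-in for the Hurwitz-Radon family.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
W = np.kron(np.eye(dim // 2), J)

# Step 1: training points (random toy data in place of images).
X = rng.standard_normal((n, N))         # inputs x_i
Y = rng.standard_normal((m, N))         # outputs y_i

# Step 2: system design.
U = np.vstack([X, Y])                   # system vectors u_i = [x_i; y_i]
M = 0.5 * (W + I) @ U                   # spectra m_i = (1/2)(W + 1) u_i
WH = M @ np.linalg.inv(M.T @ M) @ M.T   # W_H = M (M^T M)^{-1} M^T
Ts = np.linalg.inv(2.0 * I - WH - W)    # T_s = (2*1 - W_H - W)^{-1}
T_inv = I - W                           # inverse of T = (1/2)(W + 1)

# Step 3: recursive procedure -- recover x_i from y_i alone.
def reconstruct(y, eps=1e-10, max_iter=1000):
    x_tilde = np.zeros(n)
    for _ in range(max_iter):
        z = T_inv @ Ts @ np.concatenate([x_tilde, y])
        x_next = z[:n]
        if np.linalg.norm(x_next - x_tilde) < eps:
            return x_next
        x_tilde = x_next
    return x_tilde

# Stored pairs are exact fixed points of one pass through T^{-1} T_s:
u0 = np.concatenate([X[:, 0], Y[:, 0]])
print(np.allclose(T_inv @ Ts @ u0, u0))              # True

# Convergence of the recursion is governed by the x-part iteration map:
L = (T_inv @ Ts)[:n, :n]
rho = float(np.max(np.abs(np.linalg.eigvals(L))))
x0_rec = reconstruct(Y[:, 0])
print("spectral radius:", round(rho, 3))
print("reconstruction error:", float(np.linalg.norm(x0_rec - X[:, 0])))
```

When the spectral radius is below one, the iteration is a contraction and converges to the stored $x_i$, mirroring the iterative behavior reported in Table 2.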

References

  1. Citko, W.; Sienko, W. Hamiltonian and Q-Inspired Neural Network-Based Machine Learning. IEEE Access 2020, 8, 220437–220449.
  2. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Pearson International Edition; Pearson: London, UK, 2008.
  3. Nelson, R.A.; Roberts, R.G. Some Multilinear Variants of Principal Component Analysis: Examples in Grayscale Image Recognition and Reconstruction. IEEE Syst. Man Cybern. Mag. 2021, 7, 25–35.
  4. Sirovich, L.; Kirby, M. Low Dimensional Procedure for the Characterization of Human Faces. J. Opt. Soc. Am. 1987, 4, 519–524.
  5. Turk, M.; Pentland, A. Face Recognition Using Eigenfaces. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’91), Maui, HI, USA, 3–6 June 1991; pp. 586–591.
  6. Pal, S.K.; Ghosh, A.; Kundu, M.K. (Eds.) Soft Computing for Image Processing. In Studies in Fuzziness and Soft Computing; Physica-Verlag: Heidelberg/New York, NY, USA, 2000.
  7. Huang, Z.; Ye, S.; McCann, M.T.; Ravishankar, S. Model-based Reconstruction with Learning: From Unsupervised to Supervised and Beyond. arXiv 2021, arXiv:2103.14528v1.
  8. Kaderuppan, S.S.; Wong, W.W.L.; Sharma, A.; Woo, W.L. Smart Nanoscopy: A Review of Computational Approaches to Achieve Super-Resolved Optical Microscopy. IEEE Access 2020, 8, 214801–214831.
  9. Ramanarayanan, S.; Murugesan, B.; Ram, K.; Sivaprakasam, M. DC-WCNN: A Deep Cascade of Wavelet Based Convolutional Neural Networks for MR Image Reconstruction. In Proceedings of the IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020.
  10. Ravishankar, S.; Lahiri, A.; Blocker, C.; Fessler, J.A. Deep Dictionary-transform Learning for Image Reconstruction. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1208–1212.
  11. Ravishankar, S.; Ye, J.C.; Fessler, J.A. Image Reconstruction: From Sparsity to Data-Adaptive Methods and Machine Learning. Proc. IEEE 2020, 108, 86–109.
  12. Ravishankar, S.; Bresler, Y. MR Image Reconstruction From Highly Undersampled k-Space Data by Dictionary Learning. IEEE Trans. Med. Imaging 2011, 30, 1028–1041.
  13. Panagakis, Y.; Kossaifi, J.; Chrysos, G.G.; Oldfield, J.; Nicolaou, M.A.; Anandkumar, A.; Zafeiriou, S. Tensor Methods in Computer Vision and Deep Learning. Proc. IEEE 2021, 109, 863–890.
  14. Fessler, J.A. Optimization Methods for Magnetic Resonance Image Reconstruction: Key Models and Optimization Algorithms. IEEE Signal Process. Mag. 2020, 37, 33–40.
  15. Zheng, H.; Sherazi, S.W.A.; Son, S.H.; Lee, J.Y. A Deep Convolutional Neural Network-Based Multi-Class Image Classification for Automatic Wafer Map Failure Recognition in Semiconductor Manufacturing. Appl. Sci. 2021, 11, 9769.
  16. Zhou, S.K.; Greenspan, H.; Davatzikos, C.; Duncan, J.S.; Ginneken, B.; Madabhushi, A.; Prince, J.L.; Rueckert, D.; Summers, R.M. A Review of Deep Learning in Medical Imaging: Imaging Traits, Technology Trends, Case Studies with Progress Highlights, and Future Promise. Proc. IEEE 2021, 109, 820–838.
  17. Quan, T.M.; Nguyen-Duc, T.; Jeong, W.K. Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network with a Cyclic Loss. IEEE Trans. Med. Imaging 2018, 37, 1488–1497.
  18. Mardani, M.; Gong, E.; Cheng, J.Y.; Vasanawala, S.S.; Zaharchuk, G.; Xing, L.; Pauly, J.M. Deep Generative Adversarial Neural Networks for Compressive Sensing MRI. IEEE Trans. Med. Imaging 2019, 38, 167–179.
  19. Liang, D.; Cheng, J.; Ke, Z.; Ying, L. Deep Magnetic Resonance Image Reconstruction: Inverse Problems Meet Neural Networks. IEEE Signal Process. Mag. 2020, 37, 141–151.
  20. Sienko, W.; Citko, W. Hamiltonian Neural Networks Based Networks for Learning. In Machine Learning; Mellouk, A., Chebira, A., Eds.; I-Tech: Vienna, Austria, 2009; pp. 75–92.
  21. Gilton, D.; Ongie, G.; Willett, R. Deep Equilibrium Architectures for Inverse Problems in Imaging. arXiv 2021, arXiv:2102.07944v2.
  22. Arridge, S.; Maass, P.; Oktem, O.; Schonlieb, C. Solving Inverse Problems using Data-driven Models. Acta Numer. 2019, 28, 1–174.
  23. Ongie, G.; Jalal, A.; Metzler, C.A.; Baraniuk, R.G.; Dimakis, A.G.; Willett, R. Deep Learning Techniques for Inverse Problems in Imaging. arXiv 2020, arXiv:2005.06001.
  24. Gilton, D.; Ongie, G.; Willett, R. Model Adaptation for Inverse Problems in Imaging. arXiv 2021, arXiv:2012.00139v2.
  25. Giryes, R.; Eldar, Y.C.; Bronstein, A.M.; Sapiro, G. Tradeoffs Between Convergence Speed and Reconstruction Accuracy in Inverse Problems. arXiv 2018, arXiv:1605.09232v3.
Figure 1. Structure of the machine learning model for image processing.
Figure 2. Block diagram of the approximator with lumped memory.
Figure 3. Face images saved (source: https://pixabay.com/pl, accessed on 17 February 2021).
Figure 4. Reconstruction of face images of people wearing masks.
Figure 5. Attempt to recognize an unsaved photo.
Figure 6. Masked image of the face in Photo Number 9 (Figure 3).
Figure 7. Structure of the reconstruction system when a fragment of the image (k lines) is kept as the input.
Figure 8. Image reconstruction process in Figure 6 (after 1, 2, 5, 10, and 100 iterations).
Figure 9. Image reconstruction of Lena’s photo (reconstruction system in Figure 7).
Figure 10. Reconstruction of distorted images (Items 10 and 14 in Table 4).
Figure 11. The plots of a function: MSE vs. S/N.
Figure 12. Original image and its transformation (projection).
Figure 13. Structure of the system implementing inverse transformation. (a) $y_i$—undegenerated image projection; (b) $\tilde{y}_i$—degenerated image projection.
Figure 14. An exemplary reconstruction ($F(\cdot)$—system from Figure 13b).
Figure 15. Multilayer learning structure (K—number of steps; e.g., K = 100).
Figure 16. Multilayer learning structure (L—number of steps; e.g., L = 100).
Figure 17. Illustration of global attractor properties.
Figure 18. Complex-valued image reconstruction: $z_{43} = x_4 + j x_3$, $j^2 = -1$; $x_3$, $x_4$—vectorized forms of images No. 3 and No. 4 in Figure 3; $x_3^s$, $x_4^s$—distorted images.
Table 1. Values of the Recognition Index of each person.

Photo Number   Index Nominal Value   Index Value after 100 Iterations
1              1.0                   0.8622
2              2.0                   1.6240
3              3.0                   2.3660
4              4.0                   3.9983
5              5.0                   5.1259
6              6.0                   5.8842
7              7.0                   6.7262
8              8.0                   8.0466
9              9.0                   8.9576
Table 2. Convergence of the iterative process for Photo Number 2.

Number of Iterations   Index Value   Number of Iterations   Index Value
1                      −0.0813       7                      1.4607
2                      0.1758        8                      1.5394
3                      0.5332        9                      1.5843
4                      0.8703        10                     1.6078
5                      1.1394        12                     1.6233
6                      1.3327        100                    1.6240
Table 3. Mean squared error values of the image reconstructions.

Photo Number   MSE (Original Photo—Masked Photo)   MSE (Original Photo—Reconstructed Photo)
1              366.49                              105.06
2              595.96                              176.38
3              1573.00                             570.95
4              398.00                              37.58
5              552.04                              114.55
6              675.67                              112.13
7              828.53                              221.09
8              171.52                              26.05
9              327.06                              40.75
Table 4. Mean squared error of reconstruction.

S/N Ratio [dB]   MSE (Original Image—Noisy Image)   MSE (Original Image—Reconstructed Image)   Index Value
41.8             98.6                               0.8                                        9.04
34.4             391.9                              2.1                                        8.97
30.7             909.3                              1.7                                        9.01
25.4             2523.1                             15.6                                       8.79
22.6             4807.1                             20.7                                       8.78
18.8             9698.9                             55.6                                       9.39
14.7             20,942.0                           42.9                                       8.54
12.0             32,932.0                           40.4                                       9.06
10.9             41,500.0                           155.2                                      9.14
9.5              61,752.0                           164.5                                      8.66
7.2              91,527.0                           266.6                                      9.53
5.8              122,950.0                          563.4                                      8.13
4.2              160,360.0                          521.0                                      7.14
2.7              246,230.0                          754.2                                      10.45
−2.5             629,540.0                          4375.8                                     11.17
