Review

Deep Learning for Optical Sensor Applications: A Review

by Nagi H. Al-Ashwal 1,2, Khaled A. M. Al Soufy 1,2, Mohga E. Hamza 1 and Mohamed A. Swillam 1,*

1 Department of Physics, The American University in Cairo, New Cairo 11835, Egypt
2 Department of Electrical Engineering, Ibb University, Ibb City 00967, Yemen
* Author to whom correspondence should be addressed.
Sensors 2023, 23(14), 6486; https://doi.org/10.3390/s23146486
Submission received: 7 June 2023 / Accepted: 30 June 2023 / Published: 18 July 2023
(This article belongs to the Topic Artificial Intelligence in Sensors, 2nd Volume)

Abstract:
Over the past decade, deep learning (DL) has been applied in a large number of optical sensor applications. DL algorithms can improve the accuracy and reduce the noise level in optical sensors. Optical sensors are considered a promising technology for modern intelligent sensing platforms and are widely used in process monitoring, quality prediction, pollution, defence, security, and many other applications. However, they face major challenges, such as the large datasets they generate, the low speed at which these data are processed, and the high cost of the sensors themselves. These challenges can be mitigated by integrating DL systems with optical sensor technologies. This paper presents recent studies integrating DL algorithms with optical sensor applications. It also highlights several directions in which DL algorithms promise a considerable impact on optical sensor applications. Moreover, this study provides new directions for the future development of related research.

1. Introduction

Modern digital development combines sensors (hardware) with artificial intelligence (AI) (software) to perform intelligent tasks, with machine learning (ML) now a key component. ML, a branch of AI, has a significant impact on optical sensors. This new paradigm takes a data-driven approach without focusing on the underlying physics of the design. It also brings new advancements to conventional design tools and opens up numerous opportunities. Sensing has a significant impact on a broad range of scientific and engineering problems [1,2,3]. Among the many types of sensors, optical sensors offer many useful features, including their light weight, low cost, small size, flexible deployment, and ability to operate at high pressures [4], high/low temperatures [5,6], and in high electromagnetic fields [7] without a reduction in performance. Due to these advantages, optical sensors have been used in many applications, such as intrusion detection [8], the monitoring of railways and general transport [9], pipelines [10], and bridge structures [11]. They are also used in the detection and localization of seismic events [12], human event recognition [13], health monitoring [14], building structural monitoring [15], and landslide detection [16].
The use of multiple sensors generates huge raw datasets, posing serious challenges for processing and managing the data. Furthermore, conventional processing techniques in traditional sensing devices are not suitable for labelling, processing, or analysing the data [17], and the collected data require a long time to process. Cost is another problem, as some applications require the deployment of many sensors. Deep learning (DL), a branch of ML, can be incorporated with optical sensors to solve the aforementioned challenges [18,19,20,21]. Deep neural networks (DNNs) are a modern and promising technology that can be used with various optical sensor applications. The main advantage of DNNs is their ability to dynamically extract features from collected raw data with high accuracy, often outperforming humans [22,23].
Some previous survey papers have reviewed the use of DL in specific optical sensor applications. As an example, the authors in [24] presented an extensive review of recent advances in the estimation of multiphase fluid flows. Distributed optical fibre sensors and their working mechanisms were addressed. The article covered recent works that characterized multiphase fluid flows for production optimization in the oil and gas industry. It included traditional methods, such as estimation of the sound speed and Joule–Thomson coefficient, in addition to data-driven ML techniques such as CNN, ensemble Kalman filter (EnKF), and support vector machine (SVM) algorithms. Related papers that used CNN and ANN models to perform flow regime classification and multiphase estimation are mentioned in [10,25]. The LSTM algorithm was adopted by other related papers to estimate fluid flow rate [26,27,28,29]. Another survey is ref. [30], in which the authors presented the latest advancements in pattern recognition models utilized in distributed optical fibre vibration sensing (DOVS) systems. The main issues presented were feature extraction, the structure of the model, and the model performance. Several applications were introduced, including railway safety monitoring, perimeter security, and pipeline monitoring. The authors also provided the pattern recognition prospects for DOVS, in addition to related references which realized the pattern recognition of vibration signals using ML and DL. In [15], the authors reviewed the current state of optical sensors and the application of DL to the structural health monitoring of civil infrastructures. The review covered the past five years and found that optical fibre sensors had been applied to the measurement of concrete properties, leakage monitoring, corrosion, and fatigue responses.
The objective of this work is to review DL for optical sensor applications. This review comprehensively covers all published optical sensor types that have been utilized in conjunction with DL techniques. DL can benefit optical sensors in many ways, such as processing huge datasets, pre-processing noisy data, automating feature extraction, predicting results with high accuracy and reliability, and reducing the number of optical sensors required for deployment in a system. To the best of our knowledge, this is the first study that discusses DL applications for optical sensors.
This paper is organized as follows. In Section 2, we introduce the operating principles of optical sensors. In Section 3, we present a brief discussion of DL. A survey of the application of DL to optical sensors is presented in Section 4. A discussion and future perspectives are given in Section 5.

2. Optical Sensor Technologies

Plasmonic sensors have been widely used in many areas for numerous applications since the early 1900s [31,32,33,34,35]. Since then, they have been utilized in a diverse range of fields, including biology and medical diagnosis [36,37,38,39,40], chemistry [41], food safety [42,43], and environmental monitoring and evaluation [44,45,46,47,48]. In addition, plasmonic sensors are being used in negative-refractive-index materials [49,50,51], optical meta-surfaces [52,53,54], and integrated circuits [55,56,57]. The effects caused by surface plasmon resonance (SPR) or localized surface plasmon resonance (LSPR) have been proven to provide the astounding sensitivity needed in those applications [58]. Designing plasmonic sensors requires the appropriate selection of the operating wavelength, in addition to the type and thickness of the metal film, to achieve optimum sensitivity [59]. For sensors operating in the visible and near-infrared ranges of the electromagnetic spectrum, the most typical metals used are gold, copper, silver, and aluminium, since they have the sharpest resonance compared to other metals [60]. Among these metals, gold is mostly preferred since it is the most chemically stable when exposed to the atmosphere. However, gold does not show SPR at wavelengths below 0.5 μm [61,62].
SPR occurs when photons from incident light directed onto a metal surface layer, at a specific angle, excite the conduction electrons on its surface to undergo collective oscillations that propagate parallel to the surface [63]. The surface is highly sensitive due to the SPR-generated evanescent field, which occurs at the interface between the metal and the dielectric during total internal reflection. The interface is where the evanescent field is strongest because of the resonant coupling between the incident rays and the surface plasmon wave [64]. As the evanescent field penetrates the dielectric medium and the metal film, it decays exponentially. This strong decay of the field makes SPR sensors significantly sensitive to the thickness of the material and to refractive index alterations of the dielectric film affixed to the metal surface [65,66,67,68,69]. The binding occurring at the metal surface and the thickness of the dielectric film affect the signal measured by the plasmonic sensor [70]. This happens because the resonance of the surface plasmon wave shifts with any change in material thickness, observed as a shift in the SPR curve [64]. The SPR signal depends linearly on the dielectric film thickness and the refractive index, which allows the interactions probed by SPR spectroscopy to be quantitatively analysed. Hence, studying the SPR signal as a function of time allows the binding kinetics and interactions occurring at the plasmonic sensor to be measured in real time [70].
The refractive index changes are measured along with the binding of the sample for recognition of the immobilized molecules on the SPR sensor, as shown in Figure 1. Hence, the structures’ size, shape, and composition, along with the dielectric properties of the neighbouring environment, ultimately determine the intensity and position of the SPR, which are key components in creating an optical sensor [71,72]. Any minor change in the refractive index of the sensing medium alters the SPR response, which is used for detecting the analyte or chemical [73,74]. The numerous variables involved in the analysis of the SPR ensure high sensitivity, making it highly valuable in various applications [69]. Another class of sensor is nanoparticle-based plasmonic biosensors, which have high sensitivity and low limits of detection (LOD), so they have been employed in pathogen detection, allowing for the detection of various diseases due to the wide spectrum of antibody binding. They are especially crucial for point-of-care testing (POCT) because, when employed as biosensors, they are non-intrusive, quick, and accurate. The sensitivity of plasmonic biosensors can be further increased by adding metamaterials, making the biosensor reliable and reproducible. Recently, some researchers have been focusing on improving biosensors so that they might be created as a lab-on-a-chip diagnostic tool, making them ubiquitous. Other researchers are using them to look for airborne illnesses in the environment. Metamaterial-based plasmonic biosensors would enable highly accurate and rapid detection of pathogens, which could improve human well-being and shield humanity from future pandemics, regardless of their physical or airborne modes of transmission [36].

3. Deep Neural Networks Overview

DL, a subset of ML, has a great advantage due to its ability to automatically learn representative features from input data, known as “feature learning”. DL has shown outstanding success thanks to the availability of large datasets, ongoing advances in computing power, and continual improvements in algorithms. DL uses DNNs to carry out intricate computations on enormous volumes of structured and unstructured data. There are three types of learning in DNNs. The first type is supervised learning, which works with labelled data. Here, the model is trained to minimize a cost function that reflects the difference between the model’s predictions and the actual values.
CNNs [75] and LSTMs [76,77] are examples of this type. The second type is semi-supervised learning, where only a small part of the samples has the annotations necessary to train the model. This form of algorithm constructs a self-learning strategy in which it generates its own annotations [78]. Examples of this type include generative adversarial networks (GANs) [79] and deep reinforcement learning (DRL) [80]. The third type is unsupervised learning, where the model finds the structure or relationships within the input data without labels or annotations. Restricted Boltzmann machines (RBMs) [81] and autoencoders (AEs) [82] are examples of this type of DNN; they typically perform dimensionality reduction or clustering.
In optical sensor applications, the most commonly adopted learning architectures are CNNs, AEs, and multilayer perceptrons (MLPs) [83].

3.1. Convolutional Neural Network (CNN)

The CNN has three types of layers: the convolutional layer, which works as a feature extractor from the input image; the pooling layer, which reduces the dimensionality of feature maps [84]; and the fully connected layer, which is located near the output layer. A softmax classifier is usually used as the final output layer, as shown in Figure 2. Working together, these layers enable a learning scheme that links the feature maps to the predicted output and minimizes the cost function. By employing a shared kernel in the convolution operation, DL models are able to learn space-invariant features. Furthermore, in comparison with fully connected neural networks, CNNs are good at capturing local dependencies. As for LSTMs, they are employed to make time-series predictions, as they resolve the vanishing-gradient issue which arises in conventional recurrent neural networks (RNNs) [76].
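To make the layer roles concrete, the following is a minimal sketch of such a network in PyTorch; the input size (1 × 28 × 28 grey images) and layer widths are illustrative assumptions rather than values from any surveyed paper, and the softmax is folded into the cross-entropy loss, as is idiomatic.

```python
# Minimal sketch of the three CNN layer types described above:
# convolution -> pooling -> fully connected (softmax inside the loss).
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # feature extractor
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halves the feature-map size
        )
        self.classifier = nn.Linear(16 * 14 * 14, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)   # logits; softmax applied by the loss

model = SimpleCNN()
logits = model(torch.randn(8, 1, 28, 28))               # batch of 8 grey images
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
```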

3.2. Autoencoders (AE)

An AE is a neural network designed with the objective of learning a representation that closely mirrors the input data when presented as an output [82,85]. As shown in Figure 3, the AE consists of two components, the encoder and the decoder. The input and output layers have the same number of neurons, and all the layers are interconnected. However, the network incorporates a bottleneck to encourage the learning of essential features only. To create the bottleneck effect, the number of nodes in the connecting layer, located between the encoder and decoder parts, is reduced in comparison to the number of nodes in the input layer. Similar to other neural networks, training the AE involves learning the weights and biases of the network by minimizing a loss function. During training, the encoder component learns a compact representation of the input data, while the decoder component reconstructs the original input from the learned representation provided by the encoder. This process of learning and reconstruction has been used in a range of applications, including anomaly detection. By leveraging the learned representation and reconstruction capability, the AE can effectively identify anomalies or deviations from the normal patterns in the input data. This enables the AE to serve as a valuable tool for detecting and flagging unusual or anomalous occurrences in various domains.
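A minimal sketch of this encoder-bottleneck-decoder structure, with reconstruction error used to flag anomalies, might look as follows in PyTorch; the input width (784), bottleneck size (32), and the three-sigma threshold are illustrative assumptions.

```python
# Minimal AE sketch: encoder -> bottleneck -> decoder, with per-sample
# reconstruction error used as an anomaly score.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_in=784, n_bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_bottleneck))  # bottleneck
        self.decoder = nn.Sequential(nn.Linear(n_bottleneck, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))          # reconstruction

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = AE()
x = torch.randn(16, 784)
recon = ae(x)
# Samples whose reconstruction error exceeds a threshold learned on
# normal data would be flagged as anomalies.
errors = ((recon - x) ** 2).mean(dim=1)
anomalies = errors > errors.mean() + 3 * errors.std()
```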

3.3. Multilayer Perceptron (MLP)

The multilayer perceptron (MLP) is a class of feedforward artificial neural network. It comprises three types of layers: the input layer, the hidden layers, and the output layer, as shown in Figure 4. The input layer receives the input signal to be processed. The output layer performs the desired task, such as prediction or classification. The true computational engine of the MLP lies within an arbitrary number of hidden layers positioned between the input and output layers. These hidden layers carry out the complex computations and transformations that enable the MLP to learn and extract meaningful patterns from the input data. In an MLP, data flow as in any feedforward network, progressing from the input layer to the output layer in a forward direction. The neurons within the MLP are trained using the backpropagation learning algorithm. MLPs can approximate any continuous function and are capable of solving problems that are not linearly separable. Some of the most significant applications of MLPs include pattern classification, recognition, prediction, and approximation.
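A minimal MLP sketch with this input/hidden/output structure, trained by backpropagation, is shown below; all layer sizes are illustrative assumptions.

```python
# Minimal MLP sketch: input layer -> hidden layers -> output layer,
# with one backpropagation step on a dummy batch.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),   # input layer -> first hidden layer
    nn.Linear(64, 64), nn.ReLU(),    # second hidden layer
    nn.Linear(64, 3),                # output layer (e.g., 3 classes)
)
opt = torch.optim.SGD(mlp.parameters(), lr=0.01)
x, y = torch.randn(32, 100), torch.randint(0, 3, (32,))
loss = nn.CrossEntropyLoss()(mlp(x), y)
loss.backward()                      # backpropagation
opt.step()
```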

4. DL Applications for Optical Sensors

DL development is a highly iterative and empirical process. It can be implemented in three steps: choosing the initial weights and hyperparameter values, coding, and experimenting. These steps are interconnected through an iterative process.
For optical sensor applications, the first step is to identify the current problems with those applications, such as the processing of vast data, noisy data, missing data, and delays in data processing. An idea then needs to be formed for integrating DL into the optical sensor to solve these complications. The next step is to code the proposed solution using a modern framework or toolkit, such as TensorFlow, Keras, PyTorch, CNTK, etc. After this, the model is trained and evaluated by gathering raw data, pre-processing them, and feeding them into the proposed DL model. Based on the results, the developer refines the proposed model cyclically, applying any required amendments to obtain better accuracy. The overall view of these steps is depicted in Figure 5. Optical sensor applications based on DL techniques are surveyed here to provide interested researchers and readers with the elementary knowledge for developing high-performance optical sensors. Furthermore, this work introduces and discusses the most recent related applications, mainly focusing on factors such as motivation, strategy, and effectiveness. The survey is organized according to the DL model used in each work.
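A skeletal version of this train-evaluate-refine loop, assuming a PyTorch classification model, could look as follows; train_loader, val_loader, and build_model are hypothetical placeholders for the data pipeline and architecture of a given application.

```python
# Skeleton of the iterative loop in Figure 5: train, evaluate, refine.
# All loader/model names are hypothetical placeholders.
import torch

def train_and_evaluate(lr, epochs, train_loader, val_loader, build_model):
    model = build_model()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:                 # training pass
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:                   # evaluation pass
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / total

# Cyclic refinement: try candidate hyperparameters, keep the best, e.g.:
# best_acc, best_lr = max((train_and_evaluate(lr, 20, tr, va, build_model), lr)
#                         for lr in (1e-2, 1e-3, 1e-4))
```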

4.1. CNN-Based Applications

CNN-based applications are attracting interest across a variety of domains, including optical sensor applications. In this section, some recent works that have applied this model to optical sensors are briefly presented.
In [86], a CNN model was developed to realize an optical fibre curvature sensor. A large number of specklegrams were automatically detected from the facet of multimode fibres (MMFs) in the experiments. The detected specklegrams were pre-processed and fed to the model for training, validation, and testing. The dataset was collected in the form of a light beam by designing an automated detection experimental setup, as shown in Figure 6. The light beam was detected by a CCD camera with a resolution of 1280 × 960 and a pixel size of 3.75 × 3.75 μm². As shown in Figure 7, the architecture of VGG-Nets was adopted to build the CNN. The mean squared error (MSE) was used as the loss function. The proposed CNN correctly predicted 94.7% of specklegrams, with a curvature prediction error within 0.3 m⁻¹. However, the reported learning-based scheme can only predict a single parameter and does not fully utilize the potential of DL.
In [9], the authors proposed a semi-supervised DL method for track detection. An experimental setup was created using a portion of a high-speed railway track equipped with a distributed optical fibre acoustic sensing (DAS) system. The proposed model used an image recognition model with a specific pre-processed dataset and an acquisitive algorithm for selecting hyperparameters.
The events to be recognized by this model are shown in Table 1.
The dataset obtained after the augmentation process is shown in Table 2.
Four structural hyperparameters were used in this work, as shown in Table 3. The obtained accuracy of the proposed model was 97.91%. However, it is important to highlight that traditional methods have been shown to achieve better spatial accuracy. Some other related works can be found in [87,88,89,90,91,92,93,94].
In [95], a distributed optical fibre sensor using a hybrid Michelson–Sagnac interferometer was proposed. The proposed model was motivated by the shortcomings of the conventional hybrid structure, namely its inability to locate events in the near field and its flawed frequency response. The proposed model utilized basic mathematical operations and a 3 × 3 optical coupler to obtain two phase signals with a time difference that can be used for both location and pattern recognition. The received phase signals were converted into 2D images, which were used as a dataset and fed into the CNN to obtain the required pattern recognition. The dataset contained 5488 images in six categories, and the size of each image was 5000 × 5000 in .jpg format. The description of the dataset is shown in Table 4. The structural diagram of the CNN used is shown in Figure 8. The accuracy of the proposed model was 97.83%. However, the sensing structure employed is relatively simple and does not consider factors such as the influence of backward scattered light.
In [96], a DL model was proposed to extract time–frequency sequence correlations from signals and spectrograms to improve the robustness of the recognition system. The authors designed a targeted time attention model (TAM) to extract features in the time–frequency domain. The architecture of the TAM comprises two stages, namely the convolution stage for extracting features and the time attention stage for reconstruction. The process from data streaming, domain transformation, and feature extraction to output is shown in Figure 9, taking a knocking event as an example. The convolution stage extracts characteristic features: the convolutional filters establish local connections and share weights between receptive fields, while the pooling layers provide shift invariance. A standard CNN model was used as the backbone. As shown in Figure 9, in the left stage, information was extracted from the spectrogram and transformed into a feature map of size (1 × 128 × 200), where 1 represents the number of input channels (a grey image has one channel), while 128 and 200 represent the height and width of the input, respectively. The authors collected and labelled a large dataset of vibration scenes comprising 48,000 data points with eight vibration types. The experimental results indicated that this approach significantly improved the accuracy with minimal additional computational cost compared to related experiments [97,98]. The time attention stage was designed for the reconstruction of the features, in which the TAM served two purposes: extracting the sequence correlation via a recurrent element, and assigning the weight matrices for the attention mechanism. F1 and F2 were unique in their emphasis on investigating the "where" and "what" features in time. A Φ-OTDR system, comprising a sensing system and a signal-producing system, was constructed to classify and recognize vibration signals. This study was verified using a vibration dataset of eight different scenarios collected by the Φ-OTDR system. The achieved classification accuracy was 96.02%. However, this method not only complicated the data processing procedure but also risked losing information during the data processing phase.
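As an illustration of the domain-transformation step, the short sketch below converts a 1D vibration trace into a single-channel log-magnitude spectrogram of the kind a convolution stage can consume; the sampling rate and window length are assumptions, not the parameters used in [96].

```python
# Sketch of the time-frequency transformation: 1D vibration trace ->
# single-channel spectrogram "image" for a CNN. Parameters are assumed.
import numpy as np
from scipy.signal import stft

fs = 10_000                                  # assumed sampling rate (Hz)
signal = np.random.randn(fs * 2)             # stand-in for a 2 s vibration trace
f, t, Z = stft(signal, fs=fs, nperseg=256)   # short-time Fourier transform
spectrogram = np.log1p(np.abs(Z))            # log-magnitude, shape (freq, time)
# Add a channel axis -> (1, H, W), matching the single-channel grey
# image format fed to the convolution stage.
cnn_input = spectrogram[np.newaxis, ...]
```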
In [99], a real-time action recognition model was proposed for long-distance oil–gas pipeline safety early warning (PSEW) systems using a scattering-based distributed optical fibre sensor. Two methods were used to calculate two complementary features, a peak feature and an energy feature, which described the signals. Based on the calculated features, a deep learning network (DLN) was built for action recognition that could effectively describe the situation of long-distance oil–gas PSEW systems. The collected datasets amounted to 494 GB with several types of noise, gathered at a China National Petroleum Corporation pipeline. The collected signals involved four types of events: background noise, mechanical excavation, manual excavation, and vehicle driving. As shown in Figure 10, the architecture of the proposed model consisted of two parts: the first dealt with the peak feature, while the second dealt with the energy feature. Each part consisted of several layers, including Conv1D, batch normalization, max pooling, dropout, Bi-LSTM, and a fully connected layer. Any damage event could be located and identified with accuracies of 99.26% and 97.20% at 500 and 100 Hz, respectively. Nonetheless, all the aforementioned methods treat an acquisition sample as a single vibration event. For dynamic time-series identification tasks, however, the ratio of valid data within a sample to the overall data is not constant, which means that the position of the label relative to the valid portion of the input sequence remains uncertain. Further related research can be reviewed in [100,101,102].
In [103], the authors presented the application of signal processing and ML algorithms to detect events using signals generated by DAS along a pipeline. ML and DL approaches were implemented and combined for event detection, as shown in Figure 11. A novel method to efficiently generate a training dataset was developed. Excavator and non-excavator events were considered.
The sensor signals were converted into grey images to recognize the events using the proposed DL model. The proposed model was evaluated in a real-time deployment over three months in a suburban location, as shown in Figure 12.
The results showed that DL is the most promising approach due to its advantages over classical ML, as shown in Table 5. However, the proposed model only differentiated between two events, namely ‘excavator’ and ‘non-excavator’, whereas in practice there are multiple distinct event types. Additionally, although the system was evaluated in a real-time arrangement for a duration of three months in a suburban area, further validation and verification require tests in different areas and over an extended period of time.
In [104], an improved WaveNet was applied to recognize manufactured threatening events using distributed optical fibre vibration sensing (DVS). The improved WaveNet is called SE-WaveNet (squeeze-and-excitation WaveNet). WaveNet is a 1D CNN (1D-CNN) model; as a deep 1D-CNN, it can be trained and tested quickly while also boasting a large receptive field that enables it to retain complete information from 1D time-series data. The SE structure works in synchronization with the residual block of WaveNet to recognize 2D signals. The SE structure uses an attention mechanism, which allows the model to focus on informative channel features while suppressing insignificant ones. The structure of the proposed model is shown in Figure 13. The input of the SE-WaveNet is an n × m matrix, synthesized from the n points of spatial signals and the m groups of time signals. The dataset used is shown in Table 6. The results showed that the SE-WaveNet accuracy can reach approximately 97.73%. However, it is important to note that the model employed in this study was only assessed on a limited number of events, and further testing is necessary to evaluate its performance on more complex events, particularly in engineering-relevant applications. Additionally, further research is needed to validate the effectiveness of the SE-WaveNet in practical, real-world settings.
In [14], a CNN and an extreme learning machine (ELM) were applied to discriminate between ballistocardiogram (BCG) and non-BCG signals. The CNN was used to extract relevant features. As for the ELM, it is a feedforward neural network that takes the features extracted by the CNN as input and provides the category matrix as output [105]. The architecture of the proposed CNN-ELM and the proposed CNN are shown in Figure 14 and Table 7, respectively.
BCG signals were obtained with a micro-bend fibre optical sensor based on IoT, taken from ten patients diagnosed with obstructive sleep apnoea and submitted for drug-induced sleep endoscopy. To balance the BCG and non-BCG signal samples, three techniques were employed: undersampling, oversampling, and generative adversarial networks (GANs). The performance of the system was evaluated using ten-fold cross-validation. Using GANs to balance the data, the CNN-ELM approach yielded the best results, with an average accuracy of 94%, a precision of 90%, a recall of 98%, and an F-score of 94%, as shown in Table 8. Inspired by [106], the architecture of the model used is presented in Figure 15, showing balanced BCG and non-BCG chunks. Other relevant works were presented in [107,108].
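For readers unfamiliar with the ELM, the following sketch shows its defining trait: the hidden weights are random and fixed, and only the output weights are solved in closed form by least squares on the CNN-extracted features. The feature and hidden-layer sizes are illustrative assumptions.

```python
# ELM sketch: random fixed hidden layer + least-squares output weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))           # CNN-extracted features (N x d)
y = rng.integers(0, 2, 500)                  # BCG (1) vs non-BCG (0) labels
T = np.eye(2)[y]                             # one-hot target matrix

W = rng.standard_normal((64, 256))           # random, untrained hidden weights
b = rng.standard_normal(256)
H = np.tanh(X @ W + b)                       # hidden-layer activations

beta = np.linalg.pinv(H) @ T                 # closed-form output weights
pred = (H @ beta).argmax(axis=1)             # category matrix -> class labels
```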
In [11], the efficiency and accuracy of bridge structural damage detection were addressed by monitoring the deflection of the bridge using a fibre optic gyroscope. A DL algorithm was then applied to detect any structural damage. A supervised learning model using a CNN to perform structural damage detection was proposed. It contained eleven hidden layers constructed to automatically identify and classify bridge damage. Furthermore, the Adam optimization method was used, and the hyperparameters are listed in Table 9. The obtained accuracy of the proposed model was 96.9%, outperforming the random forest (RF) at 81.6%, the SVM at 79.9%, the k-nearest neighbour (KNN) at 77.7%, and decision trees (DT). Following the same path, comparable work was performed in [109,110].
The authors in [111] proposed an intrusion pattern recognition model based on a combination of the Gramian angular field (GAF) and a CNN, which achieved both high speed and high accuracy in recognition. They used the GAF algorithm to map 1D vibration-sensing signals into 2D images with more distinguishing features. The GAF algorithm retained and highlighted the distinguishing differences between intrusion signals, which helped the CNN detect intrusion events with more subtle characteristic variations. A CNN-based framework was used to process the vibration-sensing signals as input images. According to the experimental results, the average accuracy rate for recognizing three natural intrusion events (light rain, wind blowing, and heavy rain) and three human intrusion events on the fence (impacting, knocking, and slapping) was 97.67%. With a response time of 0.58 s, the system satisfied real-time monitoring requirements. By considering both accuracy and speed, this model achieved automated recognition of intrusion events. However, the complex pre-processing and denoising applied to the original signal makes it challenging for intrusion recognition systems to respond effectively in emergency scenarios. Relevant work following a similar pattern was presented in [112].
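The GAF mapping itself is compact; the sketch below implements the standard summation-field (GASF) variant in NumPy, with the signal length chosen arbitrarily, so the exact variant and image size used in [111] may differ.

```python
# Gramian angular summation field (GASF): 1D signal -> 2D image.
import numpy as np

x = np.random.randn(128)                           # 1D vibration-sensing signal
x = (2 * (x - x.min()) / (x.max() - x.min())) - 1  # rescale to [-1, 1]
phi = np.arccos(x)                                 # polar-angle encoding
gasf = np.cos(phi[:, None] + phi[None, :])         # 2D image, shape (128, 128)
```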
A bending recognition model based on the analysis of MMF specklegrams, with fibre diameters of 105 and 200 µm, was proposed and assessed in [113]. The proposed model utilized a DL-based image recognition algorithm. The specklegrams detected from the facet of the MMF under various bending states were utilized as input data. Figure 16 shows the experimental setup used to collect and detect the fibre specklegrams.
The architecture of the model was based on VGG-Nets as shown in Figure 17.
The obtained accuracy of the proposed model for two multimode fibres is shown in Table 10.
More related research can be found in [113,114,115,116,117,118,119,120,121].
The authors in [122] used a CNN to demonstrate the capability to identify specific species of pollen from backscattered light, collected using a thirty-core optical fibre. The input data provided to the CNN were camera images, divided between two tasks: distance prediction and particle identification. For the first task, 1500 images were collected, of which 90% were used as a training set and 10% as a validation set. For the second task, 2200 images were collected and split in the same proportions. The training procedure of the proposed model is depicted in Figure 18. The second version of ResNet-18 [123,124] was used to build the model, with batch normalization [125], a mini-batch size of 32, and a momentum of 0.95. The output was a single regression value. The neural network, trained to identify pollen grain types, achieved a real-time detection accuracy of approximately 97%. The developed system can be used in environments where transmission imaging is not possible or suitable.
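A sketch of adapting ResNet-18 to such a single-output regression head with torchvision is shown below; the mini-batch size of 32 and momentum of 0.95 follow the text, while the learning rate and input resolution are assumptions.

```python
# ResNet-18 with its classifier head replaced by a single regression output.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)     # single regression output
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.95)
loss_fn = nn.MSELoss()

images = torch.randn(32, 3, 224, 224)             # mini-batch of camera images
targets = torch.randn(32, 1)                      # e.g., particle distances
loss = loss_fn(model(images), targets)
loss.backward()
opt.step()
```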
In [13], a DL-based distributed optical fibre-sensing system was proposed for event recognition. A spatio-temporal data matrix from a Φ-OTDR system served as the input to the CNN. The proposed method required only grey-scale image transformation and bandpass filtering for pre-processing and classification, instead of the usual complex data processing, and offered a small model size, high training speed, and high classification accuracy. The developed system was applied to recognize five distinct events: background, jumping, walking, digging with a shovel, and striking with a shovel. The collected data were split into two types, as shown in Table 11. The combined dataset for the five events consisted of 5644 instances.
Some common CNNs were examined, and the results are shown in Table 12.
The training parameters for all the CNNs were the same: the total number of training steps was 50,000, the learning rate was 0.01, and the adopted optimizer was root mean square propagation (RMSProp) [126]. This work concluded that VGGNet and GoogLeNet obtained better classification accuracy (greater than 95%), and GoogLeNet was selected as the basic CNN structure due to its smaller model size. To further improve the model, the Inception-v3 variant of GoogLeNet was used. Table 13 shows the classification accuracy achieved for the five events. The authors also optimized the network by tuning the size of some layers of the model. Table 14 shows the comparison between the optimized model and Inception-v3. However, it is important to note that this study trained the network using relatively small datasets consisting of only 4000 samples. Moreover, traditional data augmentation strategies employed in image processing, such as image rotation, cannot be directly applied to feature maps generated from fibre optic-sensing data.
In [8], the authors designed a DNN to identify and classify external intrusion signals from a 33 km optical fibre-sensing system in a real environment. The time-domain data were fed directly into a DL model to deeply learn the characteristics of destructive intrusion events and establish a reference model. This model included two CNN layers, one linear layer, one LSTM layer, and one fully connected layer, as shown in Figure 19; it was called the convolutional, long short-term memory, fully connected deep neural network (CLDNN). The model effectively learned the signal characteristics captured by the DAS and was able to process the time-domain signal directly from the distributed optical fibre vibration-monitoring system, which was found to be simpler and more effective than feature vector extraction through the frequency domain. The experimental results demonstrated an average intrusion event recognition rate exceeding 97% for the proposed model. Figure 20 shows the DAS system using Φ-OTDR and the pattern recognition process using the CLDNN. However, the proposed model was not evaluated as a prospective solution for sample contamination caused by external environmental factors, which can lead to a decline in recognition accuracy. Other related work can be viewed in [127].
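The layer ordering of the CLDNN can be sketched as follows; the channel widths, kernel sizes, and sequence length are illustrative assumptions, not values reported in [8].

```python
# CLDNN sketch: two conv layers -> linear layer -> LSTM -> fully connected.
import torch
import torch.nn as nn

class CLDNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.linear = nn.Linear(32, 16)              # per-step dimension reduction
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):                            # x: (batch, 1, time)
        h = self.conv(x).transpose(1, 2)             # -> (batch, time, 32)
        h = self.linear(h)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])                   # classify from last step

logits = CLDNN()(torch.randn(8, 1, 1024))            # 8 time-domain traces
```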
A novel method was developed in [12] to efficiently generate a training dataset using a GAN [128]. End-to-end neural networks were used to process data collected with the DAS system. The proposed model’s architecture utilized the VGG16 network [23], with the purpose of detecting and localizing seismic events. One extra convolutional layer was added to match the image size, and a fully connected layer was added at the end of the model. Batch normalization for regularization and an ReLU activation function were used. The model was tested with experimental data collected with a 5 km long DAS sensor, and the obtained classification accuracy was 94%. Nevertheless, achieving reliable automatic classification using the DAS system remains computationally and resource-intensive, primarily due to the demanding task of constructing a comprehensive training database, which involves collecting labelled signals for different phenomena. Furthermore, overly complex approaches may render real-time applications impractical, introducing potential processing-delay issues. Other works in the same direction have been presented in [129,130].
In [127], the authors presented a DL model to recognize six activities: walking, digging with a shovel, digging with a harrow, digging with a pickaxe, facility noise, and strong wind. A DAS system based on Φ-OTDR was presented along with novel threat detection, signal conditioning, and threat classification techniques. The CNN architecture used for the classification was trained with real sensor data and consisted of five layers, as illustrated in Figure 21. In this algorithm, an RGB image with dimensions 257 × 125 × 3 was constructed for each detection point on the optical fibre, helping determine the classification of the event through the network. The results indicated that the accuracy of the threat classification exceeded 93%. However, increasing the depth of the network structure in the proposed model will unavoidably result in a significant slowdown in training speed and may lead to overfitting.
In the study published in [131], the authors proposed an approach to detect defects in large-scale PCBs and measure their copper thickness before mass production using a hybrid optical sensor (HOS) based on a CNN. The method combined microscopic fringe projection profilometry (MFPP) with lateral shearing digital holographic microscopy (LSDHM) for imaging and defect detection, utilizing an optical microscopic sensor containing minimal components. This allowed more precise and accurate identification of diverse types of defects on the PCBs. The proposed approach has the potential to significantly improve the quality control process in PCB manufacturing, leading to more efficient and effective production. The researchers’ findings demonstrated a remarkable success rate, with an accuracy of 99%.

4.2. Multilayer Perceptron (MLP)-Based Applications

In [132], an MLP was proposed to achieve a specific measurement in the presence of various noise sources without shielding the sensor against undesired perturbations. The proposed model was used for temperature sensing based on a sapphire crystal optical fibre (SOF). MMF interference spectra, containing both temperature changes and noise, were used as the input. The trained DNN was able to learn the relationship between the temperature and the transmission spectra, as shown in Figure 22. The proposed model consisted of four hidden layers. An Adam optimizer with a learning rate of 10⁻³ was utilized alongside an ReLU activation function. However, due to the restrictions of the demodulation terminal, the demodulation speed was slow and, as a result, the scope of application was limited.
In [133], an MLP and a CNN were used to demonstrate DL for improving specklegram analysis for sensing air temperature and water immersion length. A comparison was made between the CNN and a traditional correlation technique. The input of the MLP was a 60 × 60 image flattened into 3600 nodes in the input layer, while the output layer comprised a single node representing a value of either temperature or immersion length. An ReLU activation function was used after each hidden layer, and the total number of trainable parameters was 9,752,929. The VGG-16 architecture was used for the CNN model with 2014 input images; its total number of trainable parameters was 29,787,329. The architecture of both models is shown in Figure 23. Both models obtained better accuracy, in terms of average error, than the traditional correlation technique.

4.3. Autoencoder (AE)-Based Applications

In [134], a novel deep AE model was proposed to detect water-level anomalies and report abnormal situations. Time-series data were collected from various sensors to train the proposed model, with steps that included pre-processing the data, training the model, and evaluating the model using normal and abnormal data, as shown in Figure 24. Combinations of hyperparameters were tuned to obtain the best results from the configuration of each experiment. Several model architectures were evaluated, differing in the number of units at each of the five layers. The studies concluded that the model with a 600 × 200 × 100 × 200 × 600 architecture achieved the best result, with an F1-score of 99.9% and an area under the curve (AUC) of 1.00 when a window size of 36,000 was used.
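As a concrete illustration, the best-performing five-layer architecture reported above could be expressed as follows, assuming the 36,000-sample window is the flattened input vector and ReLU activations (which the text does not specify):

```python
# 600-200-100-200-600 autoencoder from [134]; activations are assumed.
import torch.nn as nn

water_level_ae = nn.Sequential(
    nn.Linear(36_000, 600), nn.ReLU(),
    nn.Linear(600, 200), nn.ReLU(),
    nn.Linear(200, 100), nn.ReLU(),   # 100-unit bottleneck
    nn.Linear(100, 200), nn.ReLU(),
    nn.Linear(200, 600), nn.ReLU(),
    nn.Linear(600, 36_000),           # reconstructs the input window
)
```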
In [135], a DL model based on a distributed optical fibre sensor (DOFS) was proposed to collect temperature data along the optical fibre and identify anomalous temperatures at an early stage. The proposed model has the potential to be used for monitoring abnormal temperatures in crude oil tanks. The network structure used is shown in Figure 25 and Tables 15–18.
The temperature collected by the DOFS was used as the normal temperature (NT) and served as the training set. The threshold value for anomaly detection was set using NT and a small amount of artificially added ambient temperature (AAT). The test set comprised self-heating temperature (ST), AAT, and NT collected from the experimental apparatus. The proposed model achieved an accuracy of 98%.
Table 19 provides a summary of the DL techniques used with the optical sensor applications in this article. A CNN was used in applications 1 to 16, an MLP in application 17, and an AE in the remaining applications. The table also shows the applications and their findings in terms of accuracy. The most common limitation across these applications concerns the methods used to collect and pre-process the data. In addition, only a few DL models have been shown to be appropriate for use with optical sensors. Moreover, the methods used in [135] for identifying anomalies are very simple, focusing solely on the impact of ambient temperature changes on the detection of sulphurized rust and self-heating anomalies; they did not consider diverse weather conditions, such as intense winds, rainfall, or temperature variations resulting from seasonal changes or daily fluctuations.

5. Conclusions and Future Perspectives

This study summarized the applications of DL in integration with optical sensors. The necessity and significance of DL in optical sensor applications were demonstrated first by presenting the merits of DL, and then by summarizing and discussing past and present related works to provide a wide view of recent developments in this field. The survey was organized by the type of DL model used. It was concluded that the most commonly used models were the CNN, MLP, and AE models, as they are suitable for most optical sensor applications. It was also noted that the main challenges in combining DL with optical sensor devices concern the type of data used and how they are collected and pre-processed before being fed into the DL model. Treating an image classification problem with an MLP requires converting the 2D image into a 1D vector before training the model. Two key issues arise with this approach: the number of parameters increases significantly with the image size, and the MLP ignores the spatial arrangement, or spatial features, of the pixels in an image. For these reasons, it is better to use CNNs for image classification. Furthermore, since optical sensor data can be modelled as 2D arrays or images, CNNs have become the dominant models in various optical sensor applications.
Finally, there are some promising applications that can be considered when DL is combined with optical sensors, such as detecting viruses and bacteria, monitoring environmental pollutants, smart cities, and optical communication systems. In addition, our research revealed a noticeable gap in the literature regarding the application of some modern networks, such as graph neural networks (GNNs) and spiking neural networks (SNNs), to optical sensors. Therefore, this article brings attention to this gap and highlights the potential for future research exploring the utilization of GNNs and SNNs in optical sensor applications.

Author Contributions

N.H.A.-A. and K.A.M.A.S. writing—original draft, data curation, visualization, analysing the collected articles. M.A.S. conceptualization, supervision, project administration, writing—reviewing. M.E.H. helped in writing and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the research grant received from the Conservation, Food & Health Foundation, USA.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data in this study are available from the corresponding author upon request.

Acknowledgments

This work was made possible using the facilities of the Department of Physics, AUC, Cairo, Egypt, and the fellowship awarded from Scholar Rescue Fund-IIE. The statements made herein are solely the responsibility of the authors.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DL	Deep learning
ML	Machine learning
DNN	Deep neural network
ANN	Artificial neural network
LSTM	Long short-term memory
AE	Autoencoder
MLP	Multilayer perceptron
SVM	Support vector machine
EnKF	Ensemble Kalman filter
DOVS	Distributed optical fibre vibration sensing
MMF	Multimode fibre

References

1. Ignatov, A.I.; Merzlikin, A.M. Two optical sensing elements for H2O and NO2 gas sensing based on the single plasmonic–photonic crystal slab. Adv. Opt. Technol. 2020, 9, 203–208.
2. Nechepurenko, I.; Andrianov, E.; Zyablovsky, A.; Dorofeenko, A.; Pukhov, A.; Lozovik, Y.E. Absorption sensor based on graphene plasmon quantum amplifier. Phys. Rev. B 2018, 98, 075411.
3. Tomyshev, K.; Manuilovich, E.; Tazhetdinova, D.; Dolzhenko, E.; Butov, O.V. High-precision data analysis for TFBG-assisted refractometer. Sens. Actuators A Phys. 2020, 308, 112016.
4. Kumari, C.U.; Samiappan, D.; Kumar, R.; Sudhakar, T. Fiber optic sensors in ocean observation: A comprehensive review. Optik 2019, 179, 351–360.
5. Roriz, P.; Frazão, O.; Lobo-Ribeiro, A.B.; Santos, J.L.; Simões, J.A. Review of fiber-optic pressure sensors for biomedical and biomechanical applications. J. Biomed. Opt. 2013, 18, 050903.
6. Gupta, S.; Mizunami, T.; Yamao, T.; Shimomura, T. Fiber Bragg grating cryogenic temperature sensors. Appl. Opt. 1996, 35, 5202–5205.
7. Taffoni, F.; Formica, D.; Saccomandi, P.; Pino, G.D.; Schena, E. Optical fiber-based MR-compatible sensors for medical applications: An overview. Sensors 2013, 13, 14105–14120.
8. Bai, Y.; Xing, J.; Xie, F.; Liu, S.; Li, J. Detection and identification of external intrusion signals from 33 km optical fiber sensing system based on deep learning. Opt. Fiber Technol. 2019, 53, 102060.
9. Wang, S.; Liu, F.; Liu, B. Semi-Supervised Deep Learning in High-Speed Railway Track Detection Based on Distributed Fiber Acoustic Sensing. Sensors 2022, 22, 413.
10. Vahabi, N.; Selviah, D.R. Convolutional neural networks to classify oil, water and gas wells fluid using acoustic signals. In Proceedings of the 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Ajman, United Arab Emirates, 10–12 December 2019; pp. 1–6.
11. Li, S.; Zuo, X.; Li, Z.; Wang, H. Applying deep learning to continuous bridge deflection detected by fiber optic gyroscope for damage detection. Sensors 2020, 20, 911.
12. Shiloh, L.; Eyal, A.; Giryes, R. Deep learning approach for processing fiber-optic DAS seismic data. In Proceedings of the Optical Fiber Sensors, Lausanne, Switzerland, 24–28 September 2018; Optical Society of America: Washington, DC, USA, 2018; p. ThE22.
13. Shi, Y.; Wang, Y.; Zhao, L.; Fan, Z. An event recognition method for Φ-OTDR sensing system based on deep learning. Sensors 2019, 19, 3421.
14. Tahir, S.; Sadek, I.; Abdulrazak, B. A CNN-ELM-Based Method for Ballistocardiogram Classification in a Clinical Environment. In Proceedings of the 2021 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Virtually, 12–17 September 2021; pp. 1–6.
15. Jayawickrema, U.; Herath, H.; Hettiarachchi, N.; Sooriyaarachchi, H.; Epaarachchi, J. Fibre-optic sensor and deep learning-based structural health monitoring systems for civil structures: A review. Measurement 2022, 199, 111543.
16. Schenato, L.; Palmieri, L.; Camporese, M.; Bersan, S.; Cola, S.; Pasuto, A.; Galtarossa, A.; Salandin, P.; Simonini, P. Distributed optical fibre sensing for early detection of shallow landslides triggering. Sci. Rep. 2017, 7, 1–7.
17. Kornienko, V.V.; Nechepurenko, I.A.; Tananaev, P.N.; Chubchev, E.D.; Baburin, A.S.; Echeistov, V.V.; Zverev, A.V.; Novoselov, I.I.; Kruglov, I.A.; Rodionov, I.A.; et al. Machine learning for optical gas sensing: A leaky-mode humidity sensor as example. IEEE Sens. J. 2020, 20, 6954–6963.
18. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
19. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
20. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
21. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
22. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
23. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
24. Arief, H.A.; Wiktorski, T.; Thomas, P.J. A survey on distributed fibre optic sensor data modelling techniques and machine learning algorithms for multiphase fluid flow estimation. Sensors 2021, 21, 2801.
25. Silkina, T. Application of Distributed Acoustic Sensing to Flow Regime Classification. Master’s Thesis, Institutt for Petroleumsteknologi og Anvendt Geofysikk, Trondheim, Norway, 2014.
26. Al-Naser, M.; Elshafei, M.; Al-Sarkhi, A. Artificial neural network application for multiphase flow patterns detection: A new approach. J. Pet. Sci. Eng. 2016, 145, 548–564.
27. Andrianov, N. A machine learning approach for virtual flow metering and forecasting. IFAC-PapersOnLine 2018, 51, 191–196.
28. Vahabi, N.; Willman, E.; Baghsiahi, H.; Selviah, D.R. Fluid flow velocity measurement in active wells using fiber optic distributed acoustic sensors. IEEE Sens. J. 2020, 20, 11499–11507.
29. Loh, K.; Omrani, P.S.; van der Linden, R. Deep learning and data assimilation for real-time production prediction in natural gas wells. arXiv 2018, arXiv:1802.05141.
30. Li, J.; Wang, Y.; Wang, P.; Bai, Q.; Gao, Y.; Zhang, H.; Jin, B. Pattern recognition for distributed optical fiber vibration sensing: A review. IEEE Sens. J. 2021, 21, 11983–11998.
31. Wood, R.W. XLII. On a remarkable case of uneven distribution of light in a diffraction grating spectrum. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1902, 4, 396–402.
32. Mie, G. Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen. Ann. Der Phys. 1908, 330, 377–445.
33. Fano, U. The theory of anomalous diffraction gratings and of quasi-stationary waves on metallic surfaces (Sommerfeld’s waves). JOSA 1941, 31, 213–222.
34. Ritchie, R.H. Plasma losses by fast electrons in thin films. Phys. Rev. 1957, 106, 874.
35. Hessel, A.; Oliner, A. A new theory of Wood’s anomalies on optical gratings. Appl. Opt. 1965, 4, 1275–1297.
36. Hamza, M.E.; Othman, M.A.; Swillam, M.A. Plasmonic Biosensors: Review. Biology 2022, 11, 621.
37. Hirsch, L.R.; Stafford, R.J.; Bankson, J.; Sershen, S.R.; Rivera, B.; Price, R.; Hazle, J.D.; Halas, N.J.; West, J.L. Nanoshell-mediated near-infrared thermal therapy of tumors under magnetic resonance guidance. Proc. Natl. Acad. Sci. USA 2003, 100, 13549–13554.
38. Rifat, A.A.; Ahmed, R.; Mahdiraji, G.A.; Adikan, F.M. Highly sensitive D-shaped photonic crystal fiber-based plasmonic biosensor in visible to near-IR. IEEE Sens. J. 2017, 17, 2776–2783.
39. Sánchez-Purrà, M.; Carré-Camps, M.; de Puig, H.; Bosch, I.; Gehrke, L.; Hamad-Schifferli, K. Surface-enhanced Raman spectroscopy-based sandwich immunoassays for multiplexed detection of Zika and Dengue viral biomarkers. ACS Infect. Dis. 2017, 3, 767–776.
40. Mauriz, E.; Dey, P.; Lechuga, L.M. Advances in nanoplasmonic biosensors for clinical applications. Analyst 2019, 144, 7105–7129.
41. Masson, J.F.; Breault-Turcot, J.; Faid, R.; Poirier-Richard, H.P.; Yockell-Lelièvre, H.; Lussier, F.; Spatz, J.P. Plasmonic nanopipette biosensor. Anal. Chem. 2014, 86, 8998–9005.
42. Saylan, Y.; Akgönüllü, S.; Denizli, A. Plasmonic sensors for monitoring biological and chemical threat agents. Biosensors 2020, 10, 142.
43. Balbinot, S.; Srivastav, A.M.; Vidic, J.; Abdulhalim, I.; Manzano, M. Plasmonic biosensors for food control. Trends Food Sci. Technol. 2021, 111, 128–140.
44. Mauriz, E.; Calle, A.; Lechuga, L.M.; Quintana, J.; Montoya, A.; Manclus, J. Real-time detection of chlorpyrifos at part per trillion levels in ground, surface and drinking water samples by a portable surface plasmon resonance immunosensor. Anal. Chim. Acta 2006, 561, 40–47.
45. Wang, D.; Pillai, S.C.; Ho, S.H.; Zeng, J.; Li, Y.; Dionysiou, D.D. Plasmonic-based nanomaterials for environmental remediation. Appl. Catal. B Environ. 2018, 237, 721–741.
46. Wei, H.; Abtahi, S.M.H.; Vikesland, P.J. Plasmonic colorimetric and SERS sensors for environmental analysis. Environ. Sci. Nano 2015, 2, 120–135.
47. Erdem, Ö.; Saylan, Y.; Cihangir, N.; Denizli, A. Molecularly imprinted nanoparticles based plasmonic sensors for real-time Enterococcus faecalis detection. Biosens. Bioelectron. 2019, 126, 608–614.
48. Kołataj, K.; Krajczewski, J.; Kudelski, A. Plasmonic nanoparticles for environmental analysis. Environ. Chem. Lett. 2020, 18, 529–542.
49. Farhadi, S.; Miri, M.; Farmani, A. Plasmon-induced transparency sensor for detection of minuscule refractive index changes in ultra-low index materials. Sci. Rep. 2021, 11, 1–10.
50. Nishijima, Y.; Hashimoto, Y.; Balčytis, A.; Seniutinas, G.; Juodkazis, S. Alloy materials for plasmonic refractive index sensing. Sens. Mater. 2017, 29, 1233–1239.
51. Xu, Y.; Bai, P.; Zhou, X.; Akimov, Y.; Png, C.E.; Ang, L.K.; Knoll, W.; Wu, L. Optical refractive index sensors with plasmonic and photonic structures: Promising and inconvenient truth. Adv. Opt. Mater. 2019, 7, 1801433.
52. Nugroho, F.A.A.; Albinsson, D.; Antosiewicz, T.J.; Langhammer, C. Plasmonic metasurface for spatially resolved optical sensing in three dimensions. ACS Nano 2020, 14, 2345–2353.
53. Zhang, J.; Ou, J.; MacDonald, K.; Zheludev, N. Optical response of plasmonic relief meta-surfaces. J. Opt. 2012, 14, 114002.
54. Hess, O.; Gric, T. Phenomena of Optical Metamaterials; Elsevier: Amsterdam, The Netherlands, 2018.
55. Harter, T.; Muehlbrandt, S.; Ummethala, S.; Schmid, A.; Nellen, S.; Hahn, L.; Freude, W.; Koos, C. Silicon–plasmonic integrated circuits for terahertz signal generation and coherent detection. Nat. Photonics 2018, 12, 625–633.
56. Tuniz, A.; Bickerton, O.; Diaz, F.J.; Käsebier, T.; Kley, E.B.; Kroker, S.; Palomba, S.; de Sterke, C.M. Modular nonlinear hybrid plasmonic circuit. Nat. Commun. 2020, 11, 1–8.
57. Sorger, V.J.; Oulton, R.F.; Ma, R.M.; Zhang, X. Toward integrated plasmonic circuits. MRS Bull. 2012, 37, 728–738.
58. Duan, Q.; Liu, Y.; Chang, S.; Chen, H.; Chen, J.H. Surface plasmonic sensors: Sensing mechanism and recent applications. Sensors 2021, 21, 5262.
59. Andam, N.; Refki, S.; Hayashi, S.; Sekkat, Z. Plasmonic mode coupling and thin film sensing in metal–insulator–metal structures. Sci. Rep. 2021, 11, 1–12.
60. Mohammadi, A.; Sandoghdar, V.; Agio, M. Gold, copper, silver and aluminum nanoantennas to enhance spontaneous emission. J. Comput. Theor. Nanosci. 2009, 6, 2024–2030.
61. Shafkat, A. Analysis of a gold coated plasmonic sensor based on a duplex core photonic crystal fiber. Sens. Bio-Sens. Res. 2020, 28, 100324.
62. Hemsley, S.J. Physical properties of gold electrodeposits and their effect on thickness measurement. Gold Bull. 1996, 29, 19–25.
63. Li, G. Nano-Inspired Biosensors for Protein Assay with Clinical Applications; Elsevier: Amsterdam, The Netherlands, 2018.
  64. Ekgasit, S.; Tangcharoenbumrungsuk, A.; Yu, F.; Baba, A.; Knoll, W. Resonance shifts in SPR curves of nonabsorbing, weakly absorbing, and strongly absorbing dielectrics. Sens. Actuators Chem. 2005, 105, 532–541. [Google Scholar] [CrossRef]
  65. Willets, K.A.; Van Duyne, R.P. Localized surface plasmon resonance spectroscopy and sensing. Annu. Rev. Phys. Chem. 2007, 58, 267–297. [Google Scholar] [CrossRef] [Green Version]
  66. Barnes, W.L.; Dereux, A.; Ebbesen, T.W. Surface plasmon subwavelength optics. Nature 2003, 424, 824–830. [Google Scholar] [CrossRef]
  67. Wang, J.; Gao, M.; He, Y.; Yang, Z. Ultrasensitive and ultrafast nonlinear optical characterization of surface plasmons. APL Mater. 2022, 10, 030701. [Google Scholar] [CrossRef]
  68. Philip, A.; Kumar, A.R. The performance enhancement of surface plasmon resonance optical sensors using nanomaterials: A review. Coord. Chem. Rev. 2022, 458, 214424. [Google Scholar] [CrossRef]
  69. Rodrigues, M.S.; Borges, J.; Lopes, C.; Pereira, R.M.; Vasilevskiy, M.I.; Vaz, F. Gas sensors based on localized surface plasmon resonances: Synthesis of oxide films with embedded metal nanoparticles, theory and simulation, and sensitivity enhancement strategies. Appl. Sci. 2021, 11, 5388. [Google Scholar] [CrossRef]
  70. Ekgasit, S.; Thammacharoen, C.; Yu, F.; Knoll, W. Influence of the metal film thickness on the sensitivity of surface plasmon resonance biosensors. Appl. Spectrosc. 2005, 59, 661–667. [Google Scholar] [CrossRef]
  71. Ashley, J.; Shahbazi, M.A.; Kant, K.; Chidambara, V.A.; Wolff, A.; Bang, D.D.; Sun, Y. Molecularly imprinted polymers for sample preparation and biosensing in food analysis: Progress and perspectives. Biosens. Bioelectron. 2017, 91, 606–615. [Google Scholar] [CrossRef] [Green Version]
  72. Drescher, D.G.; Drescher, M.J.; Ramakrishnan, N.A. Surface Plasmon Resonance (SPR) Analysis of Binding Interactions of Proteins in Inner-Ear Sensory Epithelia. In Auditory and Vestibular Research; Springer: Berlin/Heidelberg, Germany, 2009; pp. 323–343. [Google Scholar]
  73. Chlebus, R.; Chylek, J.; Ciprian, D.; Hlubina, P. Surface plasmon resonance based measurement of the dielectric function of a thin metal film. Sensors 2018, 18, 3693. [Google Scholar] [CrossRef] [Green Version]
  74. Kravets, V.G.; Wu, F.; Yu, T.; Grigorenko, A.N. Metal-dielectric-graphene hybrid heterostructures with enhanced surface plasmon resonance sensitivity based on amplitude and phase measurements. Plasmonics 2022, 17, 973–987. [Google Scholar] [CrossRef]
  75. LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. Handb. Brain Theory Neural Netw. 1995, 3361, 1995. [Google Scholar]
  76. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  77. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef] [Green Version]
  78. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef]
  79. Aggarwal, A.; Mittal, M.; Battineni, G. Generative adversarial network: An overview of theory and applications. Int. J. Inf. Manag. Data Insights 2021, 1, 100004. [Google Scholar] [CrossRef]
  80. Arulkumaran, K.; Deisenroth, M.P.; Brundage, M.; Bharath, A.A. Deep reinforcement learning: A brief survey. IEEE Signal Process. Mag. 2017, 34, 26–38. [Google Scholar] [CrossRef] [Green Version]
  81. Zhang, N.; Ding, S.; Zhang, J.; Xue, Y. An overview on restricted Boltzmann machines. Neurocomputing 2018, 275, 1186–1199. [Google Scholar] [CrossRef]
  82. Chen, S.; Guo, W. Auto-Encoders in Deep Learning—A Review with New Perspectives. Mathematics 2023, 11, 1777. [Google Scholar] [CrossRef]
  83. Taud, H.; Mas, J. Multilayer perceptron (MLP). In Geomatic Approaches for Modeling Land Change Scenarios; Springer: Berlin/Heidelberg, Germany, 2018; pp. 451–455. [Google Scholar]
  84. Scherer, D.; Müller, A.; Behnke, S. Evaluation of pooling operations in convolutional architectures for object recognition. In Proceedings of the International Conference on Artificial Neural Networks, Thessaloniki, Greece, 15–18 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 92–101. [Google Scholar]
  85. Lai, J.; Wang, X.; Xiang, Q.; Song, Y.; Quan, Q. Review on autoencoder and its application. J. Commun. 2021, 42, 218. [Google Scholar] [CrossRef]
  86. Li, G.; Liu, Y.; Qin, Q.; Zou, X.; Wang, M.; Yan, F. Deep learning based optical curvature sensor through specklegram detection of multimode fiber. Opt. Laser Technol. 2022, 149, 107873. [Google Scholar] [CrossRef]
  87. Yao, Z.; He, D.; Chen, Y.; Liu, B.; Miao, J.; Deng, J.; Shan, S. Inspection of exterior substance on high-speed train bottom based on improved deep learning method. Measurement 2020, 163, 108013. [Google Scholar] [CrossRef]
  88. Wei, X.; Yang, Z.; Liu, Y.; Wei, D.; Jia, L.; Li, Y. Railway track fastener defect detection based on image processing and deep learning techniques: A comparative study. Eng. Appl. Artif. Intell. 2019, 80, 66–81. [Google Scholar] [CrossRef]
  89. Zheng, Z.; Qi, H.; Zhuang, L.; Zhang, Z. Automated rail surface crack analytics using deep data-driven models and transfer learning. Sustain. Cities Soc. 2021, 70, 102898. [Google Scholar] [CrossRef]
  90. Wei, X.; Wei, D.; Suo, D.; Jia, L.; Li, Y. Multi-target defect identification for railway track line based on image processing and improved YOLOv3 model. IEEE Access 2020, 8, 61973–61988. [Google Scholar] [CrossRef]
  91. Wang, S.; Liu, F.; Liu, B. Research on application of deep convolutional network in high-speed railway track inspection based on distributed fiber acoustic sensing. Opt. Commun. 2021, 492, 126981. [Google Scholar] [CrossRef]
  92. Fan, C.; Ai, F.; Liu, Y.; Xu, Z.; Wu, G.; Zhang, W.; Liu, C.; Yan, Z.; Liu, D.; Sun, Q. Rail Crack Detection by Analyzing the Acoustic Transmission Process Based on Fiber Distributed Acoustic Sensor. In Proceedings of the Optical Fiber Communication Conference, San Diego, CA, USA, 3–7 March 2019; Optical Society of America: Washington, DC, USA, 2019; p. TH2A.17. [Google Scholar]
  93. Chen, J.; Li, A.; Bao, C.; Dai, Y.; Liu, M.; Lin, Z.; Niu, F.; Zhou, T. A deep learning forecasting method for frost heave deformation of high-speed railway subgrade. Cold Reg. Sci. Technol. 2021, 185, 103265. [Google Scholar] [CrossRef]
  94. Li, Z.; Zhang, J.; Wang, M.; Chai, J.; Wu, Y.; Peng, F. An anti-noise ϕ-OTDR based distributed acoustic sensing system for high-speed railway intrusion detection. Laser Phys. 2020, 30, 085103. [Google Scholar] [CrossRef]
  95. Ma, Y.; Song, Y.; Xiao, Q.; Song, Q.; Jia, B. MI-SI Based Distributed Optical Fiber Sensor for No-blind Zone Location and Pattern Recognition. J. Light. Technol. 2022, 40, 3022–3030. [Google Scholar] [CrossRef]
  96. Pan, Y.; Wen, T.; Ye, W. Time attention analysis method for vibration pattern recognition of distributed optic fiber sensor. Optik 2022, 251, 168127. [Google Scholar] [CrossRef]
  97. Xu, C.; Guan, J.; Bao, M.; Lu, J.; Ye, W. Pattern recognition based on time-frequency analysis and convolutional neural networks for vibrational events in φ-OTDR. Opt. Eng. 2018, 57, 016103. [Google Scholar] [CrossRef]
  98. Zhu, P.; Xu, C.; Ye, W.; Bao, M. Self-learning filtering method based on classification error in distributed fiber optic system. IEEE Sens. J. 2019, 19, 8929–8933. [Google Scholar] [CrossRef]
  99. Yang, Y.; Li, Y.; Zhang, T.; Zhou, Y.; Zhang, H. Early safety warnings for long-distance pipelines: A distributed optical fiber sensor machine learning approach. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2021; Volume 35, pp. 14991–14999. [Google Scholar]
  100. Wu, H.; Qian, Y.; Zhang, W.; Tang, C. Feature extraction and identification in distributed optical-fiber vibration sensing system for oil pipeline safety monitoring. Photonic Sensors 2017, 7, 305–310. [Google Scholar] [CrossRef] [Green Version]
  101. Liu, K.; Tian, M.; Liu, T.; Jiang, J.; Ding, Z.; Chen, Q.; Ma, C.; He, C.; Hu, H.; Zhang, X. A high-efficiency multiple events discrimination method in optical fiber perimeter security system. J. Light. Technol. 2015, 33, 4885–4890. [Google Scholar]
  102. Zhu, C.; Pu, Y.; Yang, K.; Yang, Q.; Philip Chen, C.L. Distributed Optical Fiber Intrusion Detection by Image Encoding and SwinT in Multi-Interference Environment of Long-Distance Pipeline. IEEE Trans. Instrum. Meas. 2023, 72, 2515012. [Google Scholar] [CrossRef]
  103. Bublin, M. Event Detection for Distributed Acoustic Sensing: Combining Knowledge-Based, Classical Machine Learning, and Deep Learning Approaches. Sensors 2021, 21, 7527. [Google Scholar] [CrossRef]
  104. Sun, M.; Yu, M.; Lv, P.; Li, A.; Wang, H.; Zhang, X.; Fan, T.; Zhang, T. Man-made threat event recognition based on distributed optical fiber vibration sensing and SE-WaveNet. IEEE Trans. Instrum. Meas. 2021, 70, 1–11. [Google Scholar] [CrossRef]
  105. Ding, S.; Zhao, H.; Zhang, Y.; Xu, X.; Nie, R. Extreme learning machine: Algorithm, theory and applications. Artif. Intell. Rev. 2015, 44, 103–115. [Google Scholar] [CrossRef]
  106. Hatamian, F.N.; Ravikumar, N.; Vesal, S.; Kemeth, F.P.; Struck, M.; Maier, A. The effect of data augmentation on classification of atrial fibrillation in short single-lead ECG signals using deep neural networks. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1264–1268. [Google Scholar]
  107. Yoo, H.; Han, S.; Chung, K. A frequency pattern mining model based on deep neural network for real-time classification of heart conditions. Healthcare 2020, 8, 234. [Google Scholar]
  108. Wang, Q.; Lyu, W.; Cheng, Z.; Yu, C. Noninvasive Measurement of Vital Signs with the Optical Fiber Sensor Based on Deep Learning. J. Light. Technol. 2023, 1–11. [Google Scholar] [CrossRef]
  109. Fernandez-Navamuel, A.; Magalhães, F.; Zamora-Sánchez, D.; Omella, Á.J.; Garcia-Sanchez, D.; Pardo, D. Deep learning enhanced principal component analysis for structural health monitoring. Struct. Health Monit. 2022, 21, 1710–1722. [Google Scholar] [CrossRef]
  110. Wang, H.; Guo, J.K.; Mo, H.; Zhou, X.; Han, Y. Fiber Optic Sensing Technology and Vision Sensing Technology for Structural Health Monitoring. Sensors 2023, 23, 4334. [Google Scholar] [CrossRef] [PubMed]
  111. Lyu, C.; Huo, Z.; Cheng, X.; Jiang, J.; Alimasi, A.; Liu, H. Distributed optical fiber sensing intrusion pattern recognition based on GAF and CNN. J. Light. Technol. 2020, 38, 4174–4182. [Google Scholar] [CrossRef]
  112. Wu, J.; Zhuo, R.; Wan, S.; Xiong, X.; Xu, X.; Liu, B.; Liu, J.; Shi, J.; Sun, J.; He, X.; et al. Intrusion location technology of Sagnac distributed fiber optical sensing system based on deep learning. IEEE Sens. J. 2021, 21, 13327–13334. [Google Scholar] [CrossRef]
  113. Liu, Y.; Li, G.; Qin, Q.; Tan, Z.; Wang, M.; Yan, F. Bending recognition based on the analysis of fiber specklegrams using deep learning. Opt. Laser Technol. 2020, 131, 106424. [Google Scholar] [CrossRef]
  114. Sun, K.; Ding, Z.; Zhang, Z. Fiber directional position sensor based on multimode interference imaging and machine learning. Appl. Opt. 2020, 59, 5745–5751. [Google Scholar] [CrossRef]
  115. Ding, Z.; Zhang, Z. 2D tactile sensor based on multimode interference and deep learning. Opt. Laser Technol. 2021, 136, 106760. [Google Scholar] [CrossRef]
  116. Wei, M.; Tang, G.; Liu, J.; Zhu, L.; Liu, J.; Huang, C.; Zhang, J.; Shen, L.; Yu, S. Neural network based perturbation-location fiber specklegram sensing system towards applications with limited number of training samples. J. Light. Technol. 2021, 39, 6315–6326. [Google Scholar] [CrossRef]
  117. Gupta, R.K.; Bruce, G.D.; Powis, S.J.; Dholakia, K. Deep learning enabled laser speckle wavemeter with a high dynamic range. Laser Photonics Rev. 2020, 14, 2000120. [Google Scholar] [CrossRef]
  118. Cuevas, A.R.; Fontana, M.; Rodriguez-Cobo, L.; Lomer, M.; López-Higuera, J.M. Machine learning for turning optical fiber specklegram sensor into a spatially-resolved sensing system. Proof of concept. J. Light. Technol. 2018, 36, 3733–3738. [Google Scholar] [CrossRef] [Green Version]
  119. Razmyar, S.; Mostafavi, M.T. Deep learning for estimating deflection direction of a multimode fiber from specklegram. J. Light. Technol. 2020, 39, 1850–1857. [Google Scholar] [CrossRef]
  120. Bender, D.; Çakır, U.; Yüce, E. Deep Learning-Based Fiber Bending Recognition for Sensor Applications. IEEE Sens. J. 2023, 23, 6956–6962. [Google Scholar] [CrossRef]
  121. Lu, J.; Gao, H.; Liu, Y.; Hu, H. Deep learning based force recognition using the specklegrams from multimode fiber. Instrum. Sci. Technol. 2023, 1–11. [Google Scholar] [CrossRef]
  122. Grant-Jacob, J.A.; Jain, S.; Xie, Y.; Mackay, B.S.; McDonnell, M.D.; Praeger, M.; Loxham, M.; Richardson, D.J.; Eason, R.W.; Mills, B. Fibre-optic based particle sensing via deep learning. J. Phys. Photonics 2019, 1, 044004. [Google Scholar] [CrossRef]
  123. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 630–645. [Google Scholar]
  124. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  125. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6 July–11 July 2015; pp. 448–456. [Google Scholar]
  126. Yazan, E.; Talu, M.F. Comparison of the stochastic gradient descent based optimization techniques. In Proceedings of the 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 16–17 September 2017; pp. 1–5. [Google Scholar]
  127. Aktas, M.; Akgun, T.; Demircin, M.U.; Buyukaydin, D. Deep learning based multi-threat classification for phase-OTDR fiber optic distributed acoustic sensing applications. In Proceedings of the Fiber Optic Sensors and Applications XIV, International Society for Optics and Photonics, Washington, DC, USA, 11–12 April 2017; Volume 10208, p. 102080G. [Google Scholar]
  128. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 2, 2672–2680. [Google Scholar]
  129. Hernandez, P.D.; Ramirez, J.A.; Soto, M.A. Deep-Learning-Based Earthquake Detection for Fiber-Optic Distributed Acoustic Sensing. J. Light. Technol. 2022, 40, 2639–2650. [Google Scholar] [CrossRef]
  130. Xie, Y.; Wang, M.; Zhong, Y.; Deng, L.; Zhang, J. Label-Free Anomaly Detection Using Distributed Optical Fiber Acoustic Sensing. Sensors 2023, 23, 4094. [Google Scholar] [CrossRef]
  131. Kaya, G.U. Development of hybrid optical sensor based on deep learning to detect and classify the micro-size defects in printed circuit board. Measurement 2023, 206, 112247. [Google Scholar] [CrossRef]
  132. Nguyen, L.V.; Nguyen, C.C.; Carneiro, G.; Ebendorff-Heidepriem, H.; Warren-Smith, S.C. Sensing in the presence of strong noise by deep learning of dynamic multimode fiber interference. Photonics Res. 2021, 9, B109–B118. [Google Scholar] [CrossRef]
  133. Smith, D.L.; Nguyen, L.V.; Ottaway, D.J.; Cabral, T.D.; Fujiwara, E.; Cordeiro, C.M.; Warren-Smith, S.C. Machine learning for sensing with a multimode exposed core fiber specklegram sensor. Opt. Express 2022, 30, 10443–10455. [Google Scholar] [CrossRef] [PubMed]
  134. Nicholaus, I.T.; Park, J.R.; Jung, K.; Lee, J.S.; Kang, D.K. Anomaly Detection of Water Level Using Deep Autoencoder. Sensors 2021, 21, 6679. [Google Scholar] [CrossRef] [PubMed]
  135. Zhu, Z.C.; Chu, C.W.; Bian, H.T.; Jiang, J.C. An integration method using distributed optical fiber sensor and Auto-Encoder based deep learning for detecting sulfurized rust self-heating of crude oil tanks. J. Loss Prev. Process Ind. 2022, 74, 104623. [Google Scholar] [CrossRef]
Figure 1. (a) A diagram of the mechanism of plasmonic optical sensors, and (b) stages of the SPR sensor from detecting analytes to detachment to be reused [36].
Figure 2. CNN architecture.
Figure 3. Diagram of the autoencoder.
Figure 4. Schematic representation of an MLP with two hidden layers.
Figure 5. General view of DL with optical sensor applications.
Figure 6. Experimental setup for the detection of fibre specklegrams under different curvatures.
Figure 7. The adopted VGG-Nets architecture to build the CNN.
Figure 8. Structural diagram of the CNN used in [95].
Figure 9. The structure of the vibration-sensing system working with Φ-OTDR.
Figure 10. The architecture of the proposed model used in [99].
Figure 11. Two methods to detect events by DAS: classic ML approach (left) and DNN approach (right). Note the role of human knowledge [103].
Figure 12. CNN to detect an excavator. (a) Input image, (b) convolutional layer, (c) max pooling layer, and (d) fully connected layer [103].
Figure 13. The structure of the model proposed in [104].
Figure 14. The CNN architecture proposed in [14].
Figure 15. The model architecture for classifying BCG and non-BCG chunks.
Figure 16. The experimental schematic setup. (a) Fibre specklegram detection. (b) A graphical depiction of the moving distance x and bending radius R of the translation stage.
Figure 17. The model structure of the CNN proposed in [113].
Figure 18. The training procedure of the model proposed in [122].
Figure 19. The CLDNN architecture.
Figure 20. The architecture of the intelligent alarm system proposed in [8].
Figure 21. Structure of a Φ-OTDR and a CNN used for threat classification in [127].
Figure 22. The multilayer perceptron architecture proposed in [132].
Figure 23. (a) Schematic of the DNN. (b) Schematic representation of the CNN used in [133].
Figure 24. AEs used for the detection of abnormal temperature changes.
Figure 25. A pipeline for anomaly detection using a DNN model based on AEs in [135].
Table 1. Events distribution.

Event | Location
Crevice | 120 m, 790 m, 830 m, 1010 m, 1270 m
Beam Gap | 400 m, 500 m, 600 m, 700 m, 800 m, 900 m, 1000 m, 1100 m, 1200 m, 1300 m, 1400 m
Cracking | 100 m, 480 m, 730 m, 1030 m, 1420 m
Bulge | 420 m, 560 m, 730 m, 1030 m, 1420 m
Switches | 200 m, 350 m, 450 m, 1350 m
Highway Below | 300 m, 750 m, 1250 m
Table 2. Dataset obtained after augmentation and the training dataset.

Size | Channels | Event | Dataset | Training Data
5 × 6 | 3 | Crevice | 6853 | 2708
5 × 6 | 3 | Beam gap | 6853 | 2788
5 × 6 | 3 | Cracking | 6853 | 2735
5 × 6 | 3 | Bulge | 6853 | 2558
5 × 6 | 3 | Switches | 6853 | 2776
5 × 6 | 3 | Highway below | 6853 | 2736
5 × 6 | 3 | No event | 6853 | 2716
Table 3. Structural hyperparameters.

Hyperparameters | Options
Structure of the deep learning network | VGG-16, ResNet, Inception-v3, AlexNet, Mobilenet-v3, LeNet
Data balance method | SMOTE-TL, S-ENN, Border-1, MWMOTE, Safe-level
SSL model | Fix-match, Tri-training, UDA
Time interval (t) | [0, 54]
Table 4. Dataset details.

Type | Amount
Cutting | 948
Impacting | 737
Knocking | 1235
Rocking | 550
Trampling | 849
Wind | 1169
Total | 5488
Table 5. Performance comparison of the CNN and MLP algorithms.

ML Algorithm | Accuracy | Exec. Time (µs)
MLP + feature extraction | 99.88% | 554.63
CNN | 99.91% | 34.33
Table 6. The number of each event in the dataset.

Event Type | Climbing | Violent Demolition Net | Digging | Electric Drill Damage | Total Number
Training set | 4319 | 3616 | 3713 | 3825 | 17,564
Validation set | 539 | 542 | 562 | 549 | 2192
Testing set | 539 | 542 | 562 | 549 | 2192
Total number | 5397 | 5428 | 5625 | 5498 | 21,948
Table 7. The proposed CNN architecture.

Layer | No. of Filters | Activation Function | Kernel Size | Strides | Output Size
Input | – | – | – | – | (50, 1)
Conv1D | 50 | ReLU | 5 | 1 | (46, 50)
Maxpooling1D | – | Max | 2 | 2 | (23, 50)
Conv1D | 50 | ReLU | 4 | 1 | (20, 50)
Maxpooling1D | – | Max | 2 | 2 | (10, 50)
Conv1D | 50 | ReLU | 4 | 1 | (7, 50)
Maxpooling1D | – | Max | 2 | 2 | (3, 50)
Flatten | – | – | – | – | (None, 150)
FC1 | – | ReLU | – | – | (None, 50)
FC2 | – | Softmax | – | – | (None, 2)
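For concreteness, the layer stack in Table 7 can be reproduced in a few lines of PyTorch. The sketch below is an illustrative reimplementation from the table alone; the batch size and the folding of the softmax into a cross-entropy loss are our assumptions, not details from the cited work.

```python
import torch
import torch.nn as nn

# Illustrative reimplementation of the 1D-CNN in Table 7; intermediate lengths
# follow from (L - kernel)/stride + 1 for convolutions and L // 2 for pooling.
model = nn.Sequential(
    nn.Conv1d(1, 50, kernel_size=5, stride=1), nn.ReLU(),  # (1, 50) -> (50, 46)
    nn.MaxPool1d(2, stride=2),                             # -> (50, 23)
    nn.Conv1d(50, 50, kernel_size=4), nn.ReLU(),           # -> (50, 20)
    nn.MaxPool1d(2, stride=2),                             # -> (50, 10)
    nn.Conv1d(50, 50, kernel_size=4), nn.ReLU(),           # -> (50, 7)
    nn.MaxPool1d(2, stride=2),                             # -> (50, 3)
    nn.Flatten(),                                          # -> 150
    nn.Linear(150, 50), nn.ReLU(),                         # FC1
    nn.Linear(50, 2),                                      # FC2; softmax is
)                                                          # applied in the loss

x = torch.randn(8, 1, 50)   # a batch of eight 50-sample signal windows
print(model(x).shape)       # torch.Size([8, 2])
```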
Table 8. Results of the proposed CNN-ELM.

Data-Balancing Method | Accuracy | Precision | Recall | F-score
Undersampling | 0.89 | 0.90 | 0.87 | 0.88
Oversampling | 0.88 | 0.93 | 0.81 | 0.87
GAN | 0.94 | 0.90 | 0.98 | 0.94
Table 9. Hyperparameters.

Batch Size | Epochs | Patience in Early Stopping | Adam α | Adam β1 | Adam β2 | Adam ϵ
128 | 5000 | 500 | 0.001 | 0.9 | 0.009 | 1.0 × 10⁻⁸
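Read as a training configuration, the Table 9 values map directly onto an optimizer and an early-stopping loop. The following is a minimal sketch in which `model`, `train_loader`, and `validation_loss` are placeholder names assumed to exist elsewhere; note that β2 = 0.009 is copied from the table as given and is far below the common default of 0.999.

```python
import torch

# Hypothetical wiring of the Table 9 hyperparameters into a training loop.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=0.001,             # alpha
    betas=(0.9, 0.009),   # beta_1, beta_2 as listed in Table 9
    eps=1.0e-8,           # epsilon
)

best, wait, patience = float("inf"), 0, 500     # patience from Table 9
for epoch in range(5000):                       # epoch budget from Table 9
    for x, y in train_loader:                   # batch size 128 set in the loader
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    v = validation_loss(model)                  # early stopping on validation loss
    if v < best:
        best, wait = v, 0
    else:
        wait += 1
        if wait >= patience:
            break
```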
Table 10. Results of the average accuracy.

Fibre Diameter (µm) | Number of Training Data | Number of Testing Data | Average Accuracy
105 | 6300 | 2100 | 92.8%
200 | 6300 | 2100 | 96.6%
Table 11. The number of each data type.

Event Type | I | II | III | IV | V
Training Set | 307 | 1122 | 1101 | 1237 | 748
Validation Set | 77 | 280 | 275 | 310 | 187
Total Number | 384 | 1402 | 1376 | 1547 | 935
Table 12. Performance of the common CNNs.

Model Name | Model Size (MB) | Training Speed (step/s) | Classification Accuracy (%) | Top 2 (%)
LeNet | 39.3 | 90.9 | 60 | 86.5
AlexNet | 554.7 | 19.6 | 94.25 | 99.08
VggNet | 1638.4 | 2.53 | 95.25 | 100
GoogLeNet | 292.2 | 4.1 | 97.08 | 99.25
ResNet | 282.4 | 7.35 | 91.9 | 97.75
Table 13. The classification accuracy achieved for the five events.

Type of Accuracy | I | II | III | IV | V
Accuracy (%) | 98.02 | 98.67 | 100 | 92.1 | 95.5
Top 2 Accuracy (%) | 100 | 100 | 100 | 99 | 100
Table 14. The comparison between the optimized model and Inception-v3.

Network | Accuracy (%) | Top 3 Accuracy (%) | Training Speed (step/s) | Model Size (MB)
The Optimized Network | 96.67 | 99.75 | 35.6 | 120
Inception-v3 | 97.08 | 99.25 | 4.35 | 292.2
Table 15. Structures of the MLP-AE.

Composition | Type | Input Size | Output Size
Encoder | Linear | 140 | 32
 | Linear | 32 | 4
Decoder | Linear | 4 | 32
 | Linear | 32 | 140
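The MLP-AE in Table 15 is small enough to state in full. Below is a minimal PyTorch sketch assuming only the layer sizes in the table; the table does not specify activations, so none are inserted between the linear layers, and the class name `MLPAE` is ours.

```python
import torch
import torch.nn as nn

# Hypothetical reimplementation of the Table 15 layer sizes.
class MLPAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(140, 32), nn.Linear(32, 4))
        self.decoder = nn.Sequential(nn.Linear(4, 32), nn.Linear(32, 140))

    def forward(self, x):                   # x: (batch, 140) sensor window
        return self.decoder(self.encoder(x))

x = torch.randn(16, 140)
recon = MLPAE()(x)
score = ((recon - x) ** 2).mean(dim=1)      # per-window reconstruction error,
                                            # the usual AE anomaly score
```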
Table 16. Structures of the MLP-VAE.

Composition | Type | Input Size | Output Size
Encoder | Linear | 140 | 64
 | Linear Reparametrization | 64 | 4
Decoder | Linear | 4 | 64
 | Linear | 64 | 140
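The "Linear Reparametrization" row in Table 16 corresponds to the standard VAE trick of predicting a latent mean and log-variance and sampling z = μ + σ·ε so that gradients flow through the encoder. A minimal sketch follows; the two separate 64 → 4 heads are our assumption about how that row is realized.

```python
import torch
import torch.nn as nn

# Hypothetical reading of Table 16 as a standard VAE.
class MLPVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(140, 64)
        self.mu = nn.Linear(64, 4)       # latent mean head
        self.logvar = nn.Linear(64, 4)   # latent log-variance head
        self.dec = nn.Sequential(nn.Linear(4, 64), nn.Linear(64, 140))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # z = mu + sigma*eps
        return self.dec(z), mu, logvar
```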
Table 17. Structures of the LSTM-AE.

Composition | Type | Input | Output
Encoder | LSTM + Repeat | 1 | 32
Decoder | LSTM | 32 | 32
 | Dense | 32 | 1
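In the LSTM-AE of Table 17, "LSTM + Repeat" denotes compressing the sequence into the encoder's final hidden state and repeating it across time steps as input to the decoder. A minimal sketch under that reading:

```python
import torch
import torch.nn as nn

# Hypothetical reimplementation of the Table 17 structure.
class LSTMAE(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(1, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)   # Dense 32 -> 1 per time step

    def forward(self, x):                 # x: (batch, seq_len, 1)
        _, (h, _) = self.encoder(x)
        rep = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat hidden state
        y, _ = self.decoder(rep)
        return self.out(y)                # reconstruction: (batch, seq_len, 1)
```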
Table 18. Structures of the CNN-AE.

Composition | Type | Input Channels | Output Channels | Kernel Size/Stride/Padding
Encoder | Conv. | 1 | 16 | 5/2/1
 | ReLU | – | – | –
 | Maxpool | – | – | 2/2/0
 | Conv. | 16 | 8 | 5/2/1
 | ReLU | – | – | –
 | Maxpool | – | – | 2/2/0
 | Conv. | 8 | 2 | 3/1/1
 | ReLU | – | – | –
 | Maxpool | – | – | 2/2/0
Decoder | Conv-Transpose | 2 | 16 | 5/4/0
 | ReLU | – | – | –
 | Conv-Transpose | 16 | 8 | 5/4/0
 | ReLU | – | – | –
 | Conv-Transpose | 8 | 1 | 3/2/1
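Table 18 reads naturally as a 1D convolutional autoencoder. The sketch below follows the listed channels and kernel/stride/padding exactly; the 140-sample input length is an assumption carried over from Tables 15 and 16, and with these strides the reconstruction comes back a few samples short, so a crop or pad is needed before computing the error.

```python
import torch
import torch.nn as nn

# Hypothetical 1D reading of the Table 18 encoder/decoder specification.
encoder = nn.Sequential(
    nn.Conv1d(1, 16, 5, stride=2, padding=1), nn.ReLU(), nn.MaxPool1d(2, 2),
    nn.Conv1d(16, 8, 5, stride=2, padding=1), nn.ReLU(), nn.MaxPool1d(2, 2),
    nn.Conv1d(8, 2, 3, stride=1, padding=1), nn.ReLU(), nn.MaxPool1d(2, 2),
)
decoder = nn.Sequential(
    nn.ConvTranspose1d(2, 16, 5, stride=4), nn.ReLU(),
    nn.ConvTranspose1d(16, 8, 5, stride=4), nn.ReLU(),
    nn.ConvTranspose1d(8, 1, 3, stride=2, padding=1),
)

x = torch.randn(4, 1, 140)   # assumed 140-sample temperature window
z = encoder(x)               # compressed representation: (4, 2, 4)
x_hat = decoder(z)           # reconstruction: (4, 1, 137); crop or pad x
                             # to the same length before scoring the error
```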
Table 19. A summary of the surveyed DL for optical sensor applications.

No. | Application | Finding | Reference
1 | Realizing an optical fibre curvature sensor; a large number of specklegrams were detected automatically from the facet of a multimode fibre (MMF) in the experiments. | 94.7% | [86]
2 | Detection of a high-speed railway track. | 97.91% | [9]
3 | Solving the inability of a conventional hybrid structure to locate in the near field and its flawed frequency response. | 97.83% | [95]
4 | Extracting the correlation of a time–frequency sequence from signals and spectrograms to improve the robustness of the recognition system. | 96.02% | [96]
5 | Describing the situation of long-distance oil–gas pipeline safety early warning (PSEW) systems. | 99.26% and 97.20% | [99]
6 | Event detection. | 99.91% | [103]
7 | Discriminating between ballistocardiogram (BCG) and non-BCG signals. | 94%, 90%, 98%, and 94% | [14]
8 | Recognizing man-made threat events. | 97.73% | [104]
9 | Bridge structure damage detection. | 96.9% | [11]
10 | An intrusion pattern recognition model. | 97.67% | [111]
11 | Bending recognition using the analysis of MMF specklegrams with diameters of 105 and 200 μm. | 92.8% and 96.6% | [113]
12 | Identification of specific species of pollen from backscattered light. | 97% | [122]
13 | Event recognition. | 99.75% | [13]
14 | Identification and classification of external intrusion signals in a real environment using a 33 km optical fibre-sensing system. | 97% | [8]
15 | Detection and localization of seismic events. | 94% | [12]
16 | Identification of six activities: walking, digging with a shovel, digging with a harrow, digging with a pickaxe, strong wind, and facility noise. | 93% | [127]
17 | Temperature sensing based on a sapphire crystal optical fibre (SOF). | 99% | [132]
18 | Sensing water immersion length measurements and air temperature. | N/A | [133]
19 | Detecting water-level anomalies and reporting abnormal situations. | 99.9% | [134]
20 | Collection of temperature data along an optical fibre and early-phase identification of detected temperature anomalies. | 98% | [135]
21 | Detection of defects on large-scale PCBs and measurement of their copper thickness before mass production. | – | [131]