Review

Exploring Types of Photonic Neural Networks for Imaging and Computing—A Review

by Svetlana N. Khonina, Nikolay L. Kazanskiy, Roman V. Skidanov and Muhammad A. Butt *
Samara National Research University, 443086 Samara, Russia
* Author to whom correspondence should be addressed.
Nanomaterials 2024, 14(8), 697; https://doi.org/10.3390/nano14080697
Submission received: 13 March 2024 / Revised: 13 April 2024 / Accepted: 15 April 2024 / Published: 17 April 2024

Abstract

Photonic neural networks (PNNs), utilizing light-based technologies, show immense potential in artificial intelligence (AI) and computing. Compared to traditional electronic neural networks, they offer faster processing speeds, lower energy usage, and improved parallelism. Leveraging light’s properties for information processing could revolutionize diverse applications, including complex calculations and advanced machine learning (ML). Furthermore, these networks could address scalability and efficiency challenges in large-scale AI systems, potentially reshaping the future of computing and AI research. In this comprehensive review, we provide current, cutting-edge insights into diverse types of PNNs crafted for both imaging and computing purposes. Additionally, we delve into the intricate challenges they encounter during implementation, while also illuminating the promising perspectives they introduce to the field.

1. Introduction

Photonic neural networks (PNNs) mark a pioneering approach to neural computing, exploiting the velocity and concurrency of light to enhance information processing efficiency [1,2,3]. By capitalizing on optical components and principles, PNNs present compelling remedies to long-standing impediments in traditional electronic neural networks, such as speed constraints and energy consumption [2,4]. PNNs embrace a spectrum of architectures, spanning feedforward, recurrent, convolutional, and spiking neural networks, each meticulously crafted for distinct tasks and domains [5,6]. Noteworthy advantages of PNNs encompass their capacity for lightning-fast computation, vast parallelism, and innate adaptability to certain data processing challenges like image recognition and optimization tasks [7,8]. Furthermore, PNNs demonstrate potential in surmounting emerging hurdles in artificial intelligence (AI), photonics, and information processing, heralding a new era of computing paradigms poised to revolutionize an array of fields, from healthcare to telecommunications [9,10].
Quantum neural networks (QNNs) and PNNs represent two distinct paradigms in advanced computing, each employing unique principles for data processing and analysis [11]. QNNs leverage quantum mechanics principles like superposition and entanglement, whereas PNNs utilize photonics for neural network operations. The primary distinction lies in their hardware platforms; QNNs rely on quantum processors manipulating qubits, while PNNs use photonic devices. Regarding scalability, QNNs face challenges related to qubit coherence times and error rates, whereas PNNs encounter obstacles in fabricating precise photonic components and integrating them with electronic infrastructure. Additionally, QNNs show potential for exponential speedup in tasks such as optimization and cryptography due to quantum parallelism and annealing, while PNNs may excel in applications requiring low latency and high bandwidth, like telecommunications and data processing [12]. Despite their differences, both QNNs and PNNs hold promise for advancing machine learning and computing, contingent upon specific task requirements and underlying hardware capabilities. In our view, photonics stands out for its exceptional capabilities in interconnects and communications, particularly due to its high bandwidth potential, effectively addressing the trade-offs associated with bandwidth and interconnectivity [13,14,15]. Decades ago, the benefits of photonics for neural networks were predictable, with pioneering work led by Psaltis and others, who introduced spatial multiplexing methods, permitting comprehensive all-to-all interconnection [16]. However, practical applications of PNNs faced obstacles due to limitations in low-level photonic integration and packaging technologies at that time. Nonetheless, the landscape of PNNs has seen significant changes with the advent of large-scale photonic assembly and integration methods [17,18]. For example, silicon photonics has become a leading platform for producing extensive and cost-effective optical systems [19,20,21]. Concurrently, various evolving applications, such as resolving nonlinear optimization problems and processing multichannel GHz analog signals in real time, are seeking innovative computing platforms to fulfill their computational needs [22]. These advancements illuminated fresh opportunities and pathways for advancing PNNs [23].
Particularly compelling is their application in deep learning and pattern recognition, where PNNs harness the parallel processing capabilities of light to execute intricate neural network operations at remarkable speeds, facilitating swift inference and training tasks that strain conventional electronic systems [24,25,26]. Moreover, PNNs boast exceptional energy efficiency owing to the minimal losses inherent in photonics, rendering them well-suited for deployment in energy-constrained settings and portable devices. In addition, PNNs show promise in optical computing, where they excel in tasks like image processing, cryptography, and optimization with unmatched efficiency [27,28,29,30]. In the realm of telecommunications and data processing, PNNs stand to revolutionize optical signal processing, enabling rapid data transmission and processing for cutting-edge communication networks. Furthermore, PNNs hold significant potential in advancing biomedical imaging and sensing technologies, enabling the real-time analysis of biological data with precision and sensitivity. Overall, the versatility and significance of PNNs highlight their capacity to propel innovation across diverse fields, offering unprecedented speed, efficiency, and scalability for the next generation of computing and information processing systems. PNNs leverage optical technologies to perform certain aspects of neural network computation, offering potential benefits in terms of speed, energy efficiency, and parallelism. Several types of PNNs have been proposed and studied [2], which are discussed in Section 2. The prospects and challenges of PNNs are briefly discussed in Section 3, and the paper ends with a brief discussion and concluding remarks.
The promising potential of PNNs is hindered in real-world use cases for several reasons. Firstly, the complexity and cost associated with fabricating photonic components capable of performing neural network operations have limited their widespread adoption. Photonic devices require precise manufacturing processes and sophisticated materials, resulting in high production costs and scalability limitations. Additionally, challenges arise in integrating photonic components with existing electronic infrastructure due to compatibility and interoperability issues. Furthermore, the efficient implementation of PNNs has been impeded by the lack of standardized design methodologies and optimization algorithms tailored specifically for them. Concerted efforts across multiple domains are necessary to make PNNs a realistic prospect for real-world adoption. Technological advancements in materials science and fabrication techniques could reduce manufacturing costs and enhance the performance of photonic devices. Moreover, crucial research efforts are needed to develop standardized design frameworks, optimization algorithms, and integration strategies tailored for PNNs. Collaborations among academia, industry, and government bodies to invest in research and development initiatives can accelerate progress in these areas. Additionally, educational programs aimed at fostering interdisciplinary expertise bridging the photonics and machine learning domains would cultivate a skilled workforce capable of driving innovation in PNN technology.

2. Types of PNNs

PNNs represent a paradigm shift in computing by harnessing light’s inherent advantages over traditional electronic systems. This emerging technology promises breakthroughs in speed, energy efficiency, and scalability, crucial for addressing the escalating demands of data-intensive tasks like ML and AI. When assessing the outcomes of processing speed, energy consumption, and accuracy between PNNs and traditional electronic neural networks, notable advantages emerge in specific domains [2]. PNNs exhibit remarkable processing speed, owing to the inherent parallelism ingrained in optical computing, facilitating simultaneous data processing across numerous channels. Empirical evidence from various studies underscores processing speeds several orders of magnitude faster than those achieved by electronic counterparts. Additionally, PNNs boast lower energy consumption per computation, owing to the fundamental properties of light propagation, leading to diminished heat dissipation and power consumption in contrast to electronic devices [31,32]. Nonetheless, despite excelling in processing speed and energy efficiency, potential trade-offs regarding accuracy may arise. While PNNs demonstrate promising outcomes in tasks such as pattern recognition and image processing, their accuracy may fluctuate based on the neural network architecture’s complexity and the precision of optical components employed [30]. Comparative analyses between PNNs and electronic neural networks elucidate these trade-offs, delineating areas where PNNs excel, as well as other areas necessitating further optimization to attain comparable accuracy levels [33]. Figure 1 presents a comprehensive overview contrasting the functionalities of photonic and electronic implementations of neurons.
In this section, we delve into several pivotal types of PNNs that stand as focal points in imaging and computing research, illuminating their significance and widespread exploration in the field, as presented in Figure 2. Feedforward neural networks (FNNs) provide a foundational framework for pattern recognition and classification tasks by mapping input data to output predictions through layers of interconnected neurons [34]. Recurrent neural networks (RNNs) incorporate feedback loops, enabling them to process sequential data with temporal dependencies, making them essential for tasks such as natural language processing and time series analysis [35]. Convolutional neural networks (CNNs) excel in image and video processing tasks, leveraging shared weights and local connectivity to derive hierarchical features, making them indispensable in computer vision applications [36,37]. Reservoir computing (RC), a subset of recurrent networks, offers advantages in processing temporal data efficiently, particularly in tasks where memory and context play vital roles [19]. Spiking neural networks (SNNs), inspired by the spiking behavior of biological neurons, offer low-power neuromorphic computing capabilities suited to brain-inspired computing tasks and efficient event-based processing [19]. Photonic Ising machines (PIMs) exploit principles from statistical physics to solve optimization problems efficiently, while optoelectronic neural networks (ONNs) leverage light-based communication for high-speed, parallel processing, offering promising solutions for large-scale computational tasks [19]. In our opinion, each of these architectures brings its own set of strengths to the table in the realm of computing and imaging. Together, they push the boundaries of AI forward, opening new possibilities and paving the way for tackling a wide range of real-world problems.
Photonics offers a multitude of advantages across various types of neural networks. In FNNs, photonics enables high-speed processing due to the intrinsic speed of light, enhancing computational efficiency. The parallel nature of photonics allows for the simultaneous processing of multiple inputs, enhancing the network’s throughput. In RNNs, photonics facilitates the efficient handling of time-varying signals, which is crucial for temporal processing tasks. Additionally, the inherent parallelism of photonics can accelerate computations in CNNs, which excel in tasks involving spatial relationships. Photonics is also well-suited for RC, where its high bandwidth and low latency enable rapid information processing. SNNs benefit from photonics’ ability to efficiently transmit and process sparse, asynchronous signals akin to biological neurons. PIMs exploit photonics’ parallelism for solving optimization problems efficiently. Lastly, in ONNs, photonics seamlessly integrates with electronics, offering low-latency communication and high-bandwidth connections, thus improving network performance. Overall, photonics presents a promising avenue for enhancing the speed, efficiency, and performance of diverse neural network architectures.
Considering the distinct characteristics of the different types of PNNs described earlier, we classified them into two primary groups. On the right side of Figure 2, we assembled a category corresponding to traditional artificial neural networks (ANNs) that could potentially be realized using optical or photonic technologies. Conversely, architectures tailored specifically to exploit the unique properties of PNNs are positioned on the left side of Figure 2. Subsequent subsections will delve into each of these categories in greater depth.

2.1. Feedforward Neural Networks (FNNs)

FNNs represent a fundamental architecture in ANNs, characterized by the unidirectional flow of information from input nodes through one or more hidden layers to output nodes, as reported in Figure 3a [38]. In essence, FNNs process input data by passing them through a series of interconnected layers of neurons, each layer transforming the data representation to derive increasingly abstract features [39]. These networks are trained using approaches such as backpropagation, where the discrepancy between the predicted output and the actual target is minimized through iterative adjustments to the network’s parameters. FNNs find widespread application in numerous domains, including image and speech recognition, natural language processing, and financial forecasting, owing to their capacity to learn multifaceted patterns and associations in data [40]. Despite their simplicity compared to more complex architectures, FNNs serve as foundational models upon which more advanced network designs are built, making them a cornerstone of modern ML and AI [41].
Differential equations arise across numerous domains of science and engineering, offering a valuable means of describing various physical phenomena. They typically appear as initial or boundary value problems, wherein conditions at the inception of a process or at boundary points are stipulated to yield an explicit solution. Employing numerical methods [42], such as finite difference methods, serves as a useful strategy for approximating these equations. Furthermore, neural networks have emerged as a viable tool for this purpose [43,44]. The realm of neural network architectures offers a vast array of possibilities. Notably, FNNs have demonstrated utility in solving differential equations, as evidenced by seminal works in the field [45]. Within the framework of FNNs, two specific approaches (the trial solution method and the modified trial solution method) [46,47] have garnered substantial attention in the literature over recent decades, displaying substantial promise.
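To make the trial solution idea concrete, the sketch below illustrates one possible formulation (not the specific one used in [45,46,47]): for a first-order ODE dψ/dx = f(x, ψ) with ψ(0) = ψ₀, the trial function ψ(x) = ψ₀ + x·N(x) satisfies the initial condition by construction, and the cost is the squared ODE residual at collocation points. The example ODE, network size, and finite-difference gradients are illustrative simplifications.

```python
import numpy as np

# Sketch of the trial solution method for dpsi/dx = f(x, psi) with psi(0) = psi0.
# The trial function psi(x) = psi0 + x * N(x) satisfies the initial condition by
# construction, so only the ODE residual enters the cost. Gradients are taken by
# finite differences purely for brevity; practical implementations use autodiff.

rng = np.random.default_rng(0)
n_hidden = 10
params = rng.normal(scale=0.5, size=3 * n_hidden)    # packed [w, b, v]

def net(x, p):                                       # single-hidden-layer network N(x)
    w, b, v = p[:n_hidden], p[n_hidden:2 * n_hidden], p[2 * n_hidden:]
    return np.tanh(np.outer(x, w) + b) @ v

def trial(x, p, psi0=1.0):                           # trial solution psi(x)
    return psi0 + x * net(x, p)

def f(x, psi):                                       # example ODE: dpsi/dx = -psi
    return -psi

def cost(p, x, eps=1e-4):                            # mean squared ODE residual
    dpsi = (trial(x + eps, p) - trial(x - eps, p)) / (2 * eps)
    return np.mean((dpsi - f(x, trial(x, p))) ** 2)

x_col = np.linspace(0.0, 1.0, 30)                    # collocation points
lr, h = 0.05, 1e-5
for step in range(2000):                             # plain gradient descent
    grad = np.array([(cost(params + h * e, x_col) - cost(params - h * e, x_col)) / (2 * h)
                     for e in np.eye(params.size)])
    params -= lr * grad

print("psi(1) ~", trial(np.array([1.0]), params)[0], "  exact:", np.exp(-1.0))
```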
FNNs thus offer a potential route for solving differential equations. Nevertheless, the consistency and precision of the approximation still pose unresolved challenges within the existing literature. Computational methodologies are generally heavily reliant on various computational parameters and on the selection of optimization techniques, which must be considered in conjunction with the structure of the cost function. In [48], the solution of a straightforward yet pivotal stiff ordinary differential equation representing a damped system is investigated. Two computational strategies are proposed for solving differential equations using neural forms: the conventional but still relevant approach of trial solutions defining the cost function, and a more recent direct formulation of the cost function associated with the trial solution process. It is worth noting that these configurations can be readily extended to encompass the solution of partial differential equations. Through an exhaustive computational analysis, the potential to discern preferable choices of parameters and methodologies is demonstrated. Additionally, light is shed on intriguing phenomena observable in neural network simulations.

2.2. Recurrent Neural Networks (RNNs)

RNNs constitute a specialized class of ANNs engineered to manage sequential data [49,50]. They achieve this by incorporating connections that create directed cycles within the network graph, facilitating dynamic temporal behavior and the processing of sequences of variable length. Diverging from the linear flow of information characteristic of FNNs, RNNs feature connections that loop back, empowering them to retain and exploit information from past states. The structure of an RNN is depicted in Figure 3b.
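As a concrete illustration of this recurrence (a minimal sketch, not drawn from any cited work), the following code unrolls a vanilla RNN cell, in which the hidden state at each step is computed from the current input and the previous hidden state; all dimensions and weight values are arbitrary.

```python
import numpy as np

# Illustrative vanilla RNN cell: the hidden state h_t is updated from the
# current input x_t and the previous state h_{t-1}, which is what lets the
# network carry information forward through a sequence.

rng = np.random.default_rng(1)
d_in, d_hid = 3, 5
W_xh = rng.normal(scale=0.3, size=(d_hid, d_in))
W_hh = rng.normal(scale=0.3, size=(d_hid, d_hid))
b_h = np.zeros(d_hid)

def rnn_forward(x_seq):
    h = np.zeros(d_hid)                    # initial hidden state
    states = []
    for x_t in x_seq:                      # process the sequence step by step
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)                # (T, d_hid) hidden-state trajectory

x_seq = rng.normal(size=(7, d_in))         # a toy sequence of 7 time steps
print(rnn_forward(x_seq).shape)            # (7, 5)
```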
This ability to retain past information renders RNNs particularly adept at tasks revolving around sequential data, including but not limited to time series prediction, natural language processing, speech recognition, and music generation. Notably, RNNs serve crucial roles in language modeling and text generation, exemplified in machine translation systems where they encode a source sentence into a fixed-length vector representation before decoding it into a target sentence, enabling seamless translation across languages. Similarly, in sentiment analysis within natural language processing, RNNs excel at discerning sentiment in text by analyzing the context of individual words within the broader sentence context. Moreover, RNNs contribute significantly to speech recognition systems by effectively modeling the temporal dependencies present in audio data, thereby accurately transcribing spoken words. In summary, RNNs hold pivotal importance in managing sequential data across diverse domains, owing to their proficiency in capturing temporal dependencies and processing sequential information with precision.
RNNs offer powerful capabilities for sequential data processing, but they come with several challenges and limitations. One significant challenge is the vanishing gradient problem, where the gradients diminish exponentially as they propagate backward in time during training, hindering the learning of long-range dependencies. Additionally, RNNs struggle with capturing long-term dependencies due to their inherent sequential nature, making it difficult to retain information over extended sequences. Moreover, training RNNs can be computationally expensive and time-consuming, especially with large datasets. Another limitation is their difficulty in managing variable-length sequences efficiently, as they typically require fixed-length input and output vectors. Lastly, RNNs are prone to overfitting, particularly when addressing noisy or sparse data, necessitating careful regularization techniques to mitigate this issue. Despite these challenges, advancements like Long Short-Term Memory [51] and Gated Recurrent Unit architectures [52] have been developed to alleviate some of these constraints and improve the effectiveness of RNNs in various tasks.

2.3. Reservoir Computing (RC)

RC is a cutting-edge paradigm in the field of ML, particularly within the domain of RNNs. Unlike traditional RNNs, where the recurrent connections are subject to training, RC employs a fixed, randomly generated recurrent network called the “reservoir.” This reservoir acts as a dynamic memory system that preserves temporal information and captures complex temporal dependencies within sequential data, as shown in Figure 3c. The input signals are injected into the reservoir, where they undergo nonlinear transformations, leading to rich representations of the input data. The key innovation of RC lies in the separation of the training phase from the reservoir dynamics, allowing for simpler and more efficient learning algorithms. During the training phase, only the output weights are adjusted, using simple linear regression or other optimization techniques, enabling rapid training and efficient adaptation to various tasks. RC has shown remarkable performance across a range of applications, including time series prediction, speech recognition, natural language processing, and robotics, making it a promising approach for addressing complex temporal problems in both research and practical applications. Its flexibility, simplicity, and strong performance have propelled it to the forefront of modern ML methodologies, fostering ongoing research and development in the field.
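The separation between a fixed reservoir and a trained linear readout can be sketched in a few lines of code. The example below is a minimal echo-state-style illustration in software, assuming a random recurrent matrix scaled to a spectral radius below one and a ridge-regression readout; it is not a model of any particular photonic reservoir.

```python
import numpy as np

# Minimal reservoir computing sketch: a fixed random recurrent "reservoir"
# expands the input into a high-dimensional state; only the linear readout
# weights W_out are trained (here by ridge regression).

rng = np.random.default_rng(2)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # spectral radius < 1

def run_reservoir(u_seq, leak=0.3):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:                                        # leaky nonlinear update
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next sample of a sine wave from the reservoir state.
t = np.arange(0, 60, 0.1)
u = np.sin(t)
X = run_reservoir(u[:-1])                                  # reservoir states
Y = u[1:]                                                  # one-step-ahead targets
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)  # readout only
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```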
In the work presented by Sakemi et al. [53], a technique was proposed to reduce the reservoir size by incorporating either past states or evolving dynamics directly into the output layer at the current time step. To shed light on the underlying principle of model size reduction, a thorough analysis was conducted leveraging the data processing capability framework proposed by Dambre et al. [54]. Furthermore, the efficacy of these techniques was assessed through rigorous evaluations on time-series forecasting tasks, including the generalized Hénon map and NARMA. Remarkably, the findings demonstrate that the proposed approaches can reduce the reservoir size to as little as one-tenth of the original without significantly increasing the regression error.
RC presents a promising approach to sequential data processing, yet it also confronts several challenges and limitations [55]. One significant challenge is the design and optimization of the reservoir itself, as finding the right architecture and parameters can be highly task-dependent and nontrivial. Additionally, training the readout layer to effectively extract information from the reservoir states requires careful tuning and regularization to prevent overfitting, especially in the presence of noisy or high-dimensional data. Furthermore, RC systems may struggle with capturing long-term dependencies in sequential data, particularly in cases where the underlying dynamics are highly complex or chaotic. Moreover, scalability can be an issue with large-scale reservoirs, as the computational and memory requirements grow proportionally with the size of the reservoir, potentially limiting its applicability to real-world problems [56]. Despite these challenges, ongoing research aims to address these limitations and further enhance the capabilities of RC for a wide range of tasks and applications [57].

2.4. Convolutional Neural Networks (CNNs)

CNNs are a cornerstone in the realm of AI, predominantly in computer vision tasks. These networks are inspired by the structure and operation of the biological visual cortex, leveraging layers of interconnected neurons to process visual information [58]. At the core of CNNs lies the convolution operation, where filters, or kernels, are applied to input data to extract meaningful features [59]. Through a process of convolution, nonlinear activation, pooling, and often repeated layers, CNNs can effectively learn hierarchical representations of features from raw input data. This hierarchical learning enables CNNs to automatically learn and identify patterns, textures, and shapes within images, making them exceptionally powerful for tasks such as image classification, object detection, and image segmentation. CNNs have demonstrated extraordinary performance across various domains, including healthcare, autonomous vehicles, and satellite imagery analysis, continually pushing the boundaries of what is possible in computer vision and pattern recognition [60].
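For reference, the sketch below shows the core sliding-window operation in its simplest form (an illustrative implementation, with an arbitrary edge-detection kernel), followed by a ReLU nonlinearity; practical CNN layers add multiple channels, padding, strides, and learned kernels.

```python
import numpy as np

# Illustrative 2D convolution (technically cross-correlation, as used in most
# CNN frameworks): a small kernel slides over the image, and each output pixel
# is the weighted sum of the local patch under the kernel.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.default_rng(3).random((8, 8))
edge_kernel = np.array([[1.0, 0.0, -1.0],          # a simple vertical-edge detector
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])
feature_map = np.maximum(conv2d(image, edge_kernel), 0.0)   # ReLU activation
print(feature_map.shape)                                    # (6, 6)
```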
One of the pivotal aspects of a CNN model lies in its ability to generalize effectively to unseen data. Overfitting stands out as a prevalent issue within CNN networks, manifesting when the model fits the training dataset well but struggles to generalize to new examples outside its training scope [61]. This phenomenon occurs when the model memorizes training examples without truly learning from them. Mitigating overfitting entails strategies such as expanding the training dataset, employing data augmentation methods, simplifying the architecture, applying regularization methods, and implementing early stopping mechanisms. Furthermore, other challenges in training a CNN model include the occurrence of exploding gradients and class imbalances. Exploding gradients become apparent when the training model fails to learn from the data after a certain number of epochs, resulting in overflow and NaN loss values for the error gradient. This instability in learning can be addressed through measures like redesigning the network architecture, gradient clipping, and selecting suitable activation functions. Class imbalance represents another hurdle, characterized by a significantly nonuniform distribution of sample classes [62]. Addressing this issue during model training has long been a substantial challenge in ML.
Recently, CNNs have garnered noteworthy attention for their impressive advancements in computer vision. Many research endeavors have employed comparative analyses to juxtapose the representational patterns of CNNs and functional magnetic resonance imaging (fMRI) data [63,64]. These explorations revealed similarities, suggesting that the human visual cortex shares hierarchical representations akin to those of CNNs. Consequently, CNN-based encoding models have gained widespread acceptance and demonstrated exceptional performance [65,66]. However, it is vital to recognize that despite the success of CNNs in encoding tasks, the differences in how CNNs and the brain process visual data should not go unnoticed. Precisely predicting brain responses to numerous stimuli remains a noteworthy challenge in neuroscience. Despite recent progress in neural encoding through CNNs in fMRI studies, significant disparities persist between the computational principles of traditional artificial neurons and actual biological neurons. To tackle this challenge, a framework based on spiking CNNs (SCNNs) for neural encoding was proposed, aiming for greater alignment with biological plausibility [67]. This framework exploits unsupervised SCNNs to extract visual features from image stimuli and utilizes a receptive-field-based regression algorithm to forecast fMRI responses from these SCNN features. Encoding models were constructed based on SCNNs using four image-fMRI datasets (Figure 3d). Subsequently, image reconstruction and identification tasks were performed using the pre-trained encoding models (Figure 3e,f). Experimental outcomes on handwritten characters, digits, and natural images validate that the proposed method achieves notably high encoding performance and can be applied to “brain reading” tasks such as image reconstruction and identification. This study suggested that SNNs hold promise as a valuable approach for neural encoding.
Moreover, a groundbreaking photonic matrix architecture leveraging the real part of a nonuniversal N × N unitary Mach–Zehnder interferometer (MZI) mesh to represent a real-valued matrix was proposed by Tian et al. [68]. This innovative approach promises significant advancements, particularly in applications such as PNNs, where it potentially decreases the number of required MZIs to the O(N log₂ N) level while incurring minimal cost to learning capability. In the experimental validation, a 4 × 4 photonic neural chip was successfully realized, and its performance was meticulously assessed in a CNN tasked with handwriting recognition. Remarkably, this 4 × 4 chip demonstrates remarkably low learning-capability loss compared to its conventional counterpart, which relies on O(N²) MZIs. Furthermore, this architecture showcases superior characteristics across various metrics, including optical loss, chip size, power consumption, and encoding error [68].
Figure 3. Structure of (a) an FNN, (b) an RNN, and (c) a typical RC system [53]. (d) Depiction of the encoding model: a 2-layer SCNN extracts visual features from input images, and linear regression models forecast the fMRI response of each voxel. (e) Schematic of the image reconstruction task, targeting the reconstruction of perceived images from brain activity. (f) Illustration of the image identification task, focused on discerning the perceived image based on fMRI responses [67,69,70].

2.5. Spiking Neural Networks (SNNs)

SNNs represent a novel class of ANNs inspired by the spiking behavior of biological neurons in the brain. Unlike traditional ANNs that rely on continuous-valued activations, SNNs communicate through discrete, asynchronous spikes or pulses of activity [17,71]. This spike-based communication enables SNNs to better emulate the dynamics of biological neural systems and potentially achieve higher efficiency in terms of computational resources and energy consumption. In SNNs, neurons integrate incoming spike signals over time, and once a certain threshold is reached, they emit a spike, propagating information to downstream neurons. This temporal aspect of communication allows SNNs to encode data in the specific timing of spikes, enabling them to capture complex temporal patterns and process information more efficiently. SNNs have garnered significant interest due to their potential for low-power neuromorphic hardware implementations and their ability to model dynamic spatiotemporal computations, such as sensory processing and event-based vision [72,73]. Despite facing challenges in training and computational complexity, ongoing research into SNNs continues to advance our understanding of neural computation and holds promise for achieving brain-like intelligence in artificial systems [74].
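The integrate-and-fire behavior described above is commonly abstracted as a leaky integrate-and-fire (LIF) neuron. The sketch below is a minimal software illustration with arbitrary constants, not a model of any specific neuromorphic or photonic device.

```python
import numpy as np

# Illustrative leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates the incoming current, decays ("leaks") over time, and emits a
# spike whenever it crosses the threshold, after which it is reset.

def lif_neuron(input_current, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    v, spikes, trace = v_reset, [], []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)        # leaky integration of the input
        if v >= v_th:                     # threshold crossing -> spike
            spikes.append(1)
            v = v_reset                   # reset after the spike
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

rng = np.random.default_rng(4)
current = 2.5 * rng.random(200)           # noisy input current over 200 time steps
spikes, trace = lif_neuron(current)
print("spike count:", spikes.sum())
```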
Developing software for a neuromorphic computer often involves designing an SNN tailored for deployment on such hardware. SNNs draw substantial inspiration from biological neural systems, incorporating temporal dynamics into their computation. Within most neuromorphic computers, neurons and synapses in SNNs exhibit time-dependent behaviors. For instance, spiking neurons may gradually lose charge over time according to specific time constants, while SNN elements like neurons and synapses may introduce time delays. The process of crafting algorithms for neuromorphic systems often revolves around defining an SNN suitable for a given application. Various algorithmic strategies exist within neuromorphic computing, broadly categorized into two types: (1) algorithms focused on training or learning an SNN for deployment on a neuromorphic platform (see Figure 4), and (2) non-ML algorithms where SNNs are manually built to address specific tasks. It is important to clarify that in this context, training and learning algorithms refer to techniques for adjusting SNN parameters, typically synaptic weights, to tackle a particular problem.
Backpropagation and stochastic gradient descent have demonstrated remarkable efficacy in the realm of deep learning. Nonetheless, these methodologies do not directly translate to SNNs due to the nondifferentiable nature of many spiking neuron activation functions, which often employ threshold functions. Moreover, the temporal processing aspect of SNNs poses additional challenges in training and learning within these frameworks. Algorithms that excel in deep learning applications require adaptation to operate effectively with SNNs (refer to Figure 4a), with such adjustments potentially compromising the precision of the SNN relative to a comparable ANN [75].
A groundbreaking integrated end-to-end photonic deep neural network (PDNN) designed for sub-nanosecond image classification was presented in [30]. This innovative system operates by directly processing optical waves on an on-chip pixel array as they traverse through layers of neurons. Within each neuron, linear computation occurs optically, while the nonlinear activation function is implemented opto-electronically, resulting in an impressive classification time of under 570 ps, equivalent to a single clock cycle of contemporary digital platforms. The utilization of a homogeneously distributed supply light ensures a consistent per-neuron optical output range, permitting seamless scalability to large-scale PDNNs. Demonstrating remarkable accuracy, the PDNN achieves two-class and four-class classification of handwritten letters with accuracies exceeding 93.8% and 89.8%, respectively [30]. By directly processing optical data without the need for analog-to-digital conversion or large memory modules, this approach promises faster and more energy-efficient neural networks, shaping the next generation of deep learning systems.
Given the established training mechanisms of deep neural networks (DNNs), many efforts towards deploying a neuromorphic solution begin by training a DNN and subsequently converting it to an SNN for inference purposes (see Figure 4b). These approaches have generally yielded performance close to the state of the art, offering significant energy savings by utilizing accumulate operations instead of the multiply-and-accumulate operations commonly found in DNNs, particularly on datasets like MNIST, CIFAR-10, and ImageNet. Initial conversion techniques often involve weight or activation normalization alongside the use of average pooling instead of max pooling. Some approaches also involve training DNNs under constraints to iteratively shape the neuron’s activation function to resemble that of a spiking neuron. Stockl and colleagues introduced a novel mapping strategy utilizing the Few Spikes neuron model (FS-neuron), capable of temporally representing complex activation functions with at most two spikes. Their method demonstrated near-DNN accuracy on standard image classification datasets while requiring significantly fewer time steps per inference than previously established conversion strategies [76]. Numerous applications showcased on neuromorphic hardware have leveraged the various mapping techniques discussed above. Tasks such as keyword spotting, medical image analysis, and object detection have been proficiently executed on existing platforms such as Intel’s Loihi and IBM’s TrueNorth [77,78].
RC, also known as liquid state machines (refer to Figure 4c), is another prominent algorithm utilized in SNNs. In this approach, a sparse recurrent SNN serves as the “reservoir” or “liquid.” The reservoir, typically randomly configured, must exhibit two critical properties: input separability, ensuring distinct inputs yield distinct outputs, and fading memory, ensuring signals eventually dissipate rather than endlessly propagate through the reservoir. In addition to the untrained reservoir, RC involves a readout mechanism, often implemented as linear regression, which is trained to interpret the reservoir’s output. The main benefit of RC is its elimination of the need to directly train the SNN module. RC in SNNs utilizes sparse and recurrent connections, along with synaptic delays within networks of spiking neurons, to map input into a higher dimensional space, both spatially and temporally. Numerous demonstrations of spike-based RC underscored its effectiveness in processing temporally varying signals. This computing framework comes in various forms, ranging from basic reservoir networks employed in bio-signal processing and prosthetic control applications to more complex architectures, such as hierarchical layers of liquid-state machines. These interconnected layers, often combined with supervised-mode-trained layers, are utilized for tasks involving video and audio signal processing.
Evolutionary strategies for training or crafting SNNs (refer to Figure 4d) were also employed. Within an evolutionary algorithm, an initial population is established by generating a random array of potential solutions. Each member of this group is assessed and assigned a score, influencing the selection process (favoring superior performers) and reproduction to yield a fresh population. In the domain of SNNs for neuromorphic computing, evolutionary methods can govern various parameters, including neuron thresholds or synaptic delays, as well as the network’s architecture, such as neuron quantity and synaptic connections. These strategies are attractive due to their lack of reliance on activation function differentiability and network structure constraints (e.g., feed-forward vs. recurrent). They also provide the flexibility to evolve both network structure and parameters. However, this adaptability comes with a drawback; evolutionary approaches typically converge more slowly compared to other training techniques. Evolutionary methodologies primarily excelled in control scenarios like video games and autonomous robot navigation.
Numerous neurobiological investigations elucidated the dynamic regulation of synaptic strength driven by the activity of interconnected neurons. This phenomenon was proposed as a fundamental mechanism underlying learning across a spectrum of tasks. Central to this concept is Spike-timing-dependent plasticity (STDP), a pivotal mechanism in the realm of neuromorphic research. STDP operates by fine-tuning synaptic weights according to the precise temporal relationship between spikes from pre- and post-synaptic neurons (See Figure 4e). It stands as one of the most widely utilized synaptic plasticity mechanisms in the burgeoning field of neuromorphic computing, showcasing its significance in mimicking biological learning processes [79].
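A widely used mathematical form of this idea is the pair-based exponential STDP window, in which the weight change decays exponentially with the time difference between pre- and post-synaptic spikes. The sketch below is a generic illustration with arbitrary constants and spike times, not the specific hardware-friendly variant of [80] mentioned below.

```python
import numpy as np

# Illustrative pair-based STDP: the weight change depends exponentially on the
# spike-time difference dt = t_post - t_pre. Pre-before-post (dt > 0) leads to
# potentiation, post-before-pre (dt < 0) to depression.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)     # long-term potentiation
    else:
        return -a_minus * np.exp(dt / tau_minus)   # long-term depression

w = 0.5
pre_spikes = [10.0, 40.0, 70.0]                    # spike times in ms (illustrative)
post_spikes = [12.0, 38.0, 75.0]
for t_pre, t_post in zip(pre_spikes, post_spikes):
    w = np.clip(w + stdp_dw(t_post - t_pre), 0.0, 1.0)   # keep the weight bounded
print("updated weight:", w)
```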
Numerous mathematical models of STDP have been assessed using datasets such as MNIST, CIFAR-10, and ImageNet. Shrestha et al. proposed a hardware-friendly variant of the exponential STDP rule, although it demonstrated inferior performance in classifying MNIST data compared to the optimal outcomes achieved with SNNs [80]. STDP-inspired principles have displayed promise in mimicking diverse ML methodologies such as clustering and Bayesian inference. In applications involving brain–machine interfaces, STDP functions as a clustering mechanism, acting as a spike sorter. Moreover, combinations of spiking reservoirs and STDP have been incorporated into a framework termed NeuCube, which has been utilized in tasks such as detecting sleep states, controlling prosthetics, and processing signals from electroencephalograms and functional magnetic resonance imaging [81].
Figure 4. Various training approaches exist for SNNs: (a) One approach involves directly training the SNN using spike-based quasi-backpropagation, as illustrated by the network structure depicted [79]. (b) Alternatively, traditional ANNs can be trained first and then mapped into SNNs [79]. (c) Reservoir computing offers another solution, comprising an input layer, a reservoir, and a readout layer in its typical structure [79]. (d) An evolutionary approach involves the gradual evolution of SNN structures and parameters over time [79]. (e) Spike-timing-dependent plasticity is characterized by adjusting synaptic weights (Δw) based on the relative spike timings between pre- and post-synaptic neurons [79].

2.6. Photonic Ising Machines (PIMs)

PIMs represent a revolutionary approach to solving complex optimization problems by leveraging principles from statistical physics and optical computing. Inspired by the Ising model from physics, which describes interactions between spins in a lattice, PIMs utilize networks of interconnected optical components to simulate the behavior of these spins [4]. In PIMs, optical signals represent the spins, and the interactions between them are encoded in the physical properties of light, such as phase or intensity. By exploiting the inherent parallelism and massive computational capacity of light, PIMs can efficiently explore large solution spaces and find optimal configurations for a wide range of optimization tasks [82]. Moreover, PIMs offer benefits in terms of energy efficiency and scalability compared to conventional electronic computing systems. They hold promise for solving combinatorial optimization problems, such as the traveling salesman problem, protein folding, and data clustering, with unprecedented speed and accuracy [82]. While PIMs are still in the early phases of development, ongoing research and advancements in photonic technologies are driving the realization of practical PIM-based systems, paving the way for transformative applications in various domains, including AI, logistics, finance, and materials science [83].
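To make the underlying optimization concrete, the sketch below minimizes a small random Ising Hamiltonian, H(s) = -(1/2) Σᵢⱼ Jᵢⱼ sᵢ sⱼ, by simulated annealing in software; a PIM performs an analogous search physically, with spins and couplings encoded in optical degrees of freedom. The problem size, couplings, and annealing schedule here are arbitrary.

```python
import numpy as np

# Software sketch of the optimization a photonic Ising machine targets:
# find the spin configuration s in {-1, +1}^N that minimizes the Ising
# Hamiltonian H(s) = -(1/2) * sum_ij J_ij s_i s_j, here by simulated annealing.

rng = np.random.default_rng(5)
N = 20
J = rng.normal(size=(N, N))
J = (J + J.T) / 2.0                               # symmetric couplings
np.fill_diagonal(J, 0.0)                          # no self-coupling

def energy(s):
    return -0.5 * s @ J @ s                       # factor 1/2 avoids double counting

s = rng.choice([-1, 1], size=N)
T = 5.0
for step in range(20000):
    i = rng.integers(N)
    dE = 2.0 * s[i] * (J[i] @ s)                  # energy change of flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]                              # accept the flip
    T = max(0.01, T * 0.9995)                     # cool down gradually
print("final Ising energy:", energy(s))
```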
Ising machines leverage diverse physical systems to efficiently tackle combinatorial optimization problems. Central to their effectiveness is the adaptability of the spin–spin interaction parameter within the Ising model. Choosing an appropriate physical system is crucial for practical machine development. Quantum mechanical phenomena, such as those found in superconducting circuits [84] and trapped ions [85], exploit quantum annealing [86] based on quantum fluctuation. Semiconductor integrated circuits, including CMOS annealing machines [87] and digital annealers [88], emulate simulated annealing (SA). Photonics-based approaches stand out as highly promising for handling large-scale problems due to light’s inherent capabilities for parallel and high-speed processing, along with system robustness. A notable example is the coherent Ising machine [89], employing optical pulses generated by degenerate optical parametric oscillators to implement pseudo-spins [90]. Another innovation is the integrated nanophotonic recurrent Ising sampler, utilizing coherent optical amplitudes for pseudo-spin representation.
A highly promising method for large-scale light control is spatial light modulation, commonly employed in computing, harnessing light’s parallel propagation traits. An exciting application of this optical technique is the spatial–photonic Ising machine (SPIM) [91], where spins are represented by modulating light waves utilizing a spatial light modulator (SLM). Spin–spin interactions are realized by overlapping light waves through free-space propagation. In comparison to alternative physical implementations, SPIMs offer a simpler configuration and exceptional scalability in spin handling, leveraging light’s parallel propagation based on Fourier optics. These attributes garnered significant attention for SPIMs, leading to the exploration of numerous enhancement avenues. Various approaches, including annealing methods [92], spin encoding techniques [93], interaction models utilizing the transmission matrix of scattering mediums [94], and those exploiting nonlinear optical effects [95], were proposed to advance SPIM capabilities.
Sakabe et al. introduced a novel approach, the space-division multiplexed SPIM (SDM-SPIM), offering a versatile system configuration for optically computing the sum of multi-component Hamiltonians while retaining high flexibility in the interaction matrix [96]. The concept of the SDM-SPIM is described in Figure 5a. In the SDM scheme, the beams of each component are controlled autonomously to regulate specific optical intensities, enabling the simultaneous physical multiplication of weight coefficients. Consequently, the sum of the Ising Hamiltonians of all components is derived by superimposing these beams. Moreover, the SDM-SPIM facilitates the physical tuning of optical parameters, including weight coefficients associated with problem constraint conditions, allowing for dynamic optimization processes. This research aimed to validate the technique and its capabilities through physical parameter tuning, realized by implementing an SPIM with spatial-division multiplexing. A prototype was demonstrated and applied to knapsack problems, a type of combinatorial optimization problem featuring constraint terms. Additionally, the influence of physical parameters on this method’s search characteristics was analyzed, and techniques to enhance search performance within the SDM-SPIM framework were explored.
Figure 5b,c show histograms of the total weight and total value of the obtained solutions. These findings unmistakably demonstrate the attainment of the best possible solution, with a total value of 95. Throughout the experimental demonstration, the final solution was identified by selecting the sample with the highest value under the weight constraint from all explored samples. Figure 5d shows the distributions of samples obtained throughout the iterations. The horizontal axis represents the total weight of each sample, while the vertical axis represents its total value. Notably, as the iterations progress, the search area gradually converges towards a region surrounding the optimal solution. In Figure 5e, the evolution of the Ising Hamiltonian over the iterations is illustrated. After the iteration process, as evident in Figure 5d,e, the near-ground state of the Ising Hamiltonian attained in the experiment agrees with samples close to the near-optimal solution [96].

2.7. Optoelectronic Neural Networks (ONNs)

ONNs represent a convergence of optical and electronic technologies to create powerful and efficient computing systems inspired by the brain’s neural networks [97]. By integrating optical components, such as lasers, photodetectors, and waveguides, with electronic components like transistors and resistors, ONNs harness the strengths of both domains. Optical signals, which travel at the speed of light, enable parallel processing and high-bandwidth communication between neurons, while electronic components provide precise control and computation capabilities. This hybrid architecture allows ONNs to achieve ultra-fast processing speeds and energy-efficient operation, making them well-suited for tasks requiring large-scale parallelism and complex computations, such as pattern recognition, deep learning, and neuromorphic computing. Moreover, ONNs hold promise for addressing challenges in conventional electronic computing systems, including power consumption constraints and interconnect bottlenecks [98]. Ongoing research in optoelectronic materials and device integration is driving the development of increasingly sophisticated and scalable ONN architectures, with potential applications spanning diverse fields such as AI, biomedical engineering, and communication networks. As these technologies continue to advance, ONNs are poised to play a transformative role in shaping the future of computing and information processing [99,100].
ONNs represent a promising frontier in AI computing, harnessing parallelization, power efficiency, and speed for advanced applications. Among these, diffractive neural networks stand out, leveraging encoded light transmitted through trained optical modules. Despite their appeal, scaling up diffractive networks encounters obstacles attributable to the computational and memory demands of optical diffraction modeling. To tackle these obstacles, a dual-neuron optical artificial learning framework called DANTE was proposed [101]. In DANTE, optical neurons manage the complexities of optical diffraction, while artificial neurons efficiently approximate the demanding optical diffraction computations using lightweight functions. What distinguishes DANTE is its novel convergence strategy, which merges iterative global artificial-learning steps with local optical-learning steps. Through rigorous simulation experiments, DANTE achieves unprecedented results, successfully training large-scale ONNs with 150 million neurons on ImageNet, a milestone previously thought infeasible. Furthermore, DANTE notably accelerates training on the CIFAR-10 benchmark compared to traditional single-neuron learning approaches. In real-world validation, a two-layer ONN system built on DANTE demonstrates its ability to successfully extract features and improve the classification accuracy of natural images. This empirical validation underscores the practical value of DANTE and highlights its potential to propel advancements in ONN technology.
A tailor-made ONN system was devised, harnessing off-the-shelf optical modulation devices to confirm the practical viability of DANTE (illustrated in Figure 6a,b) [101]. This system streamlines the incorporation of optical computing functionalities via a dedicated optical modulation layer. Input signals were modulated through SLM-1, while network parameters were modulated via SLM-2, with the computing results captured by a CMOS sensor. Additionally, the performance of ONNs was scrutinized using benchmark datasets, including MNIST, CIFAR-10, and ImageNet. The implemented two-layer ONN architecture (depicted in Figure 6c) comprised a foundational layer with a single optical modulation layer and a subsequent layer with multiple parallel optical modulation layers. The outputs of the second layer were directed to the readout layer to predict the final results [101]. In Figure 6d, the outputs for MNIST sample 7 are shown. The optical intensity maps captured by the sensor closely resemble the simulated outcomes. However, there were discrepancies, primarily attributable to imperfect coherent wavefronts and assembly errors in the optical modulation devices. To mitigate these errors, the FC layer in the readout layer was re-tuned. In Figure 6e, outputs for the ImageNet-32 dataset, specifically a leopard image, are presented. The differences between the simulation and optical results were more pronounced because of the image’s complexity. Nevertheless, similar optical intensity distributions were observed. The optical results appear blurrier, again attributed to assembly errors and system noise.
Figure 6f presents quantitative analysis results for DANTE. When applied to the relatively simple, binary-like MNIST dataset, DANTE achieves around 96% accuracy, about 2% lower than the full-simulation results. The training process involves a global artificial-learning stage that converges in 60 epochs, taking ~2 min 15 s, and a local optical-learning stage requiring around 4 min 40 s to optimize two phase masks. Retuning the FC layer adds about half a minute, resulting in a total training time of approximately 7 min 25 s. This represents a significant acceleration compared to prevailing single-neuron learning approaches such as the DPU, which takes over 5 h for MNIST benchmark training. Looking ahead, integrating the physical ONN system with high-accuracy nanofabrication methods holds promise for significantly enhancing its computational capabilities [101].

3. Navigating the Landscape of PNNs: Challenges and Prospects

PNNs represent a promising frontier in computing, leveraging the unique properties of light to potentially revolutionize traditional computing architectures [102,103,104]. However, they also face significant challenges that need to be addressed for widespread adoption [105]. One major obstacle is the difficulty of integrating optical components with existing electronic systems, requiring complex and costly hybrid setups. Moreover, the scalability of PNNs remains a challenge, with issues arising from the need for the precise alignment of optical components and limitations in the number of neurons that can be interconnected efficiently [106]. Additionally, noise and signal degradation in optical systems pose significant hurdles to achieving high accuracy and reliability in computation. Despite these challenges, the prospects of PNNs are bright. In our opinion, the advancements in materials science, particularly the creation of innovative photonic materials and nanophotonic devices, offer a hopeful solution to existing constraints, paving the way for more effective and expandable PNNs [107]. Additionally, the natural parallelism and rapidity of optical processing present the opportunity for significant leaps in computational efficiency, especially in areas like pattern recognition and extensive optimization tasks. As ongoing research drives the evolution of optical computing, PNNs stand poised to emerge as a fundamental technology in future computing systems, introducing fresh possibilities and applications across diverse fields [2,108].
The distinctive challenges associated with training PNNs, encompassing the nondifferentiability of numerous spiking neuron activation functions and the temporal processing aspects of SNNs, are indeed notable. These challenges hinder the straightforward application of conventional optimization techniques used in traditional neural networks. However, researchers are actively devising strategies to address these training hurdles. One approach involves developing specialized optimization algorithms tailored for PNN architectures, considering the unique characteristics of photonic devices and spiking neurons [109]. Additionally, advancements in hardware technologies are being explored to enhance the training efficiency of PNNs, such as the integration of neuromorphic computing elements and photonic components optimized for neural network operations [79]. Furthermore, novel approaches that hold promise for streamlining the training process and augmenting the learning capabilities of PNNs are on the horizon. These include the exploration of unsupervised learning methods, reinforcement learning techniques, and leveraging quantum-inspired optimization algorithms to overcome the challenges associated with training PNNs [110]. Overall, ongoing research efforts aim to overcome the inherent training complexities of PNNs and unlock their full potential for a wide range of applications.
Within optical systems, a multitude of factors contribute to performance degradation, particularly notable within PNNs. Thermal noise, generated by random thermal motion within optical components, introduces fluctuations in signal intensity, while shot noise, a consequence of the discrete nature of light, adds inherent randomness to photon arrival times. Additionally, signal attenuation arises from scattering, absorption, and modal dispersion, collectively reducing signal strength and impairing transmission quality over distances. To address these challenges within PNNs, various strategies were proposed. These encompass advanced error correction codes customized for optical communication systems, adaptive signal processing algorithms adept at discerning noise from genuine signals, and innovative optical amplifier designs aimed at amplifying signal strength while mitigating noise interference [111,112,113]. Furthermore, ongoing research explores techniques such as optical phase modulation and dispersion compensation to counteract signal attenuation and distortion, thereby enhancing the resilience and efficacy of optical systems within neural network frameworks [114,115].
The realm of PNN architecture is expansive, encompassing a rich tapestry of techniques and devices utilized to manifest these structures. Within this domain, our exploration unveiled a diverse array of architectures, each leveraging distinct methodologies. Our examination organizes the extensive literature into distinct categories: resonator-based operations, interferometer-based operations, diffractive optics-based operations, and optical amplification/lasing-based operations. Each PNN architecture possesses its unique set of advantages and limitations, and the appropriateness of a specific architecture hinges upon the application context. Resonators, for instance, can store and manipulate light efficiently, facilitating computations. Moreover, their nonlinear behavior is advantageous for implementing activation functions in neural networks. However, resonators are vulnerable to environmental factors like temperature fluctuations and mechanical vibrations, which can adversely impact their performance. Furthermore, the fabrication of high-quality resonators can be both challenging and costly.
Interferometers, on the other hand, excel in manipulating the phase and amplitude of optical signals, thereby enabling complex computations. Their inherent parallel processing capability allows for the simultaneous handling of multiple inputs. Nonetheless, interferometers are susceptible to phase variations and necessitate precise alignment for optimal functionality. Moreover, coherence and stability issues may arise in practical implementations.
Diffractive optics offer the capacity to execute intricate mathematical operations using diffraction patterns. Their parallel computing capability and scalability are notable advantages. However, the introduction of noise and aberrations by diffractive elements can compromise computational accuracy. Additionally, achieving high precision in the fabrication of diffractive optical elements poses a significant challenge.
Optical amplification and lasing mechanisms play a crucial role in amplifying optical signals, facilitating long-distance communication and high-speed processing. Their ability to deliver high gain with low noise characteristics is advantageous. Nonetheless, ensuring stability and preventing instabilities such as mode hopping and noise requires sophisticated control mechanisms. Moreover, the fabrication and integration of optical amplifiers/lasers can be complex and expensive.
In selecting the most suitable PNN architecture for a specific application, various factors must be considered. These include the nature of computations required (e.g., linear vs. nonlinear operations), scalability to handle large-scale neural networks and datasets, robustness against environmental factors and noise, ease of integration with existing photonic or electronic systems, and cost considerations encompassing fabrication, operation, and maintenance.
To advance the implementation of PNNs beyond the existing bulky prototypes built on optical tables, several promising directions can be explored for denser integration. One key approach is the development of integrated photonic circuits, where photonic components are miniaturized and integrated onto a single chip or substrate. This integration could involve leveraging technologies like silicon photonics or photonic integrated circuits (PICs), which enable the creation of complex optical systems on a small footprint [2]. Another direction involves exploring novel materials and structures that can manipulate light at smaller scales, such as metasurfaces or nanophotonic devices. These technologies could allow for the realization of compact and efficient photonic components tailored for neural network applications [116]. Additionally, exploring advanced packaging techniques that facilitate dense stacking and the interconnection of optical elements could lead to more compact and portable PNN implementations. Furthermore, investigating new architectures that optimize the use of light for neural network computations, such as employing reconfigurable photonic networks or hybrid photonic–electronic systems, holds promise for denser and more practical PNN designs. By focusing on these directions, researchers can pave the way for the realization of much denser and scalable PNNs, enabling their integration into a wide range of applications, including efficient deep learning, optical computing, and brain-inspired computing paradigms.
Ultimately, the selection process necessitates a thorough trade-off analysis of these factors to pinpoint the architecture that best aligns with the requirements of the target application. Furthermore, experimental validation and performance evaluation play a pivotal role in assessing the suitability of a particular PNN architecture in practical implementations. For a comprehensive overview, we present a synthesis of the covered architectures in Table 1.
Furthermore, let us contrast two specific ANN setups by examining their respective mathematical operations; this comparison allows us to gauge the advantages gained from an optical implementation. The first configuration is built around the 4-f Fourier-correlator arrangement (refer to Figure 7), representing an optical CNN with a single layer denoted H(u,v) [131]. The layer H(u,v) is defined through the Fourier transform of a kernel taken from the convolutional layer of a standard CNN. It can be implemented either as a diffractive optical element (offering damage resistance) or as an SLM (offering dynamic, reconfigurable behavior). The mathematical equivalent of the Fourier-correlator framework comprises two Fourier transformations:
$$ W_f(u,v) = \frac{1}{\lambda f} \int_{R^2} w(x,y) \exp\left[ -\frac{i 2\pi}{\lambda f} \left( xu + yv \right) \right] dx\, dy, \qquad (1) $$
where f is the focal length of the lens and λ is the wavelength of the optical radiation, and it also includes the spatial filtering operation:
$$ G(u,v) = W_f(u,v)\, H(u,v). \qquad (2) $$
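A minimal numerical sketch of the discrete analog of Equations (1) and (2) is given below (ideal, noise-free, and monochromatic by assumption): the first FFT plays the role of the first lens, the elementwise multiplication by H(u,v) models the filter plane, and the inverse FFT models the second lens (optically, the second lens performs another forward transform, which differs only by a coordinate inversion). The kernel here is random and merely stands in for one trained convolutional layer.

```python
import numpy as np

N = 256
rng = np.random.default_rng(2)
w = rng.random((N, N))                  # input field amplitude w(x, y)
kernel = rng.normal(size=(9, 9))        # stand-in for a trained 9x9 CNN kernel

# H(u, v): Fourier transform of the kernel embedded in the full aperture,
# as it would be encoded on a DOE or an SLM in the filter plane.
h = np.zeros((N, N))
h[:9, :9] = kernel
H = np.fft.fft2(h)

W = np.fft.fft2(w)                      # first lens: discrete form of Eq. (1)
G = W * H                               # filter plane: Eq. (2)
g = np.fft.ifft2(G)                     # second lens (modeled as an inverse FFT)
print(f"peak correlator output: {np.abs(g).max():.3f}")
```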
The second configuration (refer to Figure 8) involves a sequence of diffractive optical elements that form pre-trained diffraction layers within a deep diffraction neural network (DDNN) [132]. The mathematical correspondence was established by iteratively applying two operations across successive layers. Specifically, at the pth layer, we perform field propagation over a defined distance using the Rayleigh–Sommerfeld diffraction formula [133,134]:
$$ w_p(u,v) = \frac{d_p}{2\pi} \int_{R^2} \hat{w}_{p-1}(x,y)\, \frac{\exp\left( i 2\pi R_p / \lambda \right)}{R_p^2} \left( \frac{i 2\pi}{\lambda} - \frac{1}{R_p} \right) dx\, dy, \qquad (3) $$
where $R_p = \sqrt{(x-u)^2 + (y-v)^2 + d_p^2}$; this propagation step is followed by a multiplication of $w_p(u,v)$ by the complex values $T_p(u,v)$ of the corresponding pre-trained diffraction layer:
$$ \hat{w}_p(u,v) = w_p(u,v)\, T_p(u,v). \qquad (4) $$
Note that, in the paraxial approximation, the Fresnel–Kirchhoff integral can be used instead of Equation (3):
$$ w_p(u,v) = \frac{i \exp\left( i 2\pi d_p / \lambda \right)}{\lambda d_p} \int_{R^2} \hat{w}_{p-1}(x,y) \exp\left[ \frac{i\pi}{\lambda d_p} \left( (x-u)^2 + (y-v)^2 \right) \right] dx\, dy. \qquad (5) $$
The use of Equation (5) makes it possible to apply fast calculation algorithms [135].
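As an illustration of such an FFT-based route, the sketch below propagates a field through a few phase-only layers using the Fresnel transfer function in the Fourier domain (an FFT-based evaluation of the Fresnel step, cf. Equation (5), up to a constant phase) and then applies the layer transmissions T_p, mirroring the alternation of Equations (3)/(5) and (4). The masks are random here, whereas in a real DDNN they are the result of training; the wavelength, pixel pitch, and distances are arbitrary assumptions.

```python
import numpy as np

N, pitch, lam = 512, 8e-6, 532e-9          # grid size, pixel pitch (m), wavelength (m)
fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)

def fresnel_propagate(field, d):
    """Propagate a sampled field over distance d (m) using the Fresnel
    transfer function evaluated in the Fourier domain."""
    Hd = np.exp(1j * 2 * np.pi * d / lam) * \
         np.exp(-1j * np.pi * lam * d * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * Hd)

rng = np.random.default_rng(3)
# T_p: phase-only masks (random here; trained in a real DDNN)
layers = [np.exp(1j * 2 * np.pi * rng.random((N, N))) for _ in range(3)]

field = np.ones((N, N), dtype=complex)     # plane-wave illumination of the input
for T in layers:
    field = fresnel_propagate(field, d=0.05)   # free-space step of 5 cm, Eq. (3)/(5)
    field = field * T                          # pre-trained diffraction layer, Eq. (4)
intensity = np.abs(fresnel_propagate(field, d=0.05))**2   # detector plane
```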
Various composite and hybrid versions of the systems discussed above are possible [136], for example, as in Figure 9.
Considering that the operation of many PNNs is assumed to be closely coupled with a classical computer through input–output devices in the form of SLMs and light-sensitive matrices, it is possible to make a rough estimate of the computing speed of such devices. When making this estimate, it should be kept in mind that no floating-point operations (FLOPs) are performed in such systems. I/O devices generate a signal with a bit depth of 1 to 16 bits (most often 8 bits). Therefore, we can speak either of the number of bits per second or of operations per second (Ops).
Let us perform a comparative estimation of the speed and efficiency of calculations for PNNs using the two ANN configuration examples above (Figure 7 and Figure 8).
When estimating speed, we can assume that the optical system performs the same mathematical transformations that are used when simulating its operation on a computer. In particular, for the Fourier-correlator scheme (Figure 7), the main computational cost comes from performing two Fourier transforms.
The computational complexity of the discrete Fourier transform (the discrete analog of expression (1)) without the use of fast algorithms is proportional to N⁴, where N is the linear dimension of the transformed (N × N) array. One can then write a simple formula to estimate the speed of calculations:
V = 2 × N⁴ × r/t,  (6)
where N is the size of the camera matrix, t is the camera exposure time, and r is the camera bit depth. The speed V, in this case, is expressed in bits per second. Thus, if an SLM with 1024 × 1024 pixels operating at a frequency of 30 Hz is used together with a camera bit depth of 8 bits (which approximately corresponds to the physical experiment in [101]), the speed is
V = 2.6 × 10¹⁵ bits/s ≈ 3.2 × 10¹⁴ bytes/s = 3.2 × 10¹⁴ Ops = 320 TOps.  (7)
The authors of [101] believe that the speed calculation should instead be based on the fact that the Fourier transform can be computed using the fast Fourier transform (FFT), and they obtain an estimate that is several orders of magnitude lower: V = 9 × 10⁸ Ops. However, they point out that the TOps metric may not always accurately reflect actual performance. For example, running two FFTs and one elementwise multiplication on an RTX 3090 GPU takes approximately 0.9 ms, which is slower than their PNN prototype (0.7 ms). At the same time, the performance of the RTX 3090 is, according to the manufacturer, about 30 TFLOPs.
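The digital workload used in this comparison is easy to reproduce; the short script below times the same two-FFT-plus-elementwise-multiply pass in NumPy on a CPU. The timing will of course differ from the GPU figure quoted above; the sketch is only meant to show how such a per-frame cost is measured.

```python
import time
import numpy as np

N = 1024
a = np.random.random((N, N)).astype(np.complex64)   # one camera/SLM frame
H = np.random.random((N, N)).astype(np.complex64)   # filter H(u, v)

start = time.perf_counter()
for _ in range(100):
    np.fft.ifft2(np.fft.fft2(a) * H)                # the whole 4-f pass, done digitally
elapsed_ms = (time.perf_counter() - start) / 100 * 1e3
print(f"two FFTs + elementwise multiply: {elapsed_ms:.2f} ms per frame on this CPU")
```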
In the second scenario, utilizing a PNN built upon a sequence of DOEs (see Figure 8), the estimate becomes even more significant because no standard fast algorithms exist for computing the Rayleigh–Sommerfeld or Kirchhoff integrals. Within the discrete analogs of expressions (3) and (5), the most substantial computational cost lies in evaluating the complex exponential. Typically, computing an exponential requires approximately two orders of magnitude more operations than the simplest binary operation. Therefore, it can be inferred that, when computing the Rayleigh–Sommerfeld integral, the number of basic operations will be approximately 100 × N⁴.
If an SLM with a resolution of 2048 × 2048 pixels and a frame rate of 60 Hz is used as each diffraction element, the number of simple operations performed on one layer in 1 s will be
100 × 2048⁴ × 60 Hz ≈ 10¹⁷ bits/s ≈ 1.2 × 10¹⁶ bytes/s = 1.2 × 10¹⁶ Ops = 1.2 × 10⁴ TOps.  (8)
The value (8) is multiplied by the number of layers. It is also straightforward to estimate the energy efficiency of calculations for a PNN. In particular, an energy efficiency of 0.02 TOps/W is indicated in [101] for an optoelectronic circuit power of 65 W. However, considering the above calculations, when the PNN is trained using the Kirchhoff integral, this estimate can be increased significantly, up to 100 TOps/W.
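For completeness, the order-of-magnitude estimate of Equation (8) can be reproduced in a few lines; the parameters below are exactly those assumed in the text above.

```python
N, frame_hz, ops_per_exp = 2048, 60, 100        # SLM resolution, frame rate, cost of one exponential
bits_per_s = ops_per_exp * N**4 * frame_hz      # per-layer estimate, Eq. (8)
print(f"{bits_per_s:.1e} bit/s ≈ {bits_per_s / 8:.1e} Ops "
      f"≈ {bits_per_s / 8 / 1e12:.1e} TOps per layer")
```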

4. Discussion on PNNs and Concluding Remarks

The escalating demand for processing vast amounts of data at ever-increasing speeds has prompted a critical need to surpass the limitations posed by the traditional von Neumann architecture. Innovations in neural network training and testing are imperative, driving the exploration of novel structures capable of accommodating this demand efficiently [137,138]. The surge in research interest is evidenced by the proliferation of publications and patents worldwide, as illustrated in Figure 10. Optical processors, renowned for their exceptional speed and minimal power consumption, have emerged as a frontrunner in this pursuit, spurred by rapid advancements in dedicated hardware for PNNs. Leveraging the inherent advantages of optics, such as high-speed parallel computing and low energy consumption, optical neural networks exhibit tremendous potential [139]. The trajectory of development in optical neuromorphic computing seems inexorable, with significant strides being made in research and experimentation [103]. Although still in its nascent stages, photonic neuromorphic computing has witnessed the emergence of diverse optimization solutions, indicative of a promising path forward.
PNNs offer significant advantages over traditional analog neural networks, making them a compelling focus for research and development. One primary benefit is their potential for ultra-fast processing speeds, driven by the high propagation velocity of light compared to electrical signals. This speed advantage can substantially decrease inference times and facilitate the real-time processing of complex data. Additionally, photonic systems inherently support massive parallelism, as light waves can be processed simultaneously across multiple channels, aligning well with the parallel nature of neural network computations. Furthermore, photonic systems are naturally suited for low-power operations, essential for energy-efficient computing. Prioritizing research and development in PNNs over analog alternatives can unlock the unique capabilities of light-based computing, leading to breakthroughs in deep learning, optical computing, and neuromorphic engineering. Leveraging photonics’ advantages may help overcome scalability and speed limitations faced by traditional electronic neural networks, fostering transformative advancements in AI and computational neuroscience.
Designing PNNs often requires specialized software and hardware resources tailored to the unique demands of photonics-based computing. NeuralDesigner is a powerful software tool designed to facilitate the creation, training, and deployment of neural networks for various tasks, including classification, regression, clustering, and forecasting [140]. At its core, NeuralDesigner employs a user-friendly interface that allows users to construct neural network architectures through a visual drag-and-drop approach, eliminating the need for intricate coding. The software employs sophisticated algorithms to automatically optimize network parameters, such as weights and biases, during the training process, thereby enhancing performance and accuracy. Utilizing advanced techniques like backpropagation and gradient descent, NeuralDesigner iteratively refines the network’s parameters based on the provided dataset, ultimately yielding a model capable of making precise predictions or classifications. Furthermore, NeuralDesigner offers comprehensive tools for data preprocessing, validation, and model evaluation, ensuring robustness and reliability in the developed neural networks. With its intuitive interface and robust functionality, NeuralDesigner empowers users to harness the power of neural networks effectively, even without extensive expertise in machine learning algorithms. Additionally, hardware resources like PICs offer a scalable platform for realizing PNNs, leveraging the inherent advantages of photonics, such as high bandwidth, low energy consumption, and parallel processing capabilities. Companies like Lightmatter [141] and Intel’s Silicon Photonics division [142] are at the forefront of developing PICs optimized for PNN applications, providing researchers and engineers with the necessary hardware infrastructure to explore and deploy PNNs effectively.
In the complex architecture of a PNN, the propagation of light signals encounters numerous optical modules, each introducing its own set of challenges. One significant issue is the degradation of light signals as they traverse through these modules due to losses, dispersion, and noise accumulation. To combat this degradation and maintain signal integrity, the network employs a sophisticated mechanism for signal regeneration. This process involves strategically placed regeneration nodes that detect and amplify weakened signals, restoring them to their original strength and clarity. By incorporating such regeneration mechanisms, the PNN ensures that light signals retain their fidelity throughout the intricate pathways of optical processing, facilitating efficient and reliable information transmission and computation. Despite the strides made, challenges persist for PNNs, yet ongoing efforts hold promise for overcoming these hurdles. The comparison of various neural network architectures reveals a spectrum of capabilities and trade-offs, each tailored to specific computational and imaging tasks [9].
The training process of PNNs diverges significantly from that of ANNs. Unlike ANNs, which rely on electronic signals for computation, PNNs leverage photons, the fundamental particles of light, to perform calculations. PNNs encode data into optical signals, which propagate through photonic circuits that emulate the functionalities of neurons and synapses. This optical computing paradigm offers unique advantages such as high parallelism, low energy consumption, and potentially high-speed processing due to the intrinsic speed of light [143]. However, the training of PNNs can be more challenging compared to ANNs due to the specialized hardware requirements and the complexity of optical signal processing. While PNNs hold promise for ultra-fast computation and the efficient processing of massive datasets, their training speed may not necessarily surpass that of ANNs in all scenarios. The optimization and calibration of photonic components and the development of suitable training algorithms are ongoing areas of research aimed at enhancing the efficiency and scalability of PNN training [144]. Therefore, while PNNs offer exciting prospects for future computing paradigms, their training speed and effectiveness currently depend on various factors and remain an active area of investigation in the field of photonics and neuromorphic computing [2].
FNNs offer simplicity and efficiency, making them suitable for structured data analysis and pattern recognition [145,146]. FNNs have a multitude of potential implications across various domains. In finance, they can be employed for stock market prediction and algorithmic trading [147,148]. In healthcare, they may aid in disease diagnosis and drug discovery. Within the realm of autonomous vehicles, FNNs can contribute to advanced driver assistance systems and collision avoidance. Moreover, in natural language processing, they can enhance language translation and sentiment analysis. Overall, the potential implications of FNNs span across industries, promising advancements in efficiency, accuracy, and decision-making processes.
RC is a novel approach to ML that harnesses the dynamics of a fixed RNN known as the reservoir. This network is untrained, acting as a reservoir of dynamics to process input signals [149]. The output layer, which is trained to perform a specific task, reads the state of the reservoir to generate predictions or classifications. The potential implications of RC are vast. In fields like time series prediction, RC offers remarkable accuracy and efficiency, outperforming traditional methods. It also shows promise in areas such as speech recognition, where its ability to capture temporal dependencies leads to improved performance. Moreover, RC has applications in robotics, control systems, and cognitive modeling, suggesting its potential for advancing various domains of AI [150,151]. Its efficient training process and adaptability make RC a promising avenue for tackling complex real-world problems.
CNNs, renowned for their hierarchical feature extraction and translational invariance, dominate image-related tasks with their ability to capture spatial hierarchies effectively [152]. The effectiveness of classification, measured by metrics like accuracy, misclassification rate, precision, and recall, is significantly influenced by the configuration of convolutional layers in a CNN. Factors such as the number of pooling layers, filters, filter sizes, stride rates, and pooling layer placements play pivotal roles in shaping CNN performance. Given the resource-intensive nature of CNN training, reliant on potent hardware like GPUs, extensive experimentation with different parameter combinations demands substantial time and computational resources [153]. Hyper-parameter selection profoundly impacts CNN performance, with even minor adjustments capable of yielding significant changes. Therefore, meticulous consideration of parameter choices is imperative when devising optimization strategies. Over time, CNN architectures have evolved from modest layer counts (e.g., AlexNet) to encompass hundreds of layers, thereby enhancing compactness and effectiveness (e.g., ResNet, ResNext, DenseNet). However, these advancements introduce immense model complexities, necessitating large datasets and powerful GPUs for training. Consequently, there is a burgeoning interest in developing lightweight networks to mitigate redundancy further. Choosing the optimal detection network for a specific application and embedded hardware entails striking a balance between speed, memory utilization, and accuracy. Preferably, compact models with fewer parameters should be prioritized, even if it entails sacrificing detection accuracy initially [154]. Techniques like hint learning, knowledge distillation, and refined pre-training methods offer avenues for compensating for this reduction in accuracy. These enhancements empower CNNs to glean insights from data at varying depths and structural configurations. Recent studies advocate for utilizing blocks instead of conventional layers, showcasing considerable potential for enhancing CNN performance.
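For readers less familiar with these hyperparameters, the compact PyTorch sketch below gathers them in one place (filter counts, kernel sizes, strides, and pooling stages). The specific values are arbitrary placeholders rather than tuned choices, and the parameter count printed at the end is the kind of compactness metric discussed above.

```python
import torch
import torch.nn as nn

# Illustrative two-stage CNN; all hyperparameter values are placeholders.
model = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),              # first pooling stage
    nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                  # global pooling keeps the head small
    nn.Flatten(),
    nn.Linear(32, 10),                        # 10-class output
)
n_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_params}")    # a simple compactness metric

x = torch.randn(1, 1, 64, 64)                 # one 64x64 single-channel image
logits = model(x)
```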
RNNs offer several advantages in the realm of sequential data processing [155]. One significant advantage is their ability to capture and utilize temporal dependencies within sequential data, making them well-suited for tasks such as time series prediction, natural language processing, and speech recognition. Furthermore, RNNs are highly flexible and adaptable, capable of processing sequences of varying lengths, which is crucial for handling real-world data with irregular temporal structures. Additionally, advancements such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures address the vanishing gradient problem associated with traditional RNNs, enhancing their ability to capture long-term dependencies and improving overall performance on complex sequential tasks [156].
SNNs, inspired by biological neurons, excel in energy efficiency and event-driven computation, making them ideal for neuromorphic computing and real-time applications. SNNs offer promising applications due to their resemblance to biological brains and unique computational capabilities [157]. Particularly in neuromorphic engineering, SNNs closely mimic biological brain functions, making them ideal for tasks like sensory processing, pattern recognition, and motor control in robots and autonomous systems [158,159]. Their event-driven nature also makes them advantageous for low-power computing environments, making them suitable for IoT sensors and wearable electronics. Moreover, SNNs contribute to neuroscience research by modeling neural dynamics and enhancing the understanding of brain functions, thereby advancing cognitive science and brain–computer interfaces [160].
PIMs harness the power of optics for parallel processing and optimization tasks, potentially outperforming conventional computing for specific problems [83]. Utilizing principles from both quantum and classical physics enables the tackling of mathematical computations that pose challenges to conventional electronics. Recently, PIMs have emerged, showcasing the ability to compute spin Hamiltonian minima, offering a pathway to groundbreaking hardware for accelerated ML [91]. However, existing systems face scalability issues or are constrained by a limited number of spins. In response, a large-scale optical Ising machine using a straightforward setup with a spatial light modulator was demonstrated. The experiments achieve configurations encompassing thousands of spins, converging to ground states within a low-temperature ferromagnetic-like phase, featuring all-to-all and adjustable pairwise interactions. These findings pave the way for classical and quantum PIMs, harnessing light’s spatial degrees of freedom for parallel processing of extensive spin systems with programmable couplings [91].
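What such a machine minimizes can be stated compactly: the Ising energy H = −½ Σᵢⱼ Jᵢⱼ sᵢ sⱼ over binary spins sᵢ = ±1. The sketch below is a purely digital stand-in (simple Metropolis annealing on a small all-to-all ferromagnetic instance); it only makes the optimization target explicit and does not attempt to emulate the spatial-light-modulator hardware or its parallelism.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
J = np.ones((n, n)) / n                       # all-to-all ferromagnetic couplings
np.fill_diagonal(J, 0.0)
s = rng.choice([-1, 1], size=n)               # random initial spin configuration

def energy(spins):
    return -0.5 * spins @ J @ spins           # Ising Hamiltonian H = -1/2 sum J_ij s_i s_j

T = 1.0
for sweep in range(200):                      # simple Metropolis annealing
    T *= 0.98                                 # cool towards the low-temperature phase
    for i in rng.permutation(n):
        dE = 2.0 * s[i] * (J[i] @ s)          # energy change if spin i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]

print(f"final energy: {energy(s):.3f}, magnetization: {s.mean():+.2f}")
```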
ONNs blend optics and electronics, offering high-speed processing and massive parallelism, albeit with challenges in hardware complexity and fabrication costs [99,161]. ONNs offer vast potential across diverse applications thanks to their unique integration of optics and electronics, which grants them advantages in speed, energy efficiency, and parallel processing capabilities. A notable application lies within AI and ML tasks, where these networks excel in accelerating intricate computations, facilitating the real-time analysis of extensive datasets. This capability proves invaluable in domains such as image and pattern recognition, natural language processing, and autonomous systems [162]. Furthermore, ONNs exhibit promise in areas like medical diagnostics, where the swift and precise analysis of medical imaging data is essential for timely diagnosis and treatment planning. Leveraging their parallel processing capability enhances the speed and accuracy of interpreting medical images, thereby improving healthcare outcomes [163,164]. Moreover, these networks hold potential in communication and data processing networks, offering high bandwidth and low power consumption, thereby enhancing the efficiency of data transmission and processing. This contributes to the development of faster and more energy-efficient communication systems [165]. In computing and imaging, the choice among these architectures depends on factors such as data characteristics, computational requirements, and performance objectives. While each architecture presents unique strengths and weaknesses, ongoing research and technological advancements continue to refine these models, promising further innovations in neural network computing and imaging applications and shaping the future of AI-driven solutions across diverse domains.
In the end, we would like to conclude the paper by stating that there are still several hurdles that must be overcome for PNNs to become practical, even for niche applications. One significant challenge is the development of efficient and compact photonic components that can perform neural network operations reliably. Current photonic devices often rely on bulky and expensive setups, requiring precise alignment and stabilization, which limits their practical deployment. Miniaturizing and integrating these components into scalable systems, such as PICs, is essential for practical PNN implementation. Another obstacle is achieving compatibility between photonic and electronic systems for data interfacing and processing, as seamless integration with existing computing platforms is crucial for adoption. Additionally, addressing issues related to noise, nonlinearities, and signal loss in photonic systems is essential to ensure the accuracy and robustness of PNNs. Moreover, developing efficient training algorithms specifically tailored for photonic hardware and exploring novel architectures optimized for light-based computations are critical steps toward practical PNNs. Overcoming these hurdles will unlock the full potential of PNNs, enabling their use in diverse applications ranging from high-speed computing to neuromorphic engineering and beyond.

Author Contributions

Conceptualization, M.A.B. and S.N.K.; methodology, N.L.K.; software, M.A.B.; validation, M.A.B., S.N.K., R.V.S. and N.L.K.; formal analysis, S.N.K. and R.V.S.; investigation, M.A.B.; resources, N.L.K.; data curation, S.N.K.; writing—original draft preparation, M.A.B.; writing—review and editing, M.A.B., S.N.K. and R.V.S.; visualization, S.N.K.; supervision, N.L.K.; project administration, N.L.K.; funding acquisition, N.L.K. and R.V.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Analytical Center for the Government of the Russian Federation (agreement identifier 000000D730324P540002, grant No 70-2023-001317 dated 28 December 2023).

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge the equal contribution of all the authors in the completion of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. El Srouji, L.; Krishnan, A.; Ravichandran, R.; Lee, Y.; On, M.; Xiao, X.; Ben Yoo, S.J. Photonic and optoelectronic neuromorphic computing. APL Photonics 2022, 7, 051101. [Google Scholar] [CrossRef]
  2. Liao, K.; Dai, T.; Yan, Q.; Hu, X.; Gong, Q. Integrated Photonic Neural Networks: Opportunities and Challenges. ACS Photonics 2023, 10, 2001–2010. [Google Scholar] [CrossRef]
  3. Cheng, Y.; Zhang, J.; Zhou, T.; Wang, Y.; Xu, Z.; Yuan, X.; Fang, L. Photonic neuromorphic architecture for tens-of-task lifelong learning. Light Sci. Appl. 2024, 13, 56. [Google Scholar] [CrossRef] [PubMed]
  4. Suzuki, H.; Tanida, J.; Hashimoto, M. (Eds.) Photonic Neural Networks with Spatiotemporal Dynamics: Paradigms of Computing and Implementation; Springer Nature: Singapore, 2024; ISBN 978-981-9950-71-3. [Google Scholar]
  5. Brunner, D.; Soriano, M.C.; Fan, S. Neural network learning with photonics and for photonic circuit design. Nanophotonics 2023, 12, 773–775. [Google Scholar] [CrossRef]
  6. Woods, D.; Naughton, T.J. Photonic neural networks. Nat. Phys. 2012, 8, 257–259. [Google Scholar] [CrossRef]
  7. Biasi, S.; Donati, G.; Lugnan, A.; Mancinelli, M.; Staffoli, E.; Pavesi, L. Photonic Neural Networks Based on Integrated Silicon Microresonators. Intell. Comput. 2024, 3, 0067. [Google Scholar] [CrossRef]
  8. Training of Photonic Neural Networks through In Situ Backpropagation and Gradient Measurement. Available online: https://opg.optica.org/optica/fulltext.cfm?uri=optica-5-7-864&id=395466 (accessed on 24 October 2023).
  9. Huang, C.; Sorger, V.J.; Miscuglio, M.; Al-Qadasi, M.; Mukherjee, A.; Lampe, L.; Nichols, M.; Tait, A.N.; Ferreira de Lima, T.; Marquez, B.A.; et al. Prospects and applications of photonic neural networks. Adv. Phys. X 2022, 7, 1981155. [Google Scholar] [CrossRef]
  10. Bodunov, A.P.; Khonina, S.N. Recognition of Half-Integer Order Vortex Beams Using Convolutional Neural Networks. Opt. Mem. Neural Netw. 2022, 31, 14–21. [Google Scholar] [CrossRef]
  11. Zhou, M.-G.; Liu, Z.-P.; Yin, H.-L.; Li, C.-L.; Xu, T.-K.; Chen, Z.-B. Quantum Neural Network for Quantum Neural Computing. Research 2023, 6, 0134. [Google Scholar] [CrossRef]
  12. Zhou, M.-G.; Liu, Z.-P.; Liu, W.-B.; Li, C.-L.; Bai, J.-L.; Xue, Y.-R.; Fu, Y.; Yin, H.-L.; Chen, Z.-B. Neural network-based prediction of the secret-key rate of quantum key distribution. Sci. Rep. 2022, 12, 8879. [Google Scholar] [CrossRef]
  13. A Dual-Polarization Silicon-Photonic Coherent Transmitter Supporting 552 Gb/s/Wavelength|IEEE Journals & Magazine|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/9083976 (accessed on 17 February 2024).
  14. Shastri, B.J.; Huang, C.; Tait, A.N.; Lima, T.F.d.; Prucnal, P.R. Silicon Photonics for Neuromorphic Computing and Artificial Intelligence: Applications and Roadmap. In Proceedings of the 2022 Photonics & Electromagnetics Research Symposium (PIERS), Hangzhou, China, 25–29 April 2022; pp. 18–26. [Google Scholar]
  15. Ahmed, A.H.; Sharkia, A.; Casper, B.; Mirabbasi, S.; Shekhar, S. Silicon-Photonics Microring Links for Datacenters—Challenges and Opportunities. IEEE J. Sel. Top. Quantum Electron. 2016, 22, 194–203. [Google Scholar] [CrossRef]
  16. Psaltis, D.; Farhat, N. Optical information processing based on an associative-memory model of neural nets with thresholding and feedback. Opt. Lett. 1985, 10, 98–100. [Google Scholar] [CrossRef] [PubMed]
  17. Feldmann, J.; Youngblood, N.; Wright, C.D.; Bhaskaran, H.; Pernice, W.H.P. All-optical spiking neurosynaptic networks with self-learning capabilities. Nature 2019, 569, 208–214. [Google Scholar] [CrossRef] [PubMed]
  18. Roadmap on Silicon Photonics—IOPscience. Available online: https://iopscience.iop.org/article/10.1088/2040-8978/18/7/073003 (accessed on 17 February 2024).
  19. High-Speed and Energy-Efficient Non-Volatile Silicon Photonic Memory Based on Heterogeneously Integrated Memresonator|Nature Communications. Available online: https://www.nature.com/articles/s41467-024-44773-7 (accessed on 19 February 2024).
  20. Xu, S.; Liu, B.; Yi, S.; Wang, J.; Zou, W. Analog spatiotemporal feature extraction for cognitive radio-frequency sensing with integrated photonics. Light Sci. Appl. 2024, 13, 50. [Google Scholar] [CrossRef] [PubMed]
  21. Corcione, E.; Jakob, F.; Wagner, L.; Joos, R.; Bisquerra, A.; Schmidt, M.; Wieck, A.D.; Ludwig, A.; Jetter, M.; Portalupi, S.L.; et al. Machine learning enhanced evaluation of semiconductor quantum dots. Sci. Rep. 2024, 14, 4154. [Google Scholar] [CrossRef] [PubMed]
  22. Solving High-Dimensional Partial Differential Equations Using Deep Learning|PNAS. Available online: https://www.pnas.org/doi/abs/10.1073/pnas.1718942115 (accessed on 17 February 2024).
  23. Khonina, S.N.; Khorin, P.A.; Serafimovich, P.G.; Dzyuba, A.P.; Georgieva, A.O.; Petrov, N.V. Analysis of the wavefront aberrations based on neural networks processing of the interferograms with a conical reference beam. Appl. Phys. B 2022, 128, 60. [Google Scholar] [CrossRef]
  24. Sanchez, M.; Everly, C.; Postigo, P.A. Advances in machine learning optimization for classical and quantum photonics. JOSA B 2024, 41, A177–A190. [Google Scholar] [CrossRef]
  25. Tang, C.; Yang, D.; Cheng, T.; Yang, S. Bidirectional Design for SPR-Photonic Crystal Fiber Magnetic Field Sensor Based on Deep Learning. IEEE Sens. J. 2024, 24, 4091–4101. [Google Scholar] [CrossRef]
  26. Zhang, J.; Wu, Z.; Wang, Y. Improved error tolerance of programmable photonic integrated circuits for MNIST handwritten digit classification. Opt. Laser Technol. 2024, 169, 110089. [Google Scholar] [CrossRef]
  27. Consoli, A.; Caselli, N.; López, C. Networks of random lasers: Current perspective and future challenges [Invited]. Opt. Mater. Express 2023, 13, 1060–1076. [Google Scholar] [CrossRef]
  28. Dermanis, D.; Bogris, A.; Rizomiliotis, P.; Mesaritakis, C. Photonic Physical Unclonable Function Based on Integrated Neuromorphic Devices. J. Light. Technol. 2022, 40, 7333–7341. [Google Scholar] [CrossRef]
  29. Di Lauro, L.; Alamgir, I.; Sciara, S.; Dmitriev, P.; Mazoukh, C.; Yu, H.; Kamali, S.N.; Fazili, R.; Rahim, A.A.; Fischer, B.; et al. Multimode nonlinear integrated optics for quantum and machine learning-assisted signal processing. In Proceedings of the 2023 IEEE Photonics Society Summer Topicals Meeting Series (SUM), Sicily, Italy, 17–19 July 2023; pp. 1–2. [Google Scholar]
  30. Ashtiani, F.; Geers, A.J.; Aflatouni, F. An on-chip photonic deep neural network for image classification. Nature 2022, 606, 501–506. [Google Scholar] [CrossRef] [PubMed]
  31. Electronic Hardware Implementations of Neural Networks. Available online: https://opg.optica.org/ao/abstract.cfm?uri=ao-26-23-5085 (accessed on 2 April 2024).
  32. Electronics|Free Full-Text|Electricity Consumption Prediction in an Electronic System Using Artificial Neural Networks. Available online: https://www.mdpi.com/2079-9292/11/21/3506 (accessed on 2 April 2024).
  33. Electronic vs. Optical Implementations of Neural Networks*. Available online: https://opg.optica.org/abstract.cfm?uri=OPTCOMP-1989-MA2 (accessed on 2 April 2024).
  34. A Numerical Verification Method for Multi-Class Feed-Forward Neural Networks—ScienceDirect. Available online: https://www.sciencedirect.com/science/article/pii/S0957417424002100?via%3Dihub (accessed on 19 February 2024).
  35. Jellyfish Optimized Recurrent Neural Network for State of Health Estimation of Lithium-Ion Batteries—ScienceDirect. Available online: https://www.sciencedirect.com/science/article/pii/S0957417423024065?via%3Dihub (accessed on 19 February 2024).
  36. Convolutional Neural Network Based Data Interpretable Framework for Alzheimer’s Treatment Planning|Visual Computing for Industry, Biomedicine, and Art|Full Text. Available online: https://vciba.springeropen.com/articles/10.1186/s42492-024-00154-x (accessed on 19 February 2024).
  37. Chen, L.; Li, S.; Bai, Q.; Yang, J.; Jiang, S.; Miao, Y. Review of Image Classification Algorithms Based on Convolutional Neural Networks. Remote Sens. 2021, 13, 4712. [Google Scholar] [CrossRef]
  38. Montesinos López, O.A.; Montesinos López, A.; Crossa, J. Fundamentals of Artificial Neural Networks and Deep Learning. In Multivariate Statistical Machine Learning Methods for Genomic Prediction; Montesinos López, O.A., Montesinos López, A., Crossa, J., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 379–425. ISBN 978-3-030-89010-0. [Google Scholar]
  39. Feedback Process Neural Networks. In Process Neural Networks: Theory and Applications; He, X.; Xu, S. (Eds.) Advanced Topics in Science and Technology in China; Springer: Berlin/Heidelberg, Germany, 2010; pp. 128–142. ISBN 978-3-540-73762-9. [Google Scholar]
  40. Ichikawa, Y.; Sawa, T. Neural network application for direct feedback controllers. IEEE Trans. Neural Netw. 1992, 3, 224–231. [Google Scholar] [CrossRef]
  41. Djarfour, N.; Aïfa, T.; Baddari, K.; Mihoubi, A.; Ferahtia, J. Application of feedback connection artificial neural network to seismic data filtering. Comptes Rendus Geosci. 2008, 340, 335–344. [Google Scholar] [CrossRef]
  42. Antia, H.M. Numerical Methods for Scientists and Engineers; Springer: Berlin/Heidelberg, Germany, 2012; ISBN 978-93-86279-52-1. [Google Scholar]
  43. Parisi, D.R.; Mariani, M.C.; Laborde, M.A. Solving differential equations with unsupervised neural networks. Chem. Eng. Process. Process Intensif. 2003, 42, 715–721. [Google Scholar] [CrossRef]
  44. Dissanayake, M.W.M.G.; Phan-Thien, N. Neural-network-based approximations for solving partial differential equations. Commun. Numer. Methods Eng. 1994, 10, 195–201. [Google Scholar] [CrossRef]
  45. Solution of Nonlinear Ordinary Differential Equations by Feedforward Neural Networks—ScienceDirect. Available online: https://www.sciencedirect.com/science/article/pii/089571779400160X (accessed on 17 February 2024).
  46. Lagaris, I.E.; Likas, A.; Fotiadis, D.I. Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 1998, 9, 987–1000. [Google Scholar] [CrossRef] [PubMed]
  47. Piscopo, M.L.; Spannowsky, M.; Waite, P. Solving differential equations with neural networks: Applications to the calculation of cosmological phase transitions. Phys. Rev. D 2019, 100, 016002. [Google Scholar] [CrossRef]
  48. Schneidereit, T.; Breuß, M. Computational characteristics of feedforward neural networks for solving a stiff differential equation. Neural Comput. Appl. 2022, 34, 7975–7989. [Google Scholar] [CrossRef]
  49. Das, S.; Tariq, A.; Santos, T.; Kantareddy, S.S.; Banerjee, I. Recurrent Neural Networks (RNNs): Architectures, Training Tricks, and Introduction to Influential Research. In Machine Learning for Brain Disorders; Colliot, O., Ed.; Neuromethods; Springer: New York, NY, USA, 2023; pp. 117–138. ISBN 978-1-07-163195-9. [Google Scholar]
  50. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network. Phys. Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  51. Li, X.; Wu, X. Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, Australia, 19–24 April 2015; pp. 4520–4524. [Google Scholar]
  52. Frontiers|Gated Recurrent Unit Neural Network (GRU) Based on Quantile Regression (QR) Predicts Reservoir Parameters through Well Logging Data. Available online: https://www.frontiersin.org/articles/10.3389/feart.2023.1087385/full (accessed on 19 February 2024).
  53. Model-Size Reduction for Reservoir Computing by Concatenating Internal States through Time|Scientific Reports. Available online: https://www.nature.com/articles/s41598-020-78725-0 (accessed on 19 February 2024).
  54. Information Processing Capacity of Dynamical Systems|Scientific Reports. Available online: https://www.nature.com/articles/srep00514 (accessed on 19 February 2024).
  55. Köster, F.; Ehlert, D.; Lüdge, K. Limitations of the Recall Capabilities in Delay-Based Reservoir Computing Systems. Cogn. Comput. 2023, 15, 1419–1426. [Google Scholar] [CrossRef]
  56. Ma, H.; Prosperino, D.; Räth, C. A novel approach to minimal reservoir computing. Sci. Rep. 2023, 13, 12970. [Google Scholar] [CrossRef] [PubMed]
  57. Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: A review. Neural Netw. 2019, 115, 100–123. [Google Scholar] [CrossRef]
  58. Chang, J.; Sitzmann, V.; Dun, X.; Heidrich, W.; Wetzstein, G. Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification. Sci. Rep. 2018, 8, 12324. [Google Scholar] [CrossRef] [PubMed]
  59. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning|IEEE Journals & Magazine|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/7404017 (accessed on 12 March 2024).
  60. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  61. Joshi, S.; Verma, D.K.; Saxena, G.; Paraye, A. Issues in Training a Convolutional Neural Network Model for Image Classification. In Proceedings of the Advances in Computing and Data Sciences, Ghaziabad, India, 12–13 April 2019; Singh, M., Gupta, P.K., Tyagi, V., Flusser, J., Ören, T., Kashyap, R., Eds.; Springer: Singapore, 2019; pp. 282–293. [Google Scholar]
  62. Imbalanced Deep Learning by Minority Class Incremental Rectification|IEEE Journals & Magazine|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/8353718 (accessed on 19 February 2024).
  63. Cichy, R.M.; Khosla, A.; Pantazis, D.; Torralba, A.; Oliva, A. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Sci. Rep. 2016, 6, 27755. [Google Scholar] [CrossRef] [PubMed]
  64. Kriegeskorte, N.; Kievit, R.A. Representational geometry: Integrating cognition, computation, and the brain. Trends Cogn. Sci. 2013, 17, 401–412. [Google Scholar] [CrossRef] [PubMed]
  65. Allen, E.J.; St-Yves, G.; Wu, Y.; Breedlove, J.L.; Prince, J.S.; Dowdle, L.T.; Nau, M.; Caron, B.; Pestilli, F.; Charest, I.; et al. A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nat. Neurosci. 2022, 25, 116–126. [Google Scholar] [CrossRef]
  66. Khosla, M.; Ngo, G.H.; Jamison, K.; Kuceyeski, A.; Sabuncu, M.R. Cortical response to naturalistic stimuli is largely predictable with deep neural networks. Sci. Adv. 2021, 7, eabe7547. [Google Scholar] [CrossRef]
  67. Wang, C.; Yan, H.; Huang, W.; Sheng, W.; Wang, Y.; Fan, Y.-S.; Liu, T.; Zou, T.; Li, R.; Chen, H. Neural encoding with unsupervised spiking convolutional neural network. Commun. Biol. 2023, 6, 880. [Google Scholar] [CrossRef] [PubMed]
  68. Tian, Y.; Zhao, Y.; Liu, S.; Li, Q.; Wang, W.; Feng, J.; Guo, J. Scalable and compact photonic neural chip with low learning-capability-loss. Nanophotonics 2022, 11, 329–344. [Google Scholar] [CrossRef]
  69. Kay, K.N.; Naselaris, T.; Prenger, R.J.; Gallant, J.L. Identifying natural images from human brain activity. Nature 2008, 452, 352–355. [Google Scholar] [CrossRef]
  70. Tr, T. A New Benchmark Dataset for Handwritten Character Recognition. Tilburg Univ. 2009, 2–5. [Google Scholar]
  71. Pfeiffer, M.; Pfeil, T. Deep Learning With Spiking Neurons: Opportunities and Challenges. Front. Neurosci. 2018, 12, 409662. [Google Scholar] [CrossRef] [PubMed]
  72. Spiking Neural Network in Computer Vision: Techniques, Tools and Trends|SpringerLink. Available online: https://link.springer.com/chapter/10.1007/978-981-99-4284-8_16 (accessed on 18 February 2024).
  73. Computing with Spiking Neuron Networks|SpringerLink. Available online: https://link.springer.com/referenceworkentry/10.1007/978-3-540-92910-9_10 (accessed on 18 February 2024).
  74. Sen, S.; Venkataramani, S.; Raghunathan, A. Approximate computing for spiking neural networks. In Proceedings of the Design, Automation & Test in Europe Conference & Exhibition (DATE), Lausanne, Switzerland, 27–31 March 2017; pp. 193–198. [Google Scholar]
  75. Kulkarni, S.R.; Rajendran, B. Spiking neural networks for handwritten digit recognition—Supervised learning and network optimization. Neural Netw. 2018, 103, 118–127. [Google Scholar] [CrossRef] [PubMed]
  76. Optimized Spiking Neurons Can Classify Images with High Accuracy through Temporal Coding with Two Spikes|Nature Machine Intelligence. Available online: https://www.nature.com/articles/s42256-021-00311-4 (accessed on 18 February 2024).
  77. Deep Medical Image Analysis with Representation Learning and Neuromorphic Computing|Interface Focus. Available online: https://royalsocietypublishing.org/doi/10.1098/rsfs.2019.0122 (accessed on 18 February 2024).
  78. Frontiers|REMODEL: Rethinking Deep CNN Models to Detect and Count on a NeuroSynaptic System. Available online: https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2019.00004/full (accessed on 18 February 2024).
  79. Schuman, C.D.; Kulkarni, S.R.; Parsa, M.; Mitchell, J.P.; Date, P.; Kay, B. Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci. 2022, 2, 10–19. [Google Scholar] [CrossRef] [PubMed]
  80. Stable Spike-Timing Dependent Plasticity Rule for Multilayer Unsupervised and Supervised Learning|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/7966096 (accessed on 18 February 2024).
  81. NeuCube: A Spiking Neural Network Architecture for Mapping, Learning and Understanding of Spatio-Temporal Brain Data—ScienceDirect. Available online: https://www.sciencedirect.com/science/article/pii/S0893608014000070 (accessed on 18 February 2024).
  82. Van der Sande, G.; Böhm, F.; Van Vaerenbergh, T.; Verschaffelt, G. Compact and inexpensive photonic Ising machines based on optoelectronic oscillators. In Proceedings of the 2021 Optical Fiber Communications Conference and Exhibition (OFC), San Francisco, CA, USA, 6–10 June 2021; pp. 1–3. [Google Scholar]
  83. Roques-Carmes, C.; Shen, Y.; Zanoci, C.; Prabhu, M.; Atieh, F.; Jing, L.; Dubček, T.; Mao, C.; Johnson, M.R.; Čeperić, V.; et al. Heuristic recurrent algorithms for photonic Ising machines. Nat. Commun. 2020, 11, 249. [Google Scholar] [CrossRef] [PubMed]
  84. Quantum Annealing with Manufactured Spins|Nature. Available online: https://www.nature.com/articles/nature10012 (accessed on 17 February 2024).
  85. Quantum Simulation of Frustrated Ising Spins with Trapped Ions|Nature. Available online: https://www.nature.com/articles/nature09071 (accessed on 17 February 2024).
  86. Rajak, A.; Suzuki, S.; Dutta, A.; Chakrabarti, B.K. Quantum annealing: An overview. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 2022, 381, 20210417. [Google Scholar] [CrossRef]
  87. A 20k-Spin Ising Chip to Solve Combinatorial Optimization Problems with CMOS Annealing|IEEE Journals & Magazine|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/7350099 (accessed on 17 February 2024).
  88. Frontiers|Physics-Inspired Optimization for Quadratic Unconstrained Problems Using a Digital Annealer. Available online: https://www.frontiersin.org/articles/10.3389/fphy.2019.00048/full (accessed on 17 February 2024).
  89. A Coherent Ising Machine for 2000-Node Optimization Problems|Science. Available online: https://www.science.org/doi/10.1126/science.aah4243 (accessed on 17 February 2024).
  90. Network of Time-Multiplexed Optical Parametric Oscillators as a Coherent Ising Machine|Nature Photonics. Available online: https://www.nature.com/articles/nphoton.2014.249 (accessed on 17 February 2024).
  91. Phys. Rev. Lett. 122, 213902 (2019)—Large-Scale Photonic Ising Machine by Spatial Light Modulation. Available online: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.213902 (accessed on 17 February 2024).
  92. Noise-Enhanced Spatial-Photonic Ising Machine. Available online: https://www.degruyter.com/document/doi/10.1515/nanoph-2020-0119/html (accessed on 17 February 2024).
  93. Antiferromagnetic Spatial Photonic Ising Machine through Optoelectronic Correlation Computing|Communications Physics. Available online: https://www.nature.com/articles/s42005-021-00741-x (accessed on 17 February 2024).
  94. Phys. Rev. A 105, 033502 (2022)—Tunable Spin-Glass Optical Simulator Based on Multiple Light Scattering. Available online: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.105.033502 (accessed on 17 February 2024).
  95. Observation of Distinct Phase Transitions in a Nonlinear Optical Ising Machine|Communications Physics. Available online: https://www.nature.com/articles/s42005-023-01148-6 (accessed on 17 February 2024).
  96. Sakabe, T.; Shimomura, S.; Ogura, Y.; Okubo, K.; Yamashita, H.; Suzuki, H.; Tanida, J. Spatial-photonic Ising machine by space-division multiplexing with physically tunable coefficients of a multi-component model. Opt. Express 2023, 31, 44127. [Google Scholar] [CrossRef]
  97. Psaltis, D.; Yamamura, A.A.; Hsu, K.; Lin, S.; Gu, X.-G.; Brady, D. Optoelectronic implementations of neural networks. IEEE Commun. Mag. 1989, 27, 37–40. [Google Scholar] [CrossRef]
  98. Chen, Z.-L.; Xiao, Y.; Huang, W.-Y.; Jiang, Y.-P.; Liu, Q.-X.; Tang, X.-G. In-sensor reservoir computing based on optoelectronic synaptic devices. Appl. Phys. Lett. 2023, 123, 100501. [Google Scholar] [CrossRef]
  99. Optoelectronic Neural-Network Scheduler for Packet Switches. Available online: https://opg.optica.org/ao/abstract.cfm?uri=ao-39-5-788 (accessed on 17 February 2024).
  100. Photonic and Optoelectronic Neuromorphic Computing|APL Photonics|AIP Publishing. Available online: https://pubs.aip.org/aip/app/article/7/5/051101/2835184/Photonic-and-optoelectronic-neuromorphic-computing (accessed on 17 February 2024).
  101. Yuan, X.; Wang, Y.; Xu, Z.; Zhou, T.; Fang, L. Training large-scale optoelectronic neural networks with dual-neuron optical-artificial learning. Nat. Commun. 2023, 14, 7110. [Google Scholar] [CrossRef] [PubMed]
  102. Revival of Optical Computing|SpringerLink. Available online: https://link.springer.com/chapter/10.1007/978-981-99-5072-0_1 (accessed on 18 February 2024).
  103. Nanomaterials|Free Full-Text|Neuromorphic Photonics Circuits: Contemporary Review. Available online: https://www.mdpi.com/2079-4991/13/24/3139 (accessed on 30 December 2023).
  104. Kazanskiy, N.L.; Butt, M.A.; Khonina, S.N. Optical Computing: Status and Perspectives. Nanomaterials 2022, 12, 2171. [Google Scholar] [CrossRef] [PubMed]
  105. Chen, H.; Yu, Z.; Zhang, T.; Zang, Y.; Dan, Y.; Xu, K. Advances and Challenges of Optical Neural Networks. Chin. J. Lasers 2020, 47, 0500004. [Google Scholar] [CrossRef]
  106. Li, C.; Zhang, X.; Li, J.; Fang, T.; Dong, X. The challenges of modern computing and new opportunities for optics. PhotoniX 2021, 2, 20. [Google Scholar] [CrossRef]
  107. Prucnal, P.R.; de Lima, T.F.; Huang, C.; Marquez, B.A.; Shastri, B.J. Neuromorphic Photonics: Current Status and Challenges. In Proceedings of the 2020 European Conference on Optical Communications (ECOC), Brussels, Belgium, 6–10 December 2020; pp. 1–4. [Google Scholar]
  108. Akhmetov, L.G.; Porfirev, A.P.; Khonina, S.N. Recognition of Two-Mode Optical Vortex Beams Superpositions Using Convolution Neural Networks. Opt. Mem. Neural Netw. 2023, 32, S138–S150. [Google Scholar] [CrossRef]
  109. Abdolrasol, M.G.M.; Hussain, S.M.S.; Ustun, T.S.; Sarker, M.R.; Hannan, M.A.; Mohamed, R.; Ali, J.A.; Mekhilef, S.; Milad, A. Artificial Neural Networks Based Optimization Techniques: A Review. Electronics 2021, 10, 2689. [Google Scholar] [CrossRef]
  110. Usmani, U.A.; Happonen, A.; Watada, J. A Review of Unsupervised Machine Learning Frameworks for Anomaly Detection in Industrial Applications. In Proceedings of the Intelligent Computing, London, UK, 14–15 July 2022; Arai, K., Ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 158–189. [Google Scholar]
  111. Chung, K.-L.; Chen, W.-Y. Fast adaptive PNN-based thresholding algorithms. Pattern Recognit. 2003, 36, 2793–2804. [Google Scholar] [CrossRef]
  112. Sibul, H.L.; Roan, J.M.; Babich, A.G. Application of nonlinear adaptive signal processing techniques to blind source separation and interference suppression. J. Acoust. Soc. Am. 1999, 105, 973. [Google Scholar] [CrossRef]
  113. Bosu, S.; Bhattacharjee, B. A design of all-optical read-only memory using reflective semiconductor optical amplifier. J. Opt. 2023, 52, 1083–1093. [Google Scholar] [CrossRef]
  114. Hybrid Spectrum Inversion and Dispersion Compensation for Mitigating Fiber Losses in Optical Systems. Available online: https://www.mdpi.com/2673-4591/59/1/208 (accessed on 2 April 2024).
  115. Pelusi, M.D.; Suzuki, A. Higher-order dispersion compensation using phase modulators. In Ultrahigh-Speed Optical Transmission Technology; Weber, H.-G., Nakazawa, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 301–321. ISBN 978-3-540-68005-5.
  116. Luo, X.; Hu, Y.; Ou, X.; Li, X.; Lai, J.; Liu, N.; Cheng, X.; Pan, A.; Duan, H. Metasurface-enabled on-chip multiplexed diffractive neural networks in the visible. Light Sci. Appl. 2022, 11, 158.
  117. Tait, A.N.; de Lima, T.F.; Zhou, E.; Wu, A.X.; Nahmias, M.A.; Shastri, B.J.; Prucnal, P.R. Neuromorphic photonic networks using silicon photonic weight banks. Sci. Rep. 2017, 7, 7430.
  118. Spike Sequence Learning in a Photonic Spiking Neural Network Consisting of VCSELs-SA With Supervised Training. Available online: https://ieeexplore.ieee.org/document/9018042 (accessed on 12 March 2024).
  119. Mourgias-Alexandris, G.; Dabos, G.; Passalis, N.; Totović, A.; Tefas, A.; Pleros, N. All-Optical WDM Recurrent Neural Networks With Gating. IEEE J. Sel. Top. Quantum Electron. 2020, 26, 1–7.
  120. Dang, D.; Dass, J.; Mahapatra, R. ConvLight: A Convolutional Accelerator with Memristor Integrated Photonic Computing. In Proceedings of the 2017 IEEE 24th International Conference on High Performance Computing (HiPC), Jaipur, India, 18–21 December 2017; pp. 114–123.
  121. Digital Electronics and Analog Photonics for Convolutional Neural Networks (DEAP-CNNs). Available online: https://ieeexplore.ieee.org/document/8859364 (accessed on 12 March 2024).
  122. HolyLight: A Nanophotonic Accelerator for Deep Learning in Data Centers. Available online: https://ieeexplore.ieee.org/document/8715195 (accessed on 12 March 2024).
  123. LightBulb: A Photonic-Nonvolatile-Memory-Based Accelerator for Binarized Convolutional Neural Networks. Available online: https://ieeexplore.ieee.org/document/9116494 (accessed on 12 March 2024).
  124. Xiang, S.; Zhang, Y.; Gong, J.; Guo, X.; Lin, L.; Hao, Y. STDP-Based Unsupervised Spike Pattern Learning in a Photonic Spiking Neural Network With VCSELs and VCSOAs. IEEE J. Sel. Top. Quantum Electron. 2019, 25, 1–9.
  125. Shi, B.; Calabretta, N.; Stabile, R. Deep Neural Network Through an InP SOA-Based Photonic Integrated Cross-Connect. IEEE J. Sel. Top. Quantum Electron. 2020, 26, 1–11.
  126. Parallel Reservoir Computing Using Optical Amplifiers. Available online: https://ieeexplore.ieee.org/document/5966352 (accessed on 12 March 2024).
  127. Shen, Y.; Harris, N.C.; Skirlo, S.; Prabhu, M.; Baehr-Jones, T.; Hochberg, M.; Sun, X.; Zhao, S.; Larochelle, H.; Englund, D.; et al. Deep learning with coherent nanophotonic circuits. Nat. Photonics 2017, 11, 441–446.
  128. Towards Area-Efficient Optical Neural Networks: An FFT-Based Architecture. Available online: https://ieeexplore.ieee.org/document/9045156 (accessed on 12 March 2024).
  129. Broadcast and Weight: An Integrated Network For Scalable Photonic Spike Processing. Available online: https://ieeexplore.ieee.org/document/6872524 (accessed on 12 March 2024).
  130. Toward Fast Neural Computing using All-Photonic Phase Change Spiking Neurons. Available online: https://www.nature.com/articles/s41598-018-31365-x (accessed on 12 March 2024).
  131. Colburn, S.; Chu, Y.; Shilzerman, E.; Majumdar, A. Optical frontend for a convolutional neural network. Appl. Opt. 2019, 58, 3179–3186.
  132. All-Optical Machine Learning Using Diffractive Deep Neural Networks. Available online: https://www.science.org/doi/10.1126/science.aat8084 (accessed on 5 April 2024).
  133. Totzeck, M. Validity of the scalar Kirchhoff and Rayleigh–Sommerfeld diffraction theories in the near field of small phase objects. JOSA A 1991, 8, 27–32.
  134. Khonina, S.N.; Ustinov, A.V.; Kovalyov, A.A.; Volotovsky, S.G. Near-field propagation of vortex beams: Models and computation algorithms. Opt. Mem. Neural Netw. 2014, 23, 50–73.
  135. Design of Cascaded Diffractive Optical Elements for Optical Beam Shaping and Image Classification Using a Gradient Method. Available online: https://www.mdpi.com/2304-6732/10/7/766 (accessed on 5 April 2024).
  136. Yan, T.; Wu, J.; Zhou, T.; Xie, H.; Xu, F.; Fan, J.; Fang, L.; Lin, X.; Dai, Q. Fourier-space Diffractive Deep Neural Network. Phys. Rev. Lett. 2019, 123, 023901.
  137. Guo, J.; Chen, Z.; Liu, Z.; Li, X.; Xie, Z.; Wang, Z.; Wang, Y. Neural network training method for materials science based on multi-source databases. Sci. Rep. 2022, 12, 15326.
  138. Nguyen, T.-A.; Ly, H.-B.; Mai, H.-V.T.; Tran, V.Q. On the Training Algorithms for Artificial Neural Network in Predicting the Shear Strength of Deep Beams. Complexity 2021, 2021, e5548988.
  139. Yu, L.; Li, X.; Luo, C.; Lei, Z.; Wang, Y.; Hou, Y.; Wang, M.; Hou, X. Bioinspired nanofluidic iontronics for brain-like computing. Nano Res. 2024, 17, 503–514.
  140. Santos, A. Explainable Machine Learning Platform. Neural Designer. Available online: https://www.neuraldesigner.com/ (accessed on 3 April 2024).
  141. Lightmatter®—The Photonic (Super) Computer Company. Available online: https://lightmatter.co/ (accessed on 3 April 2024).
  142. Intel® Silicon Photonics: How Does It Work? Available online: https://www.intel.com/content/www/us/en/architecture-and-technology/silicon-photonics/silicon-photonics-overview.html (accessed on 3 April 2024).
  143. Efficient Training and Design of Photonic Neural Network through Neuroevolution. Available online: https://opg.optica.org/oe/fulltext.cfm?uri=oe-27-26-37150&id=423946 (accessed on 3 April 2024).
  144. Zhang, D.; Tan, Z. A Review of Optical Neural Networks. Appl. Sci. 2022, 12, 5338.
  145. Jiang, X.; Harvey Kam Siew Wah, A. Constructing and training feed-forward neural networks for pattern classification. Pattern Recognit. 2003, 36, 853–867.
  146. Schmidt, W.F.; Kraaijveld, M.A.; Duin, R.P.W. Feedforward neural networks with random weights. In Proceedings of the 11th IAPR International Conference on Pattern Recognition, Vol. II, Conference B: Pattern Recognition Methodology and Systems, Hague, The Netherlands, 30 August–3 September 1992; pp. 1–4.
  147. Application of Feedforward Neural Network in Portfolio Optimization and Geometric Brownian Motion in Stock Price Prediction. Available online: https://ieeexplore.ieee.org/document/10193046 (accessed on 13 March 2024).
  148. Jabin, S. Stock Market Prediction using Feed-forward Artificial Neural Network. Int. J. Comput. Appl. 2014, 99, 4–8.
  149. Next Generation Reservoir Computing. Available online: https://www.nature.com/articles/s41467-021-25801-2 (accessed on 13 March 2024).
  150. Physical Reservoir Computing in Robotics. Available online: https://link.springer.com/chapter/10.1007/978-981-13-1687-6_8 (accessed on 13 March 2024).
  151. Bhovad, P.; Li, S. Physical reservoir computing with origami and its application to robotic crawling. Sci. Rep. 2021, 11, 13002.
  152. Habib, G.; Qureshi, S. Optimization and acceleration of convolutional neural networks: A survey. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 4244–4268.
  153. Accurate and Compact Convolutional Neural Network Based on Stochastic Computing. Available online: https://www.sciencedirect.com/science/article/pii/S0925231221016623 (accessed on 13 March 2024).
  154. Using Convolutional Neural Networks for Blocking Prediction in Elastic Optical Networks. Available online: https://www.mdpi.com/2076-3417/14/5/2003 (accessed on 13 March 2024).
  155. Recurrent Neural Networks for Multivariate Time Series with Missing Values. Available online: https://www.nature.com/articles/s41598-018-24271-9 (accessed on 12 March 2024).
  156. An Approach Based on Recurrent Neural Networks and Interactive Visualization to Improve Explainability in AI Systems. Available online: https://www.mdpi.com/2504-2289/7/3/136 (accessed on 12 March 2024).
  157. Guo, Y.; Huang, X.; Ma, Z. Direct learning-based deep spiking neural networks: A review. Front. Neurosci. 2023, 17, 1209795.
  158. AbouHassan, I.; Kasabov, N.K.; Jagtap, V.; Kulkarni, P. Spiking neural networks for predictive and explainable modelling of multimodal streaming data with a case study on financial time series and online news. Sci. Rep. 2023, 13, 18367.
  159. Sanaullah; Koravuna, S.; Rückert, U.; Jungeblut, T. Exploring spiking neural networks: A comprehensive analysis of mathematical models and applications. Front. Comput. Neurosci. 2023, 17, 1215824.
  160. Yang, G.; Lee, W.; Seo, Y.; Lee, C.; Seok, W.; Park, J.; Sim, D.; Park, C. Unsupervised Spiking Neural Network with Dynamic Learning of Inhibitory Neurons. Sensors 2023, 23, 7232.
  161. An Optoelectronic Neural Network for Simulation of Distributed Systems. Available online: https://ieeexplore.ieee.org/document/268581 (accessed on 12 March 2024).
  162. Mu, G.; Sun, Y.; Zhang, Y.; Yang, X. Optoelectronically implemented three-layer neural network for pattern recognition. In Proceedings of the Optical Society of America Annual Meeting 1992, Albuquerque, NM, USA, 20–25 September 1992; Optica Publishing Group: Washington, DC, USA, 1992; p. TuD2.
  163. Symington, K.J.; Randle, Y.; Waddie, A.J.; Taghizadeh, M.R.; Snowdon, J.F. Programmable optoelectronic neural network for optimization. Appl. Opt. 2004, 43, 866–876.
  164. Lamela, H.; Ruiz-Llata, M. Optoelectronic neural processor for smart vision applications. Imaging Sci. J. 2007, 55, 197–205.
  165. Eriksson, T.A.; Bülow, H.; Leven, A. Applying Neural Networks in Optical Communication Systems: Possible Pitfalls. IEEE Photonics Technol. Lett. 2017, 29, 2091–2094.
Figure 1. Electronic versus photonic implementation of neuron functions. (a) Electronic implementation; (b) photonic implementation.
Figure 2. Types of PNNs.
Figure 5. (a) Diagram of a SPIM employing physically adjustable SDM-SPIM [96]. Results of solving the 13-item knapsack problem: (b) histogram of the total weight of the obtained solutions [96]; (c) histogram of the total value of the obtained solutions [96]; (d) distribution of the explored specimens [96]; (e) time evolution of the Ising Hamiltonian value [96].
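For readers less familiar with the Ising formulation behind panel (e) of Figure 5, the minimal Python sketch below shows how the Ising Hamiltonian of a candidate spin configuration can be evaluated and tracked during a simple spin-flip search. The coupling matrix, the greedy update rule, and all numerical values are illustrative placeholders only; they do not reproduce the SDM-SPIM hardware model or the knapsack instance of [96].

```python
import numpy as np

def ising_energy(spins, J):
    """Ising Hamiltonian H(s) = -1/2 * s^T J s for spins s_i in {-1, +1}."""
    return -0.5 * spins @ J @ spins

# Hypothetical example: a random symmetric coupling matrix for 13 spins,
# loosely mirroring the 13-item problem size discussed in Figure 5.
rng = np.random.default_rng(0)
J = rng.normal(size=(13, 13))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)

# Track the Hamiltonian while randomly flipping spins, keeping flips that
# lower the energy (a crude stand-in for the machine's time evolution).
spins = rng.choice([-1, 1], size=13)
energies = [ising_energy(spins, J)]
for step in range(200):
    i = rng.integers(13)
    trial = spins.copy()
    trial[i] *= -1
    if ising_energy(trial, J) < energies[-1]:
        spins = trial
    energies.append(ising_energy(spins, J))

print(f"initial H = {energies[0]:.3f}, final H = {energies[-1]:.3f}")
```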
Figure 6. D.A.N.T.E. implemented on a physical ONN scheme. (a,b) The prototype system and its optical setup [101]. (c) The network assembly executed in the prototype system [101]. (d) Cropped outputs of the trained ONN for MNIST classification [101]. (e) Outputs of the trained ONN for ImageNet classification [101]. (f) Analytical and optical accuracy on the MNIST, CIFAR-10, and ImageNet datasets [101].
Figure 7. PNN based on the Fourier-correlator scheme (4f-system). The mask is determined by the Fourier transform of the kernels from the convolutional layer of a typical CNN.
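As an illustrative companion to the 4f scheme in Figure 7, the short Python sketch below emulates a Fourier correlator numerically: the input is Fourier-transformed (first lens), multiplied by a mask equal to the Fourier transform of a convolutional kernel, and transformed back (second lens) so that the output plane carries the convolution of the image with the kernel. The kernel, scene size, and the use of an inverse FFT for the second lens are simplifying assumptions of this sketch, not parameters of any cited implementation.

```python
import numpy as np

def correlator_4f(image, kernel):
    """Simulate a 4f Fourier correlator whose Fourier-plane mask is the FFT of the kernel."""
    H, W = image.shape
    # Zero-pad the kernel to the image size and centre it at the origin, as the
    # physically fabricated mask would encode it.
    padded = np.zeros((H, W), dtype=complex)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    mask = np.fft.fft2(np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1)))
    # First lens: Fourier transform of the input field; mask multiplication at the
    # Fourier plane; second lens: modelled here by an inverse transform for simplicity.
    spectrum = np.fft.fft2(image)
    output = np.fft.ifft2(spectrum * mask)
    return np.abs(output)  # the camera records the magnitude of the optical field

# Toy usage: a 3x3 edge-detection kernel applied to a random "scene".
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
kernel = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
print(correlator_4f(scene, kernel).shape)
```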
Figure 8. PNN based on the cascade of DOEs, which are pre-trained diffraction layers of a DDNN.
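To make the cascaded-DOE forward model of Figure 8 concrete, the following Python sketch propagates a complex field through a stack of phase-only layers using the angular-spectrum method, a standard scalar-diffraction propagator. The wavelength, pixel pitch, layer spacing, and random phase profiles are hypothetical placeholders; in practice, trained DOE phase profiles would replace the random layers.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over distance z with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)   # evanescent components are dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def ddnn_forward(field, phase_layers, wavelength=633e-9, dx=10e-6, z=0.05):
    """Pass a field through a cascade of phase-only DOE layers and read out intensity."""
    for phase in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, dx, z)
        field = field * np.exp(1j * phase)       # each DOE imprints its phase profile
    field = angular_spectrum_propagate(field, wavelength, dx, z)
    return np.abs(field) ** 2                    # detector plane records intensity

# Toy usage with random (untrained) phase layers standing in for trained DOEs.
rng = np.random.default_rng(2)
layers = [rng.uniform(0, 2 * np.pi, size=(128, 128)) for _ in range(3)]
input_field = np.zeros((128, 128), dtype=complex)
input_field[48:80, 48:80] = 1.0                  # a simple square aperture as the input
print(ddnn_forward(input_field, layers).sum())
```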
Figure 9. PNN based on the hybrid of the Fourier-correlator scheme with the cascade of DOEs.
Figure 10. The cumulative number of publications and patents spanning the years 1985 to 2024, focusing on the keywords “optical neural networks” as queried within the Scopus database. These data provide a comprehensive view of the global research landscape surrounding these innovative technologies.
Table 1. Compilation of several noteworthy previous studies on PNN structures.

Devices | Application | Results | Ref.
Micro-rings | Lorenz attractor simulation to benchmark against a traditional CPU-based continuous-time RNN | 294× acceleration in simulation over the CPU-based continuous-time RNN | [117]
VCSEL-SAs | SNN for learning and recognizing arbitrary spike patterns | - | [118]
SOA-MZIs | RNN benchmarked on a finance forecasting application using the FI-2010 dataset | Gated optical RNN achieved an F1 score of 41.85% | [119]
Micro-rings and SOAs | Various benchmarks, including MNIST, tested on photonic CNNs | Reduced operation cost compared with GPU-based implementations, with up to 25× better computational efficiency | [120]
Micro-rings | MNIST classification using CNNs | Faster than GPU-based implementations at 0.75× the power consumption | [121]
Micro-disks | Binarized CNN acceleration for MNIST and ImageNet classification | 16.9× better FPS and 17.5× better FPS/W than [122] | [123]
VCSOAs and VCSELs | SNN for learning and recognizing arbitrary spike patterns | - | [124]
SOAs and AWGs | DNN implementation tested on Fisher’s Iris classification | Prediction accuracy of 85.8% | [125]
SOAs | Spoken digit recognition using RC | Minimum WER of 4.5% for the coherent SOA-based reservoir | [126]
MZIs | Photonic DNN for vowel recognition | 76.7% accuracy in vowel recognition | [127]
MZIs | MNIST dataset classification using a structured NN | 98.5% accuracy | [128]
MRs | Demonstration of the viability of B&W-based SNNs; no application-based experiments were carried out in this study | - | [129]
GST-embedded MRs | MNIST classification with MLPs | 98.06% accuracy | [130]