**1. Introduction**

Artificial neural networks (ANNs), created by analogy with biological neural systems, are used to solve various tasks, such as classification, clustering, and pattern recognition [1–4]. The basic element of an ANN is the neuron, which may have several inputs and one output; neurons can be connected in different ways, depending on the network architecture [5,6]. The main task of a neuron is to convert its input signals into an output signal using an activation function [5]. In the history of ANNs, three generations of networks are usually distinguished. The first generation comprises simple networks with feedforward and feedback connections that operate on binary data with stepwise activation functions [7]. The second generation comprises multilayer feedforward and feedback networks that operate on real-valued data with continuous activation functions [7]. The third generation of ANNs, spiking neural networks (SNNs), uses bio-inspired neuron models that take into account not only the magnitude of the signals arriving at the input but also their temporal distribution [7,8].
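For reference, a second-generation neuron reduces to a weighted sum of real-valued inputs passed through a continuous activation function. The sketch below is a minimal Python illustration of this point; the sigmoid choice and all numeric values are assumptions made for the example, not taken from the cited works.

```python
import numpy as np

def sigmoid(x):
    """Continuous activation function typical of second-generation ANNs."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias=0.0):
    """One neuron: weighted sum of the inputs passed through the activation."""
    return sigmoid(np.dot(weights, inputs) + bias)

# Example with three inputs; the weights are purely illustrative
print(neuron_output(np.array([0.5, 1.0, -0.2]), np.array([0.8, -0.4, 1.2])))
```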

A large number of SNNs used to solve practical tasks are based on mathematical models of neurons (integrate-and-fire, Izhikevich, Hodgkin-Huxley) [9–12]. Such SNNs use the resources of general-purpose computers, graphics cards, and field-programmable gate arrays to emulate network operation [9–13]. Although these SNNs currently provide impressive performance results [14], any emulation is inferior to a hardware implementation in performance and energy efficiency [15,16]. Therefore, the development of SNNs based on microelectronic elements attracts the active attention of researchers [16–22]. One of the most frequently used functional elements of an SNN is the memristor [16], which serves both to implement adjustable weights and as a functional element of a neuron. The weights are adjusted during network training; in electric circuits, this is implemented by changing the impedance of the lines connecting the outputs and inputs of neurons. A multi-stable resistive memory cell is well suited to implementing a wide range of SNN functionality because its resistance can be varied over a wide range of values. However, using a resistive memory cell as a bi-stable element with an off state (inactive neuron) and an on state (active neuron) is not an optimal solution because of the high probability of unintended resistance modification during operation [23,24].
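As an illustration of how a trained weight can be stored in such hardware, the sketch below maps a weight onto the nearest of a finite set of conductance levels of a multi-stable memristive cell; the linear mapping, the conductance bounds, and the number of levels are assumptions for the example only.

```python
import numpy as np

def weight_to_conductance(w, w_min=-1.0, w_max=1.0,
                          g_min=1e-6, g_max=1e-3, n_states=16):
    """Map a trained weight onto the nearest of n_states discrete conductance
    levels (in siemens) of a multi-stable memristive cell.
    All numeric bounds here are illustrative assumptions."""
    w = np.clip(w, w_min, w_max)
    g = g_min + (w - w_min) / (w_max - w_min) * (g_max - g_min)
    levels = np.linspace(g_min, g_max, n_states)
    return levels[np.argmin(np.abs(levels - g))]

print(weight_to_conductance(0.3))  # conductance level assigned to weight 0.3
```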

In the current study, for the manufacturing of artificial neurons, we propose to use elements with a stable S-shaped I–V characteristic, such as switches based on transition metal oxides exhibiting a metal-insulator transition [25–27]. Implementations of neuron models on the VO<sub>2</sub> switch are described in References [28–34]. However, only a few SNN implementations using such neurons have been proposed so far. Models of VO<sub>2</sub> neurons can be divided into two groups. The first group is the integrate-and-fire model of a neuron [32,33,35], which has three main states: accumulation of the action potential, while the capacitor is charging; spike generation, when the capacitor discharges and the VO<sub>2</sub> switch goes into a highly conductive state; and inactivity of the neuron. The discharge time of the capacitor is treated as a post-firing refractory period [30,32], during which the initiation of a second pulse is impossible because the low resistance of the switch shunts the circuit. The second group covers neuron circuits that include an inductance and can generate a burst mode, similar to the FitzHugh-Nagumo and FitzHugh-Rinzel models [34,35].
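To make the first group of models concrete, the sketch below reproduces the described cycle behaviorally: the membrane capacitor integrates the input current, the switch turns on above a threshold voltage and discharges the capacitor, and the discharge interval acts as the refractory period. The circuit values, the simple exponential discharge, and the constant-current input are assumptions for illustration, not the parameters of the cited VO<sub>2</sub> circuits.

```python
import numpy as np

# Illustrative parameters only (not taken from the cited VO2 circuits)
C = 1e-9        # membrane capacitance, F
R_LEAK = 1e5    # leak resistance, ohm
V_TH = 1.0      # switch turn-on (insulator-to-metal) threshold, V
R_ON = 1e2      # switch resistance in the metallic (on) state, ohm
DT = 1e-7       # simulation time step, s

def simulate(i_in, n_steps=2000):
    """Integrate-and-fire cycle with a threshold switch: charge the capacitor
    until V_TH, then discharge it through the low on-resistance (refractory)."""
    v, spikes, refractory = 0.0, [], 0
    for step in range(n_steps):
        if refractory > 0:
            v *= np.exp(-DT / (R_ON * C))      # discharge via the on-state switch
            refractory -= 1
        else:
            v += DT * (i_in - v / R_LEAK) / C  # leaky integration of input current
            if v >= V_TH:
                spikes.append(step * DT)
                refractory = int(5 * R_ON * C / DT)  # ~5 R_ON*C discharge time
    return spikes

print(simulate(2e-5))  # spike times (s) for a constant 20 uA input current
```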

We propose a leaky integrate-and-fire (LIF) neuron circuit based on a VO<sub>2</sub> switch that can implement excitatory and inhibitory couplings. Based on VO<sub>2</sub> neurons, the operation of a two-layer SNN consisting of nine input and three output neurons was modeled in a SPICE simulator. An image in the form of a 3 × 3 matrix is fed to the network input; at the output, one of the three neurons is activated by a certain input pattern, and this neuron suppresses the remaining output neurons according to the winner-take-all (WTA) principle [36]. Information in the proposed network is coded by the delay of the spikes in the input layer relative to the zero time moment (time to first spike) [37]. Network training is performed according to the spike-timing-dependent plasticity (STDP) scheme [14,16,19–21,38]. As a result, a model SNN based on VO<sub>2</sub> neurons that allows pattern recognition is presented and investigated in this study.
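A brief sketch of the two coding ingredients mentioned above may be useful: the 3 × 3 input pattern is encoded as spike delays relative to the zero time moment (a stronger pixel spikes earlier), and a pair-based STDP rule potentiates a weight when the input spike precedes the output spike and depresses it otherwise. The time constants, amplitudes, and the exact exponential form of the rule are illustrative assumptions, not the parameters of the modeled SPICE network.

```python
import numpy as np

def time_to_first_spike(pattern, t_max=10e-6):
    """Encode a 3x3 pattern as input-layer spike delays:
    a larger pixel value gives an earlier spike (time-to-first-spike coding)."""
    p = np.asarray(pattern, dtype=float).reshape(9)
    return t_max * (1.0 - p / max(p.max(), 1e-12))

def stdp(delta_t, a_plus=0.01, a_minus=0.012, tau=2e-6):
    """Pair-based STDP rule: delta_t = t_post - t_pre.
    Potentiate when the presynaptic spike precedes the postsynaptic one."""
    if delta_t >= 0:
        return a_plus * np.exp(-delta_t / tau)
    return -a_minus * np.exp(delta_t / tau)

pattern = [[1, 0, 1],
           [0, 1, 0],
           [1, 0, 1]]
delays = time_to_first_spike(pattern)
print(delays)                  # nine input spike times, s
print(stdp(5e-6 - delays[0]))  # weight change for one pre/post spike pair
```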
