**4. Results**

The learning curve of the FCNN on the training dataset and the prediction error on the validation dataset are depicted in Figure 8a. We plotted the curves on a logarithmic scale for convenience of analysis. During training, the loss function (mean absolute error) drops and saturates after 100 epochs. The evolution of the prediction error (orange line) shows that the neural network was not overfitted. Figure 8b shows the mean absolute error for different activation functions and numbers of nodes in the hidden layer. Figure 8c demonstrates the predicted positions of the reflectance peaks for all 50 FBGs of the sparse sensors against the measured ones in scaled units. Figure 8d shows the same dependence for a single FBG in nm units. The curves lie in close proximity to the straight line corresponding to the ideal case in which predicted values equal measured values. We found that the FCNN is able to predict the positions of the reflectance peaks of the sparse FBG sensors with a mean absolute error of 10.9 pm. The root mean square error (RMSE) was 18 pm and the coefficient of determination (R2) was 0.9988.

In the same way, we analyzed the performance of the CNN (Figure 9). It can be seen that the neural networks have similar performance; however, the CNN shows the lowest mean absolute error, equal to 7.4 pm, with an RMSE of 14 pm and R2 of 0.9993.
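The three metrics used to compare the networks can be reproduced directly from their standard definitions. The sketch below computes MAE, RMSE and R2 for a set of predicted versus measured peak positions; the peak values shown are hypothetical and serve only to illustrate the calculation, not the reported results.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, RMSE and R^2 between measured and predicted peak positions."""
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(r) for r in residuals) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    mean_true = sum(y_true) / n
    ss_res = sum(r * r for r in residuals)                  # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)      # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

# Hypothetical peak positions in nm, for illustration only
measured  = [1550.010, 1550.025, 1550.040, 1550.055]
predicted = [1550.012, 1550.023, 1550.041, 1550.057]
mae, rmse, r2 = regression_metrics(measured, predicted)
```

Because RMSE squares the residuals before averaging, a single large mismatch inflates it more than it inflates MAE, which is why the two metrics are reported together.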

The RMSE metric is more sensitive to large errors than MAE. It is clearly seen from Figures 8c and 9c that the mismatch between predicted and measured values is not uniformly distributed among the different FBGs. For some FBGs, the mean absolute error does not exceed 5 pm; however, for one FBG it reaches 14 pm. We attribute this to the violation of the uniformity of the temperature field across the fiber sensors during the mechanical translation of the Peltier cells. Indeed, the CNN performed worse for the convex temperature distribution, when only one Peltier cell was used: the RMSE was 14.48 pm and R2 was 0.9967 for the convex temperature distribution, compared to 8.48 pm and 0.9993 for the raised gradient temperature distribution. The issue may be solved by adding more Peltier cells of smaller size or by building a more sophisticated heating/cooling system, for instance, using laser heating in combination with a spatial light modulator.
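The per-grating error analysis above amounts to grouping the validation residuals by FBG index before averaging. A minimal sketch of that grouping is given below; the sample errors and the three-grating setup are invented for illustration and do not correspond to the measured data.

```python
def per_fbg_mae(samples, n_fbg):
    """Group per-sample absolute errors (in pm) by FBG index and
    return the mean absolute error of each grating separately."""
    sums = [0.0] * n_fbg
    counts = [0] * n_fbg
    for fbg_idx, err_pm in samples:
        sums[fbg_idx] += abs(err_pm)
        counts[fbg_idx] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

# Hypothetical (fbg_index, error_pm) pairs for three gratings
samples = [(0, 3.0), (0, -5.0), (1, 2.0), (1, -2.0), (2, 14.0), (2, -10.0)]
maes = per_fbg_mae(samples, n_fbg=3)   # one MAE value per grating
```

A grating whose per-group MAE stands well above the others, as in the third group here, points to a locally distorted temperature field rather than a global model failure.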

**Figure 8.** Performance of the FCNN. (**a**) Learning curve of the model on the training dataset and evolution of the loss function on the validation dataset. (**b**) Mean absolute error of the FCNN for different activation functions and different numbers of nodes in the hidden layer. (**c**) Predicted reflectance peak positions of the FBGs of the sparse sensors against measured values on normalized scales. (**d**) Predicted positions of the reflectance peak for a single FBG of the sparse sensor against measured values.

The computational complexities of the FCNN and the CNN may be estimated as follows:

$$C_{\text{FCNN}} = N_{\text{input}} \cdot N_{\text{hidden}} + N_{\text{hidden}} \cdot N_{\text{output}} \tag{1}$$

$$C_{\text{CNN}} = N_{\text{input}} \cdot D^2 + N_{\text{input}} \cdot N_{\text{fcl1}} + N_{\text{fcl1}} \cdot N_{\text{fcl2}} + N_{\text{fcl2}} \cdot N_{\text{output}} \tag{2}$$

where *Ninput*, *Noutput* and *Nhidden* are the numbers of neurons in the input, output and hidden layers of the FCNN; *D* is the dimension of the convolution filter; and *Nfcl1* and *Nfcl2* are the numbers of neurons in the fully connected layers of the CNN architecture. Computing the FCNN output takes around 10 million operations, while computing the CNN output takes around 16 million. The better performance of the CNN may be related to the increased complexity of its architecture. In any case, the computation time of the neural network output is negligible compared to the hardware acquisition time of the reflectance spectrum. Computing the CNN output from a single sample of the registered reflectance spectrum takes on average 37 milliseconds on a modest graphics processing unit (NVIDIA GeForce GTX 950M).
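Equations (1) and (2) translate directly into multiply-count estimates. The sketch below implements both formulas; the layer sizes passed in are hypothetical placeholders chosen only to illustrate the order of magnitude, since the actual layer dimensions are not restated here.

```python
def fcnn_operations(n_input, n_hidden, n_output):
    """Operation count of the fully connected network, Equation (1)."""
    return n_input * n_hidden + n_hidden * n_output

def cnn_operations(n_input, d, n_fcl1, n_fcl2, n_output):
    """Operation count of the convolutional network, Equation (2)."""
    return (n_input * d ** 2          # convolution with a filter of dimension d
            + n_input * n_fcl1        # first fully connected layer
            + n_fcl1 * n_fcl2         # second fully connected layer
            + n_fcl2 * n_output)      # output layer

# Hypothetical layer sizes, for illustration only
ops_fcnn = fcnn_operations(n_input=10_000, n_hidden=1_000, n_output=50)
ops_cnn = cnn_operations(n_input=10_000, d=3, n_fcl1=1_500, n_fcl2=500, n_output=50)
```

Either way, a forward pass in the low tens of millions of operations is far below the cost that would make inference, rather than spectrum acquisition, the bottleneck of the interrogation loop.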

**Figure 9.** Performance of the CNN. (**a**) Learning curve of the model on the training dataset and evolution of the loss function on the validation dataset. (**b**) Mean absolute error of the CNN for different activation functions and different numbers of nodes in the fully connected layers. (**c**) Predicted reflectance peak positions of the FBGs of the sparse sensors against measured values on normalized scales. (**d**) Predicted positions of the reflectance peak for a single FBG of the sparse sensor against measured values.

## **5. Conclusions**

Thus, a calibration method for a highly dense FBG temperature sensor has been proposed in this paper. It provides the possibility of increasing the spatial resolution of a fiber-optic sensor while avoiding the complications of FBG manufacturing or of the interrogation setup. The method is an alternative to the more common approach, wherein several sparse FBG sensors are coupled into one optical channel. It was shown that deep learning algorithms are capable of mapping the complex reflectance spectrum of the dense sensor with 50 peaks to the positions of the reflectance peaks of the sparse calibrated FBG temperature sensors. The relatively simple architecture of the convolutional neural network allowed us to increase the spatial resolution of the dense FBG sensor fivefold while maintaining a high temperature resolution close to the hardware resolution. Future improvements of the method may involve more sophisticated neural network architectures and increasing the uniformity of the temperature distribution across the fiber sensors.

**Author Contributions:** Conceptualization, A.K. and A.W.; methodology, A.K. and A.W.; software, N.S.; resources, A.D., A.W. and A.K.; writing—original draft preparation, A.K., N.S. and A.W.; writing—review and editing, A.K. and A.W.; visualization, N.S.; supervision, A.K. and A.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** A. Dostovalov and A. Wolf acknowledge the support of the Ministry of Science and Higher Education of the Russian Federation (14.Y26.31.0017). The work of A. Kokhanovskiy was supported by the Russian Science Foundation (Grant No. 17-72-30006-Π).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data presented in this study are available on request from the corresponding author.

**Conflicts of Interest:** The authors declare no conflict of interest.
