2.3.1. Training of the ANN

For this study, a back-propagation network (BPN) algorithm was trained by supervised learning, using the values of the co-located and co-temporal environmental drivers (Figure 3) as inputs and the corresponding DIC, TA, and pH (Figure 2), which constitute the final output of this study, as targets. Since the BPN algorithm is central to much current work on learning in neural networks, and has been independently invented several times (e.g., [75,76]), we adopted it to perform the desired task. The BPN algorithm feeds the input training pattern forward through the network, then back-propagates the associated error, which is finally expressed as a weight adjustment.
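The feed-forward/back-propagation cycle described above can be sketched as follows. This is an illustrative toy example only: the layer sizes, activation functions, learning rate, and synthetic data are assumptions, not the architecture or data actually used in the study.

```python
# Minimal sketch of BPN training: forward pass, error back-propagation,
# and weight adjustment, for a single-hidden-layer network.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 6 environmental drivers -> 3 targets (DIC, TA, pH)
X = rng.standard_normal((32, 6))        # input neurons (drivers)
T = rng.standard_normal((32, 3))        # target output neurons

n_hidden = 8                            # assumed hidden-layer size
W1 = rng.standard_normal((6, n_hidden)) * 0.1   # input -> hidden weights
W2 = rng.standard_normal((n_hidden, 3)) * 0.1   # hidden -> output weights
eta = 0.05                              # assumed learning rate

losses = []
for epoch in range(200):
    # Forward pass: feed the input training pattern through the network
    H = np.tanh(X @ W1)                 # hidden activations
    Y = H @ W2                          # linear output layer
    E = Y - T                           # output error vs. target
    losses.append(float(np.mean(E ** 2)))
    # Backward pass: distribute the error back to the hidden layer
    dW2 = H.T @ E / len(X)
    dH = (E @ W2.T) * (1 - H ** 2)      # tanh derivative
    dW1 = X.T @ dH / len(X)
    # Weight adjustment (gradient descent on the mean squared error)
    W2 -= eta * dW2
    W1 -= eta * dW1
```

Each epoch performs one complete cycle of feed-forward, back-propagation of the error, and simultaneous weight updates for both layers.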

To train the BPN algorithm, the in situ Spring 2016 (i.e., 16–24 April 2016) measurements (Table 1) were used as the values of the output neurons, while their corresponding (i.e., co-located and co-temporal) physico-chemical and biological drivers (Table 2), which were obtained independently, were used as the values of the input neurons. The sampling points spanned the entire North Atlantic Ocean and thus captured the desired wide-ranging variability in both the physico-chemical and the biological conditions, which in turn produced the range of DIC, TA, and pH values observed during that period (Table 1). This process optimized the BPN weights so that the error function was minimized. The predictor (input) and predictand (output) datasets were chosen so that the resulting BPN algorithm could model the output variables under the different physical environmental conditions found within the area of interest.
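The pairing of co-located predictors with in situ predictands can be sketched as below. The variable names, the example drivers, and the min-max scaling step are all assumptions for illustration; the study's exact preprocessing is not specified in this section.

```python
# Illustrative assembly of training pairs: each row is one sampling point,
# with independently obtained drivers as inputs and in situ carbonate-system
# measurements as targets.
import numpy as np

rng = np.random.default_rng(1)
n_stations = 100

# Hypothetical co-located, co-temporal drivers (input neurons)
drivers = {
    "sst": rng.uniform(5, 25, n_stations),        # sea-surface temperature
    "sss": rng.uniform(33, 37, n_stations),       # sea-surface salinity
    "rrs_443": rng.uniform(0.0, 0.01, n_stations),  # remote-sensing reflectance
}
# Hypothetical in situ measurements (output neurons)
targets = {
    "DIC": rng.uniform(2000, 2200, n_stations),
    "TA": rng.uniform(2250, 2450, n_stations),
    "pH": rng.uniform(7.9, 8.3, n_stations),
}

def minmax(a):
    """Scale each column to [0, 1] so no single variable dominates training."""
    return (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))

X = minmax(np.column_stack(list(drivers.values())))   # input neurons
Y = minmax(np.column_stack(list(targets.values())))   # output neurons
```

Normalizing inputs and outputs to a common range is a standard precaution when driver magnitudes differ by orders of magnitude, as they do here (e.g., reflectances vs. salinity).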


**Figure 3** panels: BD 1: Rrs 443, Rrs 469, and Rrs 488 (scale factor: 2 × 10⁻⁶); PD 5: wind stress (Pa; scale factor: 0.0001); BD 6: PIC (log; mol m⁻³; scale factor: 0.2).
**Figure 3.** Multi-source, composite physico-chemical and biological data covering the period of Spring 2016, from which co-located and co-temporal values at points shown in Figure 1 were extracted to train the BPN algorithm. The sources and codes of these multi-source, independent datasets are listed in Table 2.

During this algorithm training, the net output was compared with the target value and the resulting error was calculated. The error factor was then distributed back to the hidden layer, and the weights were updated accordingly. The error factor was calculated in the same manner for all of the units, and their weights were updated simultaneously.
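The error distribution and weight update described above correspond to the standard generalized delta rule for back-propagation (stated here in generic textbook form, not necessarily the study's exact formulation). For an output unit $k$ with target $t_k$, output $y_k$, activation function $f$, and hidden activations $z_j$:

```latex
\delta_k = (t_k - y_k)\, f'(\mathrm{net}_k), \qquad
\Delta w_{jk} = \eta\, \delta_k\, z_j,
```

and the error factor distributed back to a hidden unit $j$ with inputs $x_i$ is

```latex
\delta_j = f'(\mathrm{net}_j) \sum_k \delta_k\, w_{jk}, \qquad
\Delta v_{ij} = \eta\, \delta_j\, x_i,
```

where $\eta$ is the learning rate and $w_{jk}$, $v_{ij}$ are the hidden-to-output and input-to-hidden weights, respectively.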

The ultimate objective was to reduce this training error until the ANN had learned from the training data. The weights were gradually adjusted by means of a learning rule until they optimized the predictive modeling of DIC, TA, and pH, as shown in Equation (1), as follows:
