*2.1. The Fundamental Principle of Artificial Neural Networks*

Artificial neural networks (ANNs) are computing systems inspired by the biological neural networks that constitute animal brains [18–20]. An ANN is built from a collection of connected nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like a synapse in the brain, can transmit a signal to the neurons it connects. An artificial neuron receives signals, processes them, and can in turn signal the neurons connected to it. The signal at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs [18–20]. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds; the weight increases or decreases the strength of the signal at a connection. Neurons may also have a threshold, so that a signal is sent onward only if the aggregated signal crosses that threshold. Typically, neurons are organized into layers [18–20]. Different layers may perform different transformations on their inputs. Signals travel from the input layer to the output layer, possibly after traversing the layers multiple times.
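
To make the computation concrete, the output of a single neuron can be written as $y = \varphi\left(\sum_{i} w_i x_i + b\right)$, where the $x_i$ are the input signals, the $w_i$ are the connection weights, $b$ is a bias term, and $\varphi$ is the non-linear activation function. The sketch below is a minimal illustration of a forward pass through such a layered network; the sigmoid activation, the 3-4-2 layer sizes, and the random weights are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np

def sigmoid(z):
    # Non-linear activation: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(x, W, b):
    # Each neuron computes phi(sum_i w_i * x_i + b): the weighted
    # sum of its inputs passed through the activation function
    return sigmoid(W @ x + b)

rng = np.random.default_rng(0)

# Illustrative 3-4-2 network: 3 inputs, a hidden layer of 4 neurons, 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input -> hidden weights (edges)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden -> output weights

x = np.array([0.5, -1.0, 2.0])  # real-valued input signals
h = layer_forward(x, W1, b1)    # signals pass through the hidden layer
y = layer_forward(h, W2, b2)    # ... and on to the output layer
print(y)
```

During learning, the weights and biases (W1, b1, W2, b2) would be adjusted, for example by gradient descent, to reduce the error of the network's output; only the forward pass is shown here.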
