#### *4.1. Product Structure*

The structure of the product intended for assembly was represented by a modified directed assembly state graph. In this digraph, the parts (or already assembled subassemblies) are marked as vertices, while the directed edges indicate the possible sequences (paths) of joining them. It was assumed that assembly proceeds by attaching to the assembly state of the n-th stage either a single part or a subassembly consisting of several parts (treated as a single assembled unit). Each directed edge connecting two vertices carries information based on the assessment criteria (derived from the DFA rating factors) for the corresponding transition between assembly states. The digraph can be generated automatically from a CAD assembly model. Given the specific criteria of a particular assembly process, the basis for carrying out the ASP is obtaining the set of all permitted and feasible assembly sequences. The matrix notation of assembly units (e.g., the matrix of assembly states or the matrix of the assembly graph) enables all variants of the joining order to be determined with an appropriate algorithm; this procedure is not discussed in this paper, but it comes down to finding all paths in the digraph leading from the starting vertex xs, representing the base part, to the end vertex xe, i.e., the last state of the fully assembled product (xs, ..., xe) [23–27].
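The path-finding step described above can be sketched with a simple depth-first search; the graph below is a hypothetical three-part example (the vertex names `A`, `B`, `AB` are illustrative, not taken from the paper):

```python
# Sketch: enumerating all feasible assembly sequences as paths in a
# directed assembly state graph, from the base-part state xs to the
# fully assembled state xe. Hypothetical 3-part example.
def all_paths(graph, start, end, path=None):
    """Depth-first enumeration of every path from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:          # an assembly state is never revisited
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

# xs = base part state, xe = last state of the assembled product
graph = {
    "xs": ["A", "B"],
    "A": ["AB"],
    "B": ["AB"],
    "AB": ["xe"],
}
sequences = all_paths(graph, "xs", "xe")
```

Each returned path corresponds to one admissible order of joining the assembly units.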

#### *4.2. Assumptions of the Neural Network*

In order to develop a predictive model for the evaluation of the assembly sequence, a dataset for training the neural network was prepared, comprising input and output data. The input data were related to the DFA assessment criteria, divided into four groups: stability, the ability to change the orientation of the assembly unit, the ease of joining parts, and space availability during the joining process. The assembly time constituted the output data. An appropriate number of training examples and the links between them were collected. The numerical data entered into the network were normalized with the min–max function to values ranging from 0 to 1, according to a linear transformation, in order to ensure their uniformity and the compatibility of the variables during signal processing by the neural network. For the assembly time prediction task, the network model with the best performance was selected; it was obtained by empirically testing various parameters: the number of hidden neurons, the activation function and the network-learning algorithm. The neurons in the hidden layers process the signals from the input neurons and transform them into intermediate data that are then passed on to the output neuron. Hidden neurons allow complex relationships between the data to be modeled; their number should be chosen so that the network structure does not become excessively large while still ensuring correct data processing. The network model also includes an additional neuron with a constant value of +1, which acts as a bias (polarization) input and, as a result, improves the stability of the network during training. The neuron activation functions in the hidden and output layers, which were used in modeling the structure of the neural network, are shown in Table 5.
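The min–max normalization described above can be sketched as follows; the sample values are illustrative, not taken from the paper's dataset:

```python
# Sketch: linear min-max rescaling of one input variable to [0, 1],
# as used to make the network inputs uniform and comparable.
def min_max_normalize(values):
    """Linearly rescale a list of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                     # degenerate column: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

times = [12.0, 18.0, 30.0]           # hypothetical raw assembly times [s]
normalized = min_max_normalize(times)
```

In practice each input variable (and the output time) is scaled independently with its own minimum and maximum.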



The following algorithms were used to train the neural network: steepest descent, gradient scaling and Broyden–Fletcher–Goldfarb–Shanno (BFGS). Their principle of operation comes down to minimizing the error function through an iterative change of the weights describing the neurons. The steepest descent method searches for the minimum of the error function along a given direction until that direction becomes tangential to a contour line of constant objective function value; each successive search direction is orthogonal to the previous one. The gradient scaling method does not require as many search directions. It consists of finding the correct direction on the multidimensional, parabolic-shaped error surface, drawing a straight line in that direction and determining the minimum of the function over all points on that line. Finding the minimum along a given direction yields a new search direction from that point, and the process is repeated, shifting steadily towards decreasing values of the error function, until the minimum is found. The second derivative of the function along the previously explored directions remains zero in the following steps of the algorithm, which is possible owing to the use of directions conjugate with those selected earlier. The Broyden–Fletcher–Goldfarb–Shanno method changes the weights of individual neurons after each iteration of the algorithm by taking into account the average error gradient. The search for the minimum of the error function begins with the steepest descent method and then proceeds by inversely estimating the matrix of second-order partial derivatives (the Hessian). The division of the examples introduced into the network (input and output data) into training, testing and verification subsets also influences whether the expected network results are achieved.
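The idea shared by these training algorithms, an iterative weight update that moves against the gradient of the error function, can be sketched as follows; the quadratic error surface, learning rate and step count are illustrative assumptions, not values from the paper:

```python
# Sketch: steepest descent on a simple 2-D quadratic error surface
# E(w) = w0**2 + 4 * w1**2, illustrating the iterative weight update.
def gradient(w):
    """Gradient of the illustrative error function E(w)."""
    return [2 * w[0], 8 * w[1]]

def steepest_descent(w, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimize the error."""
    for _ in range(steps):
        g = gradient(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

w_min = steepest_descent([3.0, 2.0])   # converges towards the minimum at (0, 0)
```

Conjugate-direction and quasi-Newton (BFGS) methods refine this basic scheme by choosing better search directions, so they typically need far fewer iterations.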
Sufficient data should be provided for each of these subsets:

1. The training subset, used to learn the relationships between the data and to adapt to changing conditions. Cases from this set affect the change of the network parameters, e.g., the weight values assigned to individual neurons.
2. The testing subset, used to check the results obtained by the network and its ability to generalize. Cases from this set are not used to modify the network parameters.
3. The verification subset, used to verify the network results on quantitative or qualitative data that were not used before.

The selection of the best parameters of the artificial neural network was made on the basis of the correlation coefficient, which determines the effectiveness of the learning and testing process, as well as the interpretation of the standard error function, i.e., the sum of the squared differences between the expected values and those obtained in the output neuron:

$$\text{SOS} = \sum\_{i=1}^{n} (y\_i - y\_i^\*)^2 \tag{1}$$

where *n* is the number of examples used to train the neural network; *y<sub>i</sub>* is the expected value in the output neuron; and *y<sub>i</sub>*<sup>∗</sup> is the actual value in the output neuron.
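Equation (1) can be evaluated directly; a minimal sketch with illustrative expected and predicted output values:

```python
# Sketch: sum-of-squares (SOS) error between expected and actual
# network outputs, as defined in Equation (1).
def sos_error(expected, actual):
    """Sum of squared differences over all training examples."""
    return sum((y - y_star) ** 2 for y, y_star in zip(expected, actual))

# Hypothetical normalized assembly times: expected vs. network output
err = sos_error([0.2, 0.5, 0.9], [0.25, 0.45, 0.95])
```

A smaller SOS value indicates that the network outputs track the expected values more closely.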

#### **5. Example of Determining the Assembly Sequence**

An example of determining the assembly sequence using artificial neural networks was prepared for a tractor door consisting of eight main assembly units: door–welded structure (part no. 1), lock (part no. 2), cassette lock (part no. 3), door reinforcement bar with passenger's handle (part no. 4), lock cover (part no. 5), door seal (part no. 6), lower glass (part no. 7) and upper glass (part no. 8). Next, the base part from which the assembly begins was selected (door–welded structure, part no. 1). It was assumed that the assembly of subsequent units takes place by adding another assembly unit to the assembly state of the n-th stage. The digraph and the state matrix of the structural constraints of the assembly were then built (shown in Figure 3). Using the selected graph-searching algorithm (Dijkstra's algorithm), all assembly sequences feasible with respect to the structural constraints were then determined; these constitute the basis for further analysis.
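As an illustration of the graph search, a minimal Dijkstra sketch on a hypothetical fragment of the assembly state digraph is given below; the states, edge weights (standing in for DFA-based transition costs) and the restriction to three parts are all illustrative assumptions, not the data from Figure 3:

```python
import heapq

# Sketch: Dijkstra's algorithm returning the cheapest path between two
# assembly states. States are named by the joined part numbers.
def dijkstra(graph, start, end):
    """Return (cost, path) of the cheapest start-to-end path."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

graph = {
    "1":     [("1-2", 2.0), ("1-6", 1.0)],   # hypothetical DFA-based costs
    "1-2":   [("1-2-6", 1.5)],
    "1-6":   [("1-2-6", 3.0)],
    "1-2-6": [],
}
cost, path = dijkstra(graph, "1", "1-2-6")
```

With edge weights attached, the same digraph that encodes feasibility can also rank the feasible sequences by total cost.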

**Figure 3.** Schematic representation of the artificial neural network.

The neural network model for planning the assembly sequence was selected on the basis of a multiple sampling algorithm that randomly generates 20,000 network variants with the following variable parameters:


The constant parameters of the neural network are:


The neural network with the best parameters, confirmed by the highest correlation coefficient for the test and verification data, is given in Table 6. The model of the MLP 4–2–1 network structure is shown in Figure 4. The network selected for subsequent research consists of four input neurons, two hidden neurons and one output neuron. The Broyden–Fletcher–Goldfarb–Shanno algorithm was used for the network learning process; the neurons in the hidden layer were activated with the exponential function and the output neuron with the sine function. These network parameters made it possible to observe a strong correlation between the data, with the highest coefficient in the verification group (R<sup>2</sup> > 0.9) and the smallest SOS error (<0.1). This group is the most important: it concerns data not included in the earlier stages of the analysis and therefore presents the prediction results most reliably.
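The correlation coefficient used to compare network variants can be computed as the Pearson coefficient between the expected and predicted outputs; a minimal sketch with illustrative normalized values:

```python
# Sketch: Pearson correlation coefficient between expected and
# predicted (network output) values.
def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical normalized expected vs. predicted assembly times
r = correlation([0.1, 0.4, 0.8], [0.12, 0.41, 0.79])
```

Values close to 1 indicate that the network predictions track the expected outputs almost linearly.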

**Table 6.** Parameters of the MLP 4–2–1 neural network selected for the prediction task.


**Figure 4.** The structure of the MLP 4–2–1 artificial neural network.

The minimum of the error function was sought in the successive network training cycles. For the MLP 4–2–1 network, the optimum of the function was reached in the 12th epoch, in which the stabilization of the error value became noticeable. Figure 5 shows the dependence of the network error value on the number of iterations of the training algorithm.
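Detecting the epoch at which the error curve stabilizes can be sketched as follows; the error values and the stabilization tolerance are illustrative assumptions, not the data behind Figure 5:

```python
# Sketch: finding the epoch after which the epoch-to-epoch change of
# the error value stays below a chosen tolerance.
def stabilization_epoch(errors, tol=1e-3):
    """First epoch index after which every error change is below tol."""
    for i in range(1, len(errors)):
        if all(abs(errors[j] - errors[j - 1]) < tol for j in range(i, len(errors))):
            return i
    return len(errors) - 1

# Hypothetical SOS error values recorded after each training epoch
errors = [1.0, 0.6, 0.35, 0.2, 0.12, 0.08, 0.06, 0.055, 0.0545, 0.0544]
epoch = stabilization_epoch(errors)
```

Such a criterion is one simple way to decide when further training epochs no longer improve the network.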

**Figure 5.** Graph of the error function in the successive epochs of the training algorithm.

The application of the proposed solution requires comparing the actual data obtained at the network output with the expected data; the comparison is summarized in Table 7, while the linear regression charts for these data are shown in Figure 6.


**Table 7.** Comparison of the prediction results with the expected data (randomly selected examples of the results).

**Figure 6.** Linear regression chart for the normalized expected and actual values (randomly selected examples of the results).
