*Article* **Detecting Machining Defects inside Engine Piston Chamber with Computer Vision and Machine Learning**

**Marian Marcel Abagiu, Dorian Cojocaru \*, Florin Manta and Alexandru Mariniuc**

Faculty of Automation, Computers and Electronics, University of Craiova, 200585 Craiova, Romania

**\*** Correspondence: dorian.cojocaru@edu.ucv.ro

**Abstract:** This paper describes the implementation of a solution for detecting machining defects inside the piston chamber of an engine block. The solution was developed for an automotive manufacturer, and the main goal of the implementation is the replacement of the visual inspection performed by a human operator with a computer vision application. We started by exploring different machine vision applications used in the manufacturing environment for several types of operations, and how machine learning is being used in robotic industrial applications. The implementation re-uses hardware that is already available at the manufacturing plant, decommissioned from another system. The re-used components are the cameras, the IO (Input/Output) Ethernet module, sensors, cables, and other accessories. This hardware is used for image acquisition, while a new system is implemented for processing, with a human–machine interface, user controls, and communication with the main production line. The main results and conclusions highlight the efficiency of CCD (charge-coupled device) sensors in the manufacturing environment and the robustness of the machine learning algorithms (convolutional neural networks) implemented in computer vision applications (thresholding and regions of interest).

**Keywords:** computer vision; sensors; machine learning; industry; manufacturing; robotics

#### **1. Introduction**

Computer vision applications are being used intensively in the public sphere for tedious tasks, e.g., surveillance and license plate detection and reading, as well as in robotics applications for tasks such as object detection, quality inspection, and human–machine cooperation [1–3].

In the initial stages of development, implementing a computer vision application (machine vision or robotic vision versions) was considered an exceedingly challenging task. With the increase of the processing power, new hardware development, and new, efficient, and performant image sensors, the development of such applications was made significantly easier [4,5].

A huge boost in popularity for image processing and computer vision applications was achieved with the rise of the Python programming language, the implementation of various image processing frameworks such as OpenCV (for C++ initially and Python afterwards), and the development of machine learning and deep learning frameworks [6,7].

Solutions implemented in the robotic manufacturing environment are based on cameras using CCD sensors and industrial systems, which consider the computer vision application as a black box providing a status. This approach proved to be robust and efficient. The needs of industry, however, are now changing and becoming more complex. The control systems also need to integrate with computer vision applications to provide full control of the production process [8,9].

The current global and geopolitical context from the last years, the tendency for accelerated car electrification, and recent innovation from Industry 4.0 have encouraged car
manufacturers to integrate more computer vision applications in the production process. Applications are mostly used for counting parts and ensuring traceability, e.g., barcode readings, QR code readings, OCR, and defect detection in distinct stages of the manufacturing process, e.g., painting, assembly, and machining. In this environment more complex applications can be found, e.g., high precision measurement tools based on computer vision, complex scanning, and applications based on artificial intelligence (machine learning) [10].

The solution presented in this paper is based on the integration of a CCD sensor camera with a robotic control system that is also able to provide all the information needed in the robotic manufacturing environment for traceability and planning while detecting complex defects in real time. Two algorithms are used for detecting a class of defects inside the cylinder chamber of an engine block. The main role of the computer vision algorithms is to reduce the number of input features for the convolutional neural network by isolating the region of interest (the walls of the cylinder chamber). The role of the convolutional neural network is to process the newly generated image and provide a decision.

The future actions of the entire robotic system that manipulates these mechanical parts depend on the results provided by the visual inspection system. Moreover, based on the global results obtained over the entire visual inspection process, reprogramming or even reconfiguration of the robotic systems involved in the manufacturing process of the mechanical parts will take place [11].

In order to implement this solution, the goal was to develop a computer vision system that is able to detect machining defects inside the cylinder chamber of the engine block. This was achieved through the following steps:


#### **2. Related Work**

Defect detection technologies are used in the manufacturing industry for identifying problems on product surfaces (spots, pits, scratches, and color differences) and in internal structures (holes, cracks, and other flaws). Computer vision defect detection applications must be fast, non-destructive, and accurate, and they have become widely used in recent years. Zhou et al. [12] developed an artificial imaging system for the detection of discrete surface defects on vehicle bodies using a three-level scale detection method for extracting the defects that might appear on the vehicle surface. The method distinguishes the defect location by comparing the features of the background with those of the defect images, which allows for detection in concave areas or areas with abrupt changes in the surface style lines, edges, and corners. It extracts defects that are hardly perceived by the human eye.

In various computer vision industrial applications, the basic setups for image acquisition are similar. For example, in the automotive manufacturing industry, a basic computer vision application needs a light source alongside a camera and a computer powerful enough to process the acquired image. LEDs are mostly used as light sources. LED light sources offer high efficiency and versatility when it comes to triggering and dimming control. Infrared light sources used with monochrome industrial cameras (or as infrared panels), as well as multiple light sources, are frequently used. For settings and environments closer to the laboratory, in the majority of computer vision applications, cameras and light sources are placed in a light-absorbing room where the lighting can be controlled. A special application, e.g., an assembly robot, may require a special camera. In this case, the light source and the camera will be attached to an actuator (servomotor, robotic arm, etc.). Industrial cameras contain CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) sensors, and the lenses are chosen with the environment and the vision application in mind. To achieve real-time processing, the software algorithms must be executed on powerful machines. Algorithms are developed by customizing them to each particular application and each hardware configuration (camera and lighting). For detecting different defects of a car after the painting process, a four-camera setup can be used to achieve stable light, and multiple cameras (e.g., five cameras) can acquire the same affected area from multiple angles (under different light conditions). In the acquired images, the region of interest is isolated, several specific filters for noise reduction are applied, and a feature extraction algorithm (specific to the vision application) isolates the different defects detected [11].

When a sufficient amount of data can be acquired and used, a deep learning model trained with supervised learning is adopted instead of conventional recognition based on feature descriptors. A classification module and an attention module integrated with an image segmentation module are used for weakly supervised learning. The classification module extracts the defect features from the image. The purpose of the integrated module is the detection of different irregular defects (e.g., for metal pieces) which can appear after casting or shaping processes. The segmentation module is used to determine whether a pixel from the image is associated with a defect area [13].

Other common defect detection methods are ultrasonic testing, osmosis testing, and X-ray testing [14]. The ultrasonic methods are used in the detection of defects in the internal structure of the product under test (like X-ray testing). These methods are based on filtering for feature extraction and the ability to describe the identified defect.

Alongside common methods, in recent years, deep-learning defect detection methods have been used in various applications. Some of these algorithms are based on the use of a deep neural network, e.g., a convolutional neural network, residual networks, or recurrent neural networks. Computer vision defect detection applications have shown good accuracy in binary defect detection [15].

In their paper, Zhonghe et al. [16] address the state of the art in machine vision-based defect detection, presenting an effective method to reduce the adverse impact of product defects. They claim that artificial visual inspection is limited in applications where a failure can have dangerous consequences, because of its low sampling rate, slow real-time performance, and relatively poor detection confidence.

Machine vision replaces artificial visual inspection and can cover the whole electromagnetic spectrum, from gamma rays to radio waves. Machine vision has a great ability to work in harsh environments for a long time and greatly improves real-time control and response. Therefore, it can improve many robotic manufacturing processes to support industrial activities. In their paper, the proposed industrial visual inspection system consists of three modules: optical illumination, image acquisition, and image processing with defect detection. It is stated that an optical illumination platform should be designed first. Then, CCD cameras or other acquisition hardware should be used in such a way that the information carried to the computer has an extremely high quality. Finally, either classical image processing algorithms or, better, deep learning algorithms should be used, which are able to extract the features and perform classification, localization, segmentation, and other image operations, image processing being the key technology in machine vision. In industry, this architecture can be used as a guideline for designing a visual inspection system. The paper gives the inspection of a highly reflective metal surface as an example of designing such a system.

Wang Liqun et al. [17] focused on the idea of detecting defects using deep learning. They also based their research on convolutional neural networks for training and learning on big sets of acquired image data, and they claim that these can effectively extract the features and classify them accurately and efficiently. They use PatMax software, which can recognize twenty-five different filter shapes and determine the location of the filter while being 99% accurate. The process first collects the information from the camera, reads the preprocessed image, and trains on the processed images, then establishes a complete information model and obtains the target image. A diffuse bright LED backlight illumination is used. Light-sensitive components were used for image acquisition, wavelet smoothing was used for image preprocessing, and afterwards Otsu thresholding was used to segment the image. In the end, a support vector machine classifier was designed for defect classification. The goals should be high precision, high efficiency, and strong robustness. Therefore, the system needs excellent coordination of the three modules. The features are afterwards matched with the template, and the quality of the assembly process is judged according to the matching result. Difficulties remain in detecting component defects due to the variety of vehicle parts, which have different shapes, and due to the fact that the defects are very diverse. Moreover, the image structure of the parts is more complex, incorporating irrelevant factors around the image and a lot of noise, which makes feature extraction difficult. The authors managed to improve the VGG16 network model structure by adding the Inception-v3 module, increasing the width of the model in addition to its depth. The resulting accuracy was improved from 94.36% up to 95.29%, which is almost 1% more accurate than previously.

Zehelein et al. [18] presented a way of monitoring the suspension dampers of autonomous driving vehicles between scheduled inspections. Their theory claims that in a normal vehicle, the driver always monitors the health state and reliability of the vehicle, and that it could be dangerous for an autonomous driving vehicle not to be monitored between inspections. To address this, they discussed one of the problems in defect diagnosis while driving, namely the robustness of such a system with respect to the vehicle's configuration. The main problems are the variable factors, such as tire characteristics, mass variation, or varying road conditions. They decided to combine a data-driven approach with a signal-based approach, which led to a machine learning algorithm that can incorporate the variations in different vehicle configurations and usage scenarios. In their paper, it is stated that they used a support vector machine for classifying signal features, and they also needed features that can distinguish between different health states of the vehicle. Convolutional neural networks can deal with multidimensional data and demonstrate good feature extraction, which makes them well suited for the task. They used the driving data of the longitudinal and lateral acceleration as well as the yaw rate and wheel speeds. Using FFT (fast Fourier transform) representations of the input data was shown to give the best results regarding classification performance.

The authors were not able to check the real-time implementation of the system because there is no specific value for the computing power of an automotive ECU (electronic control unit). Therefore, the algorithm might not run optimally for every vehicle on the market [19]. They also propose a feature extraction method and divide the defects into three categories (pseudo-defects, dents, and scratches) using a linear SVM (support vector machine) classifier. Their detection results reached an accuracy of 95.6% for dents and 97.1% for scratches, while maintaining a detection speed of 1 min and 50 s per vehicle. They state that their system could be improved using deflectometry techniques for image defect contrast enhancement, or by improving the intelligence of the method, but the latter could slow down the detection speed. Moreover, if they could use parallel computing techniques on graphics processing units, the speed of detection could be further improved.

A conventional computer vision approach implements the following algorithm [20,21]:


A drawback of this approach is the high processing time for the high-resolution input image and the volatile environment from which the image is acquired, which leads to repeatability issues caused by the dynamic shapes and contrast of the emulsion marks [22].

In this paper, a description of a combination between a conventional approach and a machine learning approach is given.

#### **3. Solution Overview**

*3.1. Process and Issue Description*

The purpose of the inspection machine is to detect a certain class of defects, to sort the engine blocks on the production line, and to wash the bottom part of the block using a special system for removing dirt, dust, and other mechanical process impurities. When a defect is detected, the engine block is removed from the production line through a conveyor. In the washing process, special solvents and emulsions are used.

The CCD sensor camera was configured to match a wide range of environmental conditions, with a fixed exposure, fixed gain, and a special set of lenses. The specifications of the lens used are further described in Figure 1. The implementation completes the already installed inspection machine by adding a new station with the purpose of automating the visual inspection performed until now by the operator. The complete layout of the process can be observed in Figure 2.

**Figure 1.** Fujinon lens dimensions.

The next step after washing will be the drying and cleaning of the block with an airflow through the bottom part and the cylinder chambers. The drying process leaves behind dried cleaning emulsion, which will make the automated inspection more difficult. In Figure 3a,b, the traces of dried emulsion can be observed on a flawless cylinder chamber. Figure 4a,b describes the defect to be automatically detected from the cylinder chamber alongside dried emulsion.

The engine block is made from cast iron with green sand insertions. In the process of filling the mold with liquid metal, some environmental factors can interact with the product in an unwanted way. A damaged sand core can generate inclusions inside or at the surface of the part. Another defect is generated by the impossibility of removing all the gases from the mold when the liquid metal comes into contact with the surface of the mold. This process generates blowholes.

**Figure 2.** Architecture of the washing and sorting machine on the production line.

**Figure 3.** No defects in the cylinder chamber. In (**a**,**b**), a part with no defects can be observed; dried emulsion marks are indicated by the arrows.

**Figure 4.** Defects in the cylinder chamber. In (**a**–**e**), parts with machining defects can be observed. (**a**) presents a barely observable defect, and (**b**–**e**) present more prominent ones.

#### *3.2. Architecture Description*

Figure 5 describes the system architecture, including the sensor. The solution was implemented using a single camera capturing an image of the area that needs to be inspected. For moving the camera, a PLC that is controlled directly by the main inspection machine was used. When the PLC receives the trigger, a stepper motor is actuated. The camera is moved over each of the cylinders for acquisition and is connected to the stepper motor through a coupling that creates a linear movement. When the camera is in position, the acquisition and processing system is triggered.

**Figure 5.** System architecture of the video inspection operation.

To ensure a higher degree of repeatability in the image acquisition, an infrared LED flash is used. The LED is controlled by the acquisition system. The main controller used for acquisition and processing is a Jetson Nano development kit, which has high computing power, especially for artificial intelligence applications. The Jetson interacts with the PLC through an industrial remote I/O module from Brainbox, controlling it over Ethernet. The Brainbox module also triggers the LED flash. The control of the CCD camera is also implemented over Ethernet, in this case PoE (Power over Ethernet), because the camera is also powered by the Ethernet switch.
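The exact protocol used to drive the remote I/O module over Ethernet is not detailed above. As an illustration only, the sketch below assumes the module exposes its digital outputs as Modbus TCP coils and uses the pymodbus library; the IP address and coil number are hypothetical.

```python
from time import sleep

from pymodbus.client import ModbusTcpClient  # pymodbus >= 3.x

IO_MODULE_IP = "192.168.0.10"  # hypothetical address of the remote I/O module
FLASH_COIL = 0                 # hypothetical coil wired to the LED flash output


def trigger_flash(pulse_s: float = 0.05) -> None:
    """Pulse the LED flash output just before image acquisition."""
    client = ModbusTcpClient(IO_MODULE_IP)
    if client.connect():
        client.write_coil(FLASH_COIL, True)   # flash on
        sleep(pulse_s)
        client.write_coil(FLASH_COIL, False)  # flash off
        client.close()
```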

#### *3.3. Hardware Description*

For implementing the computer vision solution, the following hardware components were used:


The Jetson controller is connected to the Ethernet switch alongside the camera with a CCD sensor and remote I/O module. The LED Flash is connected with a pull-up resistor (24 V) to the remote I/O module and is controlled via ethernet requests by the Jetson controller.

#### *3.4. Software Description*

The application was implemented using Python programming language. All the used hardware components integrate Python software development kits provided by the manufacturer. Therefore, the choice of programming language for implementation came naturally. The human–machine interface was implemented using the PyQt framework (PyQt5-Qt5 version 5.15.2, developed by Riverbank Computing, open-source), which is a Python wrapper of the open-source framework Qt developed by the Qt company. Software was designed to cover all of the manufacturing necessities, e.g., logging, user management, process handling, and so on [21–30]. In Figure 6, a complete sequence diagram of the process can be observed.

**Figure 6.** Process sequence diagram.
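As a structural illustration of the PyQt5-based HMI layer only (the window content, widget names, and behavior below are simplified assumptions, not the plant application), a minimal status window could look as follows.

```python
import sys

from PyQt5.QtWidgets import QApplication, QLabel, QPushButton, QVBoxLayout, QWidget


class InspectionHmi(QWidget):
    """Minimal status window: shows the last verdict and offers a manual trigger."""

    def __init__(self):
        super().__init__()
        self.setWindowTitle("Cylinder chamber inspection")
        self.verdict_label = QLabel("Waiting for trigger...")
        self.trigger_button = QPushButton("Manual trigger")
        self.trigger_button.clicked.connect(self.on_manual_trigger)
        layout = QVBoxLayout(self)
        layout.addWidget(self.verdict_label)
        layout.addWidget(self.trigger_button)

    def on_manual_trigger(self):
        # In the real application this would start acquisition and processing.
        self.verdict_label.setText("Acquisition requested")


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = InspectionHmi()
    window.show()
    sys.exit(app.exec_())
```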

#### *3.5. Processing Algorithm Description*

The processing algorithm has two main parts: the conventional processing and the prediction using a machine learning model. As input, the algorithm takes a grayscale image with a resolution of 1920 × 1200. In the conventional processing, the ROI of the inner chamber of the cylinder is extracted by the algorithm and normalized, and a Gaussian filter is applied. After applying the filter, adaptive thresholding is also performed by the algorithm, because the defects have a lower grayscale level and can be isolated this way. When the defects are isolated by the thresholding, they are marked with a contour function. This function returns the area of the detected contours (each detected contour represents a possible defect).

The area can be evaluated for establishing a verdict. The conventional processing works very well when there are no significant traces of cleaning emulsion on the cylinder. When the emulsion becomes mixed with dust, the traces become increasingly noticeable and have a lower grayscale level. Because of that, the thresholding is no longer able to distinguish between traces of emulsion and actual defects [22–30].
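The conventional stage described above can be sketched with OpenCV as follows; the ROI coordinates, kernel size, thresholding block size, and area threshold are illustrative assumptions.

```python
import cv2


def conventional_defect_check(gray, roi, area_threshold=50.0):
    """Sketch of the conventional stage: ROI extraction, normalization, Gaussian
    filtering, adaptive thresholding, and contour-area evaluation."""
    x, y, w, h = roi                                   # ROI of the cylinder chamber wall
    chamber = gray[y:y + h, x:x + w]
    chamber = cv2.normalize(chamber, None, 0, 255, cv2.NORM_MINMAX)
    blurred = cv2.GaussianBlur(chamber, (5, 5), 0)
    # Defects are darker than the machined surface, so keep pixels below the local threshold
    mask = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    areas = [cv2.contourArea(c) for c in contours]     # each contour is a defect candidate
    return any(a > area_threshold for a in areas), areas


# Example call on a 1920 x 1200 grayscale frame:
# defective, areas = conventional_defect_check(frame, roi=(400, 100, 1100, 1000))
```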

The second part of the processing algorithm is the convolutional neural network, implemented using the PyTorch framework. The first layer takes as input the three RGB channels of the image and expands them into eight feature maps for the next layer. The second layer is a max pooling layer with a kernel size of 3 × 3 and with padding enabled for looping through the entire image. The third layer is another convolutional layer, similar to the first layer, followed by a fully connected layer [31–33].
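A minimal PyTorch sketch of the layer stack described above is given below; the channel count of the second convolutional layer, the pooling stride, the global average pooling before the fully connected layer, and the two-class output are assumptions, not the exact production architecture.

```python
import torch
from torch import nn


class DefectNet(nn.Module):
    """Sketch of the described stack: conv -> max pooling -> conv -> fully connected."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),         # 3 RGB channels -> 8 feature maps
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),  # 3 x 3 pooling with padding
            nn.Conv2d(8, 8, kernel_size=3, padding=1),         # second convolutional layer
            nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 1)),  # collapse spatial dimensions (assumption)
            nn.Flatten(),
            nn.Linear(8, num_classes),     # fully connected layer producing the verdict
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


# Example: one 1920 x 1200 RGB frame
logits = DefectNet()(torch.randn(1, 3, 1200, 1920))
```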

The spatial size of the output of a convolutional layer is obtained as:

$$\frac{W - K + 2P}{S} + 1 \tag{1}$$

where *W* is the input volume size, *K* is the kernel size of the convolutional layer neurons, *S* represents the stride, and *P* is the amount of zero padding at the borders of the input [20]. For example, a 1920-pixel-wide input processed with a 3 × 3 kernel, stride 1, and padding 1 keeps its width: (1920 − 3 + 2·1)/1 + 1 = 1920.

A typical pooling layer is calculated with the following formula:

$$f(x,y) = \max_{a,b \in \{0,1\}} S_{2x+a,\,2y+b} \tag{2}$$

The activation function for the convolutional layers is ReLU (rectified linear unit), the following non-saturating activation function, which removes the negative values from the activation map by setting them to zero [20]:

$$f(\mathbf{x}) = \max(0, \mathbf{x}) \tag{3}$$

In Figure 7, an architecture of the neural network is proposed [19–30].

**Figure 7.** Convolutional network architecture.

Hyper-parameters:


#### **4. Results**

The model was trained using old defective engine blocks and, at fixed intervals, retrained with new batches of images that had been evaluated by the model as defects. The false defects were labeled as no defects in the dataset, and the actual defects were added to the dataset. The initial model did not perform very well, as can be observed in the results, due to the high number of features that needed to be extracted and processed before reaching a verdict.

The architecture presented in Figure 7 has five convolutional layers with max pooling and a fully connected layer. The final image has a resolution of 60 × 37 with 128 unique feature maps. In Figure 8, it can be seen that the loss function generated during training of the new model shows better performance, and the model is able to detect the defects much faster.

**Figure 8.** Training progress of the convolutional neural network. The X-axis shows the loss function value, and the Y-axis shows the corresponding epoch.
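A minimal training-loop sketch is given below; the dataset folder layout, image size, optimizer, learning rate, and number of epochs are assumptions, and DefectNet refers to the network sketched in Section 3.5 (imported from a hypothetical module).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from defect_net import DefectNet  # network sketched in Section 3.5 (hypothetical module)

# Hypothetical folder layout: dataset/defect and dataset/no_defect
data = datasets.ImageFolder(
    "dataset",
    transform=transforms.Compose([transforms.Resize((600, 960)), transforms.ToTensor()]))
loader = DataLoader(data, batch_size=16, shuffle=True)

model = DefectNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(20):
    running_loss = 0.0
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: loss {running_loss / len(loader):.4f}")  # loss curve as in Figure 8
```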

Below, the training results can be observed:


The main indicators tracked during commissioning were the number of false detections reported by the neural network and the rate of detection of real defects. The number of false detections was initially high due to emulsion marks and system calibrations (refer to Figure 9). After the dataset was established and the system calibrated, the indicator decreased substantially, below a desired threshold, such that the algorithm can be considered reliable enough.

**Figure 9.** Evolution of false detections.

It was observed that after including images with prominent marks of emulsion and with small imperfections generated by improper lighting (camera stabilization), the number of false detections decreased considerably (refer to Figures 10 and 11).

**Figure 10.** Evolution of detections. The blue line represents the real number of defects provided to the algorithm, and the green line represents the defects detected by the algorithm over 8 days.

**Figure 11.** False detection evolution after final training.

Table 1 also shows the types of defects detected during operation in relation to false detections and real defects. It can be observed that the main source of false detections is still the emulsion marks resulting from the washing and drying process in the presence of dust or other impurities.



There is a lot of research being conducted on how different algorithms are responding to already established datasets. The main approach used involves software pre-processing, the use of a convolutional neural network for feature extraction, and in some cases another network for classification [34–42].

This solution uses only hardware pre-processing (camera, sensor, and environment related) and one convolutional neural network for feature extraction and classification. The setup proved to be sufficient and robust for the needed classification.

#### **5. Conclusions**

From the point of view of robotics applications developed in the automotive industry, the robustness of image processing applications in the manufacturing area can be increased considerably by using a machine learning algorithm to replace the classic method of processing based on filters, dedicated hardware, complicated optics, and complex software algorithms. Replacing the classic approach in this way also ensures greater flexibility in developing the backbone of the application, e.g., PLC communication, socket services, and the human–machine interface, which are indispensable in this environment.

The weak point of this implementation remains the dependency on a sufficient and a correct dataset. By ensuring that we have the correct data to work with, we can develop and train a robust application for use in the manufacturing environment.

The advantage of using such an approach is that other implementations, e.g., communication, HMI, logging, and others, can be abstracted and reused in new implementations. The training of the algorithm can also be abstracted and reused. The flexible part needs to remain the architecture of the neural network and the dataset used.

Based on the work presented in this paper, a new application is already in development. The scope of the new implementation is to detect the presence of a safety pin inside the piston locking mechanism. This operation will be performed using three cameras which are triggered simultaneously for acquisition, a new architecture for the neural network, and different hardware to support a more complex application.

**Author Contributions:** Conceptualization, D.C.; Methodology, F.M.; Software, M.M.A.; Validation, M.M.A., D.C. and A.M.; Formal analysis, D.C. and F.M.; Investigation, A.M.; Data curation, M.M.A. and A.M.; Writing—original draft, M.M.A. and A.M.; Writing—review & editing, M.M.A. and A.M.; Visualization, M.M.A. and A.M.; Supervision, D.C. and F.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding. The authors are researchers at the University of Craiova.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Our dataset is not publicly available.

**Conflicts of Interest:** The authors declare that they are not aware of any conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

*Article* **Synchronous Control of a Group of Flying Robots Following a Leader UAV in an Unfamiliar Environment**

**Konrad Wojtowicz \* and Przemysław Wojciechowski**

Faculty of Mechatronics, Armament, and Aerospace, Military University of Technology, 00-908 Warsaw, Poland

**\*** Correspondence: konrad.wojtowicz@wat.edu.pl; Tel.: +48-261-837-529

**Abstract:** An increasing number of professional drone flights require situational awareness of aerial vehicles. Vehicles in a group of drones must be aware of their surroundings and the other group members. The amount of data to be exchanged and the total cost are skyrocketing. This paper presents an implementation and assessment of an organized drone group comprising a fully aware leader and much less expensive followers. The solution achieved a significant cost reduction by decreasing the number of sensors onboard followers and improving the organization and manageability of the group in the system. In this project, a group of quadrotor drones was evaluated. An automatically flying leader was followed by drones equipped with low-end cameras only. The followers were tasked with following ArUco markers mounted on a preceding drone. Several test tasks were designed and conducted. Finally, the presented system proved appropriate for slowly moving groups of drones.

**Keywords:** drone; UAV; multi-agent; ArUco; markers; group of drones; machine vision

#### **1. Introduction**

Due to global technological development and commercial opportunities, vast growth in the unmanned aerial vehicle market has been observed. Software, control systems, structures, and methods of analyzing the environment with drones are developing. There are high hopes for using drones for rescue and medical purposes. Thanks to the commercialization of the drone market, emergency services receive professional tools that make their work faster, easier, and safer. Often, after a fire, earthquake, or collapse, it is difficult or even impossible to assess the level of damage and determine the necessary action based on external observations. Using drones equipped with multispectral observation heads is recommended to monitor the situation inside buildings. Unmanned aerial vehicles, transmitting live, high-definition images, e.g., from a collapsed building or mining collapse, minimize the risk and do not expose rescuers to unnecessary danger while simultaneously offering first aid.

In rescue operations, particular attention is paid to the time needed to provide help. Drones searching for injured persons often cannot carry additional cargo in the form of essential materials such as bandages, medications, or even water. This paper presents a system of underequipped "follower" drones tracking a "leader" drone. The leader is equipped with systems enabling the identification and avoidance of obstacles and the location of the injured, in order to immediately provide the necessary means of survival for the victims of disasters (Figure 1). An example application of a group of drones is a mobile crop-monitoring system employing several drones in a group carrying optical sensors [1]. A comprehensive review listing other multi-agent system applications was presented in [2].

The project aimed to investigate whether it is possible to send several drones into unknown surroundings and control them via UAV "leader" tracking using ArUco tags. This seems to be the simplest and least expensive method, providing a considerable number of resources necessary for survival to people trapped in hard-to-reach or dangerous environments without risking the health or lives of rescuers. The main innovation is the
structure of the multi-agent group of UAVs, wherein raising the number of inexpensive followers increases neither the data exchange between actors nor the computational power requirements.

**Figure 1.** Assumptions of the designed system.

#### **2. Related Works**

Regarding autonomous flights, the research can be divided into two areas. The first typically focuses on improving and increasing the efficiency of control algorithms. The second deals with teaching UAVs that have previously been operated in manual mode to "learn by heart", recreating the required trajectory with corrections from external threat monitoring systems. Of course, there are many commercial structures wherein it is possible to use GPS. However, we are more interested in scenarios wherein the environment and surroundings through which one is to move are unknown, and it is impossible to estimate the position based on GNSS systems. Many works have indicated that solutions involving laser scanners, RGB-D sensors, or ultrasonic sensors mounted on the UAV board are fundamental and most effective. There are also solutions employing a synthetic aperture radar in addition to optical sensors [3].

It should be noted, however, that such solutions take up much space on the supporting structure of the drone, and their weight makes it impossible to take on additional cargo. Another approach is the simultaneous monitoring and remote control of each vehicle in the group [4]. Theoretically, it is possible to plan a trajectory for multiple drones in a constrained area [5]. However, in a natural environment, flights are only collision-free for a short time because of unexpected disturbances. In the case we tested, i.e., small drones that can carry a small load, each gram of equipment is essential. According to many research results, the best solution is to track another UAV leader [6–11]. A multifunctional group of drones can
be achieved by providing situational awareness by mounting sensors on the leader drone only [12] and a variety of mission sensors onboard group-member vehicles [13], e.g., to deliver enhanced scanning capabilities to infrastructure inspection systems [14] or build models of ancient sites [15].

A group of UAVs can be controlled by one of multiple formation strategies and techniques [16]; for rescue missions within disaster management systems, these include: virtual structure [17,18], consistency algorithm [19], behavior-based control [20], and leader–follower techniques. There are many advantages of using biological models, behavior-based formation control, and tracking, which give custom roles to particular agents [21–23]. However, these models can be adapted to swarms of vehicles and require constant agent-to-agent or agent-to-ground communication. In our case, we needed to bring a group of drones to a destination without putting additional tasks in their way. In the leader–follower formation, the followers can stay passive without any communication link. Various proposals exist for keeping the group together and moving forward in a leader–follower formation [24–27]. Many have introduced novel methods, algorithms, and ideas for controlling agents and the group. One of these, a linear consistency algorithm based on the leader–follower technique, was presented in [28]. It comprised a method of tracking the leader's position, heading, and speed. In [29], a virtual piloting leader algorithm was designed. It successfully coped with a leader failure but required high computational power onboard all the agents. Further, a distributed UAV formation control method was designed in [30]. However, the applied higher-order consistency theory required a preorganized communication topology.

A summary of the simulations and applications of the leader–follower technique highlights a significant disadvantage: a high dependence on the leader. Any failure of the leader affects the mission. Another problem is the substantial computational power requirement when the number of agents rises. In this work, we addressed the second issue as this project's main innovation, since the structure of the leader and followers does not change according to the number of agents in the group.

Visual passive markers are commonly used in every area of life. The visual pattern was first proposed in 1948 by two students, Bernard Silver and Norman Joseph Woodland. Due to the lack of appropriate technology, it was almost 30 years later that the barcode was used commercially and automatically.

Currently, markers are commonly used to mark goods, position machines in production, or read the position and orientation of medical devices during minimally invasive treatment, primarily due to their low production cost (Figure 2).

**Figure 2.** Eight existing marker systems. Top, from left to right: ARToolKit, Circular Data Matrix, RuneTag, ARToolKit Plus. Bottom, from left to right: QR code, barcode, ArUco, Cantag.

One of the best-known passive markers is a QR code [31]. A large amount of information can be stored under one tag, making it suitable for data transfer. In addition, they are resistant to damage, which means that depending on the percentage of damage to the entire code, it is possible to read at least part of the data contained in it.

Another type of tag is one used to track objects. ARTag and Artoolkit are characterized by the speed of detection and easy tracking, but they are not immune to changes in lighting. The ArUco marker was designed with similar technology. Its advantage is that it generates a small percentage of false-positive recognition, while the method of its encoding increases the effective recognition distance and the number of bit conversions. Some works indicate that these are the most effective candidates for use in AR [32,33]. ArUco markers proved to be solid reference points for mobile test beds [34] and stationary test benches [35]. The markers provided a reliable reference for position and attitude determination, which could be enhanced by setting the markers in three-dimensional patterns [36]. The accurate positioning of the markers allows their application as characteristic points to support simultaneous localization and mapping systems [37]. We applied ArUco markers as reference points in our previous research on a drone automatic landing system [38].

Researchers from the University of Michigan built a square-shaped AprilTag with a barcode, similar to QR codes and ArUco. A significant problem with their use seems to be the low recognition speed. Olson et al. proposed an updated version of AprilTag called AprilTag2, which resulted in increased detection speed [39]. Another modification was a circular ring marker that was tested for efficiency but lacked feature recognition [40].

The CircularTag, WhyCon, RUNE-Tag, and TRIP tags present a different approach. These round tags allow for high positioning accuracy but involve a complex, system-intensive detection algorithm [41–45].

#### **3. Materials and Automatic Control Algorithm**

#### *3.1. Test System Design*

A classic X-shaped quadrotor was used for the tests. A Raspberry Pi 4 minicomputer was used as the platform for the implementation of the detection and tracking program. Due to the desire to reduce costs, a HAMA C400 Full HD 1080p webcam with a 70° visual angle was responsible for recording the image. The design was based on our previous research experience with drone airframes and flight controllers [46].

In the initial part of the study, a test of the limit values of the system was carried out (Figure 3). The main parameters that were determined during the tests were the maximum effective detection distance of the marker (Tables 1 and 2), the maximum marker detection angle (change in marker position in the field of view of the camera), and the possibility of marker detection depending on its position in the camera's field of view at the horizontal level. The marker detection was checked 2 m from the camera, in order to represent a system working in limited space. The effective detection angle of the markers at the horizontal level was 38° (Figure 4).

**Figure 3.** The marker identification rate versus distance.


**Table 1.** Measurement results at 770 lx light intensity.

**Table 2.** Measurement results at 63.7 klx light intensity.


**Figure 4.** Effective camera field of view.

Three sizes of ArUco marker (3 cm × 3 cm, 6 cm × 6 cm, and 12 cm × 12 cm) were tested at different light intensities (770 lx and 63.7 klx).

Based on the obtained results, it was concluded that the attempts to track the markers in flight would be carried out only for markers with dimensions of 12 × 12 cm. This was due to the quick recognition of the marker and the ability to maintain tracking more easily at greater distances (maximum over 5.5 m) and with more pronounced changes in the leader's course (52° left and 57° right in clear weather conditions), allowing the drones to follow the leader on more complicated routes.

#### *3.2. Marker Detection Algorithm*

The primary purpose for which the ArUco markers were designed was to quickly determine the three-dimensional position of the camera relative to a single marker. Here, the Hamming coding algorithm was applied. The tag detection algorithm was optimized for a low false-detection rate. We could distinguish five stages of the detection process (Figure 5). After the entire algorithm is completed, the marker ID and the rotation and translation vectors are generated (to determine the position of the marker).

**Figure 5.** Marker detection process.

The Hamming encoding of the internal tag matrix provides one-bit error detection and correction on each binary line. Unique tag identifiers are included in a dictionary, which can be one of the predefined dictionaries in the ArUco module or created by the user. Once the tag ID is detected, the solvePnP (Perspective-n-Point) function is applied to the four corners of the tag. This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector> couple). Then, after solving Equation (1), the 3D location of the point is determined based on the 2D image.

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{pmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X_G \\ Y_G \\ Z_G \\ 1 \end{bmatrix} \tag{1}$$

where the vector $[u\ v\ 1]^T$ describes the position of the point in the image $(u, v)$; the two matrices on the right-hand side stand for the optics parameters (the camera intrinsics and the rotation–translation components); and $X_G$, $Y_G$, and $Z_G$ describe the point in space in the camera reference system. After placing the plane over the four points, the algorithm determines the rotation and translation vectors between the camera and marker planes.
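A minimal detection and pose-estimation sketch using the legacy cv2.aruco module (opencv-contrib-python up to version 4.6; newer releases use the ArucoDetector class) is shown below. The dictionary, camera intrinsics, and input file name are assumptions for illustration; real intrinsics would come from a calibration step.

```python
import cv2
import numpy as np

# Placeholder intrinsics; real values come from camera calibration.
camera_matrix = np.array([[800.0, 0.0, 960.0],
                          [0.0, 800.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

s = 0.12  # 12 cm marker side length, in metres
# Marker corners in the marker frame (top-left, top-right, bottom-right, bottom-left)
object_points = np.array([[-s / 2,  s / 2, 0],
                          [ s / 2,  s / 2, 0],
                          [ s / 2, -s / 2, 0],
                          [-s / 2, -s / 2, 0]], dtype=np.float32)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed dictionary
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("frame.png")  # placeholder input image
corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict, parameters=params)
if ids is not None:
    for marker_id, c in zip(ids.flatten(), corners):
        # solvePnP solves Equation (1) for the four detected corners
        ok, rvec, tvec = cv2.solvePnP(object_points, c[0], camera_matrix, dist_coeffs)
        if ok:
            print(marker_id, tvec.ravel())  # marker position in the camera frame
```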

#### *3.3. Automatic Control Algorithm*

The "leader" drone is tracked automatically. The algorithm applied works continuously in real time. By detecting the marker, the drone determines its center and locates it by taking into account the center of the field of view. On this basis, it determines the direction in which it must move and the speed it should maintain to avoid losing the marker from the field of view (Figure 6).

**Figure 6.** All conditions of the marker position. (**a**) desired distance—no action; (**b**) too far—move forward; (**c**) too close—move backward; (**d**) desired vertical position; (**e**) too low—move up; (**f**) too high—move down; (**g**) desired horizontal position; (**h**) too right—move left; (**i**) too left—move right.

The first column shows the drone's behavior in the case of forward and backward movement (movement along the *X*-axis). In this case, the determining factor for the drone's behavior is the size of the detected marker (Figure 6a). If the detected marker covers a smaller area than the assumed area (Figure 6b), the drone receives the command "move forward", because the marker is too far away. Figure 6c describes a situation wherein the detected marker is too large, which means that the distance between the "leader" drone and the tracking drone is too small. Therefore, the drone receives the command to "move backward" and changes its position to obtain the optimal position.

The drone's behavior along the *Z*-axis is shown in the second column. The decisive factor for the command sent to the FC is the position of the marker's center (Figure 6d). When the center of the marker is above the center of the field of view, the drone receives the information that it is too low and must increase the flight altitude (Figure 6e), while when the marker is below the center of the image, the drone receives the information that it is too high and must decrease the flight altitude (Figure 6f).

The situation is similar for determining the required flight trajectory along the *Y*-axis (Figure 6g). When the center of the marker is on the left side of the image center, the drone receives the command to "move to the left" (Figure 6h), while when the marker is on the right side of the image center, it receives the command to "move to the right" (Figure 6i).
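The decision logic of Figure 6 can be sketched as follows; the area thresholds and the dead zone around the image centre are assumptions (image coordinates grow downwards, so a marker above the centre has a smaller y value).

```python
def steering_commands(marker_area, center_x, center_y, frame_w, frame_h,
                      area_min, area_max, dead_zone=20):
    """Sketch of the Figure 6 decision logic for one detected marker."""
    cmds = []
    # X-axis: the marker size decides forward/backward motion
    if marker_area < area_min:
        cmds.append("move forward")      # marker too small -> leader too far away
    elif marker_area > area_max:
        cmds.append("move backward")     # marker too large -> leader too close
    # Z-axis: vertical offset of the marker centre from the image centre
    if center_y < frame_h / 2 - dead_zone:
        cmds.append("move up")           # marker above centre -> drone too low
    elif center_y > frame_h / 2 + dead_zone:
        cmds.append("move down")         # marker below centre -> drone too high
    # Y-axis: horizontal offset of the marker centre from the image centre
    if center_x < frame_w / 2 - dead_zone:
        cmds.append("move left")         # marker left of centre -> drone too far right
    elif center_x > frame_w / 2 + dead_zone:
        cmds.append("move right")        # marker right of centre -> drone too far left
    return cmds
```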

A PI controller was used to eliminate overshoots. The proportional and integral coefficients were determined and applied to the following algorithm:

In the first step, the error (*fberror*) was calculated as the difference between the detected marker area (*area*) and the arithmetic average of the declared marker area range (*fbrange*), bounded by the close-range value (*Fbc*) and the away-range value (*Fba*), i.e., the declared max and min values of the distance between the drone and the marker (Equation (2)).

In the second step, the error was mapped to the declared speed range (*fbspeedrange*): *fberror2* was calculated from *fberror*, the backward speed (*Fbb*), and the forward speed (*Fbf*) (Equation (3)).

Finally, a speed value (*Fbspeed*) in relation to the proportional term (*P*), the integral term (I), the previous loop error (*Pfberror*), *fberror*, and *fberror2* was calculated (Equation (4)).

In addition, to avoid exceeding the speed limits for the follower drone, we decided to protect it using a conditional statement (Equation (5)).

$$fberror = area - \frac{fbc + fba}{2} \tag{2}$$

$$fberror2 = \frac{(fberror - fbc)(fbf - fbb)}{fba - fbc} + fbb \tag{3}$$

$$fbspeed = \frac{P \cdot fberror2 + I \cdot (fberror2 - pfberror)}{100} \tag{4}$$

$$fbspeed = \begin{cases} -2 & fbspeed < -2\\ fbspeed & fbspeed \in [-2, 2] \\ 2 & fbspeed > 2 \end{cases} \tag{5}$$

The program code performing the controller function is shown in detail in Figure 7.

**Figure 7.** Programmatic implementation of the control algorithm.
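A minimal Python sketch of the controller described by Equations (2)–(5) is given below; variable names follow the paper, the gain values are left as parameters, and the previous-loop error is assumed to be the *fberror2* of the preceding iteration.

```python
def follow_speed(area, prev_error, fbc, fba, fbb, fbf, p_gain, i_gain):
    """Forward/backward speed controller sketched from Equations (2)-(5)."""
    # Equation (2): error between the detected marker area and the middle of the declared range
    fberror = area - (fbc + fba) / 2.0
    # Equation (3): map the error from the area range [fbc, fba] to the speed range [fbb, fbf]
    fberror2 = (fberror - fbc) * (fbf - fbb) / (fba - fbc) + fbb
    # Equation (4): PI term using the previous loop error
    fbspeed = (p_gain * fberror2 + i_gain * (fberror2 - prev_error)) / 100.0
    # Equation (5): clamp the commanded speed to [-2, 2]
    fbspeed = max(-2.0, min(2.0, fbspeed))
    return fbspeed, fberror2  # fberror2 is reused as prev_error in the next loop iteration
```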

#### **4. Results**

Due to the desire to use the reconnaissance drone tracking system in unfamiliar surroundings, we did not consider the speed of following the "leader" drone. The main element of the test was to determine the forward speeds at which the flight would be smooth without losing the marker. For this purpose, the ArUco marker was installed on an IRIS 3DR UAV. The IRIS 3DR could plan the mission's route and the speed of movement. The measurements were conducted for the three selected speeds using a 12 × 12 cm marker, which had the highest recognizability (Figure 8).

**Figure 8.** Flight in a straight line behind the leader. The bottom right image was taken by the camera, showing the detected marker.

The research was divided into two parts. The first study was designed to determine the minimum corridor necessary for a safe flight. The corridor was calculated by having the drones repeatedly follow the leader moving at a constant speed in a straight line. The speeds were selected based on the limitations of the data processing steps for routing and decision making by the leader drone.

The determined properties were superimposed on a single chart, while the initial position of the leader was compared to the location at which the tracking drones started the tracking process to facilitate the route analysis of the individual followers (Figure 9).

**Figure 9.** Position of the drones relative to the leader in the XY plane at different speeds: (**a**) 1.75 m/s; (**b**) 1.95 m/s; (**c**) 2.2 m/s.

The results were obtained by subtracting the leader's and follower's route parameters along each axis. The data obtained this way were averaged using the arithmetic mean as the best approximation of the actual value. In addition, to calculate the average "dispersion" of individual results around the mean value, the standard deviation from the mean was calculated with the following Equations (6)–(8).

$$\overline{X} = \frac{\sum_{i=1}^{k} |x_{Li} - x_{Fi}|}{k} \tag{6}$$

$$\overline{Y} = \frac{\sum_{i=1}^{k} |y_{Li} - y_{Fi}|}{k} \tag{7}$$

$$\sigma = \sqrt{\frac{\sum_{i=1}^{k} (x_i - \mu)^2}{N}} \tag{8}$$

where $|x_{Li} - x_{Fi}|$ and $|y_{Li} - y_{Fi}|$ are the absolute values of the distance between the leader and the follower in a given plane.
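Equations (6)–(8) can be applied directly to the logged trajectories; the sketch below assumes both trajectories are already sampled at the same k time instants.

```python
import numpy as np


def corridor_stats(leader_xy, follower_xy):
    """Per-axis mean distance (Equations (6)-(7)) and standard deviation (Equation (8))."""
    diff = np.abs(np.asarray(leader_xy) - np.asarray(follower_xy))  # |x_Li - x_Fi|, |y_Li - y_Fi|
    mean_x, mean_y = diff.mean(axis=0)
    sd_x, sd_y = diff.std(axis=0)  # population standard deviation (division by N)
    return mean_x, mean_y, sd_x, sd_y
```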

At the specified speeds, no tracking loss of the ArUco tag was registered, and the leader's tracking was smooth. Table 3 shows values for the mean distance to the marker on the XY plane and the standard deviations at a speed of 1.75 m/s.

**Table 3.** The average distance and standard deviation between the follower and leader routes at a speed of 1.75 m/s.


Rejecting the last two measurements, which were much better than the others, and averaging the obtained results, the minimum safe corridor that would allow the drones to follow the leader had a diameter of 0.7284 m, with an SD of 0.3045 m.

In the case of measurements at 1.95 m/s, the average value of the safe corridor was lower and amounted to a surprising 0.1326 m, with an SD of 0.053 m. The distance values of the individual drones from the leader in the XY plane are presented in Table 4. Only five followers were included in this dataset, because the data from one flight were corrupted.

**Table 4.** The average distance and standard deviation between the follower and leader routes at a speed of 1.95 m/s.


Comparable results to those obtained at 1.75 m/s can be seen in the third measurement at 2.2 m/s. The average safe corridor value was 0.7025 m, with an SD of 0.3044 m. The distance values of the individual drones from the leader in the XY plane are presented in Table 5.

**Table 5.** The average distance and standard deviation between the follower and leader routes at a speed of 2.2 m/s.


In the next step, we decided to examine how the tracking drones behaved when following the leader along the programmed route. A route in the form of a rectangle with sides of 4 m and 2.5 m was chosen. The leader's set speeds were 1.7 m/s, 2 m/s, and 2.75 m/s (Figure 10).

**Figure 10.** The 3D position of the drones relative to the leader at different speeds during flight around the perimeter of the rectangle: (**a**) 1.7 m/s; (**b**) 2 m/s; (**c**) 2.75 m/s.

It can be noticed in the attached graphs that at the speeds of 1.7 m/s and 2 m/s, the tracking drones did not lose the leader over the entire route, while at the speed of 2.75 m/s, four out of five tracking attempts ended with the marker being lost from sight (Table 6).

**Table 6.** The average distance and standard deviation between the follower and leader routes at a speed of 1.7 m/s during flight around the perimeter of the rectangle.


In the case of flights at a speed of 1.7 m/s, the followers' routes were closest to the leader's route (average distance 0.1948 m, SD 0.1114 m). Satisfactory results were also achieved at a speed of 2 m/s. According to Table 7, all followers moved at similar distances from the leader's route. The value of the average safe corridor, in this case, was 0.2418 m, and the SD was 0.1475 m.

**Table 7.** The average distance and standard deviation between the follower and leader routes at a speed of 2 m/s in flight around the perimeter of the rectangle.


Figure 10c shows tracking blackouts for followers 1, 3, 4, and 5, which ended the mission early (for safety reasons, a landing procedure was established for when the marker was lost). Due to only one drone completing the task, the mean values and standard deviations were not calculated. In addition, it was recognized that the leader's flight speed of 2.75 m/s in open space disqualified the possibility of using such speeds in missions.

#### **5. Discussion**

Preliminary tests determined the boundary conditions at which the system operated satisfactorily. The best recognition results for ArUco markers in terms of the distance from the camera were obtained at the marker size of 12 × 12 cm (5.86 m), while for the sizes of 6 × 6 cm and 3 × 3 cm, detection was possible at distances of 3.2 m (45% less than the best result) and 1.74 m (70% less than the best result), respectively. In addition, the best angular recognition results were also achieved for the 12 × 12 cm marker. The achieved marker deflection angles, at which the marker was still detectable, of 50.5° left and 56° right exceeded the recognition capabilities of the system for the 6 × 6 cm and 3 × 3 cm markers by 11.5% and 22.8% (46° left and 44° right) and 23.1% and 47.4% (40° left and 30° right), respectively. The results of the tests carried out under the conditions of a sunny day (63.7 klx) did not differ significantly from those of the tests carried out under shaded conditions (770 lx). The only significant differences were observed in the marker recognition distance, which increased by an average of 15% with a lower light intensity. In addition, the maximum angle of view of the camera at which marker recognition was still possible was set at 38°.

Our major research task was to determine the flight parameters at which the tracking process would be continuous and smooth while maintaining the flight trajectory as close as possible to that set by the leader. Two scenarios were provided for in the research. The first, i.e., a straight flight behind the leader at a constant speed, was used to optimize the tracking system and determine the optimal speed of the leader. Six repetitions of the flight behind the leader were carried out for the first speed adopted for the tests—1.75 m/s. The fifth and sixth attempts achieved the best results, with an average distance of 0.1868 m and 0.1341 m, respectively, from the leader's flight trajectory on the 8 m planned route. The results from these two measurements were so favorable (78% lower than the rest of the average results obtained in this experiment) that we decided not to take them into account in the tests determining the average safe corridor that must be provided for the follower to complete the mission. Such a discrepancy in the results could have been due to the ideal weather conditions (windless day) in which these two flights were performed. The flights of the other four followers were similar to each other. The average distances from the leader's flight trajectory were 0.7626 m, 0.7181 m, 0.7087 m, and 0.7243 m, respectively. In the case of flights at a speed of 1.95 m/s, particularly satisfactory results were obtained, with an average distance from the leader's trajectory of 0.1326 m. The resulting distance was less than 3 cm in the consecutive tests with a leader flight speed of 2.2 m/s and 1.75 m/s.

In the second scenario, flights were carried out along a planned rectangular route. This test was performed to check how well the marker could be tracked in the proposed system. The leader's speeds were pre-determined for the tests, just as in the first part of the study; thus, the leader moved at speeds of 1.7 m/s, 2 m/s, and 2.75 m/s. For the first two speeds, the average distances from the leader's trajectory were similar (0.1948 m for 1.7 m/s and 0.2418 m for 2 m/s), while the third speed turned out to be too high for the followers to keep up with the leader. Out of five attempts, only one was successful, and the follower reached the route's endpoint with an average distance of 0.22 m from the leader's trajectory.

According to the results of the conducted research, it can be assumed that rescue missions carried out based on the proposed system could be successful. The obtained results suggest that the organization of such tasks in an automatic system is realistic and, most importantly, effective. Of course, there are some limitations to the use of such a procedure. The main problem is the aerodynamic drag surface of the ArUco marker. Despite good tracking results, difficulties resulting from strong gusts of wind (which cannot be ruled out in open-air operations) substantially reduced the usefulness of the entire system. In closed rooms, the detection of all markers at no more than 150 cm from the "leader" UAV was achieved with 100% efficiency, while the detection of the same markers in an open space under windy weather conditions was occasionally unsuccessful, and a new procedure for finding the marker was required. In addition, it seems reasonable that a communication system should be created for the leader, connecting it to a tracking drone to avoid losing the marker. In such a case, the leader could receive information in the form of a "STOP" command, which would remain active until the follower rediscovered the marker.

The great advantage of the system is that it can move freely in limited spaces where it is impossible to use GNSS navigation systems. The achieved tracking speeds corresponded to the movement of drones in an unknown space with the continuous analysis of the surrounding image. High speeds are not required in such situations, but high maneuverability is expected, which is ensured using a multi-rotor platform.

Another advantage of the proposed solution is the possibility of cascading drones depending on the amount and weight of the equipment needing to be transported. The only modification that would have to be made is that each tracking drone would have to be equipped with an ArUco marker and would become a "guide" for the next tracking drone.
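
A minimal sketch of how such a cascade could be configured is given below, assuming each drone carries a uniquely numbered ArUco marker and tracks the marker of the drone directly ahead of it. The marker IDs and the data layout are hypothetical and serve only to illustrate the daisy-chain idea.

```python
LEADER_MARKER_ID = 0  # marker mounted on the leader (assumed ID)

def build_chain(num_followers):
    """Assign each follower the marker ID of its predecessor in the formation."""
    chain = []
    for i in range(1, num_followers + 1):
        chain.append({
            "drone": f"follower_{i}",
            "own_marker_id": i,          # marker mounted on this drone
            "tracked_marker_id": i - 1,  # marker of the drone in front
        })
    return chain

# build_chain(3): follower_1 tracks marker 0 (the leader), follower_2 tracks
# marker 1, and follower_3 tracks marker 2.
```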

The flight duration for this type of task depends on the battery used. If an extended flight time is required, a battery with a larger capacity must be used. However, it should be remembered that as battery capacity increases, so does battery weight, which is crucial considering the need to transport equipment necessary to save lives and protect health.

In further studies on this project, we plan to replace the ArUco markers with infrared diodes. With such a modification, drones could track the leader even in conditions without lighting. In addition, it would eliminate problems resulting from the resistance to movement set by the marker. Another option to improve the system is to mount the camera on a gimbal placed on the drone. The proposed solution would reduce the probability of losing the tracked marker resulting from a sudden direction change by the leader.

Additionally, to verify the system, tests should be carried out in closed rooms, for which it was also designed. In this way, the impact of external factors (such as gusts of wind, precipitation, or dust) on the entire system would be limited. Determining the characteristics of the follower and leader movement in closed rooms would contribute to creating a list of minimum requirements that must be met to use the system safely and effectively in rescue missions.

#### **6. Conclusions**

The proposed system could prove effective in the assumed scenarios of rescue missions. Our research found that its use in closed spaces and outdoors was possible and practical. The system has certain limitations, such as the impossibility of its use in intense winds or during missions conducted in complete darkness; its capabilities could, however, be extended to night operation, for example by replacing the visual markers with infrared emitters, as discussed above. Using such a system would substantially reduce the cost of multiple-vehicle drone operations, but its most significant advantage is the minimization of the threat to the lives and health of rescuers who would otherwise have to perform the mission themselves. Its main benefit is the innovative way of organizing the group of robots within a leader–follower formation without active communication between agents in the group or between an agent and the ground control station. This considerably lowers the cost of expanding the group with further agents, a cost we identified as one of the primary drawbacks of leader–follower formations. Moreover, the system can be used immediately, without prior preparation, saving the time usually needed to perform reconnaissance and decide how to carry out a mission.

**Author Contributions:** Conceptualization, K.W. and P.W.; methodology, K.W. and P.W.; software, P.W.; validation, K.W. and P.W.; formal analysis, K.W. and P.W.; investigation, K.W. and P.W.; resources, P.W.; data curation, K.W.; writing—original draft preparation, K.W. and P.W.; writing—review and editing, K.W. and P.W.; visualization, P.W.; supervision, K.W.; project administration, K.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors thank Adam Marut, Jakub Kochan, and Jakub Djabin for their assistance and contribution to developing the system described in this work.

**Conflicts of Interest:** The authors declare no conflict of interest.

**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
