Article

Intelligent Dynamic Identification Technique of Industrial Products in a Robotic Workplace

1 Faculty of Mechanical Engineering, Slovak University of Technology in Bratislava, Námestie Slobody 17, 812 31 Bratislava, Slovakia
2 SOVA Digital a.s., Bojnická 3, 831 04 Bratislava, Slovakia
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(5), 1797; https://doi.org/10.3390/s21051797
Submission received: 2 February 2021 / Revised: 28 February 2021 / Accepted: 1 March 2021 / Published: 5 March 2021
(This article belongs to the Collection Sensors and Data Processing in Robotics)

Abstract

The article deals with aspects of identifying industrial products in motion based on their color. An automated robotic workplace with a conveyor belt, a robot and an industrial color sensor was created for this purpose. Measured data are processed in a database and then statistically evaluated in the form of Type A and Type B standard uncertainties, from which combined standard uncertainties are obtained. Based on the acquired data, control charts of the RGB color components of the identified products are created. The influence of product speed on the identification process and on process stability is monitored. In cases of identification uncertainty, i.e., when measured values fall outside the control chart limits, the K-nearest neighbors machine learning algorithm is used. Based on the Euclidean distances to the classified value, this algorithm estimates its most probable class. The result is a comprehensive system for identifying products moving on a conveyor belt, whose reliability for industrial use is demonstrated through data collection and statistical analysis supported by machine learning.

1. Introduction

Requirements for fast and accurate identification of products and their measured parameters are currently increasing in industrial production environments [1,2]. Intelligent solutions with a multidisciplinary approach are one possible answer. In our case, we decided to combine available statistical methods with database and machine learning techniques. The combination of these approaches allows us to provide solutions for industrial deployment with fast response and high accuracy that would otherwise be difficult to implement.
One of the new challenges in industry is the rapid detection and identification of moving products with various parameters. Nowadays, in the spirit of intelligent industry, there is a trend to abandon mass series production and switch to customized small-series production runs [3,4,5,6,7,8]. Many methods for product detection are known, such as the most commonly used barcodes, quick response (QR) codes, radio frequency identification (RFID) tags, etc. [9]. From a production point of view, recognition price and speed are the decisive factors; therefore, in this article we concentrate on a low-cost universal industrial color sensor [10]. From the perspective of recognition accuracy, static product recognition is best, i.e., performing product identification while the conveyor belt is stopped. Although this procedure is the most accurate, it wastes time within the production cycle. Therefore, we deal with the dynamic recognition of products in motion. This is more complicated in terms of accuracy, because identification uncertainties arise when the measured value falls outside the control chart limits.
There are several color recognition methods based on color calibration algorithms that are suitable for our case. These depend on color models representing the respective color space, such as RGB and its subset standard RGB (sRGB); cyan, magenta, yellow and key (CMYK); luminance, blue-luminance, red-luminance (YUV); hue saturation value (HSV); hue saturation brightness (HSB); hue saturation lightness (HSL); or the CIELAB color space, as stated in [11,12,13]. Since the selected industrial color sensor uses the RGB color space directly, rather specific algorithms such as thin plate spline integration (TPS-3D), partial least squares analysis (PLS) or commercial calibration algorithms, e.g., ProfileMaker (PROM), would be more capable from our point of view [14]. However, to ensure the fastest possible implementation, we decided to apply the K-nearest neighbors machine learning algorithm described in [15,16], with which we already had experience and useful results from previous practical experiments.
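As an illustration of this choice, a minimal K-nearest neighbors classifier over RGB readings might look as follows; the training points below are hypothetical examples, not our measured data:

```python
import math
from collections import Counter

def knn_classify(sample, training_data, k=3):
    """Classify an (R, G, B) reading by majority vote among the k
    training samples with the smallest Euclidean distance."""
    nearest = sorted(
        (math.dist(sample, rgb), label) for rgb, label in training_data
    )[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Illustrative training points (hypothetical RGB percentages)
training = [
    ((62.0, 20.0, 18.0), "red"),
    ((61.5, 20.5, 18.5), "red"),
    ((15.0, 25.0, 60.0), "blue"),
    ((15.5, 24.0, 60.5), "blue"),
]
print(knn_classify((61.0, 21.0, 19.0), training))
```

In production, the training set would be the accuracy measurement dataset described in the next section.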

2. Subject and Methods

The main objective of the article is to present a universal solution for fast detection of industrial products based on their color. In order to have sufficient base of relevant data, we have prepared the sample workplace shown in Figure 1a. This workplace fully replicates industrial applications and is equipped with industrial components such as a KUKA KR3R540 robot (KUKA Deutschland GmbH, Augsburg, Germany), a SICK CSM-WP117A2P color sensor (SICK AG, Waldkirch, Germany) and a conveyor belt from Automatica (Liptovský Mikuláš, Slovakia).
We performed 21,600 measurements in total to obtain statistical data to determine whether the accuracy of the sensor is sufficient for moving product identification, as shown in Figure 2.
From these measurements we evaluated the Type A standard uncertainty for each product (color) separately. We then calculated the Type B standard uncertainties and subsequently the resulting combined uncertainty, based on which we declare the best settings for the simulated process and later for real operation. The next step is to set control limits and create control charts for accurate identification. We can then evaluate whether identified products lie within the control limits, in which case their identification is unambiguous; otherwise they lie outside the control limits and their identification is ambiguous [17]. For these cases, we use the K-nearest neighbors machine learning algorithm, with the values acquired during the color sensor accuracy measurements serving as the training dataset.
The methodologies are explained and analyzed in more detail in the following subsections.

Preparation of Test Robotic Workplace

To test identification by the color sensor, it was necessary to set up a workplace that simulates real operation and is automated. An automated workplace makes it possible to perform a large number of measurements with different combinations of the monitored factors. The workplace consists of a conveyor belt along which colored calibration cubes move. After a cube passes the sensor and a measurement is performed, the cube is caught by the robot's vacuum gripper and moved back to the beginning of the conveyor belt. This process is repeated for a given number of measurements. The color sensor is located on the side of the conveyor belt so that its detection zone faces the belt. To attach and position the sensor, we designed a tool that was printed on a 3D printer. We first designed and assembled the workplace virtually in the Process Simulate simulation tool (Siemens PLM Software, Plano, TX, USA), where we tested the reachability of the individual points needed to execute the cube catching and releasing operations. During this testing we found that the robot's declared range in Process Simulate was insufficient for the application. Therefore, the range was extended by adding a flange designed to allow insertion of a suction cup mechanism. After repeated simulation of the range with the flange, a flange prototype was created on a 3D printer. The robot range was also verified at the physical workplace, as shown in Figure 3.
The KUKA KR3 R540 is a low-payload robot; however, in our case its capacity exceeds the experiment requirements, as the weight of the transferred calibration cubes is up to 100 g. The robot's repeatability of return to a programmed position reaches 0.02 mm, which is sufficient for our purposes. We also took this information into account when calculating the resulting uncertainty of the color sensor measurements, according to [18,19]. The primary parameters of the KUKA KR3 R540 robot are listed in Table 1.
For the measurement process we installed the KUKA Ethernet KRL (KUKA Robot Language) extension on the robot. This extension allowed us to communicate over an Ethernet connection between the robot and a computer and to collect the data. After correct configuration, it was possible to monitor the inputs coming to the robot and, based on them, to control outputs, in our case the conveyor belt and the suction cup.
We used a conveyor belt from Automatica in the workplace implementation. The conveyor belt specifications are shown in Table 2.
The motor speed was regulated to 30 percent of the maximum revolutions per minute (RPM), which corresponded to the setting in real operation. The frequency converter was controlled by the robot’s output signals based on data from the control computer.
For colored cube identification we chose the CSM-WP117A2P color sensor from SICK, shown in Figure 4a. One of its advantages is its compact size, which facilitates placement at the workplace. The sensor emits white light onto the scanned object using additive color mixing from three color diodes. Based on the reflected light it evaluates the combination of red, green and blue components, which finally defines the scanned color. Measurements are performed with the light beam emitted by the sensor; due to the small beam size, a large colored area is not required for accurate color identification. The sensor's primary technical data are listed in Table 3. The sensor's relative sensitivity curve shows a dependency on the sensing distance, as seen in Figure 4b.
Another advantage is the possibility of communication via IO-Link. We integrated the sensor using the Sensor Integration Gateway (SIG100). The advantage of such an integration is the ability to connect multiple sensors through a single gateway, which significantly facilitates communication and data collection. The SIG100 exposes a Representational State Transfer (REST) application programming interface, so it was possible to query data from the sensor via a JavaScript Object Notation (JSON) request generated by the control computer. After obtaining data from the sensor, the data were recorded in a MySQL Structured Query Language (SQL) database, which made their categorization and evaluation easier.
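A minimal sketch of this data path, assuming a hypothetical JSON payload shape and table layout (the actual SIG100 REST schema and our database design may differ):

```python
import json

# Hypothetical response shape -- the actual SIG100 payload may differ.
raw = '{"data": {"R": 62.4, "G": 20.1, "B": 17.5}}'

def parse_rgb(payload):
    """Extract the R, G and B percentage values from a JSON response string."""
    d = json.loads(payload)["data"]
    return d["R"], d["G"], d["B"]

r, g, b = parse_rgb(raw)

# A parameterized INSERT such as this could then record the reading in MySQL
# (table and column names are illustrative):
sql = "INSERT INTO measurements (r, g, b) VALUES (%s, %s, %s)"
```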
The measured output of the CSM-WP117A2P sensor is three numerical values. Each value represents the percentage of one primary color and ranges from 0 to 100 percent. The first value represents the percentage of red (R), the second the percentage of green (G), and the third the percentage of blue (B) in the subject color. Based on this information, we defined the following measurement model according to [21,22]:

$$\delta_{Color} = [\delta_R, \delta_G, \delta_B]$$

where $\delta_{Color}$ is the resulting composite color percentage calculated from the contributions of the red ($\delta_R$), green ($\delta_G$) and blue ($\delta_B$) components. When determining the overall measurement result, we did not consider correlations between the values, since this is a new proposal for which further experiments are necessary.

3. Results

In our experimental measurements with the color sensor, we found that the strongest influences on the measured values were the illumination of the scanned object, the distance between the sensor and the scanned object, and whether the scanned object was moving or stopped [22,23,24]. We therefore decided to perform measurements with different combinations of these factors. The optimum sensing distance specified by the manufacturer is 12.5 mm, with a tolerance of 3 mm. We chose values approaching the limit distances, 10 mm and 15 mm, together with the optimal measuring distance of 12.5 mm. The illumination of the scanned object was another major influence, so we performed the measurements in natural daylight, under artificial light and in the dark. As the main goal of these measurements was to determine whether the measurement accuracy would be sufficient to identify the cubes even when the conveyor belt is running, we measured both stopped and moving cubes and compared the deviations.
We measured all setting combinations for six cubes with an edge length of 30 mm and the following colors: red, blue, pink, green, yellow and brown. The measured calibration cubes are shown in Figure 1b.
By combining the influencing factors we obtained 18 setting combinations. We performed 200 measurements for each cube in each combination, which yielded a resulting dataset of 21,600 measurements.
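The size of the dataset follows directly from the factor combinations; a quick sanity check:

```python
from itertools import product

distances_mm = [10.0, 12.5, 15.0]                     # sensing distances
lighting = ["daylight", "artificial light", "dark"]   # illumination conditions
motion = ["stopped", "moving"]                        # cube state

combinations = list(product(distances_mm, lighting, motion))
print(len(combinations))             # 18 setting combinations
print(len(combinations) * 6 * 200)   # 6 cubes x 200 repetitions = 21600
```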
In the first phase we evaluated Type A standard uncertainty for individual color components (red—$u_{A_R}$, green—$u_{A_G}$, blue—$u_{A_B}$) measured by the sensor. Subsequently we calculated total Type A standard uncertainty ($u_{A_{total}}$) for each given combination. The following part of the work provides the results of our Type A standard uncertainty evaluation for individual colors for the abovementioned combinations, in tabular form. When calculating uncertainties, we used procedures published in the literature [22,25,26].
Table 4 shows the Type A standard uncertainty calculated from measurements performed on the red cube. The table shows that the smallest uncertainty was achieved when the cube was stopped under artificial light at a scanning distance of 10 mm. On the other hand, the uncertainties calculated for a cube in motion on the conveyor are higher for all factor combinations than for a stopped cube, confirming that conveyor movement has a significant effect on the measurements. The highest total uncertainty was achieved when measuring a cube in motion under artificial light at a sensing distance of 12.5 mm.
Table 5 shows resulting Type A standard uncertainty for a blue cube. The lowest uncertainty is shown for measurements performed when the cube is stopped in the dark with a distance of 12.5 mm between the sensor and the scanned object. The highest uncertainty was achieved when measuring the cube moving along the conveyor belt under artificial light at a distance of 10 mm.
Table 6 lists the Type A standard uncertainty calculated from data obtained when a pink cube was measured. The lowest total uncertainty was recorded for a static cube in the dark at the distance of 15 mm. The highest uncertainty was recorded when measuring the cube in motion under artificial light at a distance of 10 mm.
Table 7 shows the total Type A standard uncertainty calculated for green cube measurements. The lowest uncertainty was achieved when the cube is stopped under artificial light at a measuring distance of 10 mm. The highest uncertainty was recorded for measurements performed for a moving cube in natural daylight at a distance of 15 mm.
Table 8 lists the Type A standard uncertainty calculated for the yellow cube. The table shows that the lowest uncertainty was achieved when the cube was stopped under artificial light at a distance of 12.5 mm. The highest uncertainty was achieved when measuring the moving cube under artificial light at a distance of 10 mm.
Table 9 contains the Type A standard uncertainty calculated from measurements performed on the brown cube. The lowest total uncertainty was recorded when the cube was stopped under natural daylight at a distance of 10 mm. The highest uncertainty was recorded when measuring the cube in motion in the dark at a distance of 10 mm.
When we compare all calibration cubes, the lowest Type A standard uncertainty shown in the dataset is for the brown cube, when a stopped cube is measured under different conditions. The highest uncertainty is shown by measurements performed for pink cubes. When evaluating the data measured on the conveyor without stopping the cube, measurements made on the green cube achieve the lowest Type A standard uncertainty, while values measured for the yellow cube show the highest uncertainty. By further examining data from the objects’ illumination point of view, we came to the conclusion that values of Type A uncertainties are lowest when measuring in the dark. This conclusion was confirmed by data obtained in both static and motion measurements. The highest Type A uncertainties were recorded in daylight static measurements, which may be due to its variance. When measuring in motion, we recorded the highest Type A uncertainties for artificial light measurements, which could be affected by reflection of light from moving objects.
From the objects’ distance point of view the lowest Type A uncertainties corresponded to the data measured at a distance of 15 mm and the highest uncertainties were for measurements performed at a distance of 10 mm between the sensor and the measured object. We came to the same findings when evaluating the data from static as well as motion measurements.
As already mentioned in the tables, measurements performed for cubes in motion showed several times higher Type A uncertainties for all colors and all factor combinations than measurements performed for static cubes.
In the following part of the article we focus on Type B uncertainties, which we determined based on identifiable sources of uncertainties affecting the measurements. After our analysis of the measurement process, we identified six components of Type B uncertainty. These sources and their value distributions are listed in Table 10 [22,25,26].
The first identified component of Type B uncertainty is the uncertainty of cube placement by the robot. We estimated this value based on the repeatability of the robot's return to a specified position, as stated in the robot documentation according to standards for industrial manipulators. We also took into account the calibration deviation of the robot effector and the flange shape deviation caused by inaccurate assembly of the individual 3D printed parts. From the combination of these factors we estimated the resulting component, called cube placement by robot, hereinafter denoted $u_{B_1}$.
As the second component of Type B uncertainty, we identified the sensitivity of the sensor at different sensing distances. Since we measured at three distances between the sensor and the scanned object, namely 10, 12.5 and 15 mm, we determined the sensitivity at these distances from the sensor documentation. The sensitivity for the individual distances is hereinafter referred to as $u_{B_2}$ and its values are given in Table 10.
The third component of Type B uncertainty is the effect of illumination. We determined this component by estimation based on our experimental measurements. As already mentioned in previous subsections, data measured in the dark achieved the lowest variability, so the lowest value of this component was assigned to measurements in the dark. The uncertainty values for the individual illumination conditions are given in Table 10 under the designation $u_{B_3}$.
As the next component of Type B uncertainty we determined the effect of conveyor movement. When estimating its value, we used the documentation of the conveyor belt motor, specifically the motor speed. The values of this component for measurements of static and moving cubes are given in Table 10 under the designation $u_{B_4}$.
The component $u_{B_5}$ was determined from the range of measured values, based on the difference between the maximum and minimum measured values of the individual color components. We determined the interval for each combination of measurement settings for each color. The resulting interval of calculated values is shown in Table 10.
The last component of Type B uncertainty is the microclimate. This component covers the effect of the external environment on the measurements. Its value, defined according to [21], is given in Table 10.
Based on the uncertainty components listed in Table 10, we calculated the combined standard uncertainty and the expanded uncertainty. When determining the expanded uncertainty, we chose the coverage factor k = 2, i.e., assuming a Gaussian distribution, for a coverage probability of 95.45%.
We calculated the combined standard uncertainty from the total Type A standard uncertainty based on the data in Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 and the total Type B standard uncertainty given in Table 10. These totals, denoted $u_{A_{total}}$ and $u_{B_{total}}$, are calculated separately for every color cube, each representing a customized product.
We can state based on these calculations (a sample calculation is presented in Section 3.1) that the influence of conveyor belt movement on the measurement uncertainties is considerable. Measurements made on a moving conveyor belt show higher values of uncertainties. This was confirmed for all cube colors and combinations of the measurement process settings.
During data evaluation we found that the lowest uncertainty was recorded for measurements performed on the brown cube, which achieved the best results for both a stopped and a moving conveyor belt. On the other hand, the highest uncertainty was achieved for the static green cube and for the yellow cube in motion.
We investigated the effect of the sensor's distance from the scanned object as the next factor. The data indicate 15 mm as the most suitable measuring distance for both static and motion measurements. In the static measurements, the highest uncertainties were recorded at a distance of 12.5 mm; measurements in motion reached the highest uncertainties at 10 mm. It can be stated that the measuring distance affects motion measurements more than static ones: while for static measurements the uncertainties at the individual distances reach similar values, for motion measurements the uncertainty at a distance of 10 mm is significantly higher than at 12.5 and 15 mm.
When evaluating the uncertainties of the data in terms of the lighting used, we found the lowest measurement uncertainties for measurements of stopped as well as moving cubes on the conveyor, accomplished in the dark. We recorded the highest values of uncertainties when measuring under artificial light. For static measurements, the value of the uncertainties under artificial light is significantly higher than in daylight. This is not the case for measurements in motion, where the uncertainties recorded in daylight and under artificial light have very similar values, which may be caused by cube color reflections.
Our findings show that if we want to use all colors of cubes, the best combination of settings of the measurement process is a measuring distance of 15 mm and the measurement should be performed in the dark, ideally with stopped cubes.

3.1. Statistical Evaluation of Measured Data

To demonstrate a sample procedure for the statistical evaluation of uncertainties, we chose the brown cube as the customized product, based on its best results whether stopped or in motion. Table 11 shows the values of the individual measured RGB color components for a moving brown cube at a scanning distance of 15 mm, in the dark. Column R indicates the percentage of the red component, column G the percentage of the green component and column B the percentage of the blue component.
In the first evaluation step we determined the Type A standard uncertainty for the individual color components using statistical methods, according to the following equations.
$$u_{A_R} = \sqrt{\frac{\sum_{i=1}^{n}(x_{R_i} - \bar{x}_R)^2}{n(n-1)}} = 0.016\%$$
$$u_{A_G} = \sqrt{\frac{\sum_{i=1}^{n}(x_{G_i} - \bar{x}_G)^2}{n(n-1)}} = 0.006\%$$
$$u_{A_B} = \sqrt{\frac{\sum_{i=1}^{n}(x_{B_i} - \bar{x}_B)^2}{n(n-1)}} = 0.011\%$$
Total Type A standard uncertainty is then [11,15]:
$$u_{A_{total}} = \sqrt{u_{A_R}^2 + u_{A_G}^2 + u_{A_B}^2} = 0.180\%$$
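The Type A computation above can be sketched in a few lines; the readings below are illustrative stand-ins, not the measured dataset from Table 11:

```python
import math

def type_a_uncertainty(values):
    """Standard uncertainty of the mean: sqrt(sum((x_i - mean)^2) / (n(n-1)))."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / (n * (n - 1)))

def total_type_a(u_r, u_g, u_b):
    """Combine the per-component Type A uncertainties in quadrature."""
    return math.sqrt(u_r ** 2 + u_g ** 2 + u_b ** 2)

# Illustrative readings of one color component (hypothetical values)
readings = [82.85, 82.87, 82.86, 82.84, 82.88]
u = type_a_uncertainty(readings)
```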
In the next step, we defined the individual sources of Type B uncertainties based on the equipment documentation and our experimental measurements. The sources of uncertainties we used for determination of combined standard uncertainty are shown in Table 12, as mentioned in [18].
The first uncertainty component is repeatability, which denotes the total standard uncertainty determined by the Type A method. The component called cube placement by the robot, determined by the Type B method, was estimated from the return repeatability of the robot manipulator together with the assembly deviation of the robot's vacuum gripper. The component sensor distance sensitivity was determined from the documentation of the CSM-WP117A2P color sensor. When determining the illumination effect, we used experimental data obtained under the different types of lighting (darkness, daylight, artificial light). The component conveyor movement effect was calculated from the conveyor speed specified in its documentation. The component range of measured values was determined from calculations performed on our experimental data. The influence of microclimate covers the impact of the surrounding environment, in our case an air-conditioned laboratory, as mentioned in [19,25].
From the individual components of uncertainty determined by the Type B method, we subsequently calculated the total Type B uncertainty according to the following formula:
$$u_{B_{total}} = \sqrt{u_{B_1}^2 + u_{B_2}^2 + u_{B_3}^2 + u_{B_4}^2 + u_{B_5}^2 + u_{B_6}^2} = 1.410\%$$
After calculation of the total Type A standard uncertainty and total Type B standard uncertainty, we determined the combined standard uncertainty based on a formula given in [18]:
$$u_C = \sqrt{u_{A_{total}}^2 + u_{B_{total}}^2} = 1.422\%$$
When defining the expanded uncertainty, we chose the coverage factor k = 2, i.e., assuming a Gaussian distribution, for a coverage probability of 95.45%. In the sense of [19,26], the following relationship then applies to the expanded uncertainty:
$$U = k \cdot u_C = 2.844\%$$
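The combination and expansion steps above amount to two lines of arithmetic; using the totals from this subsection (note that with inputs rounded to three decimals the result agrees with the stated 1.422% only up to rounding):

```python
import math

u_a_total = 0.180   # total Type A standard uncertainty (percent)
u_b_total = 1.410   # total Type B standard uncertainty (percent)
k = 2               # coverage factor for ~95.45 % coverage probability

u_c = math.sqrt(u_a_total ** 2 + u_b_total ** 2)  # combined standard uncertainty
U = k * u_c                                       # expanded uncertainty
```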
The result of color measurements (for the brown calibration cube) using the CSM-WP117A2P color sensor after merging the color components and rounding, according to the balance table in Table 13, can be written as follows:
The resulting color $= (82.857 \pm 2.844)\%$; $k = 2$.
Similar calculations were applied to all the other colors representing customized industrial products. Based on the results, the brown color was evaluated as the best and therefore its sample calculation was presented in this subsection.

3.2. Definition of Control Limits and Control Chart Creation

For the proper functionality of a logistics system, it is necessary to ensure that sensors located at workplaces are able to correctly identify the calibration cubes based on the red, green and blue color components. We decided to use the control chart, a statistical tool, to ensure the stability of the measurement process. This is a graphical method using the principle of statistical significance tests. The purpose of control charts is to compare and visualize the current state of the measurement with respect to predefined limits, which take into account the internal variability of the measured process. The basic parts of a Shewhart control chart are [27,28]: the central line (CL), the upper control limit (UCL) and the lower control limit (LCL).
The central line represents the reference value of the displayed characteristic, in our case the average of the measured values $\bar{X}$. We assume a normal distribution; therefore, the control limits are set at a distance of $3\sigma$ on both sides of the central line, where $\sigma$ denotes the standard deviation. The control limit above the central line is the upper control limit and the one below it is the lower control limit.
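The limit computation described above can be sketched as follows; the asymmetric multipliers anticipate the widening of one limit to $4\sigma$ applied later for some color components (the sample values are illustrative):

```python
import statistics

def control_limits(values, lower_sigmas=3, upper_sigmas=3):
    """Shewhart-style limits: CL at the mean, LCL/UCL at -/+ k*sigma.
    Asymmetric multipliers allow widening one limit independently."""
    cl = statistics.mean(values)
    sigma = statistics.stdev(values)
    return cl - lower_sigmas * sigma, cl, cl + upper_sigmas * sigma

# Illustrative readings of one color component (hypothetical values)
lcl, cl, ucl = control_limits([82.85, 82.87, 82.86, 82.84, 82.88])
```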
We initially assumed the possibility of creating two charts for each calibration cube, which would cover all combinations of the measurement process settings. While one of the charts would monitor the process stability when measuring stopped cubes, the second would do this for a moving conveyor belt. As mentioned in the evaluation of uncertainties, data measured during the movement of cubes on the conveyor belt showed several times higher uncertainties than data measured when the calibration cube was stopped. Based on this finding, two control charts would allow us to define control limits for a larger number of colors in static measurements without overlapping.
The color component values of the measured color within one setting combination show a relatively low degree of variability. However, a comparison of the individual sets showed that the average measured values of the color components differ depending on the setting combination used. Further testing revealed that illumination is the main factor influencing the change of these values. Additional variability occurred with changes in scanning distance, but this difference was not as significant. Based on these findings, the best way to increase the number of identifiable colors would be to create a database of control limits for each combination of measurement process settings; the limits could then be selected based on the particular conditions at the measurement site.
From the part of this work describing measurement uncertainties, we can see the lowest achieved measurement uncertainties correspond to the brown calibration cube. The best combination of measurement settings was in the dark, at a distance of 15 mm. Therefore, we chose a brown cube measured in the dark, at a scanning distance of 15 mm, as an example for determining the stability of the measuring process using control charts.

3.3. RGB Color Component Charts for Measurement of Stopped Brown Cube

Figure 5, Figure 6 and Figure 7 show the color component control charts created from measurements of a brown cube in the dark at a scanning distance of 15 mm. We used the whole set of measurements for the given setting combination when determining the control limits, but we plotted only the first 45 measured values to maintain the clarity of the charts. The vertical axis shows the values of the measured color components in percentages; the horizontal axis shows the measurement number at which the value was recorded. We used procedures described in the literature to define the control limits [29].
Figure 5 shows the red color component control chart. When determining the control limits, we started from a normal distribution, i.e., we set the control limits at a distance of $3\sigma$ from the central line. However, in the case of a stopped cube, this distance was not sufficient for the upper control limit, as several values exceeded it, which caused instability of the control chart. We therefore moved the upper control limit to a distance of $4\sigma$, which proved sufficient to ensure the stability of the control chart on the visualized dataset, as all measured values then fell within the control limits.
We set the limits for the green and blue color components similarly. After applying the measured data, we found that the control charts are stable. The control chart for the green color component is shown in Figure 6 and the chart for the blue color component in Figure 7. As the charts show, the lowest variability was identified for the green color component; the limits based on the normal distribution were sufficient to ensure the stability of its control chart and it was not necessary to expand them. For the blue color component of the stopped cube, we extended the lower control limit to $4\sigma$. This covered all measured data without unnecessarily widening the interval by shifting both control limits.

3.4. Stability Monitoring of the Measuring Process

Based on the control charts created for the individual color components under all combinations of measurement settings for the calibration cubes, it is possible to continuously monitor the stability of the measurement process at individual workplaces in production. After a color sensor is deployed in the production process, a combination of settings is selected based on the measurement conditions at the given workplace, and the corresponding control charts are selected for the individual color components of the calibration cubes. If the measuring process is stable, i.e., all measured values lie within the control limits, the logistics system will also run smoothly, because it works with correct data.
Color identification is essential for smooth logistics operations. Without correct color identification, inaccuracies arise in the calculation of the current state of materials on the line. Such inaccuracy has a negative impact on the functionality of the system in long-running operation. By implementing control charts, we can identify values that fall outside the control limits and examine their cause.
The aim of applying control charts is to maintain the stability of the measuring process. In the case of process instability, each measured value must be assessed individually. A value can also be excluded if it is an isolated case with a large deviation from the control limits. If such cases recur and values accumulate outside the control limits, an appropriate corrective mechanism can be implemented, e.g., shortening the calibration interval, extending the control limits, or removing defects on the calibration cube or sensor.
If a sensor reading writes to the database a value that lies outside the control limits of that color component for the measured color, the color is not successfully identified. In the next part, we address the failure of color identification and the possibility of solving this issue with minimum impact on the logistics process. At the same time, we must be able to identify the origin of measurement errors.

3.5. K-Nearest Neighbors Algorithm for Identification of Values Outside the Control Limits

After a value is scanned by a sensor, it is recorded into the database table corresponding to that sensor, based on a unique sensor identifier. A measurement number, unique within the table, is assigned to the value. The measured values of the individual color components are then loaded into the logistics system. Once the values are available, the system checks whether the color components lie within the control limits of any of the defined colors. The color is identified if the component values fall inside the control limits. If the values fall outside the control limits, we still need to identify the color to avoid disrupting the logistics processes that the colors control. For identification of colors from values outside the control limits, we decided to use the K-nearest neighbors machine learning algorithm, which we implemented in the Python programming language.
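The control-limit check described here can be sketched as follows (a minimal illustration; the limit values, color set and function name are hypothetical, not taken from the article's implementation):

```python
# Hypothetical control limits (LCL, UCL) per color component for one
# measurement-setting combination; real limits come from the control charts.
LIMITS = {
    "brown": {"R": (41.0, 45.0), "G": (23.5, 25.5), "B": (14.0, 16.5)},
    "red":   {"R": (60.0, 68.0), "G": (10.0, 14.0), "B": (8.0, 12.0)},
}

def identify_color(r, g, b, limits=LIMITS):
    """Return the first color whose control limits contain all three
    components, or None if the reading falls outside every chart."""
    for color, lim in limits.items():
        if all(lo <= v <= hi
               for v, (lo, hi) in zip((r, g, b),
                                      (lim["R"], lim["G"], lim["B"]))):
            return color
    return None

print(identify_color(43.0, 24.5, 15.3))  # prints "brown"
print(identify_color(90.0, 5.0, 5.0))    # prints "None" -> passed to the KNN fallback
```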
K-nearest neighbors is a classification algorithm often used in the analysis of large datasets based on common attributes. In the first step, the algorithm assigns training data to a certain group based on their designation. The training data in our case are the values measured during our experimental measurements. These data have six independent variables, from which the resulting dependent variable determining the group is defined [30]:
  • Conveyor belt speed;
  • Illumination;
  • Scanning distance;
  • Measured values of the red, green and blue color components.
Since we divided the data for the control charts according to combinations of measurement settings, the variables conveyor belt speed, illumination and scanning distance are constant for all measured colors. Therefore, only three variables are needed to define the dependent variable, in our case the color: the measured values of the individual RGB color components.
The training set for the algorithm at a specific setting of the measurement parameters thus contains 1200 measurements, i.e., 200 for each measured calibration cube. For functionality testing, we divided the data into training and testing data in a ratio of 80/20 (training/testing). This lets us test whether the algorithm has overfitted the training data or can still respond to a new dataset. As a sample dataset, we chose measurements performed on a moving conveyor belt, in the dark, at a scanning distance of 15 mm. Figure 8 shows the distribution of the training data based on the dependent variable, in our case the color, where the coordinates of the individual points are given by their RGB values. As Figure 8 shows, the data are grouped according to the color of the cubes, with visible gaps between the individual colors. After applying the algorithm to the training data, we verified its accuracy on the test dataset. Thanks to the mentioned gaps, the algorithm achieved 100% accuracy in categorizing the test data. This algorithm configuration is subsequently used to check measured values outside the control limits.
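The training procedure can be sketched with scikit-learn's `KNeighborsClassifier` (the article does not name its library, so this is our choice; the measured dataset is not reproduced here, so the nominal cluster centers and their spread are invented stand-ins for the 1200-sample training set):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 RGB measurements (percent) per calibration cube,
# clustered around an assumed nominal color for each cube.
nominal = {
    "red": (64, 12, 10), "blue": (12, 25, 55), "pink": (55, 25, 30),
    "green": (18, 45, 20), "yellow": (60, 55, 12), "brown": (43, 24.5, 15.3),
}
X = np.vstack([rng.normal(c, 0.5, size=(200, 3)) for c in nominal.values()])
y = np.repeat(list(nominal), 200)

# 80/20 train/test split, as in the article.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)  # Euclidean distance by default
knn.fit(X_tr, y_tr)
print("test accuracy:", knn.score(X_te, y_te))
```

With clearly separated clusters like the article's Figure 8, the held-out accuracy reaches 100%.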
If a measured value outside the control limits of any color occurs during the process, it is tested by an algorithm trained for that setting, which attempts to classify it. For classification, the K-nearest neighbors algorithm uses the Euclidean distances of the trained data to the value to be classified. According to its input parameter, the number of neighbors to search for, it selects the given number of points with the lowest Euclidean distance to the value being classified. The class of the new value is identified as the class to which most of the selected neighbors belong [15,31].
The trained algorithm is therefore a tool for immediate estimation of the measured color when a recorded value is outside the control limits. By applying the algorithm, we can reduce inaccuracies between the real and digital control systems and ensure smooth running of the process. In our case, it is necessary to set a maximum distance to the nearest neighbor, which filters out the category of values created by incorrect measurements. If we did not set this maximum distance tolerance, every measurement would be identified as one of the colors, no matter how far the measured values were from the control limits of the defined colors. The application of the algorithm is important especially when deploying sensors at a new workplace, until all environmental influences are identified. However, the algorithm only offers a temporary way of identifying values outside the control limits. If a large number of values fall outside the control limits during the measurements, one of the abovementioned corrective mechanisms must be applied, e.g., an extension of the control limits.
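One way to add such a maximum-distance tolerance on top of a trained classifier is to query the distance to the single nearest training point and refuse to classify when it exceeds the threshold. The sketch below uses scikit-learn and invented RGB values and threshold (the article does not specify its library, data or tolerance):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_with_tolerance(knn, value, max_dist):
    """Classify `value` with a trained KNN model, but refuse to guess
    when even the nearest training point is farther than `max_dist`;
    such readings are treated as incorrect measurements."""
    dist, _ = knn.kneighbors([value], n_neighbors=1)
    if dist[0, 0] > max_dist:
        return None          # too far from every known color
    return knn.predict([value])[0]

# Tiny illustrative training set (hypothetical RGB percentages).
X = np.array([[43.0, 24.5, 15.3], [43.2, 24.6, 15.1],
              [64.0, 12.0, 10.0], [63.8, 12.2, 9.9]])
y = np.array(["brown", "brown", "red", "red"])
knn = KNeighborsClassifier(n_neighbors=2).fit(X, y)

print(classify_with_tolerance(knn, [43.1, 24.4, 15.2], max_dist=2.0))  # brown
print(classify_with_tolerance(knn, [90.0, 5.0, 5.0], max_dist=2.0))   # None
```

Returning `None` instead of a forced guess is what triggers the corrective mechanisms mentioned above.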

4. Discussion

One of the main trends nowadays is a shift away from mass series production towards custom production driven by the increasing requirements of demanding customers [8,32]. A consequence of this trend is often several variations of the same product on one production line at the same time. Each variation has its own specifics which must be taken into account within the production process. For this reason, it is currently essential to recognize products with the highest possible accuracy and speed. Every stop due to product identification increases the work cycle and prolongs the production time of a product. Technology with intelligent recognition capability is relatively expensive and difficult to maintain [33].
The article offers an advantageous alternative to expensive and complex technology for dynamic scanning and identification of products in motion: a cheap static industrial color sensor with simple maintenance and adjustment, usable wherever the type of production allows. It is a simple color sensor commonly used in industry [34,35,36,37]. The difference here is its use in dynamic identification, for which it is not primarily suited due to its sensor characteristics. Based on our research and experiments, we can responsibly declare that it is possible to use a simple sensor for the more complex task of dynamic product identification. However, statistical methods must be used to obtain combined standard uncertainties and control charts for a given sensor [38,39]. Subsequently, the mentioned K-nearest neighbors machine learning algorithm allows us to rectify sensor errors in dynamic color scanning [16]. By using the methodology proposed in this article, a static color sensor that performs best when scanning static products becomes capable of identifying products even in motion. On-the-fly identification speeds up the entire production line and thus allows more products to be produced in the same time. Not least, since the conveyor belt does not stop, it significantly extends the operating life of the technology (by minimizing mechanical shocks), reduces required service (thanks to less mechanical component damage) and saves electricity (by skipping energy-inefficient start-ups).
We currently see a demand for product identification technology in industrial enterprises [30]. Therefore, we would like to focus on enhancing identification methods in the future. We plan to incorporate optical methods for dynamic product recognition using special 3D sensors which are already at our disposal. These include the Photoneo PhoXi [40] or cheaper OpenCV camera alternatives [41], where we anticipate their integration with standard Cognex industrial cameras, supported by additional software for intelligent product recognition [42].

Author Contributions

Conceptualization, J.V. and P.V.; methodology, J.V.; software, M.Š.; validation, J.R., J.S. and P.V.; formal analysis, J.V.; investigation, D.Š.; resources, D.Š.; data curation, J.R.; writing—original draft preparation, J.V.; writing—review and editing, D.Š.; visualization, D.Š.; supervision, J.V.; project administration, J.V.; funding acquisition, J.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the financial Slovak Grant Agency APVV, project ID: APVV-17-0214 and Scientific Grant Agency VEGA of the Ministry of Education of Slovak Republic (grant number: 1/0317/17) and the Scientific Grant Agency KEGA (grant number: 007STU-4/2021 and 024STU-4/2020).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sardar, S.K.; Sarkar, B.; Kim, B. Integrating Machine Learning, Radio Frequency Identification, and Consignment Policy for Reducing Unreliability in Smart Supply Chain Management. Processes 2021, 9, 247. [Google Scholar] [CrossRef]
  2. Tran, N.-H.; Park, H.-S.; Nguyen, Q.-V.; Hoang, T.-D. Development of a Smart Cyber-Physical Manufacturing System in the Industry 4.0 Context. Appl. Sci. 2019, 9, 3325. [Google Scholar] [CrossRef] [Green Version]
  3. Gardner, H.; Kleiner, F.S.; Mamiya, C.J.; Tansey, G.T. Gardner’s Art through the Ages, 11th ed.; Harcourt College Publishers: Fort Worth, TX, USA, 2001. [Google Scholar]
  4. Mabkhot, M.M.; Al-Ahmari, A.M.; Salah, B.; Alkhalefah, H. Requirements of the Smart Factory System: A Survey and Perspective. Machines 2018, 6, 23. [Google Scholar] [CrossRef] [Green Version]
  5. Kritzinger, W.; Karner, M.; Traar, G.; Henjes, J.; Sihn, W. Digital Twin in manufacturing: A categorical literature review and classification. IFAC PapersOnLine 2018, 51, 1016–1022. [Google Scholar] [CrossRef]
  6. Tao, F.; Zhang, H.; Liu, A.; Nee, A.Y.C. Digital Twin in Industry: State-of-the-Art. IEEE Trans. Ind. Inform. 2019, 15, 2405–2415. [Google Scholar] [CrossRef]
  7. Cohen, Y.; Naseraldin, H.; Chaudhuri, A.; Pilati, F. Assembly systems in Industry 4.0 era: A road map to understand Assembly 4.0. Int. J. Adv. Manuf. Technol. 2019, 105, 4037–4054. [Google Scholar] [CrossRef]
  8. Valencia, E.T.; Lamouri, S.; Pellerin, R.; Dubois, P.; Moeuf, A. Production Planning in the Fourth Industrial Revolution: A Literature Review. IFAC PapersOnLine 2019, 52, 2158–2163. [Google Scholar] [CrossRef]
  9. Burmester, M.; Munilla, J.; Ortiz, A.; Caballero-Gil, P. An RFID-Based Smart Structure for the Supply Chain: Resilient Scanning Proofs and Ownership Transfer with Positive Secrecy Capacity Channels. Sensors 2017, 17, 1562. [Google Scholar] [CrossRef] [PubMed]
  10. Benito-Altamirano, I.; Pfeiffer, P.; Cusola, O.; Daniel Prades, J. Machine-Readable Pattern for Colorimetric Sensor Interrogation. Proceedings 2018, 2, 906. [Google Scholar] [CrossRef] [Green Version]
  11. Kang, H.R. Color Technology for Electronic Imaging Devices; SPIE Optical Engineering Press: Bellingham, WA, USA, 1997. [Google Scholar]
  12. Ford, A.; Roberts, A. Colour Space Conversions; Westminster University: London, UK, 1998. [Google Scholar]
  13. International Colour Consortium (ICC). Available online: http://www.color.Org/faqs.xalter#wh2 (accessed on 14 February 2021).
  14. Menesatti, P.; Angelini, C.; Pallottino, F.; Antonucci, F.; Aguzzi, J.; Costa, C. RGB Color Calibration for Quantitative Image Analysis: The “3D Thin-Plate Spline” Warping Approach. Sensors 2012, 12, 7063–7079. [Google Scholar] [CrossRef] [Green Version]
  15. Okfalisa; Gazalba, I.; Mustakim, M.; Reza, N.G.I. Comparative analysis of k-nearest neighbor and modified k-nearest neighbor algorithm for data classification. In Proceedings of the 2nd International conferences on Information Technology, Information Systems and Electrical Engineering (ICITISEE), Yogyakarta, Indonesia, 1–3 November 2017; pp. 294–298. [Google Scholar] [CrossRef]
  16. Ohmori, S.A. Predictive Prescription Using Minimum Volume k-Nearest Neighbor Enclosing Ellipsoid and Robust Optimization. Mathematics 2021, 9, 119. [Google Scholar] [CrossRef]
  17. Yoo, Y.; Yoo, W.S. Turning Image Sensors into Position and Time Sensitive Quantitative Colorimetric Data Sources with the Aid of Novel Image Processing/Analysis Software. Sensors 2020, 20, 6418. [Google Scholar] [CrossRef]
  18. KUKA KR3 R540. [online]. © KUKA AG 2020. Available online: https://www.kuka.com/sk-sk/servisn%C3%A9-slu%C5%BEby/downloads?terms=Language:sk:1;Language:en:1Language:en:1&q= (accessed on 29 January 2021).
  19. Industrial Robots. General Technical Requirements; STN 18 6508; Slovak Office of Standards, Metrology and Testing: Bratislava, Slovakia, 1990; p. 8. [Google Scholar]
  20. CSM Color Sensor. [online]. © 2020 SICK AG. Available online: https://www.sick.com/ag/en/registration-sensors/color-sensors/csm/c/g305962 (accessed on 29 January 2021).
  21. Vašek, P. Design of Methodology and Measurement Model for Testing the Logistics System in Flexible Production and Design of Algorithms for its Optimization; SUT: Bratislava, Slovakia, 2020. [Google Scholar]
  22. Kelemenová, T.; Dovica, M. Gauge Calibration, 1st ed.; Technical University of Košice, Faculty of Mechanical Engineering, Edition of Scientific and Professional Literature: Košice, Slovakia, 2016; p. 232. ISBN 978-80-553-3069-3. [Google Scholar]
  23. Skibicki, J.; Golijanek-Jędrzejczyk, A.; Dzwonkowski, A. The Influence of Camera and Optical System Parameters on the Uncertainty of Object Location Measurement in Vision Systems. Sensors 2020, 20, 5433. [Google Scholar] [CrossRef] [PubMed]
  24. Barone, F.; Marrazzo, M.; Oton, C.J. Camera Calibration with Weighted Direct Linear Transformation and Anisotropic Uncertainties of Image Control Points. Sensors 2020, 20, 1175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Wimmer, G.; Palenčár, R.; Witkovský, V.; Ďuriš, S. Evaluation of Gauge Calibration: Statistical Methods for Analysis of Uncertainties in Metrology, 1st ed.; SUT: Bratislava, Slovakia, 2015; p. 191. ISBN 978-80-227-4374-7. [Google Scholar]
  26. Němeček, P. Measurement Uncertainties, 1st ed.; Czech Society for Quality: Praha, Czech Republic, 2008; p. 96. ISBN 978-80-02-02089-9. [Google Scholar]
  27. Vašek, P.; Rybář, J.; Vachálek, J. Identification of colored objects and factors affecting the control of the measurement process in the experimental workplace. Metrol. Test. 2020, 1, 4–7. [Google Scholar]
  28. Palenčár, R.; Kureková, E.; Halaj, M. Measurement and Metrology for Managers; SUT: Bratislava, Slovakia, 2007; p. 252. ISBN 978-80-227-2743-3. [Google Scholar]
  29. Palenčár, R.; Ruiz, J.M.; Janiga, I.; Horníková, A. Statistical Methods in Metrological and Testing Laboratories; SUT: Bratislava, Slovakia, 2001; p. 366. ISBN 80-968449-3-8. [Google Scholar]
  30. Zheng, N.; Lu, X. Comparative Study on Push and Pull Production System Based on Anylogic. In Proceedings of the International Conference on Electronic Commerce and Business Intelligence, Beijing, China, 6–7 June 2009; pp. 455–458. [Google Scholar] [CrossRef]
  31. Peng, X.; Chen, R.; Yu, K.; Ye, F.; Xue, W. An Improved Weighted K-Nearest Neighbor Algorithm for Indoor Localization. Electronics 2020, 9, 2117. [Google Scholar] [CrossRef]
  32. Micieta, B.; Binasova, V.; Lieskovsky, R.; Krajcovic, M.; Dulina, L. Product Segmentation and Sustainability in Customized Assembly with Respect to the Basic Elements of Industry 4.0. Sustainability 2019, 11, 6057. [Google Scholar] [CrossRef] [Green Version]
  33. Groover, M.P. Automation, Production Systems, and Computer-Integrated Manufacturing; Pearson Education, Inc.: Upper Saddle River, NJ, USA, 2008; ISBN 978-0132393218. [Google Scholar]
  34. Jia, J. A Machine Vision Application for Industrial Assembly Inspection. In Proceedings of the Second International Conference on Machine Vision, Dubai, UAE, 28–30 December 2009; pp. 172–176. [Google Scholar] [CrossRef]
  35. Wu, D.; Sun, D.W. Colour measurements by computer vision for food quality control—A review. Trends Food Sci. Technol. 2013, 29, 5–20. [Google Scholar] [CrossRef]
  36. Li, J. Application Research of Vision Sensor in Material Sorting Automation Control System. IOP Conf. Ser. Mater. Sci. Eng. 2020, 782, 022074. [Google Scholar] [CrossRef]
  37. Shrestha, A.; Karki, N.; Yonjan, R.; Subedi, M.; Phuyal, S. Automatic Object Detection and Separation for Industrial Process Automation. In Proceedings of the IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 22–23 February 2020; pp. 1–5. [Google Scholar] [CrossRef]
  38. Mandel, B.J. The Regression Control Chart. J. Qual. Technol. 1969, 1, 1–9. [Google Scholar] [CrossRef]
  39. Vapnik, V. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin, Germany, 2013; ISBN 978-1-4757-3264-1. [Google Scholar]
  40. Photoneo PhoXi Scanners. Available online: https://www.photoneo.com/phoxi-3d-scanner/ (accessed on 14 February 2021).
  41. Koori, A.; Anitei, D.; Boitor, A.; Silea, I. Image-Processing-Based Low-Cost Fault Detection Solution for End-of-Line ECUs in Automotive Manufacturing. Sensors 2020, 20, 3520. [Google Scholar] [CrossRef]
  42. Cognex Vision Pro Deep Learning. Available online: https://www.cognex.com/products/deep-learning/visionpro-deep-learning (accessed on 14 February 2021).
Figure 1. (a) Design of test robotic workplace in the Siemens Tecnomatix Process Simulate environment; (b) Calibration cubes for simulation of six different customized products.
Figure 2. Measurement process workflow diagram.
Figure 3. Robot range extension using flange.
Figure 4. (a) Color sensor CSM—WP117A2P; (b) Sensing distance of sensor [20].
Figure 5. Red color component control chart for a brown cube measured in the dark, at a scanning distance of 15 mm, (a) stopped cube; (b) cube in motion.
Figure 6. Green color component control chart for a brown cube measured in the dark, at a scanning distance of 15 mm; (a) stopped cube; (b) cube in motion.
Figure 7. Blue color component control chart for a brown cube measured in the dark, at a scanning distance of 15 mm; (a) stopped cube; (b) cube in motion.
Figure 8. Visualized training data.
Table 1. KUKA KR3 R540 parameter overview.
Maximum reach: 541 mm
Payload: 3 kg
Pose repeatability: ±0.02 mm
Number of axes: 6
Mounting positions: Ceiling, Floor, Wall
Footprint: 179 × 179 mm
Weight: 26.5 kg
Ambient operating temperature: 5 °C–45 °C
Protection class: IP40
Controller: KR C-4 compact
Table 2. Automatica conveyor belt specifications.
Length: 1500 mm
Width: 25 mm
Height: 1000 mm
Motor: three-phase motor Nord
Control: Siemens Sinamics V20 frequency converter
Maximum revolutions: 1415 RPM
Belt material: rubber with anti-slip surface
Table 3. CSM-WP117A2P color sensor features.
Dimensions: 22 mm × 12 mm × 32 mm
Sensing distance: 12.5 mm
Sensing distance tolerance: ±3 mm
Light source: Light Emitting Diode (LED), RGB
Wavelength: 640 nm, 525 nm, 470 nm
Light spot size: 1.5 mm × 6.5 mm
Response time: 300 µs
Supply voltage: DC 12 V … 24 V
Output (channel): 1 color/8 colors via IO-Link
Table 4. Type A standard uncertainty for a red cube.
Illumination | Sensor distance (mm) | Cube stopped: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%) | Cube in motion: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%)
Natural daylight | 10 | 0.010 | 0.005 | 0.011 | 0.016 | 0.105 | 0.021 | 0.026 | 0.110
Natural daylight | 12.5 | 0.001 | 0.007 | 0.011 | 0.013 | 0.079 | 0.031 | 0.034 | 0.091
Natural daylight | 15 | 0.004 | 0.011 | 0.001 | 0.012 | 0.060 | 0.023 | 0.020 | 0.067
Artificial light | 10 | 0.000 | 0.003 | 0.001 | 0.003 | 0.198 | 0.036 | 0.043 | 0.206
Artificial light | 12.5 | 0.002 | 0.003 | 0.001 | 0.004 | 0.458 | 0.041 | 0.051 | 0.463
Artificial light | 15 | 0.004 | 0.001 | 0.011 | 0.012 | 0.065 | 0.030 | 0.028 | 0.077
Darkness | 10 | 0.000 | 0.011 | 0.000 | 0.011 | 0.160 | 0.029 | 0.034 | 0.166
Darkness | 12.5 | 0.016 | 0.000 | 0.008 | 0.018 | 0.064 | 0.025 | 0.027 | 0.074
Darkness | 15 | 0.000 | 0.003 | 0.005 | 0.006 | 0.103 | 0.029 | 0.027 | 0.110
Table 5. Type A standard uncertainty for a blue cube.
Illumination | Sensor distance (mm) | Cube stopped: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%) | Cube in motion: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%)
Natural daylight | 10 | 0.000 | 0.004 | 0.022 | 0.022 | 0.021 | 0.093 | 0.259 | 0.276
Natural daylight | 12.5 | 0.011 | 0.003 | 0.004 | 0.012 | 0.020 | 0.027 | 0.049 | 0.059
Natural daylight | 15 | 0.002 | 0.005 | 0.012 | 0.013 | 0.030 | 0.059 | 0.057 | 0.087
Artificial light | 10 | 0.010 | 0.012 | 0.000 | 0.016 | 0.018 | 0.115 | 0.318 | 0.339
Artificial light | 12.5 | 0.002 | 0.006 | 0.008 | 0.010 | 0.018 | 0.018 | 0.056 | 0.062
Artificial light | 15 | 0.014 | 0.000 | 0.007 | 0.016 | 0.052 | 0.106 | 0.106 | 0.159
Darkness | 10 | 0.014 | 0.001 | 0.002 | 0.014 | 0.022 | 0.095 | 0.235 | 0.254
Darkness | 12.5 | 0.001 | 0.007 | 0.004 | 0.008 | 0.024 | 0.030 | 0.027 | 0.047
Darkness | 15 | 0.011 | 0.003 | 0.007 | 0.013 | 0.042 | 0.111 | 0.113 | 0.164
Table 6. Type A standard uncertainty for a pink cube.
Illumination | Sensor distance (mm) | Cube stopped: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%) | Cube in motion: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%)
Natural daylight | 10 | 0.003 | 0.005 | 0.000 | 0.006 | 0.119 | 0.035 | 0.066 | 0.141
Natural daylight | 12.5 | 0.005 | 0.008 | 0.019 | 0.021 | 0.197 | 0.043 | 0.137 | 0.244
Natural daylight | 15 | 0.006 | 0.003 | 0.016 | 0.017 | 0.035 | 0.042 | 0.037 | 0.066
Artificial light | 10 | 0.018 | 0.022 | 0.010 | 0.030 | 0.192 | 0.218 | 0.230 | 0.371
Artificial light | 12.5 | 0.000 | 0.015 | 0.019 | 0.024 | 0.213 | 0.084 | 0.131 | 0.264
Artificial light | 15 | 0.006 | 0.007 | 0.012 | 0.015 | 0.077 | 0.053 | 0.036 | 0.100
Darkness | 10 | 0.027 | 0.006 | 0.016 | 0.032 | 0.087 | 0.039 | 0.056 | 0.111
Darkness | 12.5 | 0.000 | 0.014 | 0.014 | 0.020 | 0.149 | 0.028 | 0.074 | 0.169
Darkness | 15 | 0.000 | 0.000 | 0.008 | 0.008 | 0.045 | 0.031 | 0.023 | 0.059
Table 7. Type A standard uncertainty for a green cube.
Illumination | Sensor distance (mm) | Cube stopped: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%) | Cube in motion: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%)
Natural daylight | 10 | 0.007 | 0.012 | 0.001 | 0.014 | 0.028 | 0.093 | 0.048 | 0.108
Natural daylight | 12.5 | 0.011 | 0.002 | 0.017 | 0.020 | 0.021 | 0.031 | 0.020 | 0.042
Natural daylight | 15 | 0.004 | 0.014 | 0.004 | 0.015 | 0.053 | 0.179 | 0.041 | 0.191
Artificial light | 10 | 0.005 | 0.002 | 0.002 | 0.006 | 0.044 | 0.147 | 0.079 | 0.173
Artificial light | 12.5 | 0.014 | 0.019 | 0.012 | 0.026 | 0.023 | 0.034 | 0.027 | 0.049
Artificial light | 15 | 0.001 | 0.016 | 0.012 | 0.020 | 0.026 | 0.045 | 0.024 | 0.057
Darkness | 10 | 0.007 | 0.001 | 0.003 | 0.008 | 0.020 | 0.051 | 0.027 | 0.061
Darkness | 12.5 | 0.000 | 0.004 | 0.008 | 0.009 | 0.020 | 0.029 | 0.020 | 0.041
Darkness | 15 | 0.008 | 0.009 | 0.013 | 0.018 | 0.038 | 0.054 | 0.022 | 0.070
Table 8. Type A standard uncertainty for a yellow cube.
Illumination | Sensor distance (mm) | Cube stopped: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%) | Cube in motion: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%)
Natural daylight | 10 | 0.001 | 0.000 | 0.013 | 0.013 | 0.292 | 0.280 | 0.125 | 0.423
Natural daylight | 12.5 | 0.008 | 0.002 | 0.014 | 0.016 | 0.145 | 0.123 | 0.103 | 0.216
Natural daylight | 15 | 0.014 | 0.001 | 0.004 | 0.015 | 0.172 | 0.144 | 0.048 | 0.229
Artificial light | 10 | 0.019 | 0.003 | 0.003 | 0.019 | 0.383 | 0.333 | 0.112 | 0.520
Artificial light | 12.5 | 0.004 | 0.005 | 0.000 | 0.006 | 0.118 | 0.104 | 0.092 | 0.182
Artificial light | 15 | 0.014 | 0.007 | 0.011 | 0.019 | 0.144 | 0.112 | 0.027 | 0.184
Darkness | 10 | 0.001 | 0.008 | 0.015 | 0.017 | 0.268 | 0.255 | 0.089 | 0.380
Darkness | 12.5 | 0.002 | 0.014 | 0.010 | 0.017 | 0.158 | 0.099 | 0.078 | 0.202
Darkness | 15 | 0.006 | 0.014 | 0.000 | 0.015 | 0.163 | 0.141 | 0.041 | 0.219
Table 9. Type A standard uncertainty for a brown cube.
Illumination | Sensor distance (mm) | Cube stopped: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%) | Cube in motion: u_AR (%) | u_AG (%) | u_AB (%) | u_total (%)
Natural daylight | 10 | 0.000 | 0.002 | 0.001 | 0.002 | 0.078 | 0.091 | 0.057 | 0.133
Natural daylight | 12.5 | 0.008 | 0.008 | 0.014 | 0.018 | 0.043 | 0.056 | 0.051 | 0.087
Natural daylight | 15 | 0.001 | 0.002 | 0.003 | 0.004 | 0.062 | 0.032 | 0.029 | 0.076
Artificial light | 10 | 0.000 | 0.001 | 0.005 | 0.005 | 0.038 | 0.045 | 0.037 | 0.070
Artificial light | 12.5 | 0.008 | 0.009 | 0.002 | 0.012 | 0.019 | 0.031 | 0.067 | 0.076
Artificial light | 15 | 0.001 | 0.000 | 0.003 | 0.003 | 0.095 | 0.055 | 0.052 | 0.121
Darkness | 10 | 0.000 | 0.006 | 0.001 | 0.006 | 0.117 | 0.116 | 0.085 | 0.185
Darkness | 12.5 | 0.012 | 0.001 | 0.013 | 0.018 | 0.031 | 0.031 | 0.039 | 0.059
Darkness | 15 | 0.005 | 0.000 | 0.007 | 0.009 | 0.050 | 0.026 | 0.033 | 0.065
Table 10. Type B uncertainty components.
Uncertainty component | Uncertainty type | Uncertainty value | Distribution
Repeatability | u_A | see Tables 4–9 | ---
Cube placement by the robot * | u_B1 | 0.02% | uniform
Sensor distance sensitivity | u_B2 | 0.100% (10 mm); 0.105% (12.5 mm); 0.096% (15 mm) | uniform
Illumination effect ** | u_B3 | 0.700% (darkness); 1.000% (artificial light); 0.800% (daylight) | uniform
Conveyor movement effect * | u_B4 | 0.005% (motion); 0.000% (static) | uniform
Range of measured values | u_B5 | (0 ÷ 7)% | uniform
Microclimate *** | u_B6 | 0.1% | uniform
* value estimated based on the documentation; ** value estimated from experimental measurements; *** estimated value.
Table 11. Measured values of the RGB color components (sample measurement series).
Measurement number | Red (%) | Green (%) | Blue (%)
1 | 43.433 | 24.600 | 15.743
2 | 42.886 | 24.457 | 15.267
3 | 43.100 | 24.457 | 15.800
4 | 42.200 | 24.378 | 14.767
5 | 41.725 | 23.933 | 15.267
6 | 42.767 | 24.800 | 15.600
7 | 43.100 | 24.350 | 15.475
8 | 43.400 | 24.725 | 15.600
9 | 44.225 | 24.600 | 14.767
10 | 43.850 | 24.600 | 14.600
11 | 43.433 | 24.711 | 14.800
12 | 43.100 | 24.711 | 15.171
13 | 43.000 | 24.933 | 15.933
14 | 42.100 | 23.933 | 15.433
15 | 42.600 | 24.725 | 15.800
Sum | 644.919 | 367.913 | 230.023
Average color representation | 42.9946 | 24.52753 | 15.33487
The resulting color | 82.857
Table 12. Uncertainty sources for the brown cube.
Uncertainty component | Uncertainty type | Uncertainty value | Distribution
Repeatability | u_A | 0.180% | ---
Cube placement by the robot * | u_B1 | 0.020% | uniform
Sensor distance sensitivity | u_B2 | 0.096% | uniform
Illumination effect ** | u_B3 | 0.700% | uniform
Conveyor movement effect * | u_B4 | 0.005% | uniform
Range of measured values | u_B5 | 1.216% | uniform
Microclimate *** | u_B6 | 0.100% | uniform
* value estimated based on the documentation; ** value estimated from experimental measurements; *** estimated value related to the workplace.
Table 13. Balance table of uncertainties for the brown cube.
Uncertainty balance for the brown calibration cube moving on the conveyor
Measurement impact | Standard uncertainty u_i (%) | Distribution | Sensitivity coefficient c_i | Uncertainty contribution c_i·u_i (%) | (c_i·u_i)² (%)
u_A, Repeatability | 0.180 | --- | 1 | 0.180 | 0.032400
u_B1, Cube placement by the robot | 0.020 | uniform | 1 | 0.020 | 0.000400
u_B2, Sensor distance sensitivity | 0.096 | uniform | 1 | 0.096 | 0.009216
u_B3, Illumination effect | 0.700 | uniform | 1 | 0.700 | 0.490000
u_B4, Conveyor movement effect | 0.005 | uniform | 1 | 0.005 | 0.000025
u_B5, Range of measured values | 1.216 | uniform | 1 | 1.216 | 1.478000
u_B6, Microclimate | 0.100 | uniform | 1 | 0.100 | 0.010000
Combined standard uncertainty u_c (%): 1.422000
Expanded uncertainty U (%): 2.844000
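The combination in Table 13 can be reproduced numerically; the sketch below assumes unit sensitivity coefficients and a coverage factor k = 2, as the table does:

```python
import math

# Uncertainty contributions c_i * u_i (%) from the balance table,
# all with sensitivity coefficient c_i = 1.
contributions = {
    "u_A (repeatability)": 0.180,
    "u_B1 (cube placement)": 0.020,
    "u_B2 (sensor distance)": 0.096,
    "u_B3 (illumination)": 0.700,
    "u_B4 (conveyor movement)": 0.005,
    "u_B5 (range of values)": 1.216,
    "u_B6 (microclimate)": 0.100,
}

# Combined standard uncertainty: root sum of squares of the contributions.
u_c = math.sqrt(sum(u ** 2 for u in contributions.values()))
# Expanded uncertainty with coverage factor k = 2.
U = 2 * u_c
print(f"u_c = {u_c:.3f} %, U = {U:.3f} %")  # u_c rounds to 1.422 %
```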