Abstract
For remanufacturing to become more economically attractive, automatic disassembly and automated visual detection methods need to be developed. Screw removal is a common step in end-of-life product disassembly for remanufacturing. This paper presents a two-stage detection framework for structurally damaged screws, together with a linear regression model of reflection features that allows the framework to operate under uneven illumination conditions. The first stage uses reflection features, combined with the reflection feature regression model, to extract candidate screw regions. The second stage uses texture features to filter out false regions whose reflection features resemble those of screws. A self-optimisation strategy and weighted fusion connect the two stages. The detection framework was implemented on a robotic platform designed for disassembling electric vehicle batteries. The method allows screw removal to be conducted automatically in complex disassembly tasks, and its use of reflection features and data learning offers new directions for further research.
1. Introduction
Remanufacturing is part of a circular economy, returning end-of-life (EOL) products to at least like-new condition through a sequence of operations, beginning with disassembly [1,2]. The environmental, social, and economic benefits of remanufacturing, such as reduced carbon emissions and energy consumption, have been widely confirmed [3,4,5,6]. Recently, research on remanufacturing electric vehicle (EV) batteries has attracted much attention. As an environmentally friendly option for transportation, EVs have accounted for a considerably increasing share of all cars sold worldwide in the last two years. However, this increase has led to the rapid disposal of EV components, which itself threatens the environment, society, and the economy. Among these components, EV batteries are valuable for recycling because of their expensive and hazardous materials (e.g., lithium and cobalt). In addition, the lifespan of EV batteries is approximately 10 years, which, together with the growing demand for EVs, further intensifies the need to remanufacture them [7].
Disassembly is the first and unavoidable operation in the remanufacturing process, driven by stricter environmental regulations and the growing demand for effective product remanufacturing [8,9,10]. In current applications, manual disassembly is still the main strategy; it consumes a large amount of energy and exposes operators to hazardous materials. Robotics has been considered an important means of realising automatic disassembly because of its intelligent perception and effective execution systems [11]. As the most widely used perception system, robot vision can automatically investigate the structural information of EOL products and contribute to decision making in the disassembly process, such as disassembly sequence planning [12,13]. However, robot vision still faces challenges in disassembly tasks, mainly caused by the uncertainty and complexity of EOL products [14]. The structural information of EOL products is usually changed and difficult to estimate, which is a barrier to setting detection criteria. In addition, 2-dimensional (2D) cameras are the most commonly adopted sensors in robot vision systems, and 2D images are widely used to describe the structural information of EOL products accurately and efficiently. However, image quality is closely related to lighting conditions, and creating a stable lighting condition, which is essential for accomplishing disassembly tasks stably and accurately, is difficult. The study of robot vision is therefore worthwhile for disassembly.
Screw detection and removal are necessary for almost all EOL products, and used screws are usually structurally damaged, exhibiting cracks, fractures, and wear, as shown in Figure 1. The uncertain condition of used screws poses challenges to stable and accurate detection in disassembly tasks. In existing screw detection methods, the detection criteria are mainly designed around the texture features of original screws or of a small number of used screws. The performance of these methods in disassembly tasks is limited due to the following challenges:
Figure 1.
Examples of used and structurally damaged screws.
- (1)
- The texture features obtained from the original screws cannot characterise the used screws accurately due to the unavoidable structural damage during the use of the product.
- (2)
- The robustness of texture features extracted from a small number of available used and damaged screws is limited due to the uncertain conditions of used screws.
- (3)
- The texture features are easily affected by illumination conditions. Current detection methods cannot operate stably under uneven illumination conditions.
To address these issues, this paper presents a novel screw detection method containing a two-stage detection framework and a reflection feature regression model to detect structurally damaged screws in EOL products under uneven illumination conditions. The contributions of this paper are as follows:
- (1)
- It presents a robust feature descriptor for structurally damaged screws by integrating reflection features and texture features. This is beneficial in weakening the influence of structural damage on modelling screws.
- (2)
- It proposes a linear regression model that enables the reflection features to be updated based on the illumination conditions automatically, which contributes to the stable operation of feature modelling under various illumination conditions. In addition, an illumination label is defined to measure illumination conditions automatically and conveniently from the point of view of the image.
- (3)
- It details a two-stage detection framework based on the proposed feature descriptor. With the help of the linear regression model, the two-stage detection framework can extract used screws from EOL products under uneven illumination conditions.
Based on the proposed screw detection method, automatic screw removal was realised for an EOL EV battery using a robotic disassembly platform. The experimental results demonstrate the satisfactory accuracy and stability of the proposed screw detection method.
The remainder of this paper is organised as follows. Section 2 reviews the visual detection work in disassembly cases. Section 3 introduces the proposed screw detection method. Section 4 describes the equipment and experiments, while the results are discussed in Section 5. Section 6 summarises this paper and lists future work.
2. Related Work
2.1. Screw Detection for Disassembly
In disassembly tasks, screw detection plays a vital role by providing the screw positions for subsequent disassembly. Existing screw detection methods can be classified into experience-based methods and data-driven methods. Experience-based methods utilise prior product information provided by skilled operators to set detection criteria and then extract the screws. Knowledge and models are the most widely used forms of such experience because they are straightforward to summarise. Gli et al. [15] set detection criteria based on screw contour information and adopted well-known strategies (e.g., polygonal approaches) to detect screws in EOL computers. Bdiwi et al. [16] utilised Kinect to collect and characterise screws in the form of greyscale, depth, and HSV values and then proposed a three-stage screw detection framework for removing screws from EOL EV motors. DiFilippo et al. [17] combined Gaussian blur, Prewitt edge detection, region erosion, and Hough and circle detection methods in screw detection, successfully removing 96.5% of the screws from EOL laptops. The parameters in the above methods were all determined from the structural information of the original screws. Consequently, the developed criteria cannot represent used screws well due to the uncertainty of EOL products, and such criteria are also hard to formulate because of the complexity of EOL products.
The detection criteria of data-driven methods are obtained through data learning, typically with deep learning models. In existing studies, researchers have focused on applying and optimising state-of-the-art models. DiFilippo et al. [18] designed a cognitive architecture based on Soar’s long-term and semantic memory functions, which performed well in determining the label and position of screws in laptops; with this architecture, the inference time was decreased by up to 60%, and the average inference time was decreased by 10% for most EOL laptops. Foo et al. [19] employed the residual network (ResNet) to unfasten crosshead screws from EOL LCD monitors and achieved optimal precision and recall rates of 91.8% and 83.6%. Mangold et al. [20] used the you only look once (YOLO) model to detect screws in a vision-based robotic disassembly platform, achieving a mean average precision of around 0.98. In applying these models, strategies such as transfer learning and dropout were adopted to reduce the randomness of training. However, these detection models require a large amount of labelled data for training, which poses great challenges in disassembly tasks. Moreover, the robustness of deep learning models is unsatisfactory given the uncertainty of EOL products: there is no guarantee that a feature extractor trained on a poorly diversified dataset can accurately characterise unseen EOL products.
Some researchers have combined experience-based and data-driven methods. Li et al. [21] proposed an accurate screw detection method based on the faster region-based convolutional neural network (Faster R-CNN) model and rotation edge similarity, where Faster R-CNN is a deep learning model and the rotation edge similarity is designed from prior experience provided by operators. This strategy improves the robustness of the detection method and reduces the amount of training data required, reaching up to a 90.8% success rate in the disassembly process. The screw detection method proposed in this paper follows this combined strategy.
In addition, existing studies design detection criteria mainly based on texture features, which are easily changed and difficult to estimate in used screws. Furthermore, lighting conditions have not been fully considered, which is also an important practical problem. Both problems are addressed in this study, where a robust feature descriptor is designed to characterise used screws.
2.2. Reflection Feature for Object Detection
Reflection features can be used to represent both illumination conditions and object characteristics. Currently, reflection removal and reflection detection are the two main ways reflection is handled in the object detection field. In reflection removal, reflection features are regarded as noise that corrupts object characteristics; researchers have proposed various removal methods, mainly based on retinex theory, which assumes that an image can be decomposed into reflection and illumination components [22,23,24]. In reflection detection, reflection features are utilised as intrinsic features. Sudo et al. [25] proposed a glass detection method focusing on the reflective properties of glass surfaces. Wu et al. [26] used reflection features to track and locate moving objects in videos and demonstrated their contribution to detecting non-line-of-sight objects. Zhang et al. [27] presented a reflective learning model in which reflection features were extracted to detect salient objects. Zhang et al. [28] also employed reflection features in designing a loss function to learn saliency features in object detection.
In summary, reflection features play a vital role in characterising objects, especially in some cases where texture features cannot perform well. However, previous studies seldom considered the impact of illumination conditions on reflection features, which is important in industrial tasks. This paper presents a reflection feature regression model to automatically determine the reflection feature based on the illumination conditions.
3. Method
This paper proposes an illumination-adaptive detection method for removing screws from EOL products, as shown in Figure 2. In the form of screw region extraction, this method can stably detect structurally damaged screws under nonuniform illumination conditions. The main idea is to characterise and merge the reflection and texture features of the screw regions in the two-stage detection framework. The reflection features are utilised here to model the overall screw regions and then employed to extract screw regions in the reflection stage, which is composed of the measure illumination condition node, the set reflection feature node, and the extract reflection-based screw region node. Compared with texture features, reflection features are less affected by structural damage and can be used to represent structurally damaged screws [29]. In the texture stage, texture features are employed to remove false areas (e.g., exposure areas) extracted before, through the match scale-invariant feature transform (SIFT) features node and the extract texture-based screw region node. The extracted screws of the two stages are then fused in the weighted fusion node, where the screw region with the lowest fused detection confidence is named the detected screw. The detected screw is continuously updated based on a self-optimisation strategy through the compare neighbour detected screws node. Here, the problem of detecting damaged screw structures under a fixed illumination condition is solved. However, in disassembly tasks, controlling lighting conditions is difficult and labour-intensive. To improve the robustness of the proposed detection method under uneven illumination conditions, a reflection feature regression model is developed to draw the relationship between reflection features and illumination conditions. 
Finally, by incorporating the two-stage detection framework and the reflection feature regression model, the detection of structurally damaged screws under uneven illumination conditions is realised. The detailed process is illustrated in the following subsections. Table 1 shows the important notations used in this paper.
Figure 2.
Flowchart of the proposed screw detection method.
Table 1.
The important notations used in this paper.
3.1. Characterise Reflection Features
In screw detection tasks, the texture features of screws in EOL products are easily changed by structural damage, which reduces the accuracy with which texture features describe used screws. This paper proposes to characterise used screws by their reflection features, which are mainly determined by the illumination condition and the object's reflection ability as

$$F = f(E, \rho),$$

where $f$ represents the mapping function, $F$ denotes the reflection feature, $E$ denotes the illumination condition, and $\rho$ denotes the reflection ability. With a fixed illumination condition, the difference in reflection features between screws and other components can be regarded as the detection criterion. The reflection ability is highly related to roughness, transparency, and refractive index, while the impact of structural damage on reflection ability is limited. In addition, the reflection feature of screws is modelled at the level of regions, which further reduces the influence of structural damage on feature modelling. In the proposed method, the size of the screw region is determined by the size of the screw in the original products.
3.2. Measure Illumination Conditions
To improve the robustness of reflection features under uneven illumination conditions, the reflection features should be updated automatically based on the illumination conditions. The measurement accuracy of illumination conditions determines the representation ability of the reflection features. However, it is difficult to position light sensors (e.g., light meters) appropriately because of the uncertain and complex image capture environments (e.g., the relative position and angular relationship between cameras and targets). Under these circumstances, the efficiency and accuracy of sensor-based illumination measurement are unsatisfactory. This paper proposes an illumination label to reflect the illumination condition and to describe the relationship between the reflection features of screw regions and illumination conditions from the point of view of images. The illumination label allows the illumination condition of screws to be measured accurately and efficiently and is less likely to be affected by complex capture conditions (e.g., shadowing) [30,31,32]. The illumination label extraction algorithm is defined as Algorithm 1, where the position of the screw region is described by its top-left point and its bottom-right point. Through this algorithm, the nonfeature region (a region containing no extracted edge feature points) closest to the screw region is defined as the illumination label.
| Algorithm 1: illumination label extraction algorithm |
| Input: captured image; screw region; screw region position (top-left and bottom-right points) |
| Output: illumination label |
| (Body: the m-neighbourhood regions of the screw region are scanned in increasing order of m, and the closest region containing no extracted edge feature points is returned as the illumination label.) |
The screw region is unknown during the inference process; a self-optimisation strategy is presented to solve this problem, as illustrated in Section 3.4. Figure 3b gives an example of an illumination label extracted using Algorithm 1.
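The search procedure that Algorithm 1 describes can be sketched as follows. The function names, the ring-based neighbourhood enumeration, and the abstract `has_edge_features` test are illustrative assumptions, not the authors' exact implementation; in practice, the edge-feature test would be backed by a real edge detector.

```python
from typing import Callable, List, Optional, Tuple

# A region is (x1, y1, x2, y2): top-left and bottom-right corners.
Region = Tuple[int, int, int, int]

def ring_offsets(r: int) -> List[Tuple[int, int]]:
    """Grid offsets (dx, dy) of the r-th ring of same-size regions.

    Ring 1 holds 8 regions, ring 2 holds 16, matching the m = 8 and
    m = 16 neighbourhoods shown in Figure 3a.
    """
    return [(dx, dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if max(abs(dx), abs(dy)) == r]

def find_illumination_label(screw: Region,
                            has_edge_features: Callable[[Region], bool],
                            max_ring: int = 5) -> Optional[Region]:
    """Return the closest same-size region with no edge feature points.

    `has_edge_features(region) -> bool` abstracts the edge-feature test
    (e.g. counting Canny edge pixels inside the region).
    """
    x1, y1, x2, y2 = screw
    w, h = x2 - x1, y2 - y1
    for r in range(1, max_ring + 1):          # scan rings outward
        for dx, dy in ring_offsets(r):
            cand = (x1 + dx * w, y1 + dy * h, x2 + dx * w, y2 + dy * h)
            if not has_edge_features(cand):   # nonfeature region found
                return cand
    return None                               # no label within max_ring
```

A usage sketch: `find_illumination_label((10, 10, 20, 20), detector)` returns the first edge-free neighbour of a 10×10 screw region, scanning the 8-region ring before the 16-region ring.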
Figure 3.
Distribution of the m-neighbourhood region of the screw region and the example of an extracted illumination label using Algorithm 1. (a) The distribution of the m-neighbourhood region of the screw region, where the green rectangle denotes the screw region, and the orange rectangles denote m-neighbourhood regions with m recorded in the centre (e.g., m = 8, m = 16). (b) An example of an extracted illumination label using Algorithm 1. In (b), the green rectangle denotes the screw region, the grey rectangles denote m-neighbourhood regions where there are extracted edge features, and the blue rectangles denote m-neighbourhood regions where there is no extracted edge feature. The blue rectangle with ‘label’ recorded in the centre denotes the extracted illumination label. The size of each neighbourhood region is the same as the size of the screw region. Due to the adjacent location of the illumination label and the screw region, the illumination condition of the extracted illumination label is the same as the illumination condition of the screw region.
The changes in the illumination condition of the screw region can be represented by the changes in the reflection feature of the illumination label. Therefore, the relationship between the reflection feature of the screw region and the illumination condition is described by the relationship between the reflection feature of the screw region and the reflection feature of the illumination label. In this study, we use the L value in the Lab colour space to represent the illumination conditions and reflection features.
The relationship between the reflection feature of the screw region and the illumination condition is determined by the relationship between the mean L value of the screw region, $\bar{L}_s$, and that of the illumination label, $\bar{L}_l$, which are first calculated as

$$\bar{L}_s = \frac{1}{W_s H_s}\sum_{i=1}^{W_s}\sum_{j=1}^{H_s} L_s(i,j), \qquad \bar{L}_l = \frac{1}{W_l H_l}\sum_{i=1}^{W_l}\sum_{j=1}^{H_l} L_l(i,j),$$

where $W_s$ and $H_s$ denote the size of the screw region, $W_l$ and $H_l$ denote the size of the extracted illumination label, and $L_s(i,j)$ and $L_l(i,j)$ denote the L value of pixel point $(i,j)$ in the screw region and the illumination label, respectively. Then, $\bar{L}_s$ and $\bar{L}_l$ are divided into reflection components ($R_s$, $R_l$) and illumination components ($I_s$, $I_l$) based on retinex theory as

$$\bar{L}_s = R_s \cdot I_s, \qquad \bar{L}_l = R_l \cdot I_l.$$

Since the illumination label is adjacent to the screw region, the two share the same illumination component, so the relationship between the reflection feature of the screw region and the illumination condition can be represented by the relationship between $\bar{L}_s$ and $\bar{L}_l$.
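The region statistics above reduce to averaging the L channel over a rectangular region and, following retinex theory, treating that mean as the product of a reflection component and an illumination component. A minimal pure-Python sketch, assuming the L channel has already been obtained from an RGB-to-Lab conversion (e.g. OpenCV's `cvtColor` in a real pipeline):

```python
def mean_L(L_channel, region):
    """Average L value over region = (x1, y1, x2, y2), bottom-right exclusive.

    `L_channel` is a row-major 2-D array (list of rows) of L values.
    """
    x1, y1, x2, y2 = region
    total, count = 0.0, 0
    for y in range(y1, y2):
        for x in range(x1, x2):
            total += L_channel[y][x]
            count += 1
    return total / count

def reflection_component(mean_l, illumination):
    """Retinex view: mean L = reflection * illumination, so the reflection
    component follows by division once the illumination is estimated."""
    return mean_l / illumination
```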
3.3. Reflection Feature Regression Model
To establish the relationship between the reflection feature of the screw region and that of the illumination label, a reflection feature regression model is constructed, where the regression function maps the mean L value of the illumination label, $\bar{L}_l$, to that of the screw region, $\bar{L}_s$. In deducing the regression function, $\bar{L}_s$ and $\bar{L}_l$ are first expressed based on Equation (1) as

$$\bar{L}_s = f(E, \rho_s), \qquad \bar{L}_l = f(E, \rho_l),$$

where $E$ denotes the shared illumination condition and $\rho_s$ and $\rho_l$ denote the reflection abilities of the screw region and the illumination label, respectively. The relationship between $\rho_s$ and $\rho_l$ can be assumed as

$$\rho_s = k \rho_l,$$

where $k$ is a constant value. The mapping function $f$ can be described by a polynomial function, so $\bar{L}_s$ can be expressed as a polynomial in $\bar{L}_l$ with constant coefficients. Considering the constraint on the highest power and the limited training data available in industrial tasks, a linear regression model of the reflection feature is constructed as

$$\bar{L}_s = w \bar{L}_l + b,$$

aiming to find the optimal weight $w$ and bias $b$ that minimise the difference between the measured and predicted values of $\bar{L}_s$ by data learning. This form is adopted to construct the regression model due to the direct availability of $\bar{L}_s$ and $\bar{L}_l$, where the optimiser used is the Adam optimiser [34] and the loss function is defined as

$$\mathcal{L} = \frac{1}{N}\sum_{n=1}^{N}\left(\bar{L}_s^{(n)} - \hat{L}_s^{(n)}\right)^2,$$

where $\hat{L}_s^{(n)}$ denotes the predicted L value of the screw region and $N$ denotes the data length.
In addition, the above discussion indicates that the reflection features of the screw region and the illumination label can be represented by the L value of the screw region and the illumination label.
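A self-contained sketch of fitting such a linear model with the Adam optimiser and a mean-squared-error loss, as the section describes. The pure-Python scalar Adam, the hyperparameters, and the synthetic data in the usage note are illustrative assumptions, not the paper's exact training setup.

```python
def fit_linear_adam(xs, ys, lr=0.02, steps=5000):
    """Fit y ~ w*x + b by minimising the MSE loss with Adam."""
    w = b = 0.0
    m = [0.0, 0.0]                      # first-moment estimates for (w, b)
    v = [0.0, 0.0]                      # second-moment estimates for (w, b)
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    n = len(xs)
    for t in range(1, steps + 1):
        # Gradients of the MSE loss with respect to w and b
        gw = sum(2.0 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2.0 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        params = [w, b]
        for i, g in enumerate((gw, gb)):
            m[i] = beta1 * m[i] + (1 - beta1) * g
            v[i] = beta2 * v[i] + (1 - beta2) * g * g
            m_hat = m[i] / (1 - beta1 ** t)   # bias-corrected moments
            v_hat = v[i] / (1 - beta2 ** t)
            params[i] -= lr * m_hat / (v_hat ** 0.5 + eps)
        w, b = params
    return w, b
```

For example, fitting data generated from y = 1.8x + 3 recovers weight and bias values close to 1.8 and 3.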
3.4. Two-Stage Detection Framework
With a trained reflection feature regression model, the reflection feature of the screw region is defined and utilised in a proposed two-stage detection framework, which is composed of the reflection stage and the texture stage, as shown in Figure 4.
Figure 4.
Flowchart of the two-stage detection framework.
In the reflection stage, the centre region of the captured image is defined as the initial screw region, and its illumination label is extracted using Algorithm 1. The reflection feature of the illumination label is then modelled and used to predict the reflection feature of the screw region with the help of the trained reflection feature regression model. At the same time, the captured image is divided into candidate regions of the same size as the screw region, and the Euclidean distance between the reflection feature of each candidate region and the predicted reflection feature of the screw region is calculated [35]; this distance is named the reflection confidence. Each candidate region is thus assigned a reflection confidence, and regions with smaller reflection confidences are extracted as reflection-based screw regions.
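The reflection stage can be pictured as a sliding-window scan in which each candidate window's mean L value is compared against the regression model's prediction; smaller distances rank a candidate higher. A simplified sketch, where the stride, `top_k`, and pure-Python image representation are assumptions:

```python
def reflection_stage(L_channel, win, predicted_L, top_k=3, stride=1):
    """Score each window of size win=(width, height) by the distance between
    its mean L value and the predicted screw-region L value; return the
    top_k windows with the smallest reflection confidence."""
    h, w = len(L_channel), len(L_channel[0])
    ww, wh = win
    scored = []
    for y in range(0, h - wh + 1, stride):
        for x in range(0, w - ww + 1, stride):
            vals = [L_channel[yy][xx]
                    for yy in range(y, y + wh)
                    for xx in range(x, x + ww)]
            mean_l = sum(vals) / len(vals)
            conf = abs(mean_l - predicted_L)   # reflection confidence (distance)
            scored.append((conf, (x, y, x + ww, y + wh)))
    scored.sort(key=lambda t: t[0])            # smaller distance = better
    return scored[:top_k]
```

On a toy image with one bright 2×2 patch and a prediction matching the patch, the best-ranked window is the patch itself with confidence 0.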
In the texture stage, the reflection-based screw regions are further analysed based on their texture features, with the aim of filtering out false regions. First, the texture features of the reflection-based screw regions are detected using the SIFT descriptor [36], constructing a SIFT feature matrix $M_r$ for each reflection-based screw region. Then, by calculating the SIFT feature matrix $M_t$ of an introduced screw template, the distance matrix $D$ between the two feature matrices is obtained as

$$D(p,q) = \left\lVert M_t(p,:) - M_r(q,:) \right\rVert_2, \quad p = 1,\dots,n_t,\ q = 1,\dots,n_r,$$

where $n_t$ and $n_r$ denote the number of feature points extracted from the screw template and the reflection-based screw region, respectively, and each extracted feature point is described by a SIFT descriptor vector. Based on the distance matrix, a texture confidence vector is formed for each reflection-based screw region, whose $p$-th entry records the ratio between texture feature distances,

$$c(p) = \frac{d_1(p)}{d_2(p)},$$

where $d_1(p)$ and $d_2(p)$ denote the smallest and second-smallest entries of row $p$ of $D$. Finally, the texture confidence $C_t$ for each reflection-based screw region is computed as

$$C_t = \frac{1}{n_t}\sum_{p=1}^{n_t} c(p),$$

and regions with smaller texture confidences are extracted as texture-based screw regions.
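The texture-stage scoring appears to be a distance-ratio test over SIFT descriptors, akin to Lowe's ratio test; a minimal sketch under that interpretation, using toy 2-D vectors in place of real 128-D SIFT descriptors (which OpenCV's `SIFT_create` would produce in practice):

```python
import math

def texture_confidence(template_feats, region_feats):
    """Mean best-to-second-best descriptor distance ratio.

    template_feats / region_feats: lists of equal-length feature vectors.
    Smaller values indicate better template matches (more screw-like texture).
    """
    ratios = []
    for t in template_feats:
        d = sorted(math.dist(t, r) for r in region_feats)
        if len(d) >= 2 and d[1] > 0:
            ratios.append(d[0] / d[1])   # ratio of closest to runner-up
    return sum(ratios) / len(ratios) if ratios else 1.0
```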
Through the reflection stage and the texture stage, texture-based screw regions are extracted, and each extracted region is assigned a reflection confidence $C_r$ and a texture confidence $C_t$. Then, the total reflection confidence $C_r^{\text{total}}$ and the total texture confidence $C_t^{\text{total}}$ are calculated by adding up these reflection confidences and texture confidences, respectively. Finally, each texture-based screw region is assigned a fused detection confidence $C_f$ as

$$C_f = w_r \frac{C_r}{C_r^{\text{total}}} + w_t \frac{C_t}{C_t^{\text{total}}},$$

where $w_r$ and $w_t$ denote the fusion weights. The screw region with the lowest fused detection confidence is extracted and named the detected screw.
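The weighted fusion step can be sketched as normalising each region's two confidences by their totals and combining them; the equal default weights below are an assumption:

```python
def fuse(regions, w_r=0.5, w_t=0.5):
    """regions: list of (region_id, reflection_conf, texture_conf).

    Returns the (region_id, fused_confidence) pair with the lowest fused
    detection confidence, i.e. the detected screw.
    """
    total_r = sum(r for _, r, _ in regions)
    total_t = sum(t for _, _, t in regions)
    fused = [(rid, w_r * r / total_r + w_t * t / total_t)
             for rid, r, t in regions]
    return min(fused, key=lambda x: x[1])
```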
In addition, since the initial screw region is defined arbitrarily, the corresponding extracted illumination label cannot be relied upon to determine the reflection features of the screw region during inference. A self-optimisation strategy is therefore proposed to continuously update the screw region and the illumination label by forming the reflection stage and the texture stage into a loop, where the difference between the detected screws of neighbouring iterations is employed as the stopping judgement: if the difference between the detection results of two successive iterations falls below a given threshold, the detected screw of the current iteration is accepted as the final detected screw. Otherwise, the current detected screw is regarded as a falsely detected screw and is used to update the illumination label, by running Algorithm 1, for the next iteration.
By integrating the reflection feature regression model and the two-stage detection framework, a screw detection method is achieved for detecting structurally damaged screws in EOL products under nonuniform illumination conditions.
4. Experiments
4.1. Experimental Setup
The proposed screw detection method was implemented in removing used screws from an EOL plug-in hybrid EV battery in a robotic disassembly platform, as shown in Figure 5. The utilised equipment contains an industrial robot, a 2D industrial camera, an electrical nut runner, and an electromagnetic gripping system. The control of the above equipment was achieved by programming on TM flow v1.82 software [37], while the detection method was realised through Python v3.8 and MATLAB v9.1 programming on an equipped workstation.
Figure 5.
Experimental platform. (a) Utilised equipment; (b) EOL plug-in hybrid EV battery and the position of adopted screws in the experiments.
4.2. Experimental Procedure
Considering the training process of the designed reflection feature regression model, an experimental procedure containing an offline process and an online process was developed, as shown in Figure 6. In the offline process, a robot holding a camera was used to collect training images. The screw regions in the collected images were labelled by human operators and then utilised to extract illumination labels by running Algorithm 1. Finally, the reflection features of screw regions and illumination labels were calculated to construct the dataset for training the reflection feature regression model. In the online process, a robot holding a camera was also employed to collect the structural information of the screws used. Next, by inputting the trained reflection feature regression model and captured image to the two-stage detection framework, the locations of screw regions in the image coordinate system were obtained. Finally, by transforming the position of screw regions in the image coordinate system to the position in the world coordinate system, the robot holding an electromagnetic gripping system was able to remove detected screws from an EOL plug-in hybrid EV battery.
Figure 6.
Flowchart of the experimental procedure.
In the experiments, the removal of hexagonal-headed screws (M6 nuts) in 6 different positions was used as an example to discuss the detection performance. The distribution of the 6 positions is shown in Figure 5b, while the screws located in these positions are named P1 screws, P2 screws, P3 screws, P4 screws, P5 screws, and P6 screws. In addition, to quantitatively validate the detection performance, 9 training datasets and 40 test datasets were constructed for P1 screws, P2 screws, P3 screws, P4 screws, P5 screws, and P6 screws. The numbers of images in one training dataset and one test dataset are 200 and 50, respectively. The data collection was conducted under various illumination conditions, and the condition of screws (e.g., the degree of structural damage) was updated constantly during the collection process. Table 2 records the experimental parameters of the detection method in the screw removal case.
Table 2.
Experimental parameters of the detection method in the screw removal case.
4.3. Evaluation System
The detection performance was first evaluated based on the mean average precision (mAP) [38] at different values of the intersection over union (IoU) between the detected screw region $A_d$ and the actual screw region $A_g$. Here, IoU is calculated as

$$\mathrm{IoU} = \frac{\lvert A_d \cap A_g \rvert}{\lvert A_d \cup A_g \rvert}.$$

As the most commonly used evaluation indicator in the object detection field, mAP reflects the performance of the proposed screw detection method in terms of both recall and precision.
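The IoU computation for axis-aligned boxes used in the mAP evaluation can be implemented directly:

```python
def iou(a, b):
    """Intersection over union of boxes a and b, each (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```

For example, two unit-overlap 2×2 boxes give an IoU of 1/7, and identical boxes give 1.0.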
Then, the operation range of the designed socket in the gripping system was taken into account, and the proposed detection method was evaluated based on the disassembly accuracy (DA), calculated as

$$DA = \frac{N_s}{N_s + N_f},$$

where the centre of the operation range coincides with the centre of the detected screw region, $N_s$ is the number of samples whose actual screw region is completely covered by the operation range, and $N_f$ is the number of samples whose actual screw region is not completely covered. By introducing the operation range of the socket, DA evaluates the success rate of unfastening screws based on the screw detection results.
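The disassembly-accuracy check can be sketched by testing whether the actual screw region lies entirely within the socket's operation range centred on the detection; the circular-range geometry and the radius parameter are assumptions based on the description of the socket:

```python
import math

def covered(gt_box, centre, radius):
    """True if every corner of gt_box = (x1, y1, x2, y2) lies within the
    circular operation range; covering all corners of a rectangle with a
    (convex) disc covers the whole rectangle."""
    x1, y1, x2, y2 = gt_box
    return all(math.hypot(cx - centre[0], cy - centre[1]) <= radius
               for cx in (x1, x2) for cy in (y1, y2))

def disassembly_accuracy(samples, radius):
    """samples: list of (gt_box, detected_centre) pairs."""
    succ = sum(covered(gt, c, radius) for gt, c in samples)
    return succ / len(samples)
```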
Apart from analysing the performance of the proposed detection method, the R-squared value was also used to validate the goodness-of-fit of the developed reflection feature regression model [39]. The range of the R-squared value is from 0 to 1, and a higher value indicates a better fitting performance. The performance of the reflection feature regression model is closely related to the stability of the proposed screw detection method. When the reflection feature regression model achieves satisfactory fitting performance represented by a high R-squared value, the proposed screw detection method is more likely to accurately and stably detect screws under uneven illumination conditions.
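The R-squared value used here is the standard coefficient of determination, computed from the residual and total sums of squares:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect fit yields 1.0, while predicting the mean of the targets yields 0.0.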
5. Results and Discussion
5.1. Performance of the Reflection Feature Regression Model
To evaluate the fitting performance of the reflection feature regression model, the capability of illumination labels to reflect illumination conditions was first analysed for P1 screws. Figure 7 shows images captured at three different positions, and Table 3 tabulates the reflection features of the screws and the illumination conditions measured by light meters and by illumination labels. The capture positions of the three images differ, and the reflection features of the screws in the three images are 58.4615, 59.3731, and 56.8118. This difference in reflection features indicates that the screw was captured under different illumination conditions. However, the light meters could not resolve the difference, reporting the same illuminance (770 lx) in all three cases. Using the illumination label, these small differences in illumination conditions were successfully detected, with measured values of 14.7876, 14.9791, and 12.1517. Given this satisfactory performance in measuring illumination conditions, the relationship between the reflection features of screw regions and the illumination conditions can be accurately described by the relationship between the reflection features of screw regions and those of illumination labels.
Figure 7.
Example captured images for P1 screws. (a–c) The images captured in three different positions at the same time.
Table 3.
Performance of measuring illumination conditions by light meters and illumination labels for P1 screws.
In validating the reflection feature regression model, the model was trained on 9 training datasets, and the 9 trained models were then tested on 40 test datasets for P1 screws. Figure 8 shows the detailed R-squared values, and Table 4 records the maximum, minimum, and average values. In the experiments, the maximum, minimum, and average R-squared values are approximately 0.91, 0.80, and 0.86, respectively, which reflects the excellent fitting ability of the proposed reflection feature regression model. The small differences in these values among the nine subgraphs confirm the satisfactory robustness of the model.

Figure 8.
Goodness-of-fit of 9 trained reflection feature regression models on 40 test datasets for P1 screws. (a–i) The R-squared values of the reflection feature regression models trained on training datasets 1–9, respectively.
Table 4.
The maximum, minimum, and average R-squared values of 9 trained reflection feature regression models on 40 test datasets for P1 screws.
Overall, the proposed reflection feature regression model performed well in establishing the relationship between the reflection features of screw regions and the reflection features of illumination labels under uneven illumination conditions. This fitting performance is mainly attributable to the suitable linear regression function deduced in designing the model. In the following experiments, the model trained on training dataset-1 was used to determine the reflection features of screw regions when running the two-stage detection framework for P1 screws.
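The linear regression step can be illustrated with a minimal least-squares sketch. The paired feature values below are invented for illustration and are not taken from the paper's datasets:

```python
import numpy as np

# Hypothetical paired measurements per captured image: reflection feature of
# the illumination label (input) and of the screw region (target).
f_label = np.array([12.1, 13.0, 14.8, 15.0, 16.2])
f_screw = np.array([56.8, 57.5, 58.5, 59.4, 60.1])

# Fit f_screw ~ a * f_label + b by ordinary least squares.
A = np.vstack([f_label, np.ones_like(f_label)]).T
(a, b), *_ = np.linalg.lstsq(A, f_screw, rcond=None)

# Predict the screw reflection feature under a new illumination condition,
# i.e. for a newly observed illumination-label value.
predicted = a * 14.0 + b
```

Once fitted, the model lets the detection framework adjust its expected screw reflection feature automatically whenever the illumination label indicates a change in lighting.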
5.2. Performance of the Two-Stage Detection Framework
The determined reflection features were adopted, together with texture features, to extract screw regions in the designed two-stage detection framework, whose output is the final result of the proposed screw detection method. Figure 9 and Table 5 validate the detection performance of the framework on the 40 test datasets for P1 screws in terms of the adopted evaluation metrics. For the first metric, the maximum, minimum, and average values are 1.00, 0.80, and 0.99, respectively, which demonstrates the reliable detection performance of the two-stage detection framework. For the second metric, the maximum, minimum, and average values are 1.00, 0.02, and 0.78, while for the third metric they are 0.68, 0.00, and 0.25, respectively; the downward trend from the second metric to the third is caused by the latter's stricter settings. In addition, Figure 9d and Table 5 show the detection performance based on the fourth metric, whose maximum, minimum, and average values are 1.00, 0.80, and 0.99, respectively. These satisfactory results prove that the two-stage detection framework enables screw removal tasks to be operated accurately under nonuniform illumination conditions. The remarkable detection capability is granted by the integration of reflection features and texture features, which has a stronger representation ability for structurally damaged screws. On the other hand, no significant difference was found among the metrics in evaluating the detection performance, so the simplest of them was judged the most suitable for this experimental case, and the following experiments mainly discuss the detection performance in those terms.
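The two-stage logic described above can be outlined in a short sketch. The function, dictionary keys, thresholds, and region values below are hypothetical stand-ins, chosen only to show how a reflection stage proposes candidates and a texture stage filters false positives:

```python
def two_stage_detect(regions, predicted_reflection, tol=3.0, texture_thresh=0.5):
    """Sketch of the two-stage framework.

    `regions` is a list of dicts with hypothetical keys:
      'reflection' - mean reflection feature of the candidate region
      'texture'    - texture similarity score in [0, 1]
    Stage 1 keeps regions whose reflection feature is close to the value
    predicted by the regression model; stage 2 rejects candidates whose
    texture score is too low.
    """
    stage1 = [r for r in regions
              if abs(r['reflection'] - predicted_reflection) <= tol]
    return [r for r in stage1 if r['texture'] >= texture_thresh]

regions = [
    {'reflection': 58.2, 'texture': 0.8},  # true screw
    {'reflection': 58.9, 'texture': 0.1},  # shiny non-screw, cut by stage 2
    {'reflection': 30.0, 'texture': 0.9},  # dark region, cut by stage 1
]
print(two_stage_detect(regions, predicted_reflection=58.5))
```

The key point is the ordering: the reflection stage does the heavy lifting of proposal, and the texture stage only prunes regions whose reflection happens to mimic a screw.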

Figure 9.
Detection performance of the proposed two-stage detection framework for P1 screws. (a–d) The detection performance evaluated by each of the four adopted evaluation metrics, respectively.
Table 5.
The maximum, minimum, and average values of the four evaluation metrics for the two-stage detection framework for P1 screws.
To study the contributions of the reflection features and texture features to the two-stage detection framework, the reflection stage and the texture stage were each used alone to detect P1 screws on the 40 test datasets. Figure 10 and Table 6 compare the differences in the evaluation metrics between using the two-stage detection framework and using the reflection stage alone, and between using the framework and using the texture stage alone. A larger difference denotes that the corresponding stage contributes less to the detection framework. As shown in Table 6, the maximum, minimum, and average differences between the two-stage detection framework and the texture stage are 1.00, 0.76, and 0.98, while those between the framework and the reflection stage are 0.24, −0.02, and 0.02, respectively.
Figure 10.
Contributions of the reflection stage and the texture stage to the two-stage detection framework for P1 screws. The blue lines record the differences between using the two-stage detection framework and using the texture stage, while the red lines record the differences between using the two-stage detection framework and using the reflection stage. (a,b) Differences in the two evaluation metrics, respectively.
Table 6.
The maximum, minimum, and average differences in the evaluation metrics between using the two-stage detection framework and using the texture stage, and between using the two-stage detection framework and using the reflection stage.
These results demonstrate that the satisfactory detection performance of the two-stage detection framework is mainly achieved by using reflection features. Specifically, there are some points with a value of 1.00 in the difference between using the two-stage detection framework and using the texture stage and some points with a value of 0.00 in the difference between using the two-stage detection framework and using the reflection stage. In the testing experiments corresponding to these points, accurate detection is completely accomplished by the reflection stage.
Furthermore, some points have negative values in the difference between the two-stage detection framework and the reflection stage; at these points, the texture stage limited the detection capability of the framework. The conclusions drawn from Figure 10b are consistent with those from Figure 10a. In summary, the reflection stage is the major contributor, and the texture stage acts as an optimiser in the two-stage detection framework. This is caused by the properties of structurally damaged screws in EOL products: as discussed before, the texture features of screws in EOL products are not reliable because they are easily damaged, while the representation capability of reflection features for used screws is more accurate and robust. The significant contribution of the reflection stage also indicates the satisfactory measuring ability of illumination labels and the remarkable learning capability of the reflection feature regression model.
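The per-dataset ablation differences plotted in Figure 10 follow a simple computation: framework metric minus single-stage metric, where a negative entry means adding the other stage hurt. A sketch with invented values (not the paper's data):

```python
# Hypothetical per-dataset metric values for four test datasets.
framework = [0.99, 1.00, 0.98, 0.95]
reflection_only = [0.97, 1.00, 1.00, 0.94]  # reflection stage alone
texture_only = [0.02, 0.00, 0.05, 0.10]     # texture stage alone

# Difference = framework - stage-alone, rounded for readability.
diff_vs_reflection = [round(f - s, 2) for f, s in zip(framework, reflection_only)]
diff_vs_texture = [round(f - s, 2) for f, s in zip(framework, texture_only)]

# A negative entry (dataset 3 here) means the texture stage filtered out
# a region that was in fact a true screw.
print(diff_vs_reflection)  # -> [0.02, 0.0, -0.02, 0.01]
```

Large differences versus the texture stage alone, alongside near-zero differences versus the reflection stage alone, reproduce the pattern the paper reports: the reflection stage carries the framework.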
5.3. Generalisation
Considering the impact of background information on screw detection, the proposed screw detection method, comprising the reflection feature regression model and the two-stage detection framework, was retested on P2–P6 screws, as recorded in Table 7. In these experiments, the testing environments (e.g., illumination conditions, capture positions), object conditions (e.g., the degree of structural damage, the level of tightness), and surrounding objects differ across the testing experiments.
Table 7.
Evaluation metrics of the proposed detection method on different screws.
The average values of the two evaluation metrics (both 0.91) confirm the satisfactory detection performance under various illumination conditions, and the small fluctuations among the six experiments show the robustness of the proposed detection method. The metric values for P4, P5, and P6 screws are lower than those for P1, P2, and P3 screws because their testing environments, object conditions, and surrounding objects are more complex, which creates more false screw regions; nevertheless, their performance remains acceptable. In conclusion, the proposed detection method achieves accurate and stable detection of structurally damaged screws in EOL products under uneven illumination conditions: the reflection feature regression model allows the reflection feature to be adjusted automatically, and the two-stage detection framework integrates reflection and texture information to characterise used screws comprehensively and accurately. Table 8 summarises the performance of the reflection feature regression model and the two-stage detection framework.
Table 8.
The performance of the reflection feature regression model and the two-stage detection framework.
5.4. Comparison
The proposed screw detection method was also compared with existing methods, as shown in Table 9. As mentioned before, existing screw detection methods are generally divided into experience-based methods and data-driven methods. In experience-based methods, the detection criteria are designed around the specific properties of different EOL objects, so it is not reasonable to adopt existing experience-based methods in this case and compare their performance directly. Instead, the detection performance of the various feature descriptors utilised in [15,16,17] was tested, and the optimal metric values were recorded. In addition, the optimal detection performance of the aforementioned data-driven methods [18,19,20,21] was also recorded, among which YOLO [20] achieved the highest values.
Table 9.
Comparison of the detection performance with existing methods.
As shown in Table 9, the proposed screw detection method performed much better than the existing experience-based methods, with improvements of 0.81 and 0.74 in the two evaluation metrics; this mainly benefits from the adoption of reflection features and the reflection feature regression model. Experience-based methods are nevertheless still used in various disassembly tasks because they require less labelled data. The proposed method also performed better than several data-driven methods, namely Soar, ResNet, and faster R-CNN: the increases in the first metric are 0.15, 0.11, and 0.04, and the increases in the second metric are 0.13, 0.11, and 0.04, respectively. Although YOLO performed slightly better than the proposed method (both metrics differ by 0.03), its demanding requirement for training data poses great challenges to screw removal tasks, as shown in Table 10. The metric value of the YOLO model trained on 200 images is 0.23, significantly lower than that of the proposed method trained on the same number of images. The proposed screw detection method is not greedy for training data thanks to the lightweight reflection feature regression model and the integration of human experience and training data in the two-stage detection framework. The demanding data requirement of complex deep learning models such as YOLO can be partially addressed by transfer learning; however, the detection performance then depends closely on the diversity of the training samples, which would have to characterise the structural information of unseen used screws comprehensively, which is infeasible given the uncertainties of EOL products. The proposed screw detection method instead adopts reflection features to deal with these structural uncertainties and obtained robust detection performance.
Consequently, the proposed screw detection method can reliably help accomplish complex screw removal tasks under uneven illumination conditions.
Table 10.
Number of utilised training data in the proposed screw detection method and existing data-driven methods.
6. Conclusions and Future Work
Research on screw detection is essential for automatic screw removal. This paper achieves accurate and stable detection of structurally damaged screws in EOL products under uneven illumination conditions by presenting a reflection feature regression model and a two-stage detection framework. The utilisation of reflection features addresses the difficulty of characterising damaged screw structures caused by the uncertainty and complexity of EOL products, and the reflection features are determined automatically by data learning according to the illumination conditions. The texture features help the proposed screw detection method filter out falsely detected screws. In addition, an innovative illumination-condition measurement method, based on the defined illumination label, is proposed for employing the reflection feature, and the detection method is optimised by the developed self-optimisation strategy. Finally, a vision-guided robotic disassembly platform designed to disassemble EV batteries is utilised to evaluate the detection performance. This study realises stable and accurate screw detection in a battery disassembly task and contributes to the research and application of object detection in other disassembly cases.
The developed method has two issues requiring future research. First, this study takes into account the influence of structural damage on screw detection, but the problem of detecting rusty screws needs further study; future work plans to optimise the proposed method by designing a stronger data-driven network to describe the reflection characteristics of rusty screws. Second, the proposed illumination-condition measurement method lacks a quantitative assessment; future work plans to adopt sensors to track the trend of illumination conditions and thereby assess the performance of the proposed measurement method.
Author Contributions
Conceptualization, Q.L. and W.D.; methodology, W.D. and D.T.P.; validation, W.D.; investigation, J.H.; resources, D.T.P. and Z.Z.; writing—original draft preparation, W.D.; writing—review and editing, D.T.P., J.H. and Y.W.; funding acquisition, W.D., D.T.P. and J.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Engineering and Physical Sciences Research Council under grant EP/N018524/1, the National Natural Science Foundation of China under Grant 52075404, and the China Scholarship Council under Grant 202006950054.
Data Availability Statement
The data in this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ijomah, W.L.; McMahon, C.A.; Hammond, G.P.; Newman, S.T. Development of design for remanufacturing guidelines to support sustainable manufacturing. Robot. Comput.-Integr. Manuf. 2007, 23, 712–719. [Google Scholar] [CrossRef]
- Li, R.; Pham, D.T.; Huang, J.; Tan, Y.; Qu, M.; Wang, Y.; Kerin, M.; Jiang, K.; Su, S.; Ji, C.; et al. Unfastening of hexagonal headed screws by a collaborative robot. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1455–1468. [Google Scholar] [CrossRef]
- Zhang, X.; Zhang, M.; Zhang, H.; Jiang, Z.; Liu, C.; Cai, W. A review on energy, environment and economic assessment in remanufacturing based on life cycle assessment method. J. Clean. Prod. 2020, 255, 120160. [Google Scholar] [CrossRef]
- Yuksel, H. Design of automobile engines for remanufacture with quality function deployment. Int. J. Sustain. Eng. 2010, 3, 170–180. [Google Scholar] [CrossRef]
- Hashemi, V.; Chen, M.; Fang, L. Modelling and analysis of aerospace remanufacturing systems with scenario analysis. Int. J. Adv. Manuf. Technol. 2016, 87, 2135–2151. [Google Scholar] [CrossRef]
- Zheng, H.; Li, E.; Wang, Y.; Shi, P.; Xu, B.; Yang, S. Environmental life cycle assessment of remanufactured engines with advanced restoring technologies. Robot. Comput.-Integr. Manuf. 2019, 59, 213–221. [Google Scholar] [CrossRef]
- Ahmed, F.; Almutairi, G.; Hasan, P.M.Z.; Rehman, S.; Kumar, S.; Shaalan, N.M.; Aljaafari, A.; Alshoaibi, A.; AlOtaibi, B.; Khan, K. Fabrication of a biomass-derived activated carbon-based anode for high-performance Li-ion batteries. Micromachines 2023, 14, 192. [Google Scholar] [CrossRef] [PubMed]
- Ong, S.K.; Chang, M.M.L.; Nee, A.Y.C. Product disassembly sequence planning: State-of-the-art, challenges, opportunities and future directions. Int. J. Prod. Res. 2021, 59, 3493–3508. [Google Scholar] [CrossRef]
- Hu, Y.; Liu, C.; Zhang, M.; Jia, Y.; Xu, Y. A novel simulated annealing-based hyper-heuristic algorithm for stochastic parallel disassembly line balancing in smart remanufacturing. Sensors 2023, 23, 1652. [Google Scholar] [CrossRef]
- Bahubalendruni, M.V.A.R.; Varupala, V.P. Disassembly sequence planning for safe disposal of end-of-life waste electric and electronic equipment. Natl. Acad. Sci. Lett. 2021, 44, 243–247. [Google Scholar] [CrossRef]
- Poschmann, H.; Brueggemann, H.; Goldmann, D. Disassembly 4.0: A review on using robotics in disassembly tasks as a way of automation. Chem. Ing. Tech. 2020, 92, 341–359. [Google Scholar] [CrossRef]
- Nowakowski, P. A novel, cost efficient identification method for disassembly planning of waste electrical and electronic equipment. J. Clean. Prod. 2018, 172, 2695–2707. [Google Scholar] [CrossRef]
- Chen, W.H.; Foo, G.; Kara, S.; Pagnucco, M. Automated generation and execution of disassembly actions. Robot. Comput.-Integr. Manuf. 2021, 68, 102056. [Google Scholar] [CrossRef]
- Vongbunyong, S.; Kara, S.; Pagnucco, M. Learning and revision in cognitive robotics disassembly automation. Robot. Comput.-Integr. Manuf. 2015, 34, 79–94. [Google Scholar] [CrossRef]
- Gil, P.; Pomares, J.; Puente, S.T.; Diaz, C.; Candelas, F.; Torres, F. Flexible multisensorial system for automatic disassembly using cooperative robots. Int. J. Comput. Integr. Manuf. 2007, 20, 757–772. [Google Scholar] [CrossRef]
- Bdiwi, M.; Rashid, A.; Putz, M. Autonomous disassembly of electric vehicle motors based on robot cognition. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016. [Google Scholar] [CrossRef]
- DiFilippo, N.M.; Jouaneh, M.K. A system combining force and vision sensing for automated screw removal on laptops. IEEE Trans. Autom. Sci. Eng. 2017, 15, 887–895. [Google Scholar] [CrossRef]
- DiFilippo, N.M.; Jouaneh, M.K. Using the soar cognitive architecture to remove screws from different laptop models. IEEE Trans. Autom. Sci. Eng. 2018, 16, 767–780. [Google Scholar] [CrossRef]
- Foo, G.; Kara, S.; Pagnucco, M. Screw detection for disassembly of electronic waste using reasoning and retraining of a deep learning model. Procedia CIRP 2021, 98, 666–671. [Google Scholar] [CrossRef]
- Mangold, S.; Steiner, C.; Friedmann, M.; Fleischer, J. Vision-based screw head detection for automated disassembly for remanufacturing. Procedia CIRP 2022, 105, 1–6. [Google Scholar] [CrossRef]
- Li, X.; Li, M.; Wu, Y.; Zhou, D.; Liu, T.; Hao, F.; Yue, J.; Ma, Q. Accurate screw detection method based on faster R-CNN and rotation edge similarity for automatic screw disassembly. Int. J. Comput. Integr. Manuf. 2021, 34, 1177–1195. [Google Scholar] [CrossRef]
- Sun, Y.; Chang, Z.; Zhao, Y.; Hua, Z.; Li, S. Progressive two-stage network for low-light image enhancement. Micromachines 2021, 12, 1458. [Google Scholar] [CrossRef] [PubMed]
- Tang, Q.; Yang, J.; He, X.; Jia, W.; Zhang, Q.; Liu, H. Nighttime image dehazing based on retinex and dark channel prior using taylor series expansion. Comput. Vis. Image Underst. 2021, 202, 103086. [Google Scholar] [CrossRef]
- Cui, Y.; Sun, Y.; Jian, M.; Zhang, X.; Yao, T.; Gao, X.; Li, Y.; Zhang, Y. A novel underwater image restoration method based on decomposition network and physical imaging model. Int. J. Intell. Syst. 2022, 37, 5672–5690. [Google Scholar] [CrossRef]
- Sudo, H.; Yukushige, S.; Muramatsu, S.; Inagaki, K.; Chugo, D.; Hashimoto, H. Detection of glass surface using reflection characteristic. In Proceedings of the Annual Conference of the IEEE Industrial Electronics Society, Toronto, ON, Canada, 13–16 October 2021. [Google Scholar] [CrossRef]
- Wu, J.; Ji, Z. Seeing the unseen: Locating objects from reflections. In Proceedings of the Annual Conference Towards Autonomous Robotic Systems, Bristol, UK, 25–27 July 2018. [Google Scholar] [CrossRef]
- Zhang, P.; Liu, W.; Lei, Y.; Lu, H. Hyperfusion-Net: Hyper-densely reflective feature fusion for salient object detection. Pattern Recognit. 2019, 93, 521–533. [Google Scholar] [CrossRef]
- Zhang, P.; Liu, W.; Lu, H.; Shen, C. Salient object detection with lossless feature reflection and weighted structural loss. IEEE Trans. Image Process. 2019, 28, 3048–3060. [Google Scholar] [CrossRef] [PubMed]
- Tan, L.; Tang, T.; Yuan, D. An ensemble learning aided computer vision method with advanced colour enhancement for corroded bolt detection in tunnels. Sensors 2022, 22, 9715. [Google Scholar] [CrossRef] [PubMed]
- Lalonde, J.F.; Efros, A.A.; Narasimhan, S.G. Estimating the natural illumination conditions from a single outdoor image. Int. J. Comput. Vis. 2012, 98, 123–145. [Google Scholar] [CrossRef]
- Barron, J.T.; Malik, J. Shape, illumination, and reflectance from shading. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 1670–1687. [Google Scholar] [CrossRef]
- Zhou, T.; Krahenbuhl, P.; Efros, A.A. Learning data-driven reflectance priors for intrinsic image decomposition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar] [CrossRef]
- Lee, H.W. The study of mechanical arm and intelligent robot. IEEE Access 2020, 8, 119624–119634. [Google Scholar] [CrossRef]
- Sanakkayala, D.C.; Varadarajan, V.; Kumar, N.; Soni, G.; Kamat, P.; Kumar, S.; Patil, S.; Kotecha, K. Explainable AI for bearing fault prognosis using deep learning techniques. Micromachines 2022, 13, 1471. [Google Scholar] [CrossRef]
- Deng, S.; Du, L.; Li, C.; Ding, J.; Liu, H. SAR automatic target recognition based on Euclidean distance restricted autoencoder. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3323–3333. [Google Scholar] [CrossRef]
- Zeng, X.; Wang, X.; Chen, K.; Zhang, Y.; Li, D. Dividing the neighbours is not enough: Adding confusion makes local descriptor stronger. IEEE Access 2019, 7, 136106–136115. [Google Scholar] [CrossRef]
- Dmytriyev, Y.; Zaki, A.M.A.; Carnevale, M.; Insero, F.; Giberti, H. Brain computer interface for human-cobot interaction in industrial applications. In Proceedings of the International Congress on Human-Computer Interaction, Optimisation and Robotic Applications, Ankara, Türkiye, 11–13 June 2021. [Google Scholar] [CrossRef]
- Song, Q.; Li, S.; Bai, Q.; Yang, J.; Zhang, X.; Li, Z.; Duan, Z. Object detection method for grasping robot based on improved YOLOv5. Micromachines 2021, 12, 1273. [Google Scholar] [CrossRef]
- Gong, C.S.A.; Su, C.H.S.; Chen, Y.H.; Guu, D.Y. How to implement automotive fault diagnosis using artificial intelligence scheme. Micromachines 2022, 13, 1380. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).