
A Two-Stage Screw Detection Framework for Automatic Disassembly Using a Reflection Feature Regression Model

1 School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
2 Department of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK
* Author to whom correspondence should be addressed.
Micromachines 2023, 14(5), 946; https://doi.org/10.3390/mi14050946
Submission received: 12 March 2023 / Revised: 24 April 2023 / Accepted: 25 April 2023 / Published: 27 April 2023

Abstract

For remanufacturing to be more economically attractive, automatic disassembly and automated visual detection methods need to be developed. Screw removal is a common step in the disassembly of end-of-life products for remanufacturing. This paper presents a two-stage detection framework for structurally damaged screws, together with a linear regression model of reflection features that allows the framework to operate under uneven illumination conditions. The first stage uses reflection features, in combination with the reflection feature regression model, to extract candidate screw regions. The second stage uses texture features to filter out false regions whose reflection features resemble those of screws. A self-optimisation strategy and weighted fusion connect the two stages. The detection framework was implemented on a robotic platform designed for disassembling electric vehicle batteries. The method allows screw removal to be conducted automatically in complex disassembly tasks, and the use of reflection features and data learning offers new directions for further research.

1. Introduction

Remanufacturing is part of a circular economy, returning end-of-life (EOL) products to at least like-new condition through a sequence of operations that begins with disassembly [1,2]. The environmental, social, and economic benefits of remanufacturing, such as reduced carbon emissions and energy consumption, have been widely confirmed [3,4,5,6]. Recently, research on remanufacturing electric vehicle (EV) batteries has attracted much attention. As an environmentally friendly transport option, EVs have accounted for a considerably larger share of car sales worldwide over the last two years. However, this growth has led to the rapid disposal of EV components, which itself threatens the environment, society, and the economy. Among these components, EV batteries are valuable for recycling because of their expensive and hazardous materials (e.g., lithium and cobalt). In addition, the lifespan of EV batteries is approximately 10 years, which further intensifies the need to remanufacture them as the demand for EVs grows [7].
Disassembly is the first and unavoidable operation in the remanufacturing process, driven by stricter environmental regulations and the growing demand for effective product remanufacturing [8,9,10]. In current practice, manual disassembly is still the main strategy; it consumes a large amount of energy and exposes operators to dangerous materials. Robotics, with its intelligent perception and effective execution systems, is considered an important means of realising automatic disassembly [11]. As the most widely used perception system, robot vision can automatically investigate the structural information of EOL products and contribute to decision making in the disassembly process, such as disassembly sequence planning [12,13]. However, robot vision still faces challenges in disassembly tasks, mainly caused by the uncertainty and complexity of EOL products [14]. The structural information of EOL products usually changes over the product's life and is difficult to estimate, which is a barrier to setting detection criteria. In addition, 2-dimensional (2D) cameras are the most widely adopted sensors in robot vision systems, and 2D images are widely used to describe the structural information of EOL products accurately and efficiently. However, image quality is closely tied to lighting, and creating a stable lighting condition, which is essential for accomplishing disassembly tasks stably and accurately, is difficult. Research on robot vision is therefore valuable for disassembly.
Screw detection and removal are necessary for almost all EOL products, and the structures of used screws are usually damaged, exhibiting cracks, fractures, and wear, as shown in Figure 1. The uncertain condition of used screws poses challenges to stable and accurate detection in disassembly tasks. In existing screw detection methods, the detection criteria are mainly designed around the texture features of original screws or of a small number of used screws. The performance of these methods in disassembly tasks is limited by the following challenges:
(1) The texture features obtained from the original screws cannot characterise the used screws accurately due to the unavoidable structural damage during the use of the product.
(2) The robustness of texture features extracted from a small number of available used and damaged screws is limited due to the uncertain conditions of used screws.
(3) The texture features are easily affected by illumination conditions. Current detection methods cannot operate stably under uneven illumination conditions.
To address these issues, this paper presents a novel screw detection method containing a two-stage detection framework and a reflection feature regression model to detect structurally damaged screws in EOL products under uneven illumination conditions. The contributions of this paper are as follows:
(1) It presents a robust feature descriptor for structurally damaged screws by integrating reflection features and texture features. This is beneficial in weakening the influence of structural damage on modelling screws.
(2) It proposes a linear regression model that enables the reflection features to be updated automatically based on the illumination conditions, which contributes to the stable operation of feature modelling under various illumination conditions. In addition, an illumination label is defined to measure illumination conditions automatically and conveniently from the point of view of the image.
(3) It details a two-stage detection framework based on the proposed feature descriptor. With the help of the linear regression model, the two-stage detection framework can extract used screws from EOL products under uneven illumination conditions.
Based on the proposed screw detection method, automatic screw removal was realised for an EOL EV battery using a robotic disassembly platform. The experimental results demonstrate the satisfactory accuracy and stability of the proposed screw detection method.
The remainder of this paper is organised as follows. Section 2 reviews the visual detection work in disassembly cases. Section 3 introduces the proposed screw detection method. Section 4 describes the equipment and experiments, while the results are discussed in Section 5. Section 6 summarises this paper and lists future work.

2. Related Work

2.1. Screw Detection for Disassembly

In disassembly tasks, screw detection plays a vital role by providing the position for later disassembly. Existing screw detection methods can be classified into experience-based methods and data-driven methods. Experience-based methods utilise prior product information provided by skilled operators to set detection criteria and then extract the screw. Knowledge and models are the most widely used forms of experience because they are straightforward to summarise. Gil et al. [15] set detection criteria based on screw contour information and adopted well-known strategies (e.g., polygonal approaches) to detect screws in EOL computers. Bdiwi et al. [16] used a Kinect to collect and characterise screws in the form of greyscale, depth, and HSV values and then proposed a three-stage screw detection framework for removing screws from EOL EV motors. DiFilippo et al. [17] combined Gaussian blur, Prewitt edge detection, region erosion, and Hough and circle detection methods in screw detection, successfully removing 96.5% of the screws from EOL laptops. The parameters in the above methods were all determined from the structural information of the original screws. In this situation, the developed criteria cannot represent used screws well because of the uncertainty of EOL products, and the detection criteria are also hard to formulate because of the complexity of EOL products.
The detection criteria of data-driven methods are obtained through data learning, typically with deep learning models. Existing studies focus on utilising and optimising state-of-the-art models. DiFilippo et al. [18] designed a cognitive architecture based on Soar's long-term and semantic memory functions, which performed well in determining the label and position of screws in laptops; with this architecture, the inference time decreased by up to 60%, and the average inference time decreased by 10% for most EOL laptops. Foo et al. [19] employed the residual network (ResNet) to unfasten crosshead screws from EOL LCD monitors and achieved optimal precision and recall rates of 91.8% and 83.6%, respectively. Mangold et al. [20] used the you only look once (YOLO) model to detect screws in a vision-based robotic disassembly platform, achieving a mean average precision of around 0.98. In applying these models, strategies such as transfer learning and dropout were adopted to reduce the randomness of training. However, these detection models require a large amount of labelled data for training, which poses great challenges in disassembly tasks. The robustness of deep learning models is unsatisfactory because of the uncertainty of EOL products: there is no guarantee that a feature extractor trained on a poorly diversified dataset can accurately characterise unseen EOL products.
Some researchers have combined experience-based and data-driven methods. Li et al. [21] proposed an accurate screw detection method based on the faster region-based convolutional neural network (faster R-CNN) model and rotation edge similarity, where faster R-CNN is a deep learning model and rotation edge similarity is designed from experience provided by operators. This strategy improves the robustness of the detection method and reduces the amount of training data needed; it reached a success rate of up to 90.8% in the disassembly process. The screw detection method proposed in this paper builds on this combined strategy.
On the other hand, existing studies have designed detection criteria mainly based on texture features, which in used screws are easily changed and difficult to estimate. Furthermore, lighting conditions have not been fully considered, which is also an important practical problem. Both problems are considered in this study, where a robust feature descriptor is designed to characterise used screws.

2.2. Reflection Feature for Object Detection

Reflection features can be used to represent illumination conditions and object characteristics. Currently, reflection removal and reflection detection are the two main ways reflections are handled in the object detection field. In reflection removal, reflection features are regarded as noise that corrupts object characteristics; researchers have proposed various removal methods, mainly based on retinex theory, which assumes that an image can be decomposed into reflection and illumination components [22,23,24]. In reflection detection, reflection features are utilised as intrinsic features. Sudo et al. [25] proposed a glass detection method focusing on the reflective properties of the glass surface. Wu et al. [26] used reflection features to track and locate moving objects in videos and demonstrated the contribution of reflection features to detecting non-line-of-sight objects. Zhang et al. [27] presented a reflective learning model in which reflection features were extracted to detect salient objects. Zhang et al. [28] also employed reflection features in designing a loss function to learn saliency features in object detection.
In summary, reflection features play a vital role in characterising objects, especially in some cases where texture features cannot perform well. However, previous studies seldom considered the impact of illumination conditions on reflection features, which is important in industrial tasks. This paper presents a reflection feature regression model to automatically determine the reflection feature based on the illumination conditions.

3. Method

This paper proposes an illumination-adaptive detection method for removing screws from EOL products, as shown in Figure 2. In the form of screw region extraction, this method can stably detect structurally damaged screws under nonuniform illumination conditions. The main idea is to characterise and merge the reflection and texture features of the screw regions in the two-stage detection framework. The reflection features are utilised here to model the overall screw regions and then employed to extract screw regions in the reflection stage, which is composed of the measure illumination condition node, the set reflection feature node, and the extract reflection-based screw region node. Compared with texture features, reflection features are less affected by structural damage and can be used to represent structurally damaged screws [29]. In the texture stage, texture features are employed to remove false areas (e.g., exposure areas) extracted before, through the match scale-invariant feature transform (SIFT) features node and the extract texture-based screw region node. The extracted screws of the two stages are then fused in the weighted fusion node, where the screw region with the lowest fused detection confidence is named the detected screw. The detected screw is continuously updated based on a self-optimisation strategy through the compare neighbour detected screws node. Here, the problem of detecting damaged screw structures under a fixed illumination condition is solved. However, in disassembly tasks, controlling lighting conditions is difficult and labour-intensive. To improve the robustness of the proposed detection method under uneven illumination conditions, a reflection feature regression model is developed to draw the relationship between reflection features and illumination conditions. Finally, by incorporating the two-stage detection framework and the reflection feature regression model, the detection of structurally damaged screws under uneven illumination conditions is realised. The detailed process is illustrated in the following subsections. Table 1 shows the important notations used in this paper.

3.1. Characterise Reflection Features

In screw detection tasks, the texture features of screws in EOL products are easily changed by structural damage, which decreases the accuracy of texture features to describe used screws. In this paper, it is proposed to characterise used screws by using reflection features, which are mainly determined by illumination conditions and the object’s reflection abilities as
$R = f(P, I)$  (1)

where $R$ denotes the reflection feature, $P$ denotes the object's reflection ability, $I$ denotes the illumination condition, and $f$ represents the mapping function. With a fixed illumination condition, the difference in reflection features between screws and other components can be regarded as the detection criterion. Reflection ability is highly related to roughness, transparency, and refractive index, while the impact of structural damage on reflection ability is limited. In addition, the reflection feature of screws is modelled at the level of regions, which also helps to reduce the influence of structural damage on feature modelling. In the proposed method, the size of the screw region is determined by the size of the screw in the original product.

3.2. Measure Illumination Conditions

To promote the robustness of reflection features under uneven illumination conditions, the reflection features are expected to be updated automatically according to the illumination conditions. The measurement accuracy of illumination conditions determines the representation ability of the reflection features. However, it is difficult to position light sensors (e.g., light meters) because of the uncertain and complex image capture environments (e.g., the relative position and angle between camera and target). In this situation, the efficiency and accuracy of sensor-based illumination measurement are unsatisfactory. This paper proposes an illumination label to reflect the illumination condition and to describe the relationship between the reflection features of screw regions and illumination conditions from the point of view of images. The illumination label enables accurate and efficient measurement of the illumination condition of screws and is less likely to be affected by complex capture conditions (e.g., shadowing) [30,31,32]. The illumination label extraction algorithm is defined as Algorithm 1, where the position of the screw region, $[M_s, M_e, N_s, N_e]$, is described by its top-left point $(M_s, N_s)$ and bottom-right point $(M_e, N_e)$. Through this algorithm, the nonfeature region (a region containing no extracted edge feature points) closest to the screw region is defined as the illumination label.
Algorithm 1: Illumination label extraction algorithm
Input: captured image $S$; screw region $S_m$; screw region position $[M_s, M_e, N_s, N_e]$.
Output: illumination label $S_{label}$.
1:  Use Sobel operators [33] to extract the edge features of the captured image $S$ and obtain the edge feature map $S_f$;
2:  Select the m-neighbourhood regions of screw region $S_m$, each with the same size as the screw region $(M_e - M_s, N_e - N_s)$, as shown in Figure 3a;
3:  For $m = 8 \cdot loop$ ($loop$ is initially set to 1) do
4:    For $neigh$ in the m-neighbourhood regions do
5:      If there is no edge feature in $neigh$ do
6:        $S_{label} = neigh$;
7:        Go to Step 12;
8:      End
9:    End
10:   $loop = loop + 1$;
11: End
12: Return $S_{label}$.
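Below is a minimal Python sketch of Algorithm 1 using OpenCV's Sobel operator. The edge-strength threshold, the maximum number of ring expansions, and the enumeration order within each ring are illustrative assumptions not fixed by the pseudocode.

```python
import cv2
import numpy as np

def extract_illumination_label(image_bgr, region, edge_thresh=50, max_loops=3):
    """Algorithm 1 sketch: find the nearest neighbouring region of the same
    size as the screw region that contains no edge-feature points.
    region = (Ms, Me, Ns, Ne): top-left (Ms, Ns), bottom-right (Me, Ne)."""
    Ms, Me, Ns, Ne = region
    h, w = Me - Ms, Ne - Ns
    # Edge feature map S_f via Sobel operators [33]
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(grey, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(grey, cv2.CV_64F, 0, 1)
    edges = np.hypot(gx, gy) > edge_thresh  # assumed edge-point criterion

    for loop in range(1, max_loops + 1):
        # The ring at Chebyshev distance `loop` holds 8 * loop regions,
        # i.e., m = 8, 16, ... as in Figure 3a.
        for dy in range(-loop, loop + 1):
            for dx in range(-loop, loop + 1):
                if max(abs(dy), abs(dx)) != loop:
                    continue  # keep only the outer ring
                ms, ns = Ms + dy * h, Ns + dx * w
                me, ne = ms + h, ns + w
                if ms < 0 or ns < 0 or me > grey.shape[0] or ne > grey.shape[1]:
                    continue  # ring region falls outside the image
                if not edges[ms:me, ns:ne].any():
                    return (ms, me, ns, ne)  # S_label
    return None  # no edge-free neighbouring region found
```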
The screw region is unknown during inference; a self-optimisation strategy is presented to solve this problem, as illustrated in Section 3.4. Figure 3b gives an example of an illumination label extracted using Algorithm 1. Because the illumination label is adjacent to the screw region, the illumination conditions of the two regions are assumed to be identical:

$I_{screw} = I_{label}$  (2)
Figure 3. Distribution of the m-neighbourhood region of the screw region and the example of an extracted illumination label using Algorithm 1. (a) The distribution of the m-neighbourhood region of the screw region, where the green rectangle denotes the screw region, and the orange rectangles denote m-neighbourhood regions with m recorded in the centre (e.g., m = 8, m = 16). (b) An example of an extracted illumination label using Algorithm 1. In (b), the green rectangle denotes the screw region, the grey rectangles denote m-neighbourhood regions where there are extracted edge features, and the blue rectangles denote m-neighbourhood regions where there is no extracted edge feature. The blue rectangle with ‘label’ recorded in the centre denotes the extracted illumination label. The size of each neighbourhood region is the same as the size of the screw region. Due to the adjacent location of the illumination label and the screw region, the illumination condition of the extracted illumination label is the same as the illumination condition of the screw region.
The changes in the illumination condition of the screw region can be represented by the changes in the reflection feature of the illumination label. Therefore, the relationship between the reflection feature of the screw region and the illumination condition is described by the relationship between the reflection feature of the screw region and the reflection feature of the illumination label. In this study, we use the L value in the Lab colour space to represent the illumination conditions and reflection features.
The relationship between the reflection feature of the screw region and the illumination condition is determined by the relationship between $L_R^{screw}$ and $L_R^{label}$, where $L_S^{screw}$ and $L_S^{label}$ are first calculated as

$L_S^{screw} = \frac{1}{s_{rl}\,s_{rw}} \sum_{i=1}^{s_{rl}} \sum_{j=1}^{s_{rw}} L_S^{screw}(i,j)$  (3)

$L_S^{label} = \frac{1}{l_{rl}\,l_{rw}} \sum_{i=1}^{l_{rl}} \sum_{j=1}^{l_{rw}} L_S^{label}(i,j)$  (4)

Here, $s_{rl}$ and $s_{rw}$ denote the length and width of the screw region:

$s_{rl} = M_e - M_s$  (5)

$s_{rw} = N_e - N_s$  (6)

$l_{rl}$ and $l_{rw}$ denote the size of the extracted illumination label, and $L_S^{screw}(i,j)$ and $L_S^{label}(i,j)$ denote the L value of pixel $(i,j)$ in the screw region and the illumination label, respectively. Then, $L_S^{screw}$ and $L_S^{label}$ are decomposed into reflection components ($L_R^{screw}$, $L_R^{label}$) and illumination components ($L_I^{screw}$, $L_I^{label}$) based on retinex theory:

$L_S^{screw} = L_R^{screw} \cdot L_I^{screw}$  (7)

$\log(L_S^{screw}) = \log(L_R^{screw}) + \log(L_I^{screw})$  (8)

$L_S^{label} = L_R^{label} \cdot L_I^{label}$  (9)

$\log(L_S^{label}) = \log(L_R^{label}) + \log(L_I^{label})$  (10)

Finally, by introducing Equation (2), the relationship between $L_R^{screw}$ and $L_R^{label}$ can be expressed through $L_S^{screw}$ and $L_S^{label}$ as

$\log(L_S^{screw}) = \log(L_R^{screw}) + \log(L_S^{label}) - \log(L_R^{label})$  (11)

$\frac{L_S^{screw}}{L_S^{label}} = \frac{L_R^{screw}}{L_R^{label}}$  (12)

Therefore, the relationship between $L_R^{screw}$ and $L_R^{label}$ is represented by the relationship between $L_S^{screw}$ and $L_S^{label}$.
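As a concrete sketch of Equations (3) and (4), the mean L value of a region can be computed from OpenCV's Lab conversion. Note that OpenCV stores L scaled to 0–255 in 8-bit images, so it is rescaled here to the conventional 0–100 range; this scaling choice is an assumption of the sketch.

```python
import cv2
import numpy as np

def mean_L(image_bgr, region):
    """Mean L value of a region, Equations (3)/(4).
    region = (Ms, Me, Ns, Ne) as in Algorithm 1."""
    Ms, Me, Ns, Ne = region
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    # 8-bit OpenCV Lab stores L as L * 255 / 100; rescale to 0-100.
    L = lab[Ms:Me, Ns:Ne, 0].astype(np.float64) * (100.0 / 255.0)
    return L.mean()
```

By Equation (12), the ratio of two such measured means equals the ratio of the underlying reflection components, since the adjacent regions share the same illumination (Equation (2)).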

3.3. Reflection Feature Regression Model

To establish the relationship between the reflection feature of the screw region and the reflection feature of the illumination label, a reflection feature regression model is constructed, where the regression function is determined by the relationship between $L_R^{screw}$ and $L_R^{label}$. In deducing the regression function, $L_R^{screw}$ and $L_R^{label}$ are first expressed based on Equation (1) as

$L_R^{screw} = f(P_{screw}, L_I^{screw})$  (13)

$L_R^{label} = f(P_{label}, L_I^{label})$  (14)

Here, the relationship between the reflection ability of the screw region ($P_{screw}$) and the reflection ability of the illumination label ($P_{label}$) can be assumed to be

$P_{screw} = k\,P_{label}$  (15)

where $k$ is a constant. Therefore, $L_R^{screw}$ becomes

$L_R^{screw} = f(P_{screw}, L_I^{screw}) = f(k\,P_{label}, L_I^{label})$  (16)

and $f$ can be described by a polynomial function:

$f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0$  (17)

where the coefficients $[a_n, a_{n-1}, \ldots, a_0]$ are determined by $L_I^{screw}$ and $L_I^{label}$, and $x$ denotes $P_{screw}$ or $P_{label}$. By combining Equations (13)–(17), the relationship between $L_R^{screw}$ and $L_R^{label}$ is confirmed as

$L_R^{label} = a_n P_{label}^n + a_{n-1} P_{label}^{n-1} + \cdots + a_0$  (18)

$L_R^{screw} = a_n P_{screw}^n + a_{n-1} P_{screw}^{n-1} + \cdots + a_0 = a_n k^n P_{label}^n + a_{n-1} k^{n-1} P_{label}^{n-1} + \cdots + a_0$  (19)

where $k^n, k^{n-1}, \ldots, k$ are constants. Considering the constraint on the highest powers of $P_{screw}$ ($P_{screw}^n$) and $P_{label}$ ($P_{label}^n$) and the limited training data available in industrial tasks, a linear regression model of the reflection feature is constructed:

$L_R^{screw} = w_r L_R^{label} + b_r$  (20)

The aim is to find, by data learning, the optimal weight ($w_r$) and bias ($b_r$) that minimise the difference between $a_n k^n P_{label}^n + a_{n-1} k^{n-1} P_{label}^{n-1} + \cdots + a_0$ and $a_n w_r P_{label}^n + a_{n-1} w_r P_{label}^{n-1} + \cdots + w_r a_0 + b_r$. Based on Equation (12), the model can also be expressed as

$L_S^{screw} = w\,L_S^{label} + b$  (21)

where $w$ and $b$ denote the weight and bias of the linear regression function, respectively. In the following, Equation (21) is adopted to construct the regression model owing to the direct availability of $L_S^{screw}$ and $L_S^{label}$. The Adam optimiser [34] is used, and the loss function is defined as

$loss = \frac{1}{U} \sum_{i=1}^{U} \left( L_{S,i}^{screw} - L_{S,i}^{screw,pred} \right)^2$  (22)

where $L_S^{screw,pred}$ denotes the predicted L value of the screw region and $U$ denotes the number of training samples.
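A minimal sketch of fitting Equation (21) with the Adam optimiser and the MSE loss of Equation (22) follows. PyTorch is an illustrative choice here, and the learning rate and epoch count are assumptions, as the paper does not state the training hyperparameters.

```python
import torch
import torch.nn as nn

def train_reflection_model(ls_label, ls_screw, lr=0.01, epochs=500):
    """Fit LS_screw = w * LS_label + b (Equation (21)) by minimising
    the MSE loss of Equation (22) with Adam [34].
    ls_label, ls_screw: 1-D sequences of mean L values from the offline
    dataset (one pair per labelled training image)."""
    x = torch.tensor(ls_label, dtype=torch.float32).reshape(-1, 1)
    y = torch.tensor(ls_screw, dtype=torch.float32).reshape(-1, 1)
    model = nn.Linear(1, 1)             # weight w and bias b
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()              # Equation (22)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model  # model(LS_label) predicts LS_screw
```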
In addition, the above discussion indicates that the reflection features of the screw region and the illumination label can be represented by the L value of the screw region and the illumination label.

3.4. Two-Stage Detection Framework

With a trained reflection feature regression model, the reflection feature of the screw region is defined and utilised in a proposed two-stage detection framework, which is composed of the reflection stage and the texture stage, as shown in Figure 4.
In the reflection stage, the centre region of the captured image is defined as the initial screw region, and the illumination label is extracted using Algorithm 1. The reflection feature of the illumination label is then modelled and used to predict the reflection feature of the screw region with the help of the trained reflection feature regression model. At the same time, the captured image is divided into candidate regions of the same size as the screw region, and the Euclidean distance between the reflection feature of each candidate region and the predicted reflection feature of the screw region is calculated [35]; this distance is named the reflection confidence ($C_R$), and each candidate region is assigned one. Finally, the $r_n$ regions with the smallest reflection confidences are extracted as reflection-based screw regions.
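The reflection stage can be sketched as follows, reusing the hypothetical `mean_L` helper from Section 3.2. For the scalar L feature, the Euclidean distance reduces to an absolute difference; the non-overlapping tiling stride and the default value of `r_n` are assumptions of this sketch.

```python
def reflection_stage(image_bgr, region_size, ls_screw_pred, r_n=5):
    """Score candidate regions by reflection confidence C_R and return
    the r_n candidates with the smallest confidences.
    region_size = (h, w); ls_screw_pred is the regression model output."""
    h, w = region_size
    H, W = image_bgr.shape[:2]
    candidates = []
    for ms in range(0, H - h + 1, h):          # non-overlapping tiling (assumed)
        for ns in range(0, W - w + 1, w):
            reg = (ms, ms + h, ns, ns + w)
            # Euclidean distance to the predicted reflection feature
            c_r = abs(mean_L(image_bgr, reg) - ls_screw_pred)
            candidates.append((c_r, reg))
    candidates.sort(key=lambda t: t[0])
    return candidates[:r_n]                    # reflection-based screw regions
```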
In the texture stage, the reflection-based screw regions are further analysed based on their texture features, with the aim of filtering out false regions. First, the texture features of the reflection-based screw regions are detected using the SIFT descriptor [36], and a SIFT feature matrix ($B$) is constructed for each reflection-based screw region. Then, using the SIFT feature matrix of an introduced screw template ($T$), the distance ($D$) in SIFT feature matrices between the reflection-based screw region and the screw template is obtained as

$D(x,y) = \sum_{i=1}^{s_n} \left( T(x,i) - B(y,i) \right)^2, \quad x \in [1,p],\ y \in [1,q]$  (23)

where $T$ is a $\{p, s_n\}$ feature matrix, $B$ is a $\{q, s_n\}$ feature matrix, and $D$ is a $\{p, q\}$ feature matrix. $D(x,y)$, $T(x,y)$, and $B(x,y)$ denote the value at point $(x,y)$ of $D$, $T$, and $B$, respectively. $p$ and $q$ denote the numbers of feature points extracted from the screw template and the reflection-based screw region, respectively, and each extracted feature point is described by an $s_n$-dimensional vector. Based on the distance matrix, a texture confidence matrix ($C_T$) for each reflection-based screw region is expressed as

$C_D(x,y) = D(x,y) \Big/ \sum_{i=1}^{p} D(i,y), \quad x \in [1,p],\ y \in [1,q]$  (24)

$C_T(x) = \min\left( C_D[x,:] \right), \quad x \in [1,p]$  (25)

where $C_D$ records the ratios between texture feature distances, $C_T$ is a $p$-dimensional vector, and $C_T(x)$ denotes the value of entry $x$ of $C_T$. Finally, the texture confidence ($C_T$) of each reflection-based screw region is computed as

$C_T = \sum_{i=1}^{p} C_T(i)$  (26)

and the $t_n$ regions with the smallest texture confidences are extracted as texture-based screw regions.
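A sketch of Equations (23)–(26) using OpenCV's SIFT implementation (for which $s_n = 128$). The distance of Equation (23) is computed as written, as a sum of squared descriptor differences; handling regions with no detected keypoints is an assumption of the sketch.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()

def texture_confidence(template_gray, candidate_gray):
    """Texture confidence C_T for one reflection-based screw region,
    Equations (23)-(26). Returns +inf if either image yields no features."""
    _, T = sift.detectAndCompute(template_gray, None)   # {p, 128} matrix
    _, B = sift.detectAndCompute(candidate_gray, None)  # {q, 128} matrix
    if T is None or B is None:
        return np.inf
    # D(x, y): sum of squared differences between template point x
    # and candidate point y (Equation (23)); shape {p, q}.
    D = ((T[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    C_D = D / D.sum(axis=0, keepdims=True)              # Equation (24)
    C_T_vec = C_D.min(axis=1)                           # Equation (25)
    return C_T_vec.sum()                                # Equation (26)
```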
Through the reflection stage and the texture stage, $t_n$ texture-based screw regions are extracted, and each extracted region carries a reflection confidence and a texture confidence. The total reflection confidence ($totalC_R$) and the total texture confidence ($totalC_T$) are calculated by summing these $t_n$ reflection confidences and texture confidences, respectively. Finally, each texture-based screw region is assigned a fused detection confidence ($C$):

$C = \frac{C_R}{2\,totalC_R} + \frac{C_T}{2\,totalC_T}$  (27)
The screw region with the lowest fused detection confidence is extracted and named the detected screw.
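The weighted fusion of Equation (27) can be sketched as below; the tuple layout of the candidate list is an assumption of this sketch.

```python
def fuse_confidences(regions):
    """regions: list of (C_R, C_T, region) tuples for the t_n
    texture-based screw regions. Returns the region with the lowest
    fused detection confidence C (Equation (27))."""
    total_cr = sum(c_r for c_r, _, _ in regions)
    total_ct = sum(c_t for _, c_t, _ in regions)
    fused = [(c_r / (2 * total_cr) + c_t / (2 * total_ct), reg)
             for c_r, c_t, reg in regions]
    return min(fused, key=lambda t: t[0])[1]  # the detected screw
```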
In addition, the initial screw region is randomly defined, and thus, the corresponding extracted illumination label cannot be used to determine the reflection features of the screw region in the inference process. A self-optimisation strategy is proposed to continuously update the screw region and illumination label by forming the reflection stage and the texture stage into a loop, where the difference in detection results between the neighbouring iterations is employed as a judgement.
$\begin{cases} |M_s^t - M_s^{t+1}| / M_s^{t+1} \le t_d \\ |M_e^t - M_e^{t+1}| / M_e^{t+1} \le t_d \\ |N_s^t - N_s^{t+1}| / N_s^{t+1} \le t_d \\ |N_e^t - N_e^{t+1}| / N_e^{t+1} \le t_d \end{cases} \Rightarrow \text{no difference}$  (28)

where $t_d$ is a given threshold. If there is no difference, the detected screw of the current iteration is accepted as the final detected screw. Otherwise, the detected screw is regarded as falsely detected and is used to update the illumination label via Algorithm 1 for the next iteration.
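The convergence test of Equation (28) compares the detected screw regions of neighbouring iterations; a sketch is given below. The default threshold value is illustrative only (the $t_d$ used in the experiments is listed in Table 2), and coordinates are assumed positive.

```python
def no_difference(prev, curr, t_d=0.05):
    """Equation (28): the relative change in each boundary coordinate of
    the detected screw region between iterations t and t+1 must not
    exceed the threshold t_d. prev, curr = (Ms, Me, Ns, Ne)."""
    return all(abs(p - c) / c <= t_d for p, c in zip(prev, curr))
```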
By integrating the reflection feature regression model and the two-stage detection framework, a screw detection method is achieved for detecting structurally damaged screws in EOL products under nonuniform illumination conditions.

4. Experiments

4.1. Experimental Setup

The proposed screw detection method was implemented in removing used screws from an EOL plug-in hybrid EV battery in a robotic disassembly platform, as shown in Figure 5. The utilised equipment contains an industrial robot, a 2D industrial camera, an electrical nut runner, and an electromagnetic gripping system. The control of the above equipment was achieved by programming on TM flow v1.82 software [37], while the detection method was realised through Python v3.8 and MATLAB v9.1 programming on an equipped workstation.

4.2. Experimental Procedure

Considering the training process of the designed reflection feature regression model, an experimental procedure containing an offline process and an online process was developed, as shown in Figure 6. In the offline process, a robot holding a camera was used to collect training images. The screw regions in the collected images were labelled by human operators and then utilised to extract illumination labels by running Algorithm 1. Finally, the reflection features of screw regions and illumination labels were calculated to construct the dataset for training the reflection feature regression model. In the online process, a robot holding a camera was also employed to collect the structural information of the screws used. Next, by inputting the trained reflection feature regression model and captured image to the two-stage detection framework, the locations of screw regions in the image coordinate system were obtained. Finally, by transforming the position of screw regions in the image coordinate system to the position in the world coordinate system, the robot holding an electromagnetic gripping system was able to remove detected screws from an EOL plug-in hybrid EV battery.
In the experiments, the removal of hexagonal-headed screws (M6 nuts) in 6 different positions was used as an example to discuss the detection performance. The distribution of the 6 positions is shown in Figure 5b, while the screws located in these positions are named P1 screws, P2 screws, P3 screws, P4 screws, P5 screws, and P6 screws. In addition, to quantitatively validate the detection performance, 9 training datasets and 40 test datasets were constructed for P1 screws, P2 screws, P3 screws, P4 screws, P5 screws, and P6 screws. The numbers of images in one training dataset and one test dataset are 200 and 50, respectively. The data collection was conducted under various illumination conditions, and the condition of screws (e.g., the degree of structural damage) was updated constantly during the collection process. Table 2 records the experimental parameters of the detection method in the screw removal case.

4.3. Evaluation System

The detection performance was first evaluated using the mean average precision ($mAP$) [38] at different values of the intersection over union ($IOU$) between the detected screw region $S_d$ and the actual screw region $S_r$. Here, $IOU$ is calculated as

$IOU = \frac{|S_d \cap S_r|}{|S_d \cup S_r|}$  (29)
As the most commonly used evaluation indicator in the object detection field, m A P can reflect the performance of the proposed screw detection method considering recall and precision.
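For the axis-aligned screw regions used here, Equation (29) can be sketched as follows:

```python
def iou(a, b):
    """Equation (29) for two boxes a, b = (Ms, Me, Ns, Ne)."""
    inter_h = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    inter_w = max(0, min(a[3], b[3]) - max(a[2], b[2]))
    inter = inter_h * inter_w
    area_a = (a[1] - a[0]) * (a[3] - a[2])
    area_b = (b[1] - b[0]) * (b[3] - b[2])
    return inter / (area_a + area_b - inter)
```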
Then, taking into account the operation range of the socket designed into the gripping system, the proposed detection method was evaluated using the disassembly accuracy ($accuracy$):

$accuracy = T / (T + F)$  (30)
where the centre of the operation range is the same as the centre of the detected screw region. T is the number of samples whose actual screw region is completely covered by the operation range, and F is the number of samples whose actual screw region is not completely covered by the operation range. By introducing the operation range of sockets, the success rate of unfastening screws based on the screw detection results can be evaluated using a c c u r a c y .
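A sketch of the accuracy computation of Equation (30), assuming a square operation range of side `op` centred on the detected region; the actual socket geometry and size are not specified here and are assumptions of this sketch.

```python
def disassembly_accuracy(detections, ground_truths, op=40):
    """Equation (30): fraction of samples whose actual screw region is
    completely covered by the socket operation range centred on the
    detected region. Boxes are (Ms, Me, Ns, Ne); op is the side length
    of the assumed square operation range, in pixels."""
    T = 0
    for d, g in zip(detections, ground_truths):
        cy, cx = (d[0] + d[1]) / 2, (d[2] + d[3]) / 2
        covered = (cy - op / 2 <= g[0] and g[1] <= cy + op / 2 and
                   cx - op / 2 <= g[2] and g[3] <= cx + op / 2)
        T += covered
    return T / len(detections)   # accuracy = T / (T + F)
```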
Apart from analysing the performance of the proposed detection method, the R-squared value was also used to validate the goodness-of-fit of the developed reflection feature regression model [39]. The range of the R-squared value is from 0 to 1, and a higher value indicates a better fitting performance. The performance of the reflection feature regression model is closely related to the stability of the proposed screw detection method. When the reflection feature regression model achieves satisfactory fitting performance represented by a high R-squared value, the proposed screw detection method is more likely to accurately and stably detect screws under uneven illumination conditions.

5. Results and Discussion

5.1. Performance of the Reflection Feature Regression Model

To evaluate the fitting performance of the reflection feature regression model, the capability of illumination labels to reflect illumination conditions was first analysed for P1 screws. Figure 7 shows images captured at three different positions, and Table 3 tabulates the reflection features of the screws and the illumination conditions measured by light meters and by illumination labels. The capture positions of the three images differ, and the reflection features of the screws ($L_S^{screw}$) in the three images are 58.4615, 59.3731, and 56.8118. This difference in reflection features indicates that the screw was captured under different illumination conditions. However, the light meters could not resolve the difference, reporting the same illuminance (770 lx) in all three cases. Using the illumination label, these small differences in illumination conditions were successfully detected, with the illumination conditions of the screws ($L_S^{label}$) measured as 14.7876, 14.9791, and 12.1517. Given this satisfactory measurement performance, the relationship between the reflection features of screw regions and the illumination conditions can be accurately described by the relationship between the reflection features of screw regions and the reflection features of illumination labels.
In validating the reflection feature regression model, the reflection feature regression model was trained on 9 training datasets, and the 9 trained models were then tested on 40 test datasets for P1 screws. Figure 8 shows the detailed R-squared value and Table 4 records the maximum value, minimum value, and average value. In the experiments, the maximum R-squared value, the minimum R-squared value, and the average R-squared value are approximately 0.91, 0.80, and 0.86, respectively, which reflects the excellent fitting ability of the proposed reflection feature regression model. The small differences in the maximum R-squared value, the minimum R-squared value, and the average R-squared value among the nine subgraphs confirm the satisfactory robustness of the proposed reflection feature regression model.
Overall, the proposed reflection feature regression model performed well in establishing the relationship between the reflection features of screw regions and the reflection features of illumination labels under uneven illumination conditions. This attractive fitting performance is mainly contributed by the suitable linear regression function, which was deduced in designing the reflection feature regression model. In the following experiments, the reflection feature regression model trained on training dataset-1 was used to determine the reflection features of screw regions in running the two-stage detection framework for P1 screws.

5.2. Performance of the Two-Stage Detection Framework

The determined reflection features were adopted to extract screw regions together with texture features in the designed two-stage detection framework. The output of the two-stage detection framework is the final result of the proposed screw detection method. Figure 9 and Table 5 validate the detection performance of the proposed two-stage detection framework on 40 test datasets for P1 screws in terms of m A P @ I O U ( I O U = 0.5 ,   0.6 ,   0.7 ) and a c c u r a c y . The maximum m A P @ 0.5 , minimum m A P @ 0.5 , and average m A P @ 0.5 are 1.00, 0.80, and 0.99, respectively, which demonstrate the reliable detection performance of the two-stage detection framework. The maximum m A P @ 0.6 , minimum m A P @ 0.6 , and average m A P @ 0.6 are 1.00, 0.02, and 0.78, respectively, while the maximum m A P @ 0.7 , minimum m A P @ 0.7 , and average m A P @ 0.7 are 0.68, 0.00, and 0.25, respectively. The downward trend from m A P @ 0.5 to m A P @ 0.7 is caused by the stricter settings in I O U . In addition, Figure 9d and Table 5 show the detection performance based on a c c u r a c y . Here, the maximum a c c u r a c y , minimum a c c u r a c y , and average a c c u r a c y are 1.00, 0.80, and 0.99, respectively. The satisfactory a c c u r a c y proves that the two-stage detection framework enabled the screw removal tasks to be operated accurately under nonuniform illumination conditions. The remarkable detection capability is granted by the integration of reflection features and texture features, which has stronger representation ability for structurally damaged screws. On the other hand, no significant difference was found in evaluating the detection performance between using m A P @ 0.5 and a c c u r a c y . Under this situation, it was concluded that m A P @ 0.5 is more suitable for evaluating the detection performance compared with m A P @ 0.6 and m A P @ 0.7 in the experimental case. In the following experiments, the detection performance was mainly discussed based on m A P @ 0.5 and a c c u r a c y .
To study the contribution of reflection features and texture features to the two-stage detection framework, the reflection stage and the texture stage were used to detect P1 screws on the 40 test datasets. Figure 10 and Table 6 compare the difference in m A P @ 0.5 and a c c u r a c y between using the two-stage detection framework and using the reflection stage and between using the two-stage detection framework and using the texture stage. A higher difference denotes that the stage makes fewer contributions to the detection framework. As shown in Table 6, the maximum difference, minimum difference, and average difference in m A P @ 0.5 between using the two-stage detection framework and the texture stage are 1.00, 0.76, and 0.98, while the maximum difference, minimum difference, and average difference in m A P @ 0.5 between using the two-stage detection framework and the reflection stage are 0.24, −0.02, and 0.02, respectively.
These results demonstrate that the satisfactory detection performance of the two-stage detection framework is mainly achieved by using reflection features. Specifically, there are some points with a m A P @ 0.5 value of 1.00 in the difference between using the two-stage detection framework and using the texture stage and some points with a m A P @ 0.5 value of 0.00 in the difference between using the two-stage detection framework and using the reflection stage. In the testing experiments corresponding to these points, accurate detection is completely accomplished by the reflection stage.
Furthermore, some points were found with negative m A P @ 0.5 values in the difference between using the two-stage detection framework and using the reflection stage. Here, the utilisation of the texture stage limited the detection capability of the two-stage detection framework. The experimental conclusions drawn from Figure 10b are consistent with those concluded from Figure 10a. In conclusion, the reflection stage is a major contributor, and the texture stage is an optimiser in the two-stage detection framework. This is caused by the properties of structurally damaged screws in EOL products. As discussed before, the texture features of screws used in EOL products are not reliable because they are easily damaged, while the representation capability of reflection features for used screws is more accurate and robust. The significant contribution of the reflection stage to the two-stage detection framework also indicates the satisfactory measuring ability of illumination labels and the remarkable learning capability of the reflection feature regression model.

5.3. Generalisation

Considering the impact of background information on screw detection, it was decided to retest the proposed screw detection method comprising the reflection feature regression model and the two-stage detection framework with P2 screws, P3 screws, P4 screws, P5 screws, and P6 screws, as recorded in Table 7. In the experiments, the testing environments (e.g., illumination conditions, capture positions), object conditions (e.g., the degree of structural damage, the level of tightness), and surrounding objects of each testing experiment are different from each other.
The average m A P @ 0.5 (0.91) and a c c u r a c y (0.91) confirm the satisfactory detection performance under various illumination conditions. The small fluctuations in m A P @ 0.5 and a c c u r a c y among the six experiments show the robustness of the proposed detection method. In addition, the m A P @ 0.5 and a c c u r a c y for P4 screws, P5 screws, and P6 screws are lower than those for P1 screws, P2 screws, and P3 screws. This is because the testing environment, object conditions, and surrounding objects are more complex, which creates more false screw regions. However, their performance is also acceptable. In conclusion, the proposed detection method realised accurate and stable detection for structurally damaged screws in EOL products under uneven illumination conditions, where the reflection feature regression model empowered the reflection feature to be adjusted automatically and the two-stage detection framework integrated the reflection information and texture information to comprehensively and accurately characterise the screws used. Table 8 summarises the performance of the reflection feature regression model and the two-stage detection framework.

5.4. Comparison

The proposed screw detection method was also compared with existing methods, as shown in Table 9. As mentioned before, the existing screw detection methods are generally divided into experience-based methods and data-driven methods. In experience-based methods, the detection criteria are designed based on the specific properties of different EOL objects. It is not reasonable to adopt existing experience-based methods in this case and compare their performance directly. Here, the detection performance of various feature descriptors utilised in [15,16,17] was tested, and the optimal m A P @ 0.5 and a c c u r a c y were recorded. In addition, the optimal detection performance of the aforementioned data-driven methods [18,19,20,21] was also recorded, where YOLO [20] achieved higher m A P @ 0.5 and a c c u r a c y .
As shown in Table 9, the proposed screw detection method performed much better than the existing experience-based methods, mainly because of the adoption of reflection features and the reflection feature regression model; $mAP@0.5$ and $accuracy$ improved by 0.81 and 0.74, respectively. Experience-based methods are nevertheless still used in various disassembly tasks because of their lower requirement for labelled data. The proposed method also outperformed several data-driven methods, namely Soar, ResNet, and faster R-CNN: the increases in $mAP@0.5$ are 0.15, 0.11, and 0.04, and the increases in $accuracy$ are 0.13, 0.11, and 0.04, respectively. Although YOLO performed slightly better than the proposed method (differences of 0.03 in both $mAP@0.5$ and $accuracy$), YOLO's demanding requirement for training data poses great challenges in screw removal tasks, as shown in Table 10: the $mAP@0.5$ of the YOLO model trained on 200 images is 0.23, significantly lower than that of the proposed method trained on the same 200 images. The proposed method does not demand large amounts of training data, thanks to the lightweight reflection feature regression model and the integration of human experience and training data in the two-stage detection framework. The data requirements of complex deep learning models (e.g., YOLO) can be partially addressed by transfer learning; however, detection performance remains closely tied to the diversity of the training samples, which would need to characterise the structural information of unseen used screws comprehensively, an impossibility given the uncertainties of EOL products. The proposed method adopts reflection features to deal with these uncertainties and achieves robust detection performance. Consequently, the proposed screw detection method is a reliable aid for accomplishing complex screw removal tasks under uneven illumination conditions.

6. Conclusions and Future Work

Research on screw detection is essential for automatic removal. This paper realises the aim of accurately and stably detecting structurally damaged screws in EOL products under uneven illumination conditions by presenting a reflection feature regression model and a two-stage detection framework. The utilisation of reflection features addresses the problem of characterising damaged screw structures caused by the uncertainty and complexity of EOL products, and the reflection features are automatically determined by data learning according to the illumination conditions. The texture feature helps the proposed screw detection method filter out falsely detected screws. In addition, an innovative illumination condition measurement method is proposed in employing the reflection feature by defining the illumination label. The detection method is optimised by a developed self-optimisation strategy. Finally, a vision-guided robotic disassembly platform designed to disassemble EV batteries is utilised to evaluate the detection performance. This study realises stable and accurate screw detection on a battery disassembly task and contributes to the research and application of object detection in other disassembly cases.
The developed method has two issues requiring research in the future. First, this study takes the influence of structural damage on screw detection into account. However, the problem of detecting rusty screws needs to be researched further. Future work plans to optimise the proposed method by designing a stronger data-driven network to describe the reflection characteristics of rusty screws. Second, the proposed illumination condition measurement method lacks a quantitative assessment. Future work plans to adopt sensors to describe the trend of illumination conditions to reflect the performance of the proposed measurement method.

Author Contributions

Conceptualization, Q.L. and W.D.; methodology, W.D. and D.T.P.; validation, W.D.; investigation, J.H.; resources, D.T.P. and Z.Z.; writing—original draft preparation, W.D.; writing—review and editing, D.T.P., J.H. and Y.W.; funding acquisition, W.D., D.T.P. and J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Engineering and Physical Sciences Research Council under grant EP/N018524/1, the National Natural Science Foundation of China under Grant 52075404, and the China Scholarship Council under Grant 202006950054.

Data Availability Statement

The data in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ijomah, W.L.; McMahon, C.A.; Hammond, G.P.; Newman, S.T. Development of design for remanufacturing guidelines to support sustainable manufacturing. Robot. Comput.-Integr. Manuf. 2007, 23, 712–719. [Google Scholar] [CrossRef]
  2. Li, R.; Pham, D.T.; Huang, J.; Tan, Y.; Qu, M.; Wang, Y.; Kerin, M.; Jiang, K.; Su, S.; Ji, C.; et al. Unfastening of hexagonal headed screws by a collaborative robot. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1455–1468. [Google Scholar] [CrossRef]
  3. Zhang, X.; Zhang, M.; Zhang, H.; Jiang, Z.; Liu, C.; Cai, W. A review on energy, environment and economic assessment in remanufacturing based on life cycle assessment method. J. Clean. Prod. 2020, 255, 120160. [Google Scholar] [CrossRef]
  4. Yuksel, H. Design of automobile engines for remanufacture with quality function deployment. Int. J. Sustain. Eng. 2010, 3, 170–180. [Google Scholar] [CrossRef]
  5. Hashemi, V.; Chen, M.; Fang, L. Modelling and analysis of aerospace remanufacturing systems with scenario analysis. Int. J. Adv. Manuf. Technol. 2016, 87, 2135–2151. [Google Scholar] [CrossRef]
  6. Zheng, H.; Li, E.; Wang, Y.; Shi, P.; Xu, B.; Yang, S. Environmental life cycle assessment of remanufactured engines with advanced restoring technologies. Robot. Comput.-Integr. Manuf. 2019, 59, 213–221. [Google Scholar] [CrossRef]
  7. Ahmed, F.; Almutairi, G.; Hasan, P.M.Z.; Rehman, S.; Kumar, S.; Shaalan, N.M.; Aljaafari, A.; Alshoaibi, A.; AIOtaibi, B.; Khan, K. Fabrication of a biomass-derived activated carbon-based anode for high-performance Li-ion batteries. Micromachines 2023, 14, 192. [Google Scholar] [CrossRef] [PubMed]
  8. Ong, S.K.; Chang, M.M.L.; Nee, A.Y.C. Product disassembly sequence planning: State-of-the-art, challenges, opportunities and future directions. Int. J. Prod. Res. 2021, 59, 3493–3508. [Google Scholar] [CrossRef]
  9. Hu, Y.; Liu, C.; Zhang, M.; Jia, Y.; Xu, Y. A novel simulated annealing-based hyper-heuristic algorithm for stochastic parallel disassembly line balancing in smart remanufacturing. Sensors 2023, 23, 1652. [Google Scholar] [CrossRef]
  10. Bahubalendruni, M.V.A.R.; Varupala, V.P. Disassembly sequence planning for safe disposal of end-of-life waste electric and electronic equipment. Natl. Acad. Sci. Lett. 2021, 44, 243–247. [Google Scholar] [CrossRef]
  11. Poschmann, H.; Brueggemann, H.; Goldmann, D. Disassembly 4.0: A review on using robotics in disassembly tasks as a way of automation. Chem. Ing. Tech. 2020, 92, 341–359. [Google Scholar] [CrossRef]
  12. Nowakowski, P. A novel, cost efficient identification method for disassembly planning of waste electrical and electronic equipment. J. Clean. Prod. 2018, 172, 2695–2707. [Google Scholar] [CrossRef]
  13. Chen, W.H.; Foo, G.; Kara, S.; Pagnucco, M. Automated generation and execution of disassembly actions. Robot. Comput.-Integr. Manuf. 2021, 68, 102056. [Google Scholar] [CrossRef]
  14. Vongbunyong, S.; Kara, S.; Pagnucco, M. Learning and revision in cognitive robotics disassembly automation. Robot. Comput.-Integr. Manuf. 2015, 34, 79–94. [Google Scholar] [CrossRef]
  15. Gil, P.; Pomares, J.; Puente, S.T.; Diaz, C.; Candelas, F.; Torres, F. Flexible multisensorial system for automatic disassembly using cooperative robots. Int. J. Comput. Integr. Manuf. 2007, 20, 757–772. [Google Scholar] [CrossRef]
  16. Bdiwi, M.; Rashid, A.; Putz, M. Autonomous disassembly of electric vehicle motors based on robot cognition. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016. [Google Scholar] [CrossRef]
  17. DiFilippo, N.M.; Jouaneh, M.K. A system combining force and vision sensing for automated screw removal on laptops. IEEE Trans. Autom. Sci. Eng. 2017, 15, 887–895. [Google Scholar] [CrossRef]
  18. DiFilippo, N.M.; Jouaneh, M.K. Using the soar cognitive architecture to remove screws from different laptop models. IEEE Trans. Autom. Sci. Eng. 2018, 16, 767–780. [Google Scholar] [CrossRef]
  19. Foo, G.; Kara, S.; Pagnucco, M. Screw detection for disassembly of electronic waste using reasoning and retraining of a deep learning model. Procedia CIRP 2021, 98, 666–671. [Google Scholar] [CrossRef]
  20. Mangold, S.; Steiner, C.; Friedmann, M.; Fleischer, J. Vision-based screw head detection for automated disassembly for remanufacturing. Procedia CIRP 2022, 105, 1–6. [Google Scholar] [CrossRef]
  21. Li, X.; Li, M.; Wu, Y.; Zhou, D.; Liu, T.; Hao, F.; Yue, J.; Ma, Q. Accurate screw detection method based on faster R-CNN and rotation edge similarity for automatic screw disassembly. Int. J. Comput. Integr. Manuf. 2021, 34, 1177–1195. [Google Scholar] [CrossRef]
  22. Sun, Y.; Chang, Z.; Zhao, Y.; Hua, Z.; Li, S. Progressive two-stage network for low-light image enhancement. Micromachines 2021, 12, 1458. [Google Scholar] [CrossRef] [PubMed]
  23. Tang, Q.; Yang, J.; He, X.; Jia, W.; Zhang, Q.; Liu, H. Nighttime image dehazing based on retinex and dark channel prior using taylor series expansion. Comput. Vis. Image Underst. 2021, 202, 103086. [Google Scholar] [CrossRef]
  24. Cui, Y.; Sun, Y.; Jian, M.; Zhang, X.; Yao, T.; Gao, X.; Li, Y.; Zhang, Y. A novel underwater image restoration method based on decomposition network and physical imaging model. Int. J. Intell. Syst. 2022, 37, 5672–5690. [Google Scholar] [CrossRef]
  25. Sudo, H.; Yukushige, S.; Muramatsu, S.; Inagaki, K.; Chugo, D.; Hashimoto, H. Detection of glass surface using reflection characteristic. In Proceedings of the Annual Conference of the IEEE Industrial Electronics Society, Toronto, ON, Canada, 13–16 October 2021. [Google Scholar] [CrossRef]
  26. Wu, J.; Ji, Z. Seeing the unseen: Locating objects from reflections. In Proceedings of the Annual Conference Towards Autonomous Robotic Systems, Bristol, UK, 25–27 July 2018. [Google Scholar] [CrossRef]
  27. Zhang, P.; Liu, W.; Lei, Y.; Lu, H. Hyperfusion-Net: Hyper-densely reflective feature fusion for salient object detection. Pattern Recognit. 2019, 93, 521–533. [Google Scholar] [CrossRef]
  28. Zhang, P.; Liu, W.; Lu, H.; Shen, C. Salient object detection with lossless feature reflection and weighted structural loss. IEEE Trans. Image Process. 2019, 28, 3048–3060. [Google Scholar] [CrossRef] [PubMed]
  29. Tan, L.; Tang, T.; Yuan, D. An ensemble learning aided computer vision method with advanced colour enhancement for corroded bolt detection in tunnels. Sensors 2022, 22, 9715. [Google Scholar] [CrossRef] [PubMed]
  30. Lalonde, J.F.; Efros, A.A.; Narasimhan, S.G. Estimating the natural illumination conditions from a single outdoor image. Int. J. Comput. Vis. 2012, 98, 123–145. [Google Scholar] [CrossRef]
  31. Barron, J.T.; Malik, J. Shape, illumination, and reflectance from shading. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 1670–1687. [Google Scholar] [CrossRef]
  32. Zhou, T.; Krahenbuhl, P.; Efros, A.A. Learning data-driven reflectance priors for intrinsic image decomposition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar] [CrossRef]
  33. Lee, H.W. The study of mechanical arm and intelligent robot. IEEE Access 2020, 8, 119624–119634. [Google Scholar] [CrossRef]
  34. Sanakkayala, D.C.; Varadarajan, V.; Kumar, N.; Soni, G.; Kamat, P.; Kumar, S.; Patil, S.; Kotecha, K. Explainable AI for bearing fault prognosis using deep learning techniques. Micromachines 2022, 13, 1471. [Google Scholar] [CrossRef]
  35. Deng, S.; Du, L.; Li, C.; Ding, J.; Liu, H. SAR automatic target recognition based on Euclidean distance restricted autoencoder. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3323–3333. [Google Scholar] [CrossRef]
  36. Zeng, X.; Wang, X.; Chen, K.; Zhang, Y.; Li, D. Dividing the neighbours is not enough: Adding confusion makes local descriptor stronger. IEEE Access 2019, 7, 136106–136115. [Google Scholar] [CrossRef]
  37. Dmytriyev, Y.; Zaki, A.M.A.; Carnevale, M.; Insero, F.; Giberti, H. Brain computer interface for human-cobot interaction in industrial applications. In Proceedings of the International Congress on Human-Computer Interaction, Optimisation and Robotic Applications, Ankara, Türkiye, 11–13 June 2021. [Google Scholar] [CrossRef]
  38. Song, Q.; Li, S.; Bai, Q.; Yang, J.; Zhang, X.; Li, Z.; Duan, Z. Object detection method for grasping robot based on improved YOLOv5. Micromachines 2021, 12, 1273. [Google Scholar] [CrossRef]
  39. Gong, C.S.A.; Su, C.H.S.; Chen, Y.H.; Guu, D.Y. How to implement automotive fault diagnosis using artificial intelligence scheme. Micromachines 2022, 13, 1380. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Examples of used and structurally damaged screws.
Figure 2. Flowchart of the proposed screw detection method.
Figure 4. Flowchart of the two-stage detection framework.
Figure 5. Experimental platform. (a) Utilised equipment; (b) EOL plug-in hybrid EV battery and the position of adopted screws in the experiments.
Figure 6. Flowchart of the experimental procedure.
Figure 7. Example captured images for P1 screws. (a–c) The images captured in three different positions at the same time.
Figure 8. Goodness-of-fit of 9 trained reflection feature regression models on 40 test datasets for P1 screws. (a–i) The R-squared values of the reflection feature regression models trained on training datasets 1–9, respectively.
Figure 9. Detection performance of the proposed two-stage detection framework for P1 screws. (a–d) The detection performance evaluated by mAP@0.5, mAP@0.6, mAP@0.7, and accuracy, respectively.
Figure 10. Contributions of the reflection stage and the texture stage to the two-stage detection framework for P1 screws. The blue lines record the differences between using the two-stage detection framework and using the texture stage, while the red lines record the differences between using the two-stage detection framework and using the reflection stage. (a) Difference in mAP@0.5; (b) difference in accuracy.
Table 1. The important notations used in this paper.
Notation: description
R: reflection features
P: reflection abilities
I: illumination conditions
I_screw: the illumination condition of the screw region
I_label: the illumination condition of the illumination label
L_R,screw: the L value of the reflection component for the screw region
L_R,label: the L value of the reflection component for the illumination label
L_S,screw: the L value of the screw region
L_S,label: the L value of the illumination label
L_I,screw: the L value of the illumination component for the screw region
L_I,label: the L value of the illumination component for the illumination label
P_screw: the reflection ability of the screw region
P_label: the reflection ability of the illumination label
The reflection features of the screw region and the illumination label are represented by the L values of their reflection components, while their illumination conditions are recorded by the L values of their illumination components. All L values are extracted from the Lab colour space.
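The L values in Table 1 come from the Lab colour space, with a region's reflection feature taken as the L value of its reflection component. As a minimal sketch of how such values could be obtained, the snippet below converts an image to Lab with OpenCV and applies a single-scale Retinex-style split into illumination and reflection components; the Gaussian-blur illumination estimate and the region handling are illustrative assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np

def l_values(img_bgr, region):
    """Mean L value of a region plus a Retinex-style split into
    illumination and reflection components.

    `region` is a hypothetical (x, y, w, h) box; the Gaussian-blur
    illumination estimate is an assumption, not the paper's method.
    """
    x, y, w, h = region
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    # OpenCV scales 8-bit L to [0, 255]; rescale to the usual 0-100 range.
    L = lab[:, :, 0].astype(np.float32) * (100.0 / 255.0)

    # Retinex assumption: illumination varies slowly across the image,
    # so a heavy Gaussian blur approximates the illumination component.
    L_illum = cv2.GaussianBlur(L, (0, 0), sigmaX=15)
    L_reflect = L - L_illum  # reflection component

    patch = (slice(y, y + h), slice(x, x + w))
    return L[patch].mean(), L_illum[patch].mean(), L_reflect[patch].mean()

# Hypothetical usage: compare a screw region with the illumination label.
# LS_screw, LI_screw, LR_screw = l_values(frame, screw_box)
# LS_label, LI_label, LR_label = l_values(frame, label_box)
```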
Table 2. Experimental parameters of the detection method in the screw removal case.
The length of the screw region (srl): 100
The width of the screw region (srw): 100
The number of extracted reflection-based screw regions (rn): 20
The size of the extracted texture feature point (sn): 128
The number of extracted texture-based screw regions (tn): 10
The difference threshold for comparing neighbouring detected screws (td): 0.05
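For readers reproducing the experiments, the Table 2 values map naturally onto a single configuration object. The sketch below is one possible layout; the class and field names are our own inventions, and only the numeric values come from the table.

```python
from dataclasses import dataclass

@dataclass
class DetectionParams:
    """Parameters from Table 2; names are illustrative, values are the paper's."""
    sr_l: int = 100    # length of the screw region
    sr_w: int = 100    # width of the screw region
    r_n: int = 20      # reflection-based screw regions kept by stage 1
    s_n: int = 128     # size of the extracted texture feature point
    t_n: int = 10      # texture-based screw regions kept by stage 2
    t_d: float = 0.05  # difference threshold for neighbouring detections

PARAMS = DetectionParams()
```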
Table 3. Performance of measuring illumination conditions by light meters and illumination labels for P1 screws.
Capture position (X, Y, Z, Rx, Ry, Rz) / L_S,screw / L_S,label / illumination measured by light meter
Figure 7a: (140 mm, −534 mm, 424 mm, 179°, −3°, −8°) / 58.4615 / 14.7876 / 770 lx
Figure 7b: (160 mm, −534 mm, 424 mm, 179°, −3°, −8°) / 59.3731 / 14.9791 / 770 lx
Figure 7c: (140 mm, −515 mm, 424 mm, 179°, −3°, −8°) / 56.8118 / 12.1517 / 770 lx
The adopted light meter is the CA 1110 developed by Chauvin Arnoux. Measurement range: 0.1 lx to 200,000 lx; accuracy: ±3% of the reading on incandescent sources, ±6% on LEDs, and ±9% on fluorescent sources.
Table 4. The maximum, minimum, and average R-squared values of 9 trained reflection feature regression models on 40 test datasets for P1 screws.
Training dataset / maximum R-squared / minimum R-squared / average R-squared
Training dataset-1: 0.909 / 0.797 / 0.860
Training dataset-2: 0.915 / 0.806 / 0.856
Training dataset-3: 0.920 / 0.802 / 0.858
Training dataset-4: 0.914 / 0.800 / 0.862
Training dataset-5: 0.918 / 0.804 / 0.867
Training dataset-6: 0.898 / 0.801 / 0.855
Training dataset-7: 0.910 / 0.804 / 0.851
Training dataset-8: 0.907 / 0.802 / 0.861
Training dataset-9: 0.904 / 0.804 / 0.844
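Table 4 scores each trained model by its R-squared value on held-out test datasets. Since the paper describes the reflection feature regression model as linear, a minimal scikit-learn sketch of the train-and-score step looks as follows; the arrays are placeholders, not the paper's data, and which reflection quantities enter the regression is defined in the paper body.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# x: reflection feature of the illumination label,
# y: reflection feature of the screw region (placeholder values).
x_train = np.array([[14.8], [15.0], [12.2], [13.9]])
y_train = np.array([58.5, 59.4, 56.8, 57.9])

model = LinearRegression().fit(x_train, y_train)

x_test = np.array([[13.5], [14.1], [12.8]])   # placeholder test dataset
y_test = np.array([57.2, 58.0, 57.0])
print(r2_score(y_test, model.predict(x_test)))  # R-squared as in Table 4
```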
Table 5. The maximum, minimum, and average mAP@0.5, mAP@0.6, mAP@0.7, and accuracy of the two-stage detection framework for P1 screws.
Metric / maximum value / minimum value / average value
mAP@0.5: 1.00 / 0.80 / 0.99
mAP@0.6: 1.00 / 0.02 / 0.78
mAP@0.7: 0.68 / 0.00 / 0.25
accuracy: 1.00 / 0.80 / 0.99
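The thresholds in mAP@0.5, mAP@0.6, and mAP@0.7 denote the minimum intersection-over-union (IoU) between a predicted box and a ground-truth box for a detection to count as a true positive, which explains the sharp drop in Table 5 as the threshold rises. The helper below shows that IoU test; the (x1, y1, x2, y2) box format is an assumption, and the full mAP computation (precision averaged over recall) is omitted for brevity.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

# A detection counts as a true positive at mAP@0.5 when IoU >= 0.5;
# the same box pair may fail the stricter 0.6 or 0.7 thresholds.
assert iou((0, 0, 10, 10), (0, 0, 10, 10)) == 1.0
```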
Table 6. The maximum, minimum, and average differences in mAP@0.5 and accuracy between using the two-stage detection framework and using the texture stage, and between using the two-stage detection framework and using the reflection stage.
Metric / difference vs. the texture stage alone / difference vs. the reflection stage alone
Maximum mAP@0.5: 1.00 / 0.24
Minimum mAP@0.5: 0.76 / −0.02
Average mAP@0.5: 0.98 / 0.02
Maximum accuracy: 1.00 / 0.06
Minimum accuracy: 0.82 / −0.02
Average accuracy: 0.99 / 0.01
Table 7. mAP@0.5 and accuracy of the proposed detection method on different screws.
Screws / mAP@0.5 / accuracy
P1 screws: 0.99 / 0.99
P2 screws: 0.97 / 0.97
P3 screws: 0.97 / 0.97
P4 screws: 0.82 / 0.82
P5 screws: 0.84 / 0.84
P6 screws: 0.86 / 0.86
Average: 0.91 / 0.91
Table 8. The performance of the reflection feature regression model and the two-stage detection framework.
Method / average R-squared / average mAP@0.5 / average accuracy
Reflection feature regression model: 0.857 / n/a / n/a
Two-stage detection framework: n/a / 0.91 / 0.91
Table 9. Comparison of the detection performance with existing methods.
Method / mAP@0.5 / accuracy
Experience-based methods [15,16,17]: 0.10 / 0.17
Soar [18]: 0.76 / 0.78
ResNet [19]: 0.80 / 0.80
YOLO [20]: 0.94 / 0.94
Faster R-CNN [21]: 0.87 / 0.87
Proposed: 0.91 / 0.91
Table 10. Number of utilised training data in the proposed screw detection method and existing data-driven methods.
Method / number of used training data
YOLO [20]: 6720
Proposed: 200