Article

An Innovative Vision-Guided Feeding System for Robotic Picking of Different-Shaped Industrial Components Randomly Arranged

1 Department of Innovation Engineering, University of Salento, 73100 Lecce, Italy
2 Kinéma S.r.l., Street Modugno—Bari, 73100 Modugno, Italy
* Author to whom correspondence should be addressed.
Technologies 2024, 12(9), 153; https://doi.org/10.3390/technologies12090153
Submission received: 1 July 2024 / Revised: 26 August 2024 / Accepted: 2 September 2024 / Published: 5 September 2024
(This article belongs to the Section Manufacturing Technology)

Abstract

Within an industrial plant, the handling of randomly arranged objects is becoming increasingly common. The technology industry has introduced ever more powerful devices to the market, but they are often unable to meet industrial demands in terms of processing times. A multi-component feeder, which facilitates the automatic picking of objects arranged in bulk, is the ideal element for speeding up the identification of objects by the vision system. The innovative feeder designed here eliminates the dead time of the vision system because it has two working surfaces, so that the vision time is hidden within the total handling cycle time. In addition, the step feeder integrated into the feeder structure controls the number of objects that fall onto the work surface, optimizing the material flow. The feeder was designed to palletize aluminum hinge fins but can also handle other products with different shapes and sizes. A two-dimensional (2D) vision system is integrated into the robotic cell to identify the components to be palletized, reducing the cycle time. The innovative feeder is fully adaptable to industrial applications and can be easily integrated into the robotic cell in which it is installed; testing its operation with different aluminum fins, male and female, yielded significant results in terms of cycle times, ranging from 1.44 s to 1.69 s per piece, with an average productivity level (PL) of 1175 pcs every 30 min.

1. Introduction

Automated picking, more commonly referred to as bin picking, is becoming increasingly central to product handling in the industry. Taking parts from bulk supply and placing them into production lines are everyday tasks in industrial automation. Ongoing research in this area is driving companies to seek out new technologies to achieve increasingly customized and flexible solutions to reduce costs and increase productivity. The objective is to carry out the movement of products in the shortest possible time while maintaining precision and repeatability in the picking and subsequent release, a fundamental aspect for improving productivity [1]. The difficulties generally encountered in bin-picking activities relate to the identification of different components stacked together and are highly dependent on the design characteristics of the product. Given the variety of products manufactured by companies, there is no general method [2].
Bin picking is an activity that involves two principal subsystems, which must cooperate in a coordinated and rapid manner to ensure the successful completion of the task. This industrial stage uses a robotic manipulator and computer vision to identify and pick randomly arranged objects; the two essential components are therefore the vision system and the picking device. The vision system deals with identifying and recognizing objects inside a container or, more generally, on a surface. A critical aspect of the automated picking process is separating randomly organized objects, a task often addressed by the steps reported in Figure 1 [3].
The bin-picking problem also involves many technical aspects, including data acquisition, object location identification, pose estimation, and collision avoidance, which are essential for a successful solution. Various methods can be used for data acquisition, such as three-dimensional (3D) laser scanners or other 3D measurement sensors. Figure 2 illustrates a typical layout of a robotic cell dedicated to bin picking.
A 3D vision system is undoubtedly efficient for object picking, providing helpful information for grasping, such as the spatial orientation of the objects. For object recognition, a laser sensor for 3D vision is not affected by ambient light, allowing the depth and shape of objects to be calculated. However, the data obtained from a 3D vision system are very complex and, therefore, require longer processing times than a two-dimensional (2D) vision system [4]; furthermore, the high cost of 3D vision systems represents another limitation of their use.
Introducing a smart feeder that facilitates the work of the vision system in object identification would allow the robotic manipulator to significantly improve its effectiveness in picking up objects. Existing automation solutions for part feeding, such as vibratory feeders and robotic pickers, require specialized engineering for each product variant [5]; therefore, a significant investment is needed to handle each product.
In this regard, the innovative aspect of this work lies in the design and testing of a particular feeder that, by optimizing the distribution of the components to be picked, allows for the effective use of a 2D vision system. This paper focuses on designing and testing a feeder for palletizing aluminum hinge fins, with the aim of reducing the handling time from identification to object storage. The innovative aspect consists of using two independent work surfaces so that the vision phase becomes hidden time within the overall cycle time, thus overcoming the limitations of multi-object feeders available on the market. The components that make up the robotic cell layout used for the flexible feeder design are listed below:
  • A flexible feeder consisting of a pair of specially modified commercial conveyor belts from ALUSIC company [6];
  • A pair of Basler cameras, model Ace 2 R (code a2A1920-51gmBAS), positioned above the flexible feeder on an aluminum profile frame, that communicate with a generic industrial PC [7];
  • An EPSON SCARA robot G6 615S with controller RC700 that enables the handling of parts and industrial components [8];
  • An S7-1500 SIEMENS PLC with CPU 1513-1PN that allows for communication with all elements employed in the robotic cell [9].
The image shown in Figure 3 highlights the elements involved in the designed multi-object feeder and how they communicate with each other.
All elements in Figure 3 communicate via the PLC and the industrial PC; once the fins arrive on the feeder surface, the PLC instructs the vision system to begin the scanning process to detect objects. Once the vision system has identified each fin on the feeder surface and classified it as grippable or not through a computational analysis, it communicates the position and orientation of the object on the surface to the robot so that the fin can be picked up. Once all the pickable fins have been removed, the vision system communicates to the PLC the actions to be executed, namely feeder loading, surface shaking, and belt advancement, individually or in combination. These actions improve the distribution of the objects on the surface, optimize the vision system's work, and reduce the cycle time.
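The interaction just described can be summarized as a simple dispatch between the vision result and the feeder recipes. The sketch below is illustrative only and assumes simplified robot and PLC interfaces; all identifiers (FeederAction, handle_scan_result, robot.pick, plc.run) are hypothetical and not part of the actual control software.

```python
# Illustrative sketch of the vision-to-PLC/robot hand-off described above;
# all identifiers are hypothetical, not the authors' implementation.
from enum import Enum, auto

class FeederAction(Enum):
    LOAD_STEP_FEEDER = auto()   # feed new fins onto the belt
    SHAKE_SURFACE = auto()      # actuate the shaking rollers
    ADVANCE_BELT = auto()       # redistribute fins by moving the belt
    SHAKE_AND_ADVANCE = auto()  # combined recipe

def handle_scan_result(pickable_fins, plc, robot):
    """Send poses of grippable fins to the robot; otherwise ask the PLC to
    run one of the predefined feeder recipes."""
    if pickable_fins:
        for fin in pickable_fins:
            robot.pick(x=fin.x, y=fin.y, theta=fin.theta)  # position and orientation on the belt
    else:
        plc.run(FeederAction.SHAKE_AND_ADVANCE)
```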
This technical solution also highlights the increase in productivity depending on the object being processed and the flexibility with which the machine can adapt to objects of different shapes, hence the term flexible feeder. Equally important is the elimination of a labor-intensive process for the production operator. Indeed, Tompkins et al. [10] estimated that picking represents 55% of warehouse costs. Using human operators is usually the most cost-effective alternative, especially compared to automated solutions. However, many studies have shown that manual processes can have hidden problems and costs, such as musculoskeletal disorders, operator absenteeism, or the impact of picking errors [11]. For this reason, with the advent of collaborative robots, the operator is sometimes assisted in manual-handling operations. Collaborative robots (cobots) are intended to physically interact with humans in a shared workspace [12].
As previously stated, this work aims to design a system dedicated to automatically picking randomly arranged objects in order to increase the productivity of an industrial plant, eliminating the exhausting repetitive work of an operator assigned to handling objects. Unlike other feeders on the market [13,14], the step feeder integrated into the multi-component feeder allows the components to be fed in stages and does not require the installation of vibratory feeders or conveyors with flights, resulting in space optimization. Commercial feeders do not allow for flexible installation; instead, the innovative design of the feeder is modular and perfectly adaptable to the layout in which it is installed. Furthermore, the two working surfaces allow the scanning time to be masked within the overall cycle time, a crucial advantage over competing solutions.
The designed feeder presents considerable advantages for the objects’ palletization, representing the added value of the entire bin-picking system. The strengths, scientific value, and innovative aspects that the multi-component feeder introduces in the palletization process are reported below:
  • Its modularity makes it easily adaptable to different bin-picking scenarios, allowing for the palletization of different object types and easy integration into a new system layout.
  • The two work surfaces guarantee a continuous flow of objects, eliminating the downtime due to the scanning phase.
  • Using the multi-component feeder with a 2D vision system and a SCARA robot reduces identification and handling times, thus obtaining much faster palletizing of the picked objects.
  • The designed step feeder optimally manages the flow of objects that fall on the scanning surface, facilitating the work of the 2D vision system.
  • By combining the step feeder with a vision system and a robot manipulator, excellent cycle-time performance is obtained.
The article is structured as follows: In Section 2, some scientific works relating to bin-picking activities are analyzed, whereas, in Section 3, the structure of the flexible feeder is discussed, along with the characteristics that make it innovative and competitive compared to commercial solutions; the adopted artificial vision technique and the operating principle are shown in a flow diagram. Furthermore, specific calculations are reported for designing the feeder's shaking mechanism based on some simplifying hypotheses. In Section 4, the results obtained regarding the cycle time are reported, highlighting the strengths and innovations compared to other existing systems; finally, in the last section, the conclusions are drawn together with some technological aspects for the designed system's improvement.

2. Literature Review

2.1. Different Functionalities, Architectures, and Industrial Applications of Bin-Picking Systems

Bin picking is a current research topic in the field of industrial robotics, which has led to significant advances in this area. However, only a few industrial plug-and-play applications exist in practice, and the random picking problem is still a research topic rather than a standard technology ready for industrial applications (Figure 4). In this field, where system robustness and high reliability are required, picking randomly arranged objects requires an in-depth study of the application [3]. However, the devices already available on the market still have their respective limitations regarding practical applications. A fully automated picking system would open up a market with enormous potential since, in many situations, picking tasks are still performed manually [15].
Furthermore, the identification of parts inside a container is still a research area that has not yet been resolved in its most general sense, although several efforts have been made in recent years. The solution can be found more easily by placing restrictions on the type of objects the system can handle; for example, it can be assumed that the objects contained in the box are convex, and therefore, situations in which the objects penetrate each other cannot occur [3]. Another approach adopted in practice is to design the machine according to a limited number of products with similar geometric characteristics.
Pochyly A. et al. [3], using a 3D vision system based on a single industrial camera and two linear laser projectors, analyzed two case studies, each requiring a dedicated gripper. The first involved the automatic picking of objects made of sheet metal; the vision system identifies these pieces by their constructive characteristics, i.e., shape and edges. In this case, it was necessary to use an intermediate storage position on which the robot approximately placed the picked part, and the vision system then re-analyzed the piece to obtain the correct picking mode; the cycle time needed for the final positioning of the object is about 12–15 s. The second case concerns the detection of a horse bracket, for which the intermediate storage position was unnecessary, with a cycle time of about 9–10 s for the final object positioning. In the first case study, a critical cycle-time aspect is the need to use the intermediate storage position.
With the use of a multi-component feeder, as proposed in this research work, an intermediate position is not necessary since the components arrive at the scanning surface in a simpler configuration for vision processing, thus allowing easy recognition using the 2D vision system and subsequent picking with the robotic arm.
Another concern with the bin-picking process is scheduling the robotic arms. The main objective of the scheduling problem is to determine which robot arm will pick up an item from an infeed conveyor, at what position, and where it will be placed. Critical constraints, including dynamic item and receptacle positions and sequence-dependent processing time, characterize this issue. Nielsen et al. [16] proposed a decomposition approach for dividing the robotic arm scheduling problem into simpler sub-problems, enabling a reduction in the solution space and the decision-making dimensions. In detail, they consider a layout with two conveyor belts; the first carries the food items and the second the trays, which move only when the last tray exceeds the target weight. Each sub-problem for a tray is solved after the solution for the tray track is determined. The same process restarts as a new set of food items enters the system, while the solutions from the previous sub-problems are retained. This is achieved by solving all sub-problems in sequence, which involves considering all trays for placing the items entering the robotic arm-based batching system. The simulation results demonstrated that the overall give-away level was decreased using the proposed scheduling solution, with a mean computational time of 0.66 s. Wang J. et al. [17] implemented an automated supermarket checkout to locate different products and unload them from the conveyor using SCARA robots. Each product is recognized by identifying its barcode via a camera, from which the orientation of the product on the belt is obtained, while its spatial localization is calculated from the portion of pixels of the digitized image not occupied by the product. Lu Z. et al. [18] applied automatic harvesting technology in agriculture to select mature winter jujubes. The extent of the red portion of the surface indicates the ripeness of this fruit, allowing the degree of ripeness and the marketing date to be determined. By framing the fruit from three sides, right, top, and left, they compute an average degree of coloring and assign each fruit an overall degree of ripeness. A robotic arm then sorts the winter jujubes, classified according to three degrees of ripeness, into three different containers, achieving an average sorting time of 1.39 s per fruit. Jin G. et al. [19] studied the process of picking from containers by configuring a SCARA robot with a 3D camera; considering that an object in space has 6 DOF, 1 DOF can be suppressed by imposing a constraint on its geometry, i.e., that the object is axisymmetric. To equalize the number of DOF in space, Jin G. et al. added a new axis by designing a gripper capable of adapting the inclination of a textile filament spool to the horizontal plane; with this system, they managed to collect the spools in 3.2 s with a success rate of 93.1%.

2.2. An Overview of Vision Systems and Path Planning in Bin-Picking Applications

With the evolution of technologies, industry is increasingly adopting vision systems to carry out many tasks, from inspection [18] to measurement and 3D scanning [20,21], to obtain a two- or three-dimensional representation that, once processed, returns a measurement, a value, or spatial coordinates. The effectiveness of computer vision is based on the correspondence between the image obtained by the computational process and the real model. If the matching error exceeds the tolerance value, the comparison fails because the model retrieved by the vision software does not match the CAD reference model; the more complex the component, the more difficult the calculation process, because the software has greater difficulty matching the characteristics of the obtained model with those of the reference model [2].
The solution to this problem is to intervene upstream of the vision system, arranging the objects in a configuration that facilitates and speeds up the identification and comparison phases, thus increasing the system's efficiency. To this end, in this work, an intelligent multi-component feeder has been designed that, by communicating with the other elements of the robotic cell, improves the work of the vision system in order to increase productivity. Often, the feeder consists of a conveyor belt flanked by a robotic arm that provides flexibility in the plant's item flows by transferring a product from one conveyor belt to another. Compared to stacking machines that push objects off a conveyor belt, robotic arms also offer greater flexibility in moving objects. If the objects being transported are heavy or dangerous, implementing the robotic arm minimizes the risk of injury to human operators. Furthermore, a conveyor belt robotic arm has demonstrated its ability to perform simple and repetitive tasks [16]. A flexible feeder with a vision system and a robotic manipulator represents a coordinated, collaborative technology that significantly increases the efficiency of the bin-picking process. The vision system solves the identification and comparison tasks using different approaches; ideally, a general approach that can cope with objects of different sizes, shapes, forms, etc., is preferred [15]. However, the complexity of these general approaches is the main cause of failures that lead to production slowdowns and add costs for the company.
Another problem inherent to the process is pose estimation, i.e., finding an adequate position of the robot’s end effector, avoiding possible collisions with surrounding objects. The problem of determining the best way to grasp the object is complex, even in the most general case, but can be simplified by placing constraints on the structure of the components to be grasped. This approach can be used in various real-world industrial processes [3]. Finally, the problem of determining the path that the manipulator robot must take to pick up the detected object (the path-planning or trajectory-planning problem) is generally addressed in manipulator robot programming, for which there are several stable solutions and efficient algorithms in the literature [15].
To solve the path-planning problem, it is necessary to have a three-dimensional reconstruction of the entire robot cell so that the robot does not collide with other surrounding objects. This constraint can be relaxed to solve specific problems, as could be the case with packaging [22]. Often, a simulation is run via a digital twin, a virtual model of the layout with which simulations are performed to optimize the system design [23].
Cycle time is another critical factor in automatic picking problems for a specific application. The main problem is that the cycle time used to perform forecasting and planning is unpredictable; therefore, virtual software simulations are often performed to obtain cycle-time estimates.

2.3. Limitations of Existing Systems, Different Operating Modes, and Strengths of Proposed Solutions

Many research studies in the literature propose devices that manage the scanning and picking steps separately, thus increasing the cycle time. This application aims to design an electromechanical device that allows for constant and fast object collection; in other words, we want to eliminate all dead times, i.e., non-productive times in which the vision system scans the surface while the robot is stationary (Figure 5a). The flexible feeder is characterized by two independent scanning surfaces, ensuring that the scanning operation takes place in hidden time, as the robot does not stop during scanning (Figure 5b). In this way, the vision system and robotic arm work simultaneously, achieving the goal of reducing the cycle time. The feeder's goal is also to ensure that the vision system detects pickable components; it has the function of transporting components onto the scanning surface and distributing them roughly evenly over it.

3. Materials and Methods

In the introduction, the functionality and operating mode of the entire system, including the multi-object feeder and the robotic cell, were analyzed. This section describes the structure of the multi-component feeder and its operation, represented by a related flowchart; then, the gripper dedicated to handling the fins, the vision system, the recognition software used, and the designed shaking device are described.

3.1. Developed Architecture

The design of the flexible feeder starts with the suitable choice of conveyor belts (in our system provided by Alusic Co, Mondovì, Italy), characterized by an aluminum profile frame and a motorized transmission located at the belt center (Figure 6). As a first step, the conveyor support frame was modified by lengthening the center profile, making the structure more stable. Next, lateral barriers were created to prevent the fins from falling outside the feeder during operation. A central barrier was also made so that the surface of one belt is independent of the other. The barriers have an inclined inner side to avoid collision with the gripper. Wedges made by 3D rapid prototyping were then installed at the feeder head to cluster the fins in the center during belt movements; at the opposite end of the peripheral and central barriers, two further pairs of 3D-printed wedges are placed to group the fins in the belt center during the unloading phase.
Shaking of the belt is achieved by rotating two rollers inside the feeder, driven by a three-phase asynchronous motor, model 6SM-71-B14 (manufactured by Elvem Co, Cartigliano, Italy), through a toothed belt drive. Each surface has its own shaking device, managed by the Siemens S7-1500 PLC with CPU 1513-1PN (manufactured by Siemens Co, Munich, Germany). The inductive sensor, model BES003P (manufactured by Balluff Co, Neuhausen, Germany), provides the number of performed rotations and the horizontal positioning of the rollers when the cam intercepts the sensor during rotation. A pair of step feeders, operating independently or simultaneously, gradually advances the fins towards the scanning surface. Two pairs of through-beam photocells, model BOS0212 (manufactured by Balluff Co, Neuhausen, Germany), are positioned on the step feeder; the input photocells communicate to the PLC the presence of the fins on the step feeder, while the output ones communicate the fall of the fins onto the belt.
Two pneumatic oscillators (model OR80, manufactured by Oli Co, Medolla, Italy), rigidly mounted on the support frame, improve the advancement of the fins on the step feeder and prevent the fins from jamming with each other (Figure 6). The fins have a very particular shape that can change significantly from one model to another (Figure 7). The step feeder comprises a series of fixed and mobile steps arranged alternately, which gradually advance the fins thanks to an alternating motion. The structure of this device is made up of two mirrored press-bent sheets joined by the fixed steps and steel rods. The mobile steps slide in slots machined in the sides and are operated by a pneumatic cylinder. Upstream and downstream of the steps, two through-beam photocells detect the fall of objects onto the step feeder (upstream) and onto the belt (downstream). The device is fixed to an aluminum profile frame, hinged to the frame of the multi-component feeder via two steel brackets, and is raised and lowered via a pair of pneumatic cylinders to allow for automatic loading of objects. The step feeder has two feeding lines so that the objects are supplied to each work surface independently, as shown in Figure 6. This aspect is fundamental as it allows a single work surface to be fed whenever required by the vision system.
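As a rough illustration of how the step-feeder photocells could drive this feeding logic, the following sketch assumes a simplified PLC interface; all signal and function names (read, load_from_box, pulse_mobile_steps) are hypothetical, not taken from the actual PLC program.

```python
# Hypothetical PLC-side feeding logic for one line of the step feeder,
# based on the photocell roles described in the text.
def feed_surface(plc, surface: str) -> None:
    """Advance one feeding line until a fin is detected falling onto the belt."""
    while not plc.read(f"output_photocell_{surface}"):    # no fin has dropped onto the belt yet
        if not plc.read(f"input_photocell_{surface}"):    # no fins left on the step feeder
            plc.load_from_box(surface)                     # lower/raise the step feeder to reload
        plc.pulse_mobile_steps(surface)                    # one stroke of the pneumatic cylinder
```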
Using such a device to pick components automatically is important to optimize the handling cycle time. Providing the system with evenly distributed objects greatly facilitates the vision system’s work during scanning. Some automated picking applications even need an auxiliary surface to optimize object recognition, significantly increasing the handling time caused by a complex vision processing system [3].

3.2. Profinet Communication in the Robotic Cell and Pneumatic Links

Data communication is based on the Profinet industrial protocol; in the robotic cell, an eight-channel industrial Ethernet switch, model FL 10008N (manufactured by Phoenix Contact, Blomberg, Germany), connects all the elements (Figure 8). On channel 1, the SINAMICS G120 0.55 kW motor drives (manufactured by Siemens Co., Munich, Germany), highlighted by the light-blue rectangle and connected in series, operate the motors and are driven by the PLC connected in cascade. The industrial PC communicates via a Profinet–Ethernet converter with the SCARA robot controller, connected to the switch's channel 2; the valve manifold, model EB 80 (manufactured by Metalwork, Concesio, Italy), operating the cylinders, is connected to channel 3. The I/O-Link master BNI005H (manufactured by Balluff Co, Neuhausen, Germany) is connected to channel 4. As detailed below, all the sensors on the feeder are connected to three I/O-Link hubs BNI007Z (manufactured by Balluff Co, Neuhausen, Germany), which are connected to the master to reach the PLC.
Hub I/O Link (I–II): to this hub, the BOS0212 through-beam photocells and four BMF00C4 magnetic sensors (manufactured by Balluff Co, Neuhausen, Germany) are connected as limit switches of cylinders that operate the steps.
Hub I/O Link (III): the BES003P inductive sensors for phasing the shaker axis, four limit-switch BMF00C4 magnetic sensors for raising/lowering the step feeder, and two BMF00JJ magnetic sensors for opening/closing the gripper are connected to this hub.
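For reference, the sensor-to-hub assignment described above can be summarized as a plain data structure; the part numbers come from the text, while the dictionary layout and key names are an illustrative choice, not part of the actual configuration.

```python
# Illustrative summary of the I/O-Link wiring described above; key names are assumptions.
IO_LINK_HUBS = {
    "hub_I_II": {
        "through_beam_photocells": ["BOS0212"] * 4,      # step-feeder input/output detection
        "step_cylinder_limit_switches": ["BMF00C4"] * 4, # mobile-step cylinders
    },
    "hub_III": {
        "shaker_phase_sensors": ["BES003P"] * 2,         # roller phasing, one per surface
        "lift_limit_switches": ["BMF00C4"] * 4,          # step-feeder raise/lower
        "gripper_switches": ["BMF00JJ"] * 2,             # gripper open/close feedback
    },
}
```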

3.3. Operating Mode of the Designed Multi-Object Feeder

Figure 9 shows the flowchart of the system operating mode, with the logical connections between the different blocks. The cycle begins with gradually emptying the box containing the fins onto the step feeder and moving the fins onto the belt. As the fins fall onto the belt, it advances until they reach the scanning surface. Scanning then begins to identify and recognize the fins and, as they are located, the vision software determines whether they are in the right configuration to be picked up. If the localized fins pass the verification by the vision software, the fin's position and the robot's consequent trajectory are calculated. Once all the available fins have been collected and deposited at the storage station, the cycle starts with a new scan. If the comparison with the reference model by the vision system fails, the fins cannot be picked, and communication between the vision system and the PLC is initiated, which makes the feeder carry out the previously defined operating sequences: activation of the shaker, or rapid advancement of the belt to project the fins forward and distribute them uniformly, or simultaneous shaker activation and belt movement. Subsequently, the vision system scans the surface again; if it is not able to locate any fins because none are on the belt, the PLC operates the step feeder and advances the belt as the box empties. The cycle repeats until the target number of fins to be palletized is reached. A minimal control-loop sketch of this cycle is given below.
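The following sketch condenses the flowchart of Figure 9, assuming simplified vision, PLC, and robot interfaces; all function names are hypothetical and are not taken from the actual control software.

```python
# Minimal sketch of the operating cycle of Figure 9; interfaces are assumed.
def palletize(target_count: int, vision, plc, robot) -> int:
    deposited = 0
    while deposited < target_count:
        fins = vision.scan_surface()                    # identify and locate fins
        pickable = [f for f in fins if f.pickable]      # fins matching the reference model
        if pickable:
            for fin in pickable:
                robot.pick_and_place(fin.pose)          # trajectory computed from the fin pose
                deposited += 1
        elif fins:
            plc.run_recipe("shake", "advance_belt")     # fins present but not grippable: rearrange
        else:
            plc.load_step_feeder()                      # surface empty: feed new fins from the box
            plc.advance_belt()
    return deposited
```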

3.4. Grasping Device

Object grasping is an essential aspect of the bin-picking activity since it must be designed focusing on some critical elements, such as:
  • Device weight: the robot has a specific payload that must not be exceeded; otherwise, its movements will be impaired. The SCARA robot used to develop this application has a maximum payload at the wrist of 6 kg. The designed gripping device has a maximum weight of 1.24 kg and, considering an average fin weight of about 0.050 kg, the payload is never reached.
  • Stability with which the gripper grasps the piece without damaging it. In this case, since the fin is made of aluminum, it is necessary not to deform it.
  • Design of the fingers, since they must mate stably with the workpiece, be structurally strong, and not interfere with adjacent elements (such as the perimeter barriers of the feeder during picking).
Precisely because of the previously mentioned aspects, the design of a gripping device may have a general operating principle, but there is no such thing as a universal gripper for all objects with different shapes and construction characteristics (Figure 10).
The gripping device comprises a pneumatic gripper, model KGG 80–30 (manufactured by Schunk Co, Lauffen am Neckar, Germany), two magnetic field sensors (BMF00JJ, manufactured by Balluff Co, Neuhausen, Germany), and specially designed fingers for gripping the fins. The gripper has three operating states: open, closed, and closed with a fin. The sensors on the gripper discriminate between the open and closed states; when the pneumatic gripper's magnetic cursor intercepts neither sensor, the PLC recognizes that the gripper has gripped a workpiece. The state decoding is sketched below.
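A minimal sketch of this three-state decoding follows, assuming the two magnetic sensors are read as booleans; the interface is illustrative, not the actual PLC program.

```python
# Gripper state from the two BMF00JJ magnetic sensors (boolean interface assumed).
def gripper_state(sensor_open: bool, sensor_closed: bool) -> str:
    if sensor_open:
        return "open"                 # jaws fully open
    if sensor_closed:
        return "closed_empty"         # jaws fully closed, nothing between the fingers
    return "closed_with_fin"          # cursor stops between the sensors: a fin is gripped
```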
Grippers can also be used to identify an object, approach it even in a place that is difficult to grasp, pick it up without damage, and place it in a storage station. The worst-case scenario is when the object is placed near a wall or in a corner; in such cases, great grip flexibility is required to avoid collisions. Ivanov V. et al. [24] created a gripper that combines a suction-cup gripping system with mechanical fingers, capable of working in a single gripping mode or with both. The gripper is integrated with a vision system which, thanks to the processing software and the 3D model of the object stored in a database, allows the optimal gripping mode to be selected even at the most critical points.
Considering the available technologies (suction cups, pneumatic or electromechanical grippers, etc.), the gripping strategy must be chosen based on the object to be gripped. Sheets and objects with curved surfaces are best handled with suction cups. Heavy objects with complex geometry require mechanical gripping devices. Sometimes it is preferable to move components with mechanical grippers rather than a vacuum system because of the time needed by the suction cups to carry out the removal step. In this work, a vacuum system was discarded for two reasons: first, the shape of the fins can vary greatly from one model to another, and not all models have a comfortable grip point for a suction cup; the second reason concerns the picking cycle time, which would consist of the following steps:
  • Engagement of the workpiece once the pickup surface is found;
  • Descent of the suction cups at a reduced speed to avoid damage;
  • Activation of the ejector that generates the vacuum in the suction cup;
  • Waiting for the vacuum sensor to check that the cup is gripped;
  • Lifting the object at a reduced speed;
  • Transporting the fin to the storage station.

3.5. 2D Vision System and Software Hinge Finder

The 2D vision system adopted in this work is based on a Basler Ace 2 R camera positioned above each scanning surface, installed on an aluminum profile frame via a steel bracket. Three LED lamps were placed on the frame to prevent the industrial warehouse's lighting from influencing the cameras' efficiency, obtaining uniform surface illumination and eliminating shadow areas generated by external light (Figure 11). The software used to process the images obtained from the cameras is called Hinge Finder (produced by Euclid Labs s.r.l., Nervesa della Battaglia, Italy).
Once the scan surface image has been obtained, the software identifies the recognized fins and classifies them as pickable or non-pickable (Figure 12). Fins that can be picked up are highlighted with dashed lines in yellow or green, depending on which side they lie on the belt, so that the fins are always placed in the same direction on the storage station. The fins highlighted with red dotted lines are classified as non-pickable due to possible collisions or software identification limits. The trapezoid shape, highlighted in black, is set according to the area the robotic arm can reach. The order in which the parts are picked is given by a score that the software assigns to each pickable fin based on the computational analysis performed; non-pickable fins, highlighted by red outlines, are automatically assigned a score of 0.00. This information on orientation, position on the surface, picking sequence, and support flank is sent directly to the robot. The number of pickable and non-pickable parts for both the right- and left-hand surfaces is displayed on the right-hand side of the software screen. The cycle time per pickup and the average handling cycle time are also shown (Figure 13). The software also communicates with the PLC, indicating the presence of fins and whether they are pickable or non-pickable. When no objects are present on the scanning surface or none are pickable, the software communicates to the PLC the recipe to be run on the flexible feeder.
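As a hedged illustration of how such a pick list could be assembled from the matching scores, the sketch below uses the 97.5% acceptance threshold reported in Section 4; the data layout and the reachability test are assumptions, not the actual Hinge Finder logic.

```python
# Illustrative construction of a pick list from vision detections; not the
# actual Hinge Finder implementation.
SCORE_THRESHOLD = 97.5  # minimum CAD-matching score (%) for a fin to be pickable

def build_pick_list(detections, reachable_area):
    pick_list = []
    for fin in detections:
        if fin.score >= SCORE_THRESHOLD and reachable_area.contains(fin.x, fin.y):
            pick_list.append(fin)
        else:
            fin.score = 0.0          # non-pickable fins are reported with score 0.00
    # Fins are picked in decreasing order of matching score.
    return sorted(pick_list, key=lambda f: f.score, reverse=True)
```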
To validate the effectiveness of the designed robotic cell, we point out that the cameras acquire the scanning surface's images during the release phase of the picked fin at the deposit station of the robotic cell; in this way, the robotic arm does not interfere with object identification by the 2D vision system. In addition, the Epson SCARA robot has a parking position outside the cameras' scanning area, where it waits for the coordinates of the pickable fins from the 2D vision system.

3.6. Design of the Shaking System

An important aspect to consider is the impact of the roller on the belt and how the energy is transmitted. As it is impossible to represent the system accurately, it is assumed that during the rotation of the roller, all the energy is transferred to the fin at impact. A significant simplification is made to achieve this: the fin is considered a sphere of equal volume positioned precisely at the point of tangency between the roller and the belt. In reality, it is not a point of contact but a segment equal to the entire length of the roller. Consider the following system: the roller rotating on a circular path of radius R at a speed ω with respect to the center O, which hits the ideal fin, as shown in Figure 14. Let us also suppose that, in the instant after impact, the fin has a velocity v which is aligned with the line joining the centers of the circumferences. Since the roller speed is inclined from the horizontal by a certain angle α during the impact, the component of energy transmitted to the object is related only to the vertical component of velocity.

3.6.1. Kinematic Analysis of the Fins’ Shaking Motor

For sizing the induction motor, it is necessary to analyze the loads acting on the system (Figure 15). However, it is difficult to quantify the resistant torque, as there is no single, easily identifiable condition. To this end, the transient response of the vibration system to the PLC control signal was observed, i.e., the time taken for the system to reach its steady-state speed. The choice of motor is based on two parameters: the volume available for installation and the torque delivered, so as to reach the operating speed quickly. Table 1 shows the characteristics of the motor used.
From the dynamic equilibrium, Equation (1) is obtained.
$$C_M - C_R = \frac{d(J\omega)}{dt} = J\,\frac{d\omega}{dt} + \omega\,\frac{dJ}{dt} \tag{1}$$
where $C_M$ is the motor torque, $C_R$ is the load torque, $\omega$ is the shaft speed, and $J$ is the moment of inertia of the system. Since the moment of inertia of the system does not vary significantly, its time derivative is zero (2):
$$J = \text{constant} \;\Rightarrow\; \frac{dJ}{dt} = 0 \tag{2}$$
It is also assumed that the only resistant torque is the viscous resistant torque, proportional to the angular velocity, i.e., $C_R = k\omega$. For a qualitative analysis, the mechanical equilibrium equation can be expressed as a transfer function by applying the Laplace transform, assuming linearity and time invariance of the system. In the following, the variables are transformed from the time domain to the Laplace domain by Equations (3)–(5). Note that $C_M(t)$ is, in the time domain, a step function with amplitude $C_M$.
$$C_M(t) \;\xrightarrow{\;\mathcal{L}\;}\; C_M(s) = \frac{C_M}{s} \tag{3}$$
$$C_R(t) \;\xrightarrow{\;\mathcal{L}\;}\; C_R(s) = k\,\omega(s) \tag{4}$$
$$J\,\frac{d\omega(t)}{dt} \;\xrightarrow{\;\mathcal{L}\;}\; J\left[s\,\omega(s) - \omega(t=0)\right] \tag{5}$$
where $s$ is the Laplace operator. Since $\omega(0) = 0$, substituting Equations (3)–(5) into (1) gives Equation (6):
$$\frac{C_M}{s} - k\,\omega(s) = J\,s\,\omega(s) \tag{6}$$
The transfer function $G_m(s)$ is given by the ratio of the output quantity (the angular velocity $\omega(s)$) to the input one (the driving torque $C_M(s)$), as expressed in (7) (Figure 16).
$$G_m(s) = \frac{\omega(s)}{C_M(s)} = \frac{1}{J\left(s + \frac{k}{J}\right)} \tag{7}$$
Expressing $C_M(s)$ as reported in Equation (3) gives Equation (8). Applying the method of residues gives Equation (9), where $A_1$ and $A_2$ are the residue terms:
$$\omega(s) = \frac{C_M}{J\,s\left(s + \frac{k}{J}\right)} \tag{8}$$
$$\omega(s) = \frac{A_1}{s} + \frac{A_2}{J\left(s + \frac{k}{J}\right)} \tag{9}$$
Once the residue terms $A_1$ and $A_2$ are calculated, Equation (10) is obtained:
$$\omega(s) = \frac{C_M}{k}\,\frac{1}{s} - \frac{C_M}{k}\,\frac{1}{s + \frac{k}{J}} \tag{10}$$
Applying the inverse Laplace transform, we obtain Equation (11):
$$\omega(t) = \frac{C_M}{k}\left(1 - e^{-\frac{k}{J}\,t}\right) \tag{11}$$
Note that the term $k$ represents the proportionality factor between the torque provided by the motor and the angular speed, namely $k = C_M/\omega_R$; substituting, Equation (12) is obtained:
$$\omega(t) = \omega_R\left(1 - e^{-\frac{k}{J}\,t}\right) \tag{12}$$
with $k = 0.06067\ \mathrm{N\,m\,s}$ and $J = 1308.17\ \mathrm{kg\,mm^2}$. The diagram in Figure 17, with $\omega_R$ equal to 600 rpm, shows the speed trend as a function of time, used to analyze the rise time of the function and demonstrate the timeliness of the device's intervention. Having defined the time constant as $J/k$, it is observed that the rise time of the function, i.e., the time required for the system to reach 90% of the steady-state speed, is about 50 ms, as highlighted in the diagram, while the steady-state speed is reached in approximately 140 ms.
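A short numerical check of Equation (12) with the stated parameters reproduces the quoted rise time; this is an illustrative sketch, not the authors' code, and the 600 rpm steady-state speed is taken from the text.

```python
# Step response of Equation (12) with k = 0.06067 N·m·s and J = 1308.17 kg·mm².
import numpy as np

k = 0.06067                      # N·m·s
J = 1308.17e-6                   # kg·m² (1308.17 kg·mm²)
omega_R = 600 * 2 * np.pi / 60   # steady-state speed, 600 rpm in rad/s

tau = J / k                      # time constant J/k ≈ 21.6 ms
t_rise = tau * np.log(10)        # time to reach 90% of omega_R ≈ 50 ms

t = np.linspace(0.0, 0.2, 1000)
omega = omega_R * (1.0 - np.exp(-t / tau))   # speed trend plotted in Figure 17

print(f"time constant: {tau*1e3:.1f} ms, 90% rise time: {t_rise*1e3:.1f} ms")
```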

3.6.2. Energy Analysis of Roller Regulation

The shaking device is equipped with three levels of shaking; the flanges that couple the rollers have three pairs of holes arranged on three concentric circumferences to obtain three levels of impact with the belt: high, medium, and low. Figure 18 shows the CAD representation of the assembly (Figure 18a), a photo of the realized and assembled system (Figure 18b), and a detail of the flanges (Figure 18c).
As already described, to explain the phenomenon of belt shaking, we consider the fin as a sphere with the same volume and mass; it is also assumed that all the kinetic energy of the roller is transferred to the fin during impact, so that energy is conserved, as expressed in Equations (13) and (14), where $E_{K,\mathrm{Roller}}$ is the roller kinetic energy, $E_{K,\mathrm{Fin}}$ is the fin kinetic energy, $m_R$ is the roller mass, $m_F$ is the fin mass, and $v_{y,0}$ is the vertical component of the roller's peripheral velocity:
$$E_{K,\mathrm{Roller}} = E_{K,\mathrm{Fin}} \tag{13}$$
$$\frac{1}{2}\,m_R\,v_{y,0}^2 = \frac{1}{2}\,m_F\,v_F^2 \tag{14}$$
Since the roller velocity is inclined from the horizontal by a certain angle $\alpha$, the energy component transferred to the fin is related only to the vertical component of the velocity. Then, knowing the expression of the peripheral velocity $v_0$ and that $n$ is the number of revolutions per minute, it can be stated that
$$v_0 = \frac{2\pi n}{60}\,R \quad \left[\mathrm{m/s}\right] \tag{15}$$
$$v_{y,0} = v_0 \sin\alpha \quad \left[\mathrm{m/s}\right] \tag{16}$$
From Equations (14) and (16), it is possible to calculate the vertical components of the peripheral velocity at the three different positions and to obtain the fin velocity immediately after the impact (Table 2), as expressed in Equation (17):
$$v_c = \sqrt{\frac{m_r}{m_c}}\;v_{y,0} \tag{17}$$
We also consider that the mass of the fin $m_c$ is equal to $0.053\ \mathrm{kg}$, while the mass of the roller $m_r$ is equal to $0.300\ \mathrm{kg}$. Table 3 shows the velocity of the fin after impact for the three positions assumed by the rollers.
Applying the principle of conservation of mechanical energy between kinetic and potential energy (Equations (18) and (19)), the maximum height reached by the fins with an initial speed other than zero is calculated from Equation (20), and it is shown in Table 4 for the three different positions:
$$E_p = E_k \tag{18}$$
$$m\,g\,h = \frac{1}{2}\,m\,v_c^2 \tag{19}$$
$$h = \frac{v_c^2}{2g} \tag{20}$$
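A compact numerical sketch of Equations (15)–(20) follows; the roller and fin masses come from the text, while the roller path radius R and the inclination angle used in the example call are placeholders, not the values of Tables 2–4. Equation (17) is applied in the energy-conserving form reconstructed above.

```python
# Worked sketch of the fin jump height, Equations (15)-(20); R and alpha in
# the example call are assumed placeholders, not the actual roller settings.
import math

m_r = 0.300   # kg, roller mass
m_c = 0.053   # kg, fin (sphere-equivalent) mass
g = 9.81      # m/s², gravitational acceleration

def fin_jump_height(n_rpm: float, R: float, alpha_deg: float) -> float:
    v0 = 2.0 * math.pi * n_rpm / 60.0 * R                 # peripheral roller speed, Eq. (15)
    v_y0 = v0 * math.sin(math.radians(alpha_deg))         # vertical component, Eq. (16)
    v_c = math.sqrt(m_r / m_c) * v_y0                     # fin speed after impact, Eq. (17)
    return v_c ** 2 / (2.0 * g)                           # maximum height, Eq. (20)

# Example with assumed values: 600 rpm, R = 30 mm, alpha = 30 degrees.
print(f"h = {fin_jump_height(600, 0.030, 30) * 1e3:.0f} mm")
```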
The adopted simplifications are very strong; in practice, the fins will never absorb all the energy transmitted by the rollers after impact with the belt, for the following reasons:
  • Positioning: Only in rare cases will the position of the fin correspond exactly to the simplifications of Figure 14.
  • Belt: During percussion, the belt absorbs part of the impact, and consequently, the fin receives less thrust.
  • Geometry: Due to the geometry, the energy transferred to the fin is often not uniform at all points of contact with the belt.

4. Results and Discussion

In order to carry out the process of randomly picking aluminum hinge fins, an automatic system has been implemented in which the various blocks work together in a coordinated and rapid manner to handle the objects. As described in the previous sections, the robotic cell consists of a 2D vision system, a multi-component feeder serving the vision system, a robotic manipulator for object handling, and a fin storage station (Figure 19a,b). A 2D vision system was preferred for object detection because it can perform computational analysis faster than more complex systems. A SCARA robot was used to collect and move the fins, as its 4 DOF provide excellent speed, flexibility, and repeatability. The gripper was also designed with special attention to the fingers, which grip the fin effectively without damaging it, while also minimizing the gripper footprint during picking.
To analyze the performance of the designed device, the number of pieces collected during 30 min of production was observed. The number of pieces collected is recorded at the deposit station thanks to a barrier photocell (manufactured by Balluff, Neuhausen, Germany), model BOS0214, which detects the time between one deposit and the next each time a fin interrupts the beam between the transmitter and the receiver. A plunger pushes the deposited fin forward to make room for the next one (Figure 20). For the performance test, the behavior of the robotic cell was analyzed with the palletization of five different models of fins, called A, B, C, D, and E, each with corresponding male and female models, taking into account that there are some differences in size and geometry between the two, as shown in Figure 21.
The robotic cell was tested as if installed in an industrial plant, i.e., for an eight-hour shift, five days a week, for four weeks. The results are reported in Table 5, where the number of pieces is expressed in pz (pieces).
From the experimental tests for the palletization of the five fin models, an average handling time of 1.51 s per piece was observed, with a small variation between the different models, from a minimum of 1.44 s (male A and female B) to a maximum of 1.69 s (male C). The small difference in times found between one model and another is linked to their geometric characteristics: models characterized by convex areas (as shown in Figure 7) fit into one another and slow down the advance. The feeder designed in this paper has been optimized to loosen the joints between the components and achieve a constant material flow. Another aspect of the feeder is that it optimizes the vision system's data-processing work. As described in Section 3, the vision system software assigns a score to each recognized component, representing the percentage outcome of the comparison with the stored model. Through the matching process between the acquired image and the reference CAD model, the vision software recognizes the object to be picked up if the score is greater than or equal to 97.5% (average value obtained equal to 99.80%), so that it is considered pickable by the gripper of the robotic arm. By testing the robotic cell, it was possible to verify the multi-component feeder's effectiveness and the step feeder's advantage in homogeneously distributing the fins on the scanning surface, significantly increasing productivity (Figure 19a,b). The robotic manipulator collected all graspable fins identified by the vision system.
At the end of the testing activities of the designed bin-picking system, the operators who tested it provided positive feedback on the introduction of the multi-component feeder, with its ability to arrange the objects so that they can be easily identified and picked, thus significantly increasing the productivity level (PL) of the fins' palletization. For this application, the productivity level was defined as the number of fins recognized as pickable, and therefore correctly picked, in a 30 min interval. Taking into account the cycle time of 1.51 s reported above, in the absence of any error or blockage of the robotic cell (ideal case), the obtained PL value is equal to 1192 pcs. More realistically, taking into account possible short interruptions of the bin-picking system operation, an average utilization percentage of 98.6% was estimated, leading to a PL value of 1175 pcs processed, on average, every 30 min.
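The two PL figures above follow directly from the cycle time and the estimated utilization, as the short check below shows; this is simply a restatement of the arithmetic reported in the text.

```python
# Back-of-the-envelope check of the productivity-level figures quoted above.
cycle_time = 1.51          # s per piece, average from the experimental tests
window = 30 * 60           # s, observation window (30 min)
utilization = 0.986        # estimated average availability of the cell

pl_ideal = window / cycle_time      # ≈ 1192 pcs in the ideal case
pl_real = pl_ideal * utilization    # ≈ 1175 pcs accounting for short interruptions
print(round(pl_ideal), round(pl_real))
```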
The performance of the designed system was compared with that of other competing systems, using as comparison parameters the cycle time (the time required for the vision system to recognize the object and for the robotic manipulator to position it on the storage station) and the accuracy of the vision system used, as shown in Table 6. In most cases, these systems operate in an industrial environment where the robot manipulator must move large objects or travel long distances. In [18], given the need to transport small objects to release points very close to each other, the use of a small robot gave an advantage in handling, as it can quickly execute limited trajectories, an aspect that is absent in the system designed and developed in this manuscript. Comparing the application proposed in this research work with the one proposed in [18], some differences immediately emerge:
  • In [18], a small 6 DOF robot manufactured through 3D prototyping is used; in this research work, instead, a larger SCARA robot was used, able to travel greater distances and carry heavier objects but with slower movements.
  • The distance that, on average, the SCARA robot has to travel to deposit the objects in the storage station and return is greater, but the obtained cycle time is comparable with that in [18]. Since the working surface is larger and considering the bigger size of the robot, the proposed system can be adapted to different scenarios and to picking different objects.
  • During the bin-picking process, the objects are not positioned in an orderly manner at fixed points on the work surface as in [18]; instead, the designed bin-picking system is more flexible as the fins are arranged randomly on the scanning surface, determining a longer time for the vision system to identify the objects and calculate the trajectory of the robotic arm.
The added value of this work lies precisely in the elements communicating with each other; a 2D vision system is economical and has a much faster processing time, and the SCARA robot, with only 4 DOF, is among the fastest manipulators in handling and is less expensive than an anthropomorphic robot with the same characteristics. Using a multi-component feeder facilitates the work of the vision system and the robot, significantly reducing the handling cycle time.

5. Conclusions

Nowadays, every industrial plant uses automatic systems to move objects, eliminating tiring and repetitive work for the operator, even more so when the components are arranged randomly in a container; this is the case of bin picking, a process in which a vision system and a robotic manipulator work together in a coordinated manner. This manuscript proposes the design of a multi-component feeder to reduce the handling cycle time for palletizing aluminum hinge fins, greatly facilitating the vision task by making the components easier to identify and pick up by the robot and by rearranging objects through shaking of the scanning surface. For the feeder characterization, a robotic cell was designed consisting of a 2D vision system, characterized by a lower calculation time than 3D systems; a 4 DOF SCARA robot with high speed, flexibility, and repeatability; and a storage station where the robot places the components. The innovative aspect of the multi-component feeder is that it has two independent work surfaces, so that while the vision system scans one surface to identify objects, the robotic manipulator picks up the components previously identified on the other. As a result, the scanning time does not have to be counted in the overall handling cycle time, obtaining a continuous flow of objects without stops. Thanks to its structure, the step feeder, the real added value of the multi-component feeder, gradually advances the components and controls the number of objects on the feeder belt. The tests carried out to verify the performance of the multi-component feeder produced significant results; the palletization of five models, each with the corresponding male and female variants, was carried out with an average handling time of 1.51 s per piece (minimum 1.44 s, maximum 1.69 s). Using a multi-component feeder in industrial plants offers many applications, as automatic picking via robotic guidance is gaining momentum in industrial companies. As the entire bin-picking system consists of a multi-component feeder, a 2D vision system, and an Epson SCARA robot, we have estimated a total cost of approximately EUR 40,000 for the supply of materials, to which the company's costs for the design of the various hardware and software sections need to be added.
The designed feeder offers many development opportunities to further improve and optimize its functionality. A valid alternative for the future development of the application is to have not just a single contact segment but a whole contact surface; in this way, the possibility of hitting more components is greater. Using electromechanical actuators would open up more innovative and configurable, though more complex, solutions for shaking the scanning surface. An example is to arrange these devices in a matrix, thus dividing the shaking and scanning surfaces into sectors; in this way, a sector can be activated according to the objects that were classified as being in an unpickable configuration during the scanning phase.

Author Contributions

Conceptualization, N.I.G., G.R. and R.R.; investigation, N.I.G., G.R., R.R. and R.D.F.; resources, G.R., R.R. and R.D.F.; data curation, G.R., R.R. and R.D.F.; writing—original draft preparation, G.R., P.V. and R.D.F.; writing—review and editing, G.R., P.V. and R.D.F.; visualization, N.I.G., G.R., R.R. and R.D.F.; supervision, N.I.G. and P.V.; funding acquisition, N.I.G. and P.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available upon request.

Conflicts of Interest

Author Roberta Rizzi was employed by the company Kinéma S.r.l. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Park, J.; Jun, M.B.G.; Yun, H. Development of robotic bin picking platform with cluttered objects using human guidance and convolutional neural network (CNN). J. Manuf. Syst. 2022, 66, 539–549. [Google Scholar] [CrossRef]
  2. Xu, J.; Pu, S.; Zeng, G.; Zha, H. 3D pose estimation for bin-picking task using convex hull. In Proceedings of the 2012 International Conference on Mechatronics and Automation, Chengdu, China, 5–8 August 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1381–1385. [Google Scholar] [CrossRef]
  3. Pochyly, A.; Kubela, T.; Singule, V.; Cihak, P. 3D Vision Systems for Industrial Bin-Picking applications. In Proceedings of the 15th International Conference Mechatronika, Prague, Czech Republic, 5–7 December 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–6. [Google Scholar]
  4. Kim, K.; Kim, J.; Kang, S.; Kim, J.; Lee, J. Vision-Based Bin Picking System for Industrial Robotics Applications. In Proceedings of the 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Daejeon, Republic of Korea, 26–28 November 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 515–516. [Google Scholar] [CrossRef]
  5. Koo, S.; Ficht, G.; Garcìa, G.M.; Pavlichenko, D.; Raak, M.; Behnke, S. Robolink feeder: Reconfigurable bin-picking and feeding with a lightweight cable-driven manipulator. In Proceedings of the 2017 13th Conference on Automation Science and Engineering (CASE), Xi’an, China, 20–23 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 41–48. [Google Scholar] [CrossRef]
  6. CS Line: Conveyor Belts and Pallet Conveyors—Alusic. Available online: https://handling-automation.alusic.com/en (accessed on 15 January 2024).
  7. Basler Ace 2 R a2A1920-51gmBAS Camera. Available online: https://www.baslerweb.com/en/shop/a2a1920-51gmbas/ (accessed on 4 April 2024).
  8. Epson G6 SCARA Robots—650 mm. Available online: https://epson.com/For-Work/Robots/SCARA/Epson-G6-SCARA-Robots---650mm/p/RG6-653ST13 (accessed on 15 January 2024).
  9. SIMATIC S7-1500 PLC, Siemens. Available online: https://www.siemens.com/global/en/products/automation/systems/industrial/plc/simatic-s7-1500.html (accessed on 4 April 2024).
  10. Tompkins, J.A.; White, J.A.; Bozer, Y.A.; Tanchoco, J.M.A. Facilities Planning, 4th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2010; ISBN 9780470444047. [Google Scholar]
  11. Fager, P.; Sgarbossa, F.; Calzavara, M. Cost modeling of onboard cobot-supported item sorting in a picking system. Int. J. Prod. Res. 2020, 59, 3269–3284. [Google Scholar] [CrossRef]
  12. Cohen, Y.; Shoval, S.; Faccio, M. Strategic View on Cobot Deployment in Assembly 4.0 Systems. IFAC-Pap. 2019, 52, 1519–1524. [Google Scholar] [CrossRef]
  13. Eyefeeder Models. Available online: https://www.eyefeeder.com/eyefeeder-models/ (accessed on 17 October 2022).
  14. Flexibowl. Available online: https://www.flexibowl.com/flexibowl (accessed on 17 October 2022).
  15. Pochyly, A.; Kubela, T.; Mozak, M.; Cihak, P. Robotic Vision for Bin-Picking Applications of Various Objects. In Proceedings of the 41st International Symposium on Robotics and 6th German Conference on Robotics 2010 (ROBOTIK), Munich, Germany, 7–9 June 2010; IEEE: Piscataway, NJ, USA, 2010. [Google Scholar]
  16. Nielsen, K.G.; Sung, I.; El Yafrani, M.; Kılıç, D.K.; Nielsen, P. A Scheduling Solution for Robotic Arm-Based Batching Systems with Multiple Conveyor Belts. Algorithms 2023, 16, 172. [Google Scholar] [CrossRef]
  17. Wang, J.; Xu, H.; Chen, Z. Vision-Based Conveyor Belt Workpiece Grabbing Using the SCARA Robotic Arm. In Proceedings of the International Conference on Machine Learning Control and Robotics (MLCR), Suzhou, China, 29–31 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 172–176. [Google Scholar] [CrossRef]
  18. Lu, Z.; Zhao, M.; Luo, J.; Wang, G.; Wang, D. Design of a winter-jujube grading robot based on machine vision. Comput. Electron. Agric. 2021, 186, 106170. [Google Scholar] [CrossRef]
  19. Jin, G.; Yu, X.; Chen, Y.; Li, J. SCARA+ System: Bin Picking System of Revolution-Symmetry Objects. IEEE Trans. Ind. Electron. 2024, 71, 10976–10986. [Google Scholar] [CrossRef]
  20. Carbone, V.; Carocci, M.; Savio, E.; Sansoni, G.; De Chiffre, L. Combination of a Vision System and a Coordinate Measuring Machine for the Reverse Engineering of Freeform Surfaces. Int. J. Adv. Manuf. Technol. 2001, 17, 263–271. [Google Scholar] [CrossRef]
  21. Abouelatta, O.B. 3D Surface Roughness Measurement Using a Light Sectioning Vision System. In Proceedings of the World Congress on Engineering (WCE), London, UK, 30 June–2 July 2010; pp. 1–6. [Google Scholar]
  22. Kong, S.; Kim, K.; Lee, J.; Kim, J. Robotic Vision System for Random Bin Picking with Dual-Arm Robots. In Proceedings of the International Conference on Measurement Instrumentation and Electronics (ICMIE), Melbourne, Australia, 14–16 September 2016. [Google Scholar] [CrossRef]
  23. Tipary, B.; Erdős, G. Generic development methodology for flexible robotic pick-and-place workcells based on digital twin. Robot. Comput. Integr. Manuf. 2021, 71, 102140. [Google Scholar] [CrossRef]
  24. Ivanov, V.; Aleksandrov, A.; Bdiwi, M.; Popov, A.; Rashid, A.; Pershina, Z.; Kolker, F.; Dimitrov, L. Bin Picking Pneumatic-Mechanical Gripper for Industrial Manipulators. In Proceedings of the IV International Conference on High Technology for Sustainable Development (HiTech), Sofia, Bulgaria, 7–8 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–4. [Google Scholar] [CrossRef]
Figure 1. The sequence of the steps of bin picking.
Figure 2. Typical layout of a bin-picking system [3].
Figure 3. Functional scheme with highlighted robotic cell elements employed for the multi-component feeder’s design.
Figure 4. Two different types of objects contained in a box for bin picking: sheet metal (a) and the inner body of rolling bearings (b).
Figure 5. Different operating modes of commercial feeders and the developed one: commercial feeders' operation with object handling in ON/OFF mode (a); designed feeder operation with hidden-time scanning and constant object handling (b).
Figure 6. The main elements characterizing the multi-component feeder: two through-beam photocells, BOS0212, placed in the inlet and the outlet of the step feeder; an inductive sensor BES003P placed in front of the shaking device; an asynchronous motor 6SM 71 B14 used for operating the shaking device; and a pneumatic oscillator OR80 to improve the advancement of the fins on the step feeder.
Figure 7. Two models (A,B) of aluminum hinge fins; the different shapes they have are highlighted in terms of dimensions and profile.
Figure 8. Physical connection diagram of all the elements present in the robotic cell. The green lines represent the Profinet wiring of the components towards the Ethernet switch, i.e., the PLC with motor driver, the SCARA robot controller, the EB80 manifold, and the I/O-Link master. The orange lines represent the link from the I/O-Link hub, to which all sensors are attached, to the master. The industrial PC of the vision system is linked to the robot controller through an Ethernet–Profinet converter.
Figure 9. Operating mode of multi-object feeder.
Figure 10. The gripping device, with specially designed fingers, effectively grips the aluminum fin workpiece.
Figure 11. The support frame for the camera is made of aluminum profiles, with the camera fixed using a steel bracket.
Figure 12. Main screenshots of the Hinge Finder software (v.2023) used by the 2D vision system to locate the fins (a,b): the pickable fins are highlighted by yellow or green outlines depending on the side they lie on, while the non-pickable ones are highlighted by red outlines; the software also shows the outline of the gripper fingers (blue rectangles) and assigns a picking-order score for the robot. Screenshot showing the communication between the vision system and the PLC to implement the feeder functions (c): once all the pickable fins (yellow and green outlines) have been handled, the PLC activates the feeder, and the fins classified as non-pickable in the previous cycle (red outlines) are redistributed on the scanning surface and become pickable, as described in the flowchart of Figure 9.
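To make the handshake described in the Figure 12 caption easier to follow, the sketch below restates it in Python. It is only an illustration of the cycle (scan, pick every pickable fin, then let the PLC run the feeder so the remaining fins are redistributed); the function and field names (scan_surface, pick, activate_feeder, Fin) are hypothetical placeholders and do not belong to the actual Hinge Finder software or the PLC interface.

```python
# Illustrative sketch of the vision/PLC handshake described in Figure 12.
# All names below are hypothetical placeholders; the real system uses the
# Hinge Finder software and a Siemens S7-1500 PLC communicating over Profinet.
from dataclasses import dataclass

@dataclass
class Fin:
    pose: tuple          # (x, y, theta) of the fin on the work surface
    pickable: bool       # True -> yellow/green outline, False -> red outline
    score: float         # picking-order score assigned by the vision software

def handling_cycle(scan_surface, pick, activate_feeder):
    while True:
        fins = scan_surface()                          # 2D vision scan of the surface
        pickable = sorted((f for f in fins if f.pickable),
                          key=lambda f: f.score, reverse=True)
        for fin in pickable:                           # robot handles every pickable fin
            pick(fin.pose)
        # only red-outlined (non-pickable) fins remain: the PLC activates the feeder
        # so they are redistributed and can become pickable in the next cycle
        activate_feeder()
```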
Figure 13. Real-time indications that the software provides to the operator during the identification activity.
Figure 14. Schematic representation of simplified kinematics of flexible feeder operation.
Figure 15. Physical diagram illustrating the loads in the transmission system.
Figure 16. Representation of the transfer function according to Equation (7).
Figure 17. Graphical trend of the velocity transient as a function of time, as expressed by Equation (12).
Figure 18. Two views of the shaking device: CAD representation (a) and assembled parts (b). Detailed view of the roller coupling flanges (c).
Figure 19. Different views of the robotic cell during the handling activity: the robot cell with a particular focus on the multi-component feeder (a); the work surface with the objects’ distribution facilitating the vision system’s work (b).
Figure 20. Screen of the deposit station with the BOS0214 photocell highlighted, which detects the fin deposition; the BOS01LL photocell verifies that the fins are correctly deposited, and the plunger is attached to the pneumatic cylinder.
Figure 21. Representation of the female and male models. The observable differences are in the convexity of the shape and in the hole for the rotation pin, which is smaller in the male model.
Table 1. Nameplate characteristics of the Elvem asynchronous motor.

| Model | Power [kW] | Maximum Speed [rpm] | Power Factor cos φ | Starting Torque C_M [Nm] | Weight [kg] |
|---|---|---|---|---|---|
| 6T1 71C4 B14 | 0.55 | 1380 | 0.75 | 3.81 | 7.06 |
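As a quick orientation, the nameplate values in Table 1 are mutually consistent: dividing the rated power by the angular speed at 1380 rpm gives a torque of about 3.81 Nm, the same figure listed in the table. The minimal Python check below only reproduces that arithmetic (T = P/ω is standard motor arithmetic, not a relation taken from the article's derivation, and the variable names are illustrative).

```python
import math

# Nameplate data from Table 1 (Elvem 6T1 71C4 B14)
P_rated = 0.55e3      # rated power [W]
n_rated = 1380        # speed on the nameplate [rpm]

omega_rated = 2 * math.pi * n_rated / 60   # angular speed [rad/s]
torque = P_rated / omega_rated             # torque at that power and speed [Nm]

print(f"omega = {omega_rated:.2f} rad/s, T = P/omega = {torque:.2f} Nm")
# -> T ≈ 3.81 Nm, matching the torque value listed in Table 1
```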
Table 2. Roller speed values according to the three different positions (n = 600 rpm; ω = 2πn/60 = 62.832 rad/s).

| | Position 1 | Position 2 | Position 3 |
|---|---|---|---|
| R | 34 mm | 32.5 mm | 31 mm |
| α | 33.1° | 28.7° | 23.2° |
| v_0 = ωR | 2.14 m/s | 2.04 m/s | 1.95 m/s |
| v_y,0 | 1.17 m/s | 0.98 m/s | 0.77 m/s |
Table 3. Maximum theoretical speed transferred from rollers to fins.

| | Position 1 | Position 2 | Position 3 |
|---|---|---|---|
| v_c | 2.78 m/s | 2.33 m/s | 1.83 m/s |
Table 4. Maximum theoretical height reached by the fins according to the three different speeds transferred.

| | Position 1 | Position 2 | Position 3 |
|---|---|---|---|
| h | 0.394 m | 0.277 m | 0.171 m |
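Tables 2–4 follow a simple kinematic chain. The values in Table 2 are reproduced by v_0 = ωR and v_y,0 = v_0·sin α, and the heights in Table 4 follow from the speeds in Table 3 via h = v_c²/(2g); these relations are inferred here because they reproduce the tabulated numbers, whereas the full derivation (including how v_c is obtained from v_0) is given by the article's equations and not repeated in this section. A short Python check under those assumptions:

```python
import math

g = 9.81                       # gravitational acceleration [m/s^2]
n = 600                        # roller speed [rpm]
omega = 2 * math.pi * n / 60   # 62.832 rad/s, as in Table 2

# (R [m], alpha [deg], v_c [m/s]) for positions 1-3; v_c is taken from Table 3
positions = [(0.034, 33.1, 2.78), (0.0325, 28.7, 2.33), (0.031, 23.2, 1.83)]

for i, (R, alpha, v_c) in enumerate(positions, start=1):
    v0 = omega * R                               # tangential roller speed (Table 2)
    vy0 = v0 * math.sin(math.radians(alpha))     # vertical component (Table 2)
    h = v_c**2 / (2 * g)                         # ballistic rise height (Table 4)
    print(f"Position {i}: v0 = {v0:.2f} m/s, vy0 = {vy0:.2f} m/s, h = {h:.3f} m")
```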
Table 5. Performance test of the robotic cell with the number of pieces picked up in 30′ and the picking frequency.

| Model | A Female | A Male | B Female | B Male | C Female | C Male | D Female | D Male | E Female | E Male |
|---|---|---|---|---|---|---|---|---|---|---|
| [pz/30′] | 1198 | 1249 | 1253 | 1241 | 1244 | 1063 | 1154 | 1150 | 1183 | 1187 |
| s/pz | 1.50 | 1.44 | 1.44 | 1.45 | 1.45 | 1.69 | 1.56 | 1.57 | 1.52 | 1.52 |
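The two rows of Table 5 are linked by simple arithmetic: the cycle time in s/pz is 1800 s divided by the number of pieces picked in 30 min. A minimal check, pairing the counts as in the reconstructed table above (the dictionary keys are only labels used here):

```python
# Pieces picked in 30 min (1800 s) for each model/variant pair of Table 5
pieces = {"A-Female": 1198, "A-Male": 1249, "B-Female": 1253, "B-Male": 1241,
          "C-Female": 1244, "C-Male": 1063, "D-Female": 1154, "D-Male": 1150,
          "E-Female": 1183, "E-Male": 1187}

for name, count in pieces.items():
    # 1800 s / pieces gives the per-piece cycle time reported in the table
    print(f"{name}: {1800 / count:.2f} s/pz")
```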
Table 6. Comparison of different automatic picking systems regarding cycle time and accuracy. (*) means that the system implements an auxiliary deposition station for image acquisition; (**) means that the system does not implement an auxiliary surface. The reported accuracy refers to the vision system used to recognize and identify the objects.

| Work | Vision System Adopted | Robot Used | Auxiliary Position | Multi-Component Feeder | Accuracy | Cycle Time |
|---|---|---|---|---|---|---|
| Pochyly A. et al. [3] | 3D vision system, one industrial camera with two linear lasers | 6DOF robot | Yes (*) / No (**) | No | N.A. | 12–15 s (*) / 9–10 s (**) |
| Pochyly A. et al. [15] | 3D vision system, one industrial camera with two linear lasers | 6DOF robot | Yes (*) / No (**) | No | N.A. | 15 s (*) / 5 s (**) |
| Lu Z. et al. [18] | One industrial camera and YOLOv3 algorithm | 6DOF robot | No | No | 97.28% | 1.36 s |
| Jin G. et al. [19] | One 3D camera | 5DOF SCARA+ | No | No | 93.1% | 3.2 s |
| Proposed automatic picking | Two 2D cameras | 4DOF SCARA | No | Yes | 99.80% | 1.51 s |

N.A.: Not available.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
