1. Introduction
The prevailing market demand for a large variety of customized products is pushing industry towards flexible and adaptive workflow models that maintain reasonable product prices to achieve high competitiveness [1]. Flexible manufacturing cells (FMC) provide a solution that bridges the gap between flexibility and automation [2].
An FMC may typically incorporate several processing and tending stations with widely varying functionality, such that it may not be possible to apply a uniform approach for directly commanding them to perform an action. The same heterogeneity is also encountered in terms of each machine’s numerical control and communication support, especially in cases where legacy equipment is upgraded to serve as a component of an FMC. Furthermore, the specialized nature of each FMC usually leads to the necessity of creating customized software solutions for its high-level control to meet application-specific requirements [3].
The purpose of this study is to develop a cell controller software (CCS), from a generic concept addressing common issues encountered in software integration and control, through to full implementation and deployment for an FMC. The CCS is demonstrated on an FMC tasked with the production of microfluidic devices (bioMEMS), which consists of six manufacturing workstations, a robot-based material handling and storage system, and a central control PC that runs the CCS. The heterogeneity of the equipment in terms of operation and communication renders this FMC a representative case study, depicting the challenges that arise during the development of the CCS along with the solutions that were applied.
The current work presents the development of a manufacturing cell control software based on open-source technologies that addresses operational flexibility, data interoperability, and equipment heterogeneity challenges. The main novelty lies both at the conceptual stage, which introduces a modular software architecture whose components can be continuously updated to reflect changes in the manufacturing system assets or in their behavior and representation over time, and at the communication level between the software and the relevant equipment, which employs an abstraction layer in which each piece of cell equipment is represented by a dedicated client module exposing its available set of methods and states to the rest of the control software. The latter enables a robust implementation, since a variety of communication protocols can be accommodated. The proposed architecture can serve as a basis for developing FMC process control and monitoring applications that emphasize ease of adoption and customization while remaining easily extendable and able to integrate with larger applications. Since the architecture is based on open-source technologies, it can serve as an alternative to existing commercial platforms. The paper covers all stages from conceptual design to implementation, explaining the specific requirements and constraints involved in each stage along with the solutions that were developed.
The next section provides a short account of the state-of-the-art in flexible manufacturing controller architectures in light of Industry 4.0.
Section 3 describes the proposed CCS concept.
Section 4 provides implementation details of this concept using a particular real-life FMC as a demonstration basis.
Section 5 reports on tests and sample results of deploying the CCS.
Section 6 summarizes findings and provides an outlook.
2. Previous Work
FMCs have been developed since the late 1970s [4], yet the digitalization of the manufacturing workflow and the interconnection of physical assets and processes have become increasingly important with the emergence of Industry 4.0 and the market demand for personalized production, leading to the adoption of agile manufacturing practices [5]. Industry 4.0 technologies offer numerous possibilities, ranging from cloud-based monitoring [6] and other cloud-based applications [7] to the deployment of fully integrated systems spanning from the production process level to the business level [2], with enhanced performance in terms of downtime and quick response to market changes [8].
Highly digitalized, integrated manufacturing systems can bring competitive advantages, yet their development is a non-standard, often challenging process [9]. System reference architectures, their semantics and ontologies [10], connectivity and interoperability of systems, and the standardization of Industrial Internet of Things (IIoT) applications currently comprise domains of interest that are widely studied yet still evolving.
Reference architectures provide classifications, entity hierarchies, and viewpoints that aim to facilitate the development of industrial systems conforming to specific standards. The Reference Architecture Model Industry 4.0 (RAMI 4.0) is the dominant framework for IIoT applications in the manufacturing domain [11]. RAMI 4.0 expands on the Smart Grid Architecture Model (SGAM) and introduces a three-dimensional representation of the Industry 4.0 space along three directions: (a) six layers corresponding to various IT perspectives of the system (e.g., functional descriptions, communication behavior, hardware/assets, etc.); (b) the product lifecycle and value stream based on the IEC 62890 standard; and (c) the hierarchical levels of assets within the plant (product, field device, control device, station, etc.) according to standards IEC 62264 and IEC 61512 [12]. The Industrial Internet Reference Architecture (IIRA) [13], based on the ISO/IEC/IEEE 42010:2011 architecture concepts, is another multi-layer proposal for modeling IIoT systems, but it is not specific to manufacturing. It considers four layers (viewpoints), namely business, usage, functional, and implementation, each with a distinct scope and objectives. In essence, following this model provides a methodology for stakeholders to identify the vision for their organization and how to benefit from a pertinent IIoT system. While IIRA and RAMI 4.0 have been developed independently with different objectives, scopes, and approaches, they can be considered complementary rather than conflicting. Furthermore, joint efforts by their creators aim to bridge possible differences and eventually improve interoperability among systems [14].
Admittedly, reference architectures are still quite abstract, and as a result, adhering to one can yield a range of significantly divergent end results. This becomes particularly evident when delving into the implementation of specific services relevant to a manufacturing system, such as the control service of an FMC integrating a variety of heterogeneous equipment, the real-time monitoring service of the equipment, and predictive maintenance services, to name but a few. Alternative architecture proposals that are closer to particular systems, or even to particular asset types, naturally tend to be more concrete [15,16].
Interoperability of systems and versatility in the use of equipment in the context of smart factories and smart manufacturing are heavily affected by the communication protocols and interfaces applied across them. The Open Platform Communications Unified Architecture (OPC UA) provides a well-established standard to facilitate the exchange of information [17] and is the communication standard proposed for developing RAMI 4.0 applications. The MTConnect standard provides semantics and a vocabulary for the manufacturing domain, and in combination with OPC UA it can provide a solution guaranteeing interoperability among equipment adopting these standards [18]. Other popular communication protocols are Message Queuing Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and Data Distribution Service (DDS) [19]. However, although these protocols and standards have been available for almost two decades, significant challenges remain. One such challenge is how to extend the capabilities of already operational hardware infrastructure so that it can take advantage of modern information management and communication technologies [20].
Flexibility, maintainability, and interoperability are fostered by service-oriented architectures (SOA) in the development of supporting software for manufacturing systems [21,22,23]. One of the most common applications of such software concerns solving the scheduling problem, where evolutionary optimization approaches are trending [24,25]. The variety of technologies and standards in Industry 4.0, along with the co-existence of modern and legacy equipment in industrial sites, induces challenges in their digital transformation [26]. SOA allows an incremental transition to the smart digital plant, while also facilitating extension of the functional capability of the system [27]. Software integration of equipment conforming to different standards is another common issue that can be addressed by middleware services/applications acting as adapters, converting an arbitrary interface to the desired one [28,29,30].
Open-source technologies, in a broader sense, are also trending as practical solutions for enhancing the robustness and scalability of manufacturing systems, addressing the challenges of operational flexibility and interoperability presented in Industry 4.0. Shifting to open-source technology provides low-cost solutions and reduces dependency on proprietary systems, thus fostering innovation and collaboration across different sectors [31,32,33]. Especially for small and medium-sized enterprises (SMEs), this approach can be a significant enabler, lowering the entry-level investment barrier and offering enhanced customization options. Furthermore, the use of open-source technologies in IoT and IIoT applications promotes greater security and reliability, with continuous updates and community-driven support keeping the systems up to date with the latest security standards and technological advancements.
The development of the CCS reported next has been inspired by the concepts provided in this section. Even though it is not a typical large-scale industrial application, and thus not all concepts presented are applicable, it is also built on a layered architecture and promotes SOA. The communication between the system components is realized mainly with gRPC [34], a modern open-source communication framework.
3. CCS Concept
An FMC is, in general, a complex system with numerous possible configurations and processes, and this complexity is often delegated to the CCS level. Nevertheless, apart from the differences across FMCs, there are also components that, at least at an abstract level, can be identified in most, if not all, such systems. Aiming to simplify the design of FMC controller software, this section first groups the essential common elements for the operation of any FMC into a set of component categories. These are then used in the proposed generic CCS software architecture, which serves as a template handling basic functionality while allowing customization according to application-specific requirements.
3.1. Common/Shared FMC Components
3.1.1. Physical Asset Components
The first class of components that should be included in the shared set are those directly recognized as physical assets of the FMC, typically workstations and tending stations. For each of these categories, the creation of an interface is proposed. The methods exposed can vary depending on the specific FMC setup, but a generic rule is to include two subsets of methods, one targeting the state monitoring of an asset (machine) and the other targeting relevant actions (see Table 1 for an example).
In essence, the use of interfaces for workstations and tending machines is intended to create an abstraction layer between the heterogeneous equipment and the rest of the application modules, and also to simplify addressing a machine. Each workstation extends the relevant interface with a concrete implementation, hiding complicated logic, the communication protocols used, and additional details that are not useful beyond this layer.
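The interface idea can be sketched as follows; all class and method names here are illustrative assumptions following the two-subset rule described above, not code from the actual CCS.

```javascript
// Hypothetical workstation interface: one subset of methods for state
// monitoring and one for actions. Concrete implementations hide the
// protocol details (gRPC, REST, TCP/IP) behind this common surface.
class WorkstationInterface {
  // --- state monitoring methods ---
  async getMachineState() { throw new Error('not implemented'); }
  async getDoorState() { throw new Error('not implemented'); }
  // --- action methods ---
  async startProcess(recipeStep) { throw new Error('not implemented'); }
  async openDoor() { throw new Error('not implemented'); }
}

// A mock concrete implementation standing in for a real machine.
class MockMillingStation extends WorkstationInterface {
  constructor() { super(); this.door = 'door closed'; this.busy = false; }
  async getMachineState() { return this.busy ? 'processing' : 'idle'; }
  async getDoorState() { return this.door; }
  async openDoor() { this.door = 'door open'; }
  async startProcess(recipeStep) {
    this.busy = true;
    // ...a real implementation would issue a remote call here...
    this.busy = false;
    return `completed ${recipeStep}`;
  }
}
```

The rest of the application can then treat every station uniformly through the base interface, regardless of the underlying communication technology.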
3.1.2. Behavioral Components
Moving on from physical assets, we focus our attention on standard FMC behaviors. The behaviors regarded as “standard” across FMCs are (i) process monitoring, (ii) task scheduling, and (iii) high-level control of machines. These operations are interdependent in a cyclic manner: performing a scheduled task requires control of machines, which subsequently triggers system state changes that need to be monitored; the new state is then used as input to the scheduling algorithm that generates new tasks. Naturally, the workflow is also directly affected by the human operator’s decisions, e.g., selection of the product mix for manufacturing, enabling/disabling of workstations, etc. The proposed software components to support this operation are summarized in Table 2 along with a short description of their responsibilities. The behavioral components are described at an abstract level, since the implementation strategies could vary widely without affecting the structure of the application.
The main concept is to use the FMC_controller as the high-level component that provides the endpoints of a service and is composed of the Supervisor, Scheduler, and Machine Operator components. These three components interact through the state of the system and provide the monitoring, scheduling, and control functionality of the service.
3.1.3. System State Representation
The system and workstation states obviously play a central role in the interaction of the components. Even though data structures sufficient for describing the system state cannot be standardized without limiting the capabilities of the subsystems, there is a minimum requirement that needs to be met to effectively develop a CCS application.
Thus, the data structures need to unambiguously represent two state categories, namely, (i) the physical asset states and (ii) the FMC process state. Regarding the physical assets, simple enumerations can be sufficient. The values of the enumerations would then be the responses returned by invoking the state monitoring methods of the physical asset components. For example, invoking the getDoorState method of a workstation could produce one of the following values: “door open”, “door closed”, “door moving”, and “door error”.
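Such enumerations might be declared as frozen objects; the value sets below simply follow the door-state example in the text, and the machine-state values are assumptions.

```javascript
// Illustrative enumerations for physical asset states.
const DoorState = Object.freeze({
  OPEN: 'door open',
  CLOSED: 'door closed',
  MOVING: 'door moving',
  ERROR: 'door error',
});

// Assumed machine-level states for a workstation.
const MachineState = Object.freeze({
  IDLE: 'idle',
  PROCESSING: 'processing',
  ERROR: 'error',
});

// Guard used when a state monitoring method returns a raw string.
function isValidDoorState(value) {
  return Object.values(DoorState).includes(value);
}
```

Validating responses against the enumeration keeps a misbehaving subsystem from injecting unknown states into the rest of the application.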
However, a more elaborate data structure needs to be created for describing the process state of the FMC, as it needs to contain information about jobs being performed by the cell, routings, available resources, and more. An example of such a data structure is provided in Section 4.2.1.
3.2. Main CCS Service
The software components presented are the main building blocks for a minimal CCS application.
Figure 1 visualizes the hierarchical structure and the relationships of the components to form a framework for an application/service that supports process monitoring, task scheduling, and system setting. The main drive for using this design is building a modular main CCS application with interchangeable components that could easily be integrated as a service in a larger application.
The introduction of software entities, i.e., physical assets and behavioral components, along with their relationships and respective interfaces, effectively forms the CCS service. This structure is well suited to both service-oriented and traditional object-oriented approaches for their concrete implementation. This generic template is not limited to specific technologies and has provided the basis for developing the FMC controller application that is the subject of the following section.
4. Implementation Example
A CCS application has been developed based on the concept and design proposed in Section 3. The application was demonstrated on an FMC dedicated to the production of microfluidic lab-on-chip devices (bioMEMS).
4.1. CCS and Subsystem Requirements
The functional requirements for the FMC operation were defined at an abstract level from the early design phase of the project. The primary purpose of the CCS is to integrate the various standalone machines, i.e., workstations and robot, into a system that effectively operates as an FMC. The software should facilitate the phased development and integration of workstations into the FMC, allowing them to be taken offline while the rest of the system remains fully operational. A graphical user interface should be included to allow for FMC process and system monitoring, as well as high-level control.
Each workstation is required to be addressable over the LAN and to expose an API that enables the CCS to communicate with it, exchange information, and request specific actions. The CCS should be capable of flexibly supporting the integration of machines that meet this requirement but expose a variety of interfaces and communication protocols.
The FMC should be operated and monitored from the control room via the CCS software, which is hosted on the control PC (see Appendix C for specifications), and, for safety purposes, exclusively under the supervision of trained personnel. Consequently, the CCS is designed as a desktop application, and the control PC is not accessible remotely.
4.2. Flexible Manufacturing Cell Description
4.2.1. Layout
The manufacturing site is divided into three areas, namely, (i) the production room, (ii) the control room, and (iii) the assembly room (see Figure 2). The production room is the central area of the site, containing five processing stations, one metrology station, and the robot-based material handling system. A linear rail has been installed across the center of the room, allowing the industrial robot to move along it and tend the workstations, which are laid out on both sides of the rail, thus maximizing the reach and maneuvering capability of the robot. The robot’s end-effector is an electrically operated gripper capable of transferring specially designed carriers [35]. The carriers are the platform for transferring semi-finished products across the workstations and the storage cabinet. Each workstation, as well as the storage cabinet, is equipped with spigots to accurately position and securely hold the carriers.
The storage cabinet, located in the assembly room, is placed so that it is directly accessible both by the robot operating in the production room and by the human operator working in the assembly room. In this way, the human operator can collect the carriers with the finished products from the cabinet and replace them with empty ones, allowing undisrupted operation of the FMC. The control room accommodates the control PC and auxiliary equipment. Using the CCS’s user interface, the FMC operator monitors the process and system state and controls the overall production progress.
4.2.2. Workstations—Handling System
To perform the manufacturing of the lab-on-chip devices, a set of workstations and the handling system have been specifically developed or upgraded to meet the requirements of the FMC. Each subsystem enables interaction with the CCS by providing an appropriate service. The CCS sends requests or remote procedure calls, either to get a subsystem’s state or to demand actions, and once the subsystem’s software handles them, an appropriate response is returned. All involved systems communicate using the local area network of the production site.
A fused filament fabrication (FFF) 3D printer, capable of hosting and processing (mirror printing) three carriers simultaneously, constitutes the first stage of the manufacturing process. The printing table has been modified to permanently host three spigots as placeholders for the received carriers. An enclosure has been added to the workstation to enable the automated operation of a custom loading/unloading door. The FFF–enclosure system is equipped with a Raspberry Pi 4™ microcomputer (see Appendix C for specifications), used primarily to provide control and monitoring capabilities of the workstation to an external application. This is achieved by running a Repetier™ [34] server instance for the 3D printing process, leveraging its REST API, and a gRPC server application developed to manage the enclosure’s door subsystem.
The next stage of the manufacturing process involves micro-milling channels on the 3D-printed substrates. This is taken care of by the relevant workstation that has been specifically designed to be used as a component of the FMC. The micro-milling station is capable of processing one carrier at a time, and on top of its control software, it is running a gRPC server application to allow integration with the CCS by providing the necessary state monitoring and control interface.
Depending on the exact design of the bioMEMS device, the next steps involve a combination of laser processing and fluid dispensing processes. Custom processing workstations that support laser ablation, laser polishing, and drop-on-demand fluid dispensing of various materials have been developed and integrated into the FMC. The last step of the manufacturing process concerns the quality inspection of the product and is performed in a high-resolution X-ray metrology workstation. The mentioned workstations also provide a communication interface via the gRPC protocol for integration with the CCS application.
A crucial part of the FMC is the robot-based handling system, responsible for tending all the subsystems [35]. The industrial robot has been programmed to communicate with the CCS software over TCP/IP, accepting carrier transfer commands and providing status updates.
Each custom workstation developed for the FMC is equipped with a dedicated PC that hosts all necessary software for numerical control (NC), programming, and communication with the CCS application. Process data specific to each machine are stored on the respective PC. The FFF workstation provides the same functionality through the Raspberry Pi 4™ microcomputer. These systems can be accessed from the control PC via VNC software (version 7.1.0). The network configuration is illustrated in Figure 3.
4.2.3. CCS Main Operation
The CCS enables the operator to issue production orders by creating and assigning process plans (or “recipes”) to carriers. The CCS automatically instructs the workstations and the handling system to perform appropriate tasks based on its scheduling algorithm output and on the subsystem states acquired by frequent polling. Figure 4 documents this workflow through the execution of a basic one-step process plan involving the FFF workstation.
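A single polling cycle of this kind might look as follows; the client objects, method names, and the “offline” fallback state are assumptions for illustration rather than the actual CCS implementation.

```javascript
// Minimal sketch of a polling-based monitoring step: query every
// workstation/handling-system client in parallel and record the
// reported state under the subsystem's name.
async function pollSubsystems(clients, systemState) {
  const entries = await Promise.all(
    Object.entries(clients).map(async ([name, client]) => {
      try {
        return [name, await client.getMachineState()];
      } catch (err) {
        // A subsystem that fails to respond is marked offline so the
        // scheduler can route around it.
        return [name, 'offline'];
      }
    })
  );
  for (const [name, state] of entries) systemState[name] = state;
  return systemState;
}
```

Running this function on a timer gives the scheduler a continuously refreshed view of the cell without blocking on any single slow subsystem.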
4.2.4. CCS Design
The CCS, as an application executed on the control computer of the production area, provides the following functionality:
- handling communication with workstations;
- scheduling workstation and handling system tasks to complete the production process;
- monitoring the production process and the subsystems’ status;
- supporting multi-product production processes;
- providing a GUI to facilitate the operator in:
  a. creating and assigning process plans;
  b. controlling and monitoring the automated production process.
High network input-output performance is crucial for the CCS, as it handles a large number of requests to monitor the state of workstations and issue commands accordingly, while it is not itself involved in compute-intensive tasks. To address this need, a single-threaded asynchronous approach has been adopted for both the main process and the graphical user interface workflow. JavaScript was used as the main programming language, facilitating an event-driven approach using event-listeners and attached handlers [36]. A sequential workflow would be difficult to maintain, since many disruptive events can occur and would need to be handled as exceptions to that workflow.
In line with the requirement for phased development, the CCS structure allows workstations to be integrated incrementally and taken offline while the rest of the system remains fully operational. The CCS application is composed of two complementary processes, namely, (i) the main process and (ii) the front-end. The main process is designed in accordance with the main CCS template architecture presented in Figure 1; it is responsible for handling communication with the workstations, process scheduling, system monitoring, and file/operating system manipulation, and it serves the main window of the application. The front-end part of the application consists of the components that form the user interface, defining how information is displayed and the controls provided for the user to interact with the application. Figure 5 provides an overview of the modules and the flow of information between them.
The communication framework between the CCS and the workstations is required to be open, proven, flexible, and easy to implement. For the presented FMC use case, gRPC [34] was selected, as it is a robust, open-source Remote Procedure Call (RPC) framework that favors interoperability by supporting a large variety of programming languages and environments, while allowing data structures and communication interfaces to be organized clearly and concisely in dedicated proto files. The industrial robot and the FFF are commercial systems providing the means to communicate with the CCS over TCP/IP and HTTP, respectively. Note that the CCS is not limited to integration only with systems conforming to the above communication protocols.
The main process is an extended version of the proposed main CCS template (see Figure 1). It is developed using Electron [37], an open-source JavaScript framework for building cross-platform desktop applications with web technologies. The functionality of the main process is provided through the interaction of its various modules (see Figure 6). For the implementation of the CCS in this specific use case, the lack of common interfaces among the various workstations necessitated the development of dedicated software adapters, precluding service reuse, which is one of the main advantages of SOA. This constraint, along with the increased complexity and overhead of service governance, dictated the choice of a monolithic design.
The main module is responsible for serving the application window to be populated by the front-end components and for handling interactions with the operating and file systems. Together with the bridge module, it provides the interface for bi-directional communication between the main process and the front-end, facilitated by the utilities provided by the Electron framework. Event-listeners have been paired with suitable event-handlers, enabling the application to react to various triggers and update the front-end and main process states accordingly. Certain types of events, such as the completion of job orders, are triggered by the main process, forcing the front-end to update, while others, such as click-events, are triggered by the front-end and handled by the main process.
The rest of the modules follow the architecture of the main CCS template. For each workstation and for the handling system, there is a dedicated client module exposing a common set of methods to the rest of the application (implementing either the workstation or the tending station interface presented in Section 3.1.1). The client modules effectively create an abstraction layer that strongly promotes interoperability and flexibility, allowing the application to interact with workstations in a uniform manner while the underlying communication is implemented by a variety of technologies and protocols:
- gRPC services: micro-milling, laser ablation, laser polishing, inkjet, and X-ray workstations;
- RESTful API service: FFF workstation;
- TCP/IP service: industrial robot of the handling system.
The system controller module is a high-level component whose main purpose is to hold the information regarding the physical setup of the FMC (e.g., the machines comprising it) and to update the behavior of the application at runtime by providing handlers for user interactions, such as adding job orders or disabling scheduling. It is composed of the Scheduler, Supervisor, and Machine Operator objects and defines the relationships among them.
The Supervisor, Machine Operator, and Scheduler objects belong to the same hierarchical level; their interaction effectively provides the desired control, monitoring, and scheduling capabilities of the application. They all depend on or manipulate the system state, and there is a cyclic dependency among them, where the output of one is input for the next, as illustrated in the following block of pseudocode.
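The cyclic dependency among the three objects can be sketched in JavaScript as follows; the component methods and state fields are illustrative assumptions rather than the actual CCS source.

```javascript
// Sketch of the cyclic Supervisor -> Scheduler -> Machine Operator loop.
// All names and the single-carrier state are illustrative assumptions.
function makeController() {
  const state = { carriers: [{ id: 1, location: 'cabinet', plan: ['FFF'] }] };

  const supervisor = {
    // Holds system/carrier state and applies updates reported by others.
    update(carrierId, location, doneStep) {
      const c = state.carriers.find((c) => c.id === carrierId);
      c.location = location;
      if (doneStep) c.plan.shift();
    },
  };

  const machineOperator = {
    // Executes a scheduled action, then notifies the Supervisor.
    execute(action) {
      supervisor.update(action.carrierId, action.target, action.process);
    },
  };

  const scheduler = {
    // Derives the next actions from the current state and delegates them.
    step() {
      for (const c of state.carriers) {
        if (c.plan.length > 0) {
          machineOperator.execute({
            carrierId: c.id,
            target: c.plan[0], // next workstation in the plan
            process: true,     // mark the step as completed
          });
        }
      }
    },
  };

  return { state, scheduler };
}
```

Each component's output feeds the next: the Scheduler reads the state, the Machine Operator acts, and the Supervisor writes the new state back for the next scheduling pass.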
The Supervisor is responsible for holding the carrier and system states and for notifying the Scheduler on updates. The Machine Operator performs the various tasks using the appropriate workstation or handling system clients and notifies the Supervisor to update the state accordingly. The Scheduler, based on the system and carrier states, searches for possible actions and instructs the Machine Operator to handle their execution. An action can be either a carrier transfer or a manufacturing process at a workstation.
A greedy algorithm has been applied for scheduling the asynchronous execution of actions. It is worth mentioning that when a multi-carrier process takes place, scheduling deadlocks may occur. When the location occupied by one carrier obstructs the production sequence of another, it effectively prevents the latter from moving to its intended location. If the second carrier, in turn, obstructs yet another carrier, and this situation repeats cyclically, eventually blocking the first carrier from reaching its intended next location, a deadlock arises. The Scheduler module is responsible for detecting and resolving deadlocks by moving one of the involved carriers back to the loading cabinet, which serves as a buffer in such situations.
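The deadlock condition described above amounts to a cycle in the “who blocks my target location” relation among carriers. The following sketch detects such a cycle under assumed field names (`location`, `wants`); it is an illustration of the idea, not the Scheduler's actual code.

```javascript
// Hypothetical deadlock detection: each carrier "wants" a location; if
// that location is occupied by another carrier, following the chain of
// blockers may loop back on itself, which indicates a deadlock cycle.
function findDeadlockCycle(carriers) {
  // carriers: [{ id, location, wants }]
  const occupant = new Map(carriers.map((c) => [c.location, c]));
  for (const start of carriers) {
    const seen = new Set();
    let current = start;
    while (current) {
      if (seen.has(current.id)) {
        // A carrier was revisited: the chain of blockers contains a cycle.
        return [...seen];
      }
      seen.add(current.id);
      current = occupant.get(current.wants); // who occupies my target?
    }
  }
  return null; // some carrier's target location is free: no deadlock
}
```

Resolution would then pick one member of the returned cycle and send it back to the loading cabinet, breaking the circular wait.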
Figure 7 provides a Gantt chart with the results of a multi-carrier, multiple-process-plan production run, while Figure 8 displays the basic scheduling workflow.
The data structures utilized to encapsulate the system and manufacturing process state information are central elements of the application. Job orders, along with their accompanying process plans, are organized in a data structure named “recipe”, composed of an array of the pending steps required for the job order’s completion and an array containing the completed steps. A recipe step is defined as a combination of a manufacturing process and a workstation name. The physical carriers simultaneously serve as both the platform for transferring products and the base plate for executing all manufacturing processes; as a result, in-process items are modeled through the carrier data structure. Each digital carrier is defined by its current location, the linked process plan, an ID that also corresponds to a unique position in the loading cabinet, and additional attributes describing its activity status (e.g., whether it is enabled in the process queue or currently being processed). The number of digital carriers that can be concurrently handled by the CCS application for manufacturing products of the bioMEMS parts family matches the physical capacity of the cell.
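A minimal sketch of how such recipe and carrier records might look is given below; the field names are assumptions for illustration only.

```javascript
// Illustrative recipe structure: pending and completed step arrays.
function makeRecipe(steps) {
  return { pendingSteps: [...steps], completedSteps: [] };
}

// Move the head of the pending array onto the completed array.
function completeStep(recipe) {
  const step = recipe.pendingSteps.shift();
  if (step) recipe.completedSteps.push(step);
  return recipe;
}

// A recipe step pairs a manufacturing process with a workstation name.
const recipe = makeRecipe([
  { process: '3d-printing', workstation: 'FFF' },
  { process: 'micro-milling', workstation: 'milling' },
]);

// A digital carrier mirrors a physical carrier in the loading cabinet.
const carrier = {
  id: 4,              // also the unique cabinet position
  location: 'cabinet',
  recipe,             // the linked process plan
  enabled: true,      // takes part in the process queue
  processing: false,  // currently being processed
};
```

Keeping pending and completed steps in separate arrays lets the scheduler read the next required process directly from the head of the pending array.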
Appendix A provides an example, beginning with the definition of a process plan and proceeding with the carrier state array.
4.2.5. Front-End
The graphical user interface of the application is equally important to the main process, since it provides all the controls and display capabilities to the operator of the FMC. The front-end has been developed using React™ [38], a JavaScript library for building user interfaces from reusable high-level components in a declarative style. Figure 9 shows the functionality of the individual components and the structure of the application.
A typical snapshot of the CCS application GUI is shown in
Figure 10. The normal workflow for the operator of the FMC after initializing the workstations and loading the cabinet with empty carriers would be to get to the control PC and perform the following steps using the GUI:
1. Load “digital carriers” corresponding to the physical carriers in the loading cabinet.
2. Assign process plans to them.
3. Activate scheduling.
4. Monitor production progress and system status.
5. [Optionally] Create/update/delete recipes (process plans).
6. [Optionally] Update the list of supported sub-recipes for a given workstation.
The front-end components provide the operator with the means to perform the mentioned activities. The system monitor plays a dual role, as it displays information regarding the state of the workstations and of the robot along with the available and occupied positions of the loading cabinet. The interactive digital cabinet is used by the operator of the FMC to insert or remove carriers and attach process plans to them.
The Recipe Builder component facilitates the creation and modification of process plans. Processes grouped per workstation can be added as steps in the production sequence via a dropdown menu containing the valid options. A closely related component is the Sub-recipe Mapper, which allows the operator to update the list of supported processes per workstation.
The Process Controller component provides the means for the user to manipulate the production workflow. Specifically, the user can enable or disable the FMC operation and scheduling activities and even selectively deactivate carriers from taking part in the production process.
The Logger provides timestamped messages concerning the system state. The log messages are also saved locally in the control PC and are available for postprocessing or troubleshooting.
5. Implementation and Testing
The integration of the workstations and subsystems was performed in phases, in parallel with the development of the CCS application. Creating a separate module for each workstation added to the cell simplified CCS development and deployment, while a strict testing procedure was enforced.
Software services simulating communication and interaction with each workstation were developed and made available on the CCS development computer, allowing offline testing of each new CCS feature. These services act as finite state machines implementing an interface identical to that of the corresponding workstations in terms of the supported communication protocol, the available methods, and the request–response messages. Process mechanics are not simulated; however, when these standalone services are deployed as a group, they effectively provide an FMC mock-up that can be used for offline testing of new CCS features. This capability was exploited extensively both when CCS modules were added to support the integration of a new workstation and during the development and evaluation of the scheduling strategy.
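The finite-state-machine core of such a mock service can be illustrated with a minimal Python sketch. The class name, state names, and methods here are assumptions for illustration only; the real services additionally expose each workstation's actual communication protocol (e.g., gRPC or REST), which is omitted, and a short timer stands in for the fixed process duration since process mechanics are not simulated.

```python
import threading

class MockWorkstation:
    """Finite-state-machine stand-in for a workstation: mirrors a
    request/response interface but performs no process mechanics."""
    STATES = ("IDLE", "BUSY", "DONE", "ERROR")

    def __init__(self, name, supported_processes):
        self.name = name
        self.supported = set(supported_processes)
        self.state = "IDLE"

    def start_process(self, process: str) -> str:
        if self.state != "IDLE":
            return "NACK: not idle"
        if process not in self.supported:
            self.state = "ERROR"
            return f"NACK: unsupported process {process}"
        self.state = "BUSY"
        # No process simulation: a short timer stands in for the duration.
        threading.Timer(0.01, self._finish).start()
        return "ACK"

    def _finish(self):
        self.state = "DONE"

    def get_state(self) -> str:
        return self.state

    def unload(self) -> str:
        if self.state != "DONE":
            return "NACK: nothing to unload"
        self.state = "IDLE"
        return "ACK"
```

Deploying one such service per workstation, each with its own supported process list, yields the FMC mock-up used for offline testing.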
In the following section, the results for three scheduling strategy variations are demonstrated, essentially constituting a further testing process following basic functionality testing. Each variation is tested using the FMC mock-up to simulate four production scenarios.
Scenarios 1–3 simulate the production of identical items (15 items of a 5-step process plan) with varying probability of deadlock occurrence. Scenario 4 simulates the concurrent execution of 3 different process plans, each consisting of 5 items assigned to corresponding carriers (see
Table 3).
Note that for the purpose of this simulation, each process executed on a workstation has a known and fixed duration, as does every carrier transfer (see
Table 4). Different production sequences have been used in the testing scenarios, corresponding to different product types in the FMC.
The following normalized key performance indicators were introduced to evaluate scheduling strategies:
Production time saving = 1 − (batch completion time/sequential production time)
WS utilization = total time loaded with a carrier/total processing time
Robot workload = 1 − (actual transfer count/minimum number of carrier transfers for batch completion)
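As a sanity check, these indicators can be computed directly from simulated timings. The sketch below implements the formulas exactly as stated above; the numeric figures in the example are hypothetical and not taken from the reported scenarios. Note that, as defined, the robot workload index is zero when no redundant transfers occur and becomes negative as extra transfers accumulate.

```python
def production_time_saving(batch_completion: float, sequential: float) -> float:
    """0 means no gain over producing items one at a time; higher is better."""
    return 1.0 - batch_completion / sequential

def ws_utilization(time_loaded: float, total_processing: float) -> float:
    """Fraction of processing time during which the workstation holds a carrier."""
    return time_loaded / total_processing

def robot_workload(actual_transfers: int, minimum_transfers: int) -> float:
    """As defined in the text: 0 for a schedule with no redundant transfers,
    increasingly negative as extra transfers accumulate."""
    return 1.0 - actual_transfers / minimum_transfers

# Hypothetical figures: a batch finishing in 600 s versus 1500 s sequentially,
# with 25 transfers against a 20-transfer minimum.
print(production_time_saving(600, 1500))   # 0.6
print(robot_workload(25, 20))              # -0.25
```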
The fundamental scheduling workflow is common to all the compared variations. Essentially, every scheduler, upon invocation, searches the carrier state for (1) a possible transfer operation, (2) a possible process operation, and (3) a conflict condition. If a carrier matches the search criteria, the scheduler notifies the Machine Operator to perform the pertinent task (i.e., a carrier transfer or a process operation at a workstation).
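This common workflow can be sketched as a single dispatch function in which the three strategy-specific checks are injected as predicates, so that each scheduler variation only swaps in its own implementations. Function and field names are illustrative assumptions; Python is used purely for exposition.

```python
def schedule_tick(carriers, transfer_possible, process_possible, in_conflict):
    """One scheduler invocation over the carrier state array. Transfers are
    searched first, then process operations, then conflict conditions; the
    first match is handed to the Machine Operator as a task."""
    active = [c for c in carriers if c.get("enabled", True)]
    for check, task in ((transfer_possible, "transfer"),
                        (process_possible, "process"),
                        (in_conflict, "resolve_conflict")):
        for carrier in active:
            if check(carrier):
                return (task, carrier["id"])
    return None  # nothing to dispatch on this tick

# Usage with trivial stub predicates: carrier 2 is ready for a process step.
carriers = [{"id": 1, "enabled": True}, {"id": 2, "enabled": True}]
task = schedule_tick(carriers,
                     transfer_possible=lambda c: False,
                     process_possible=lambda c: c["id"] == 2,
                     in_conflict=lambda c: False)
print(task)  # ('process', 2)
```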
The first scheduling variation considers as possible transfer operations only those that advance the progress of a process plan. To decide whether a transfer operation will be requested, the algorithm iterates over the carrier state array and, for each carrier, checks whether the workstation indicated by the next step of the process plan is available. If the workstation is available or there are no steps left in the process plan, a transfer operation is requested to move the carrier either to the next workstation or to the cabinet as a finished product. If the robot is not available, the scheduler skips the search for transfer operations. A workstation process operation is considered possible when the current location of a carrier matches the next step of the process plan and the workstation is ready to perform the process. A conflict condition arises when a carrier is involved in a deadlock as described in
Section 4.2.4 and it is resolved by moving the carrier to the buffer.
The second scheduling strategy differs from the first only in how it searches for possible carrier transfers. In this case, the carrier state array is scanned, and a transfer operation is requested if a carrier is currently located at an idle workstation (i.e., its process operation has finished). If the workstation indicated by the next step of the carrier’s recipe is available, the carrier is moved there; otherwise, it is moved to the cabinet. The purpose of this variation is to improve the availability of workstations by freeing them from carriers unnecessarily occupying their production area. However, this increases the robot workload, since additional transfer motions are requested to move carriers to and from the cabinet.
The third scheduler is similar to the first one in terms of searching for possible carrier transfers and possible process operations, but it takes an alternative approach to conflict conditions. For each carrier, it searches the rest of the carrier state array to determine whether the location of the current carrier is the next recipe location for any of the others, essentially blocking the latter’s production progress. If, at the same time, the carrier is itself blocked from moving to the workstation indicated by its next recipe step, then it is involved in a conflict condition that is resolved by moving it back to the cabinet.
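The conflict check of the third scheduler can be sketched as follows. Each carrier is represented here by a hypothetical minimal dict carrying only its ID, current location, and next recipe location; the real CCS carrier structure described in Section 4.2.4 is richer.

```python
def find_conflicts(carriers):
    """Flag carriers that (a) occupy the workstation another carrier needs
    next and (b) are themselves blocked from reaching their own next
    workstation. 'next' is None when the process plan is finished."""
    occupied = {c["location"] for c in carriers}
    conflicts = []
    for c in carriers:
        blocks_other = any(o["next"] == c["location"]
                           for o in carriers if o["id"] != c["id"])
        is_blocked = c["next"] is not None and c["next"] in occupied
        if blocks_other and is_blocked:
            conflicts.append(c["id"])   # resolved by a move back to the cabinet
    return conflicts

# A two-carrier cyclic deadlock: each sits on the other's next workstation.
cycle = [{"id": 1, "location": "WS1", "next": "WS2"},
         {"id": 2, "location": "WS2", "next": "WS1"}]
print(find_conflicts(cycle))  # [1, 2]
```

In a deadlock cycle every member satisfies both conditions, so the scheduler would act on the first flagged carrier, breaking the cycle with a single move to the cabinet.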
Numerical results of simulation runs for the three schedulers are shown in
Appendix B. Characteristic performance metrics for the three schedulers in all four scenarios are shown in
Figure 11.
The primary performance indicator for the schedulers is the production time-saving index since it correlates with throughput. Schedulers B and C outperform scheduler A in scenarios 2 and 4. Scheduler B yields the best results in terms of workstation availability, having the lowest workstation utilization index but achieving the highest use of the robotic transport system. Scheduler A achieves minimal use of the transport system. Based on the simulation results, scheduler C has been chosen for the application deployment in the FMC due to its promising performance without imposing excessive workload on the transport system.
The offline testing capability was extensively utilized to validate the correct sequence of RPCs between the application and the mock-up FMC, taking into account the states of the workstations at any point in time. After the completion of offline testing and the on-site deployment of the application, an additional procedure was put in place to facilitate the integration of workstations. In particular, after the initial commissioning of a workstation, which ensures that the system is functional in stand-alone mode and compliant with the safety and operational requirements of the FMC, the integration phase begins.
Initially, the RPC methods exposed by the workstation for its automatic operation should be individually tested by an external application. The subsequent test should involve the execution of demo process plans, involving only the workstation and the transport system, to uncover any unforeseen errors under real operating conditions. The final test entails the execution of a process plan involving all the workstations of the FMC. Upon the successful completion of these three tests, the workstation can be considered an integral part of the FMC.
6. Conclusions and Future Work
FMC setup variations render the development of CCS applications a non-standard, complicated procedure. Using the proposed architecture as a guide for implementing the critical parts of the application has significantly simplified development. The generic interfaces for the workstations and the handling system have made it possible to address, in a uniform manner, systems characterized by very different principles of operation and functionality.
The structure of the controller, consisting of distinct Scheduler, Supervisor, and Machine Operator modules, enabled testing various implementation strategies for each one of them. This is especially important for the Scheduler, as it is a key element in the overall performance of the manufacturing cell.
The client modules effectively create an abstraction layer that greatly promotes interoperability and flexibility. The underlying communication has been implemented with a variety of technologies and protocols, mainly gRPC, RESTful APIs, and TCP/IP services.
Offline testing using mock workstation server applications proved to be of major importance. It significantly accelerated the integration and testing of the various workstations upon their commissioning and caught likely software-related failures at an early stage. Nonetheless, extensive testing under realistic conditions remained essential, as it is impossible to anticipate all potential failure causes in complex systems composed of heterogeneous equipment.
The proposed cell controller architecture offers a template for FMC process control and monitoring applications. However, each implementation should consider the particular characteristics of the use case and could involve additional effort and modifications. Furthermore, considerable technical expertise will be required for efficient implementation as well as maintaining high integration flexibility standards. Finally, there are some limitations regarding the use of open-source technologies in terms of long-term support and code maintenance.
The next step in further development is to exploit a microservices-style architecture so that the main control process and the workstation clients comprise distinct services. This decoupling will enable workstation services to communicate with other services, freeing them from reliance on the main control process. Additionally, services for system diagnostics or predictive maintenance can be developed and integrated in a modular fashion. The ultimate objective is to provide a structured solution for building highly customizable, performant, and scalable applications to support FMC operations.