*4.1. Communication among Agents and Users*

The communication model implemented in the developed system is a simplified version of SEASALT [16]. SEASALT is a multi-agent architecture made of three types of agents that communicate with each other to provide the user with solutions (Figure 7):


**Figure 7.** Communication model.

The system has to be maintained by one or more knowledge engineers that are responsible for two main tasks:


#### *4.2. Communication with the PLM*

The GUI Agent is responsible for communicating with Aras Innovator, the PLM system implemented in the case study company. The communication language of Aras Innovator is AML (Aras Markup Language). AML is an XML (Extensible Markup Language) dialect that drives the Aras Innovator server. Clients submit AML documents to the Aras Innovator server via HTTP (Hypertext Transfer Protocol), and the server returns an AML document containing the requested information.

The central building block of Aras Innovator is the ItemType. An ItemType is a business object that represents the template or definition for the items created from it. In OOP (Object-Oriented Programming) terms, an ItemType is similar to a class definition, and the items created from it are the class instances or objects. Almost everything in Aras Innovator is defined through an ItemType. For instance, an ItemType defines the properties, forms, and views available for an item, its lifecycle, the workflows associated with it, its permissions, its relationships, the server and client methods and events to run on it, etc. An ItemType can hold as little information as a name, or as much information as required for the most complex business objects.

The first step in any retrieval sequence is to get the ID (identification number) of the item that contains the requested information. Figure 8 shows an example of a message requesting Aras Innovator to search for an item of type Part (i.e., any mechanical design) and to send back all the data related to it, together with the message returned by Aras Innovator.

**Figure 8.** Example of communication with Aras Innovator.

A very important point in the retrieval process from the PLM system is that the extracted context information should be that which was released and effective when the problem happened. This date is given by the parameter "When" in the user query (see Section 3). For example, the user may investigate a quality claim about a product that was created several months ago. In this case, the current version of the context information in the PLM repository might not be relevant, because between the current date and the date when the problem happened several items could have changed (e.g., new suppliers, changes in the design, or improvements in the processes). By default, Aras Innovator returns the version of an item released at the time of the request. Nevertheless, if this release date is later than the date of the problem, there are functions that return all the previously released versions of an item.
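The version-selection logic described above can be sketched as follows (a minimal illustration of the reasoning, not the actual Aras Innovator API; the version records and field names are hypothetical):

```python
from datetime import date

def effective_version(versions, problem_date):
    """Return the item version that was released and effective on the
    problem date: the latest release on or before that date."""
    released = [v for v in versions if v["released_on"] <= problem_date]
    return max(released, key=lambda v: v["released_on"]) if released else None

# Hypothetical version history of a Part item.
history = [
    {"generation": 1, "released_on": date(2017, 1, 10)},
    {"generation": 2, "released_on": date(2017, 9, 1)},
    {"generation": 3, "released_on": date(2018, 3, 5)},
]

# A problem reported on 2017-12-01 must use generation 2,
# not the currently released generation 3.
print(effective_version(history, date(2017, 12, 1))["generation"])  # 2
```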

Once the ID of an item is known, a message can be sent to the PLM system to request all its relationships. In the next example, all the manufacturing methods related to an item are requested.

```
<AML>
  <Item type="Part" id="3F3829DC968D421DB58206C0CAC01F2F" action="get">
    <Relationships>
      <Item type="Manufacturing Method List_Part" action="get"></Item>
    </Relationships>
  </Item>
</AML>
```
Following the relationships among elements defined in the PLM information model (Section 3.2), and with the AML queries presented above, it is possible to extract all the context information of a problem from the limited amount of information introduced by the user in the query (i.e., 'What?', 'When?', 'How often?', 'Where?', 'Who?', and 'Why?').
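As an illustration, a client could assemble such AML documents programmatically before submitting them over HTTP. The sketch below uses Python's standard-library XML tools to rebuild the relationship query shown above; it only constructs the document and does not contact a server:

```python
import xml.etree.ElementTree as ET

def build_relationship_query(item_id: str, relationship_type: str) -> str:
    """Build an AML 'get' request for the relationships of a known item."""
    aml = ET.Element("AML")
    item = ET.SubElement(
        aml, "Item", {"type": "Part", "id": item_id, "action": "get"})
    relationships = ET.SubElement(item, "Relationships")
    ET.SubElement(relationships, "Item",
                  {"type": relationship_type, "action": "get"})
    return ET.tostring(aml, encoding="unicode")

query = build_relationship_query("3F3829DC968D421DB58206C0CAC01F2F",
                                 "Manufacturing Method List_Part")
print(query)
```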

#### *4.3. Communication with the CBR System*

Section 4.1 described how the Topic Agent hosts the CBR system. For this case study, the open source system myCBR was selected [18]. Since both myCBR and the multi-agent architecture (on which the MPS system is built) are developed in Java, the two systems communicate directly through Java method calls. The classes of both myCBR and the agent are instantiated in the same source code, and they can exchange information through their public functions.

The Topic Agent goes through the following communication steps to retrieve from its case base a set of similar cases related to the user query:


Having presented how the CBR system receives a query and sends proposals, the next section explains in detail how the CBR system manages the received data and how it calculates similarities in order to propose similar cases to the user.

#### **5. CBR System Functioning Description**

The CBR systems, which are hosted in each of the topic agents of the network, are responsible for seeking similar cases that could help the user to solve the current problem. For this function, they search in their own case base. The search is based on the information contained in the user query and the PLM context data received from the Coordination Agent. The user and PLM parameters are shown in the upper right corner of Figure 9. These parameters are also presented in Figure 10 in the format in which they are received by the topic agents. From the user inputs (numbers 1 to 12), some of the parameters (numbers 1, 2, 3, 4, and 6) are used in the request for context data in the PLM system, as was explained in Section 3.2. The output of the PLM repository (numbers 13 to n, where n varies depending on the context of each case), together with some of the user query parameters (numbers 2, 3, 4, 5, 8, 9, 10, and 12), is used to create the query to be sent to the CBR system. This CBR query is shown in the upper left corner of Figure 9. Finally, there are some parameters in the user query (numbers 1, 7, and 11), shown in red in Figure 9, which are not used for the similarity calculation. The problem date (number 1) is used in the PLM system to find the context information released at the time that the problem happened. The parameters "What" and "Why" (numbers 7 and 11) are used by the knowledge engineer to better understand the solved problem, and to decide whether or not the case will be included in the case base as a new case (see Section 4.1).

**Figure 9.** Information match between application user interface and the knowledge model.

The parameters delivered to the CBR system must match the internal information model of myCBR [5], which is built on the general knowledge model of the system and used to describe the reality under analysis. myCBR works with projects, which are the basic containers of classes, attributes, Similarity Measure Functions (SMFs), class instances, and case bases. Each project can contain one or more classes, and each class contains attributes of different types. Each numerical attribute has to be configured with its range of valid values, and each taxonomy attribute has to be configured with its list of valid values. Finally, each attribute is assigned an SMF.
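The project structure just described can be mirrored with a few plain data containers (a simplified sketch in Python, not the actual myCBR Java classes; the names and example values are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class NumericAttribute:
    """Numerical attribute configured with its range of valid values."""
    name: str
    minimum: float
    maximum: float

@dataclass
class TaxonomyAttribute:
    """Taxonomy attribute configured with its list of valid values."""
    name: str
    allowed_values: list

@dataclass
class ConceptClass:
    """A class of the project, holding attributes of different types."""
    name: str
    attributes: list = field(default_factory=list)

# Illustrative configuration matching examples used later in the text.
experience = NumericAttribute("experience", 0, 45)
function = TaxonomyAttribute("function", ["Function", "Modify", "Separate"])
problem = ConceptClass("ProductionProblem", [experience, function])
print(len(problem.attributes))  # 2
```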

SMFs are the functions, associated with each attribute, that myCBR uses to calculate the final result, which evaluates how similar the received query is to each of the cases stored in the case base. Each attribute brings its own similarity contribution to the global similarity result, which is a weighted average of all of them. Figure 11 shows the weights selected for the case study of this work and the global formula applied in the project. Detailed information can be found in Camarillo et al. [5].
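The global result can thus be sketched as a weighted average of the individual contributions (illustrative Python; the attribute names, weights, and local similarities below are made up and do not reproduce Figure 11):

```python
def global_similarity(local_sims: dict, weights: dict) -> float:
    """Weighted average of the per-attribute similarity contributions."""
    total_weight = sum(weights.values())
    weighted = sum(weights[attr] * sim for attr, sim in local_sims.items())
    return weighted / total_weight

# Hypothetical local similarities and weights for three attributes.
sims = {"experience": 0.8, "function": 0.5, "pressure": 1.0}
w = {"experience": 1.0, "function": 2.0, "pressure": 1.0}
print(round(global_similarity(sims, w), 2))  # (0.8 + 2*0.5 + 1.0) / 4 = 0.7
```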

*Materials* **2018**, *11*, 1469

**Figure 10.** Example of query.


**Figure 11.** Weights for global similarity calculation.

As previously mentioned, myCBR works with numerical and taxonomy attributes. Numerical attributes (e.g., numbers 2, 3, 4, 5, 14, or 16 in Figure 10) contain any value between a defined minimum and maximum. Taxonomy attributes (e.g., numbers 8, 9, 12, 13, or 20 in Figure 10) contain values from a specific list of valid values. The Similarity Measure Function (SMF) of a numerical attribute calculates the relative distance between the query value and the case value of that attribute. For example, for attribute number 16, "experience", if the query has the value 9 years, the case under analysis has the value 23 years, and the defined range for this attribute is from 0 to 45 years, the similarity contribution of this attribute to the global value would be:

sim(q,c) = 1 − |9 − 23|/|45 − 0| = 1 − 0.31 = 0.69.
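This calculation can be written as a one-line function (a minimal Python sketch of the linear distance-based SMF, not the myCBR implementation itself):

```python
def numeric_similarity(query: float, case: float,
                       lo: float, hi: float) -> float:
    """Linear distance-based similarity for a numerical attribute."""
    return 1.0 - abs(query - case) / (hi - lo)

# The "experience" example: query 9 years, case 23 years, range 0-45 years.
print(round(numeric_similarity(9, 23, 0, 45), 2))  # 0.69
```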

The SMF of a taxonomy attribute is calculated differently. The nodes of the taxonomy are associated with position factors, which are set based on experience and a trial-and-error process until the results given by the system are satisfactory. The similarity value is based on the position factor of the closest common node of the query value and the case value. Figure 12 shows the example of the attribute "Function", numbered 9 in the figures above. In this case, the upper node (i.e., "Function") was set with a position factor of 0, the next level with a value of 0.5, the following one with a value of 0.75, and the lowest nodes with a value of 1. To calculate the similarity, the SMF searches for the closest common node between query and case, which in this example is "Modify". This node has a position factor of 0.5, and therefore the similarity result of the example is 0.5, as shown in the figure.

**Figure 12.** Example of individual similarity for taxonomy attributes.
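The lookup of the closest common node can be sketched as follows (illustrative Python; the taxonomy below is hypothetical except for the nodes "Function" and "Modify" and the position factors quoted in the text):

```python
# Hypothetical taxonomy: child -> parent, plus a position factor per node.
PARENT = {"Modify": "Function", "Separate": "Modify", "Join": "Modify",
          "Cut": "Separate", "Weld": "Join"}
POSITION = {"Function": 0.0, "Modify": 0.5, "Separate": 0.75,
            "Join": 0.75, "Cut": 1.0, "Weld": 1.0}

def ancestors(node):
    """Path from a node up to the taxonomy root, including the node itself."""
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path

def taxonomy_similarity(query: str, case: str) -> float:
    """Similarity = position factor of the closest common node."""
    query_path = set(ancestors(query))
    for node in ancestors(case):  # walk upward from the case value
        if node in query_path:
            return POSITION[node]
    return 0.0

print(taxonomy_similarity("Cut", "Weld"))  # closest common node "Modify" -> 0.5
```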

A last relevant point related to similarity calculation is the range of supported manufacturing processes. The developed system was designed to work with any kind of manufacturing process. Therefore, among the context parameters extracted from the PLM repository, a very wide range of parameter types should be expected. For some processes a pressure parameter might be relevant, while for others the relevant parameter could be a temperature. In the presented multi-agent architecture, each topic agent is specialized in the process where it is located. Therefore, when a topic agent receives a query, it looks only for the parameters that it knows; all context parameters unknown to the topic agent are ignored (i.e., they make no contribution in terms of similarity). It can also happen that not all of the parameters associated with its process are included in the query. In that case, the parameters that are not found are assigned a null value, reducing the similarity result. In this way, cases from the same process will always have the highest similarity values, followed by cases from similar processes (i.e., processes with similar context parameters). Cases from very different processes will have the lowest similarity values.
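This matching behavior of a topic agent can be sketched as follows (a simplified Python illustration; the attribute names and local similarity functions are hypothetical placeholders):

```python
def match_query(query: dict, known_attrs: dict, weights: dict) -> float:
    """Similarity handling of a topic agent:
    - query parameters unknown to the agent are ignored;
    - known parameters missing from the query count as null (similarity 0)."""
    total_weight = sum(weights[attr] for attr in known_attrs)
    score = 0.0
    for attr, local_sim in known_attrs.items():
        if attr in query:
            score += weights[attr] * local_sim(query[attr])
        # else: null value -> contributes 0, lowering the global similarity
    return score / total_weight

# Agent specialized in a process whose known parameters are pressure and
# speed (placeholder local similarity functions).
known = {"pressure": lambda v: 1.0, "speed": lambda v: 0.8}
weights = {"pressure": 1.0, "speed": 1.0}

# Query from a different process: "viscosity" is unknown and ignored,
# "speed" is missing and counts as null.
print(match_query({"pressure": 5.0, "viscosity": 2.0}, known, weights))  # 0.5
```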

Once the global similarity is calculated, the system displays the corresponding results. The bottom part of Figure 9 shows how the output in the system looks. In this case, the similarity value is 77%, and the proposal for the containment, corrective, and preventive actions is displayed to the user through the GUI.

The next section presents the details of the implementation of the developed system prototype in a case study company from the battery manufacturing sector.

#### **6. Case Study in Battery Manufacturing**

The developed system prototype was implemented as a case study in the company Exide Technologies, a global provider of stored electrical energy solutions (i.e., batteries and associated equipment and services) for transportation and industrial markets. Exide Technologies has several production plants running similar processes in Europe and the USA. For this reason, Exide could benefit from this research work.

For this case study, two plants of the company were selected: one located in Germany and one in Spain; both produce similar products with similar processes. They produce power batteries for the industrial market (i.e., forklifts and similar applications), using the process known as wet filling to produce the positive plates. Gravity casting, with which the negative grids are produced, was also selected to test the performance of the system with a second manufacturing process.

The implementation of the case study can be divided into three phases: the definition of key PPR (Product-Process-Resource) attributes to ensure proper similarity calculation within the CBR systems, the collection of cases for the CBR case bases, and the test of the system on the shop floor.

As presented in Section 5, the similarity calculation is done based on a set of attributes that are selected out of the whole range of attributes that define the PPR reality of the case study. The selection criteria, and their individual weight in the similarity calculation, are related to the possible variation range of each attribute. For example, in a defined PPR scenario where the height of the product never changes across all existing part numbers, height would be a very bad candidate for similarity calculation, because it is not going to help to distinguish among cases. On the contrary, if the key differentiating characteristic of a range of products is the color, and there is a variety of colors, then color would be an excellent candidate for similarity calculation. To identify these key attributes for similarity, staff from production and engineering in the German plant of Exide were interviewed with a focus on the wet-filling process. The steps followed in these interviews were the identification of key PPR elements, the identification of the relationships among them, and finally the identification of their relevant attributes and their corresponding variability range. This information was used in the customization of both the PLM and CBR systems.

The next step in the case study was to collect enough cases to fill the CBR case bases of the topic agents. This activity focused only on the wet-filling production area of the German plant, leaving the Spanish plant for validation purposes. Four sources were used: the PFMEA of the wet-filling process, existing 8D reports from quality claims reported in the past, cases taken directly from the field in the German plant of the case study company, and some cases from other manufacturing processes and companies, which also helped to test the capability and flexibility of the developed prototype. For the collection of cases directly on the shop floor, an individual without any kind of industrial background was engaged for two weeks, which helped to demonstrate that the proposed representation method for production problems is intuitive enough to be valid even for very unskilled operators. Table 1 shows a summary of all collected cases. In the table, the term "level" refers to any disruption that can be identified in production, while the term "case" refers to a set of levels describing the whole chain of problems, from a visible one down to the final root cause. The following example illustrates these concepts: a pump provides less pressure than defined, because the piston inside is worn out, because the lubrication oil contains contamination particles, because the filter is broken. This example represents a single case with four levels.
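The level/case terminology can be captured in a minimal data structure (an illustrative Python sketch, not the system's internal representation):

```python
from dataclasses import dataclass, field

@dataclass
class Level:
    """One disruption in the causal chain of a problem."""
    description: str

@dataclass
class Case:
    """A chain of levels from the visible problem down to the root cause."""
    levels: list = field(default_factory=list)

# The pump example from the text: one case with four levels.
pump_case = Case(levels=[
    Level("Pump provides less pressure than defined"),
    Level("Piston inside the pump is worn out"),
    Level("Lubrication oil contains contamination particles"),
    Level("Oil filter is broken"),
])
print(len(pump_case.levels))  # 4
```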



For the last phase in the case study, several computers were installed in the selected production lines with access to the multi-agent architecture and to the PLM system. From these computers, the operators were able to access the problem-solving system to introduce their queries and receive recommendations from the system.

The group leaders of both plants were trained in the prototype system, and together with other operators they used it to solve 10 problems occurring during their shifts. Since the cases in the prototype were expressed in English, the support of a translator was needed. The queries introduced by the users, the applicability of the proposals returned by the system, and the actual solutions of the problems were all recorded for final analysis. The system was successfully tested, demonstrating the feasibility of the proposed approach [5].

#### **7. Conclusions**

This work has presented the details of the information and communication models of a developed system prototype to support a Manufacturing Problem Solving (MPS) process. The main contribution of the proposed system is that it integrates an MPS method with CBR on an agent-based distributed architecture and with a PLM system as a manufacturing context data repository. This novel approach had not been proposed in the reviewed literature.

The novelty from the modeling perspective resides in the information model created and implemented in a PLM system to facilitate the storage of, and search for, manufacturing context information that is used to calculate the similarity among production problems on the shop floor. The relationships among items in the system, and the variables defining the kind of explicit information related to each item, have been designed to facilitate the search process associated with an MPS process, where the focus is placed on collecting as much contextual information as possible concerning the problem under investigation.

The approach is also an example of a low-investment proposal that fits within the conceptual frame of the Industry 4.0 technological vision. Even though this case study is far from all the advanced features envisaged today in the Smart Factory concept, it can be understood as a small first step to motivate companies to start moving toward Industry 4.0.

**Author Contributions:** Conceptualization, A.C., J.R., and K.-D.A.; Methodology, A.C.; Software, A.C.; Validation, A.C.; Supervision, J.R. and K.-D.A.; Writing (Original Draft Preparation), A.C. and J.R.; Writing (Review & Editing), J.R. and K.-D.A.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.
