2.3.4. Metacognitive

Metacognitive knowledge aims to go beyond the current understanding; the knowledge system itself is in charge of this metacognitive process. This level mainly comprises the characterization, classification, reasoning, and creation of knowledge in order to enhance the system and reach process intelligence.

Finally, Figure 7 represents the interaction among the three system models. The process system model drives the general maturing process along the diagonal, reaching from process definition to process intelligence. Next, the X-axis represents how the data system model supports the process and matures, from data definition to data dynamics. Last, the Y-axis shows the development of the knowledge system model, which interacts with the other two system models and enables an intelligent technological implementation, reaching from conceptual knowledge to metacognitive knowledge.

**Figure 7.** Maturity model system integration: Wide Intelligence Management Architecture's process, data, and knowledge.

### **3. Results**

This section presents the application of the methodology previously described in a process manufacturing case study.

#### *3.1. Case Study*

This case study aims to illustrate the WIMa framework's application for the lifecycle assessment (LCA) of an acrylic fiber production plant. The objective is to demonstrate how the three models described in Section 2 work together to reach the desired technology application (in this case, LCA). Life cycle assessment requires a correctly defined process and consistent data to provide sensible results. Furthermore, this is a meaningful example to understand the framework at the process definition level (Section 2.1.1).

The acrylic fiber polymerization process considered in this work was initially presented in [21]. Acrylic fiber production takes place over 14 stages in a batch production plant, represented in Figure 8, and involves different material and energy resources. Two alternative production processes are assessed: acrylic fiber A uses acetone as the solvent in the polymerization, while acrylic fiber B uses benzene. This case study describes the process and data definition required to perform a life cycle assessment according to the WIMa procedure; the actual evaluation is beyond the scope of this contribution. For further details regarding the life cycle assessment, please refer to the work in [21].

**Figure 8.** Flowsheet for acrylic fibers' production process (it contains 14 recipe elements, divided into eight recipe unit procedures and six recipe operations).

First of all, the requirements and objectives of the case study must be defined. Thus, suppose that "Polymer A.C." wants to develop a high-level decision model agent based on optimization approaches. In addition, they want to standardize their processes to maintain relations with their industrial partners. These requirements entail performing the levels presented in Table 1:


Next, each activity is presented in detail, structured according to the three main models: the process model, the data model, and the knowledge model.

#### 3.1.1. Process Model of Polymer Plant

*L1. Process definition*: Process definition tackles the formalization of the polymerization process itself according to the technical requirements. The process consists of a complete polymerization plant that produces acrylic fibers using acetone as the solvent. The existing documentation comprises the plant flowsheet (Figure 8) and the process recipe, which are the current formalization of the plant process activities. Finally, a characterization of the organization is performed, and the results, presented in Tables 2–4, summarize the features related to the general, tactical, and strategic levels of the organization in which the polymerization plant is installed. Overall, this information provides clear boundaries for the process and includes all the material and energy flows required to perform a life cycle assessment. Therefore, the process considered meets the process definition model requirements.

**Table 2.** General features of the organization for system characterization.


**Table 3.** Tactical features of the organization for system characterization.


**Table 4.** Strategic features of the organization for system characterization.


*L3. Process standardization*: Standardization requires following the ANSI/ISA 88 standard. Thus, a semantic model based on the ANSI/ISA 88 standard, the so-called Batch Process Ontology (BaPrOn), is used for the instantiation task, which allows the process to be standardized in a faster and more accurate manner. As a result, the formulas of the master process recipes were extracted, as shown in Tables 5 and 6. The production plant considers four stages in batch production mode, eight recipe unit procedures, and six recipe operations, as well as twenty-seven different resources (considering material and energy flows). The master recipe's instantiation results in a set of recipe unit procedures and recipe operations, along with their formula and input, output, and other process parameters. Thus, the environmental performance metrics parameters are included. Overall, the instantiation results in 934 instances concerning 295 classes, 257 object properties, and 33 data properties. The description logic expressivity of the ontology is SHIN(D), where S refers to attributive language with complement of any concept allowed, not just atomic concepts (ALC); H refers to role hierarchy (subproperties); I refers to inverse properties; N refers to cardinality restrictions (a special case of counting quantification); and (D) refers to the use of data-type properties, data values, or data types. As an example of class instantiation, the RawMaterial class has Input1\_1 (Acrylonitrile), Input1\_2 (MethylMethacrilate), Input1\_3 (VinilChloride), and Input1\_4 (Solvent-Acetone) as instances.
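The instantiation of ontology classes described above can be sketched in a few lines. The following snippet is a minimal illustration only, assuming a plain triple representation; an actual implementation would use an OWL library against BaPrOn itself. The instance and class names follow the RawMaterial example in the text, while the predicate names are hypothetical:

```python
# Minimal sketch of class instantiation as (subject, predicate, object) triples.
# The "hasName" predicate is a hypothetical label property, not from BaPrOn.
triples = [
    ("Input1_1", "rdf:type", "RawMaterial"),
    ("Input1_1", "hasName", "Acrylonitrile"),
    ("Input1_2", "rdf:type", "RawMaterial"),
    ("Input1_2", "hasName", "MethylMethacrilate"),
    ("Input1_3", "rdf:type", "RawMaterial"),
    ("Input1_3", "hasName", "VinilChloride"),
    ("Input1_4", "rdf:type", "RawMaterial"),
    ("Input1_4", "hasName", "Solvent-Acetone"),
]

def instances_of(cls, kb):
    """Return all instances of a class found in the triple store."""
    return [s for (s, p, o) in kb if p == "rdf:type" and o == cls]

raw_materials = instances_of("RawMaterial", triples)
```

Queries of this kind are what make the standardized recipe machine-readable for the upper levels of the architecture.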



**Table 6.** The formula for the master recipe of acrylic fiber A production process 2/2.


*L4. Process optimization*: This case study tackles the optimization of the multistage batch plant scheduling problem under sequence-dependent changeovers presented by Capon et al. (2011, 2012). The problem can be defined as follows: given a set of process operations planning data, including (i) the time horizon, (ii) the set of product recipes, (iii) the equipment technologies for the processing stages, (iv) product demands, (v) changeover methods, (vi) economic data related to costs and prices, and (vii) environmental data related to raw material, equipment, and product manufacturing environmental interventions, all of them provided by the data model. Four objective functions are relevant for decision-making: productivity (P), total environmental impact (TEI), makespan (M), and total profit (TP). The problem is modeled using an immediate-precedence mathematical formulation, managing the possible use of different product changeover cleaning methods; multiple alternative pieces of equipment at each stage; limited storage policies; and product batching, allocation, and timing constraints. Such a problem representation is suitable for applying any of the three different optimization strategies considered in this case study. Specifically, we solve the multi-objective problem using mathematical programming with a normalized constraint method (MP), a genetic algorithm (GA), and a hybrid optimization approach (HA). Each solution method's suitability depends on the combination of problem features and objective function, and is further explored in level 7 related to process intelligence. At this level, the solution techniques are considered independently, and the knowledge model supports the implementation of the optimization by providing adequate data from the data model.
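The core difficulty of the problem, sequence-dependent changeovers, can be illustrated with a deliberately tiny sketch. The batch names, processing times, and changeover times below are hypothetical, and exhaustive enumeration stands in for the MP formulation (GA and HA would search the same permutation space heuristically):

```python
from itertools import permutations

# Hypothetical single-unit instance: processing time per batch and
# sequence-dependent changeover time between consecutive batches.
proc = {"A1": 4.0, "A2": 3.0, "B1": 5.0}
change = {("A1", "A2"): 0.5, ("A2", "A1"): 0.5,
          ("A1", "B1"): 2.0, ("B1", "A1"): 2.0,
          ("A2", "B1"): 2.0, ("B1", "A2"): 2.0}

def makespan(seq):
    """Total completion time: processing plus changeovers along the sequence."""
    total = sum(proc[b] for b in seq)
    total += sum(change[(a, b)] for a, b in zip(seq, seq[1:]))
    return total

# Exhaustive search over batch orderings (only viable for tiny instances).
best = min(permutations(proc), key=makespan)
```

Grouping same-product batches avoids the expensive product changeover, which is exactly the structure the immediate-precedence formulation exploits at realistic scale.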

*L7. Process intelligence*: This level comprises the development of intelligent agents for the scheduling problem. Indeed, different problem representations can be used for optimization purposes, and which one is the most suitable depends on the problem features. That is precisely the function provided by the process intelligence framework using agents. The agents perform different functions and are integrated, allowing communication among them. The agents' functions include communication, search, classification, and solution. A common vocabulary is necessary to achieve communication, so all the agents rely on the ontologies described in the knowledge model.

The solution mechanism consists of the following steps: (i) problem definition, (ii) modeling process to reach a problem model, (iii) model analysis, (iv) model solution, and (v) problem implementation. Based on the answers in (iv), we can make inferences and reach decisions about the problem (v). The assessment of the decisions' quality feeds back into the intelligent system to enable learning.
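The five-step mechanism with its feedback loop can be sketched as follows. This is an illustrative skeleton only: the function bodies, thresholds, and method labels are hypothetical placeholders, not the framework's actual implementation:

```python
# Hypothetical sketch of the five-step solution mechanism with learning feedback.
def solve_and_learn(problem, knowledge_base):
    statement = {"problem": problem}                         # (i) problem definition
    model = {"objective": problem["objective"],              # (ii) modeling process
             "size": problem["batches"]}
    analysis = "large" if model["size"] > 20 else "small"    # (iii) model analysis
    solution = {"method": "GA" if analysis == "large" else "MP",  # (iv) model solution
                "value": None}
    knowledge_base.append((model, solution))                 # (v) implementation and
    return solution                                          #     feedback for learning

kb = []  # stored solutions feed future reasoning
result = solve_and_learn({"objective": "makespan", "batches": 30}, kb)
```

The key design point is step (v): every solved instance enriches the knowledge base that later classification decisions draw on.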

Next, the communication agent prepares the classification agent for analyzing the problem using a knowledge-driven classification procedure. Then, a solution strategy is proposed based on a similitude measure that compares the problem instance with (i) existing problems tackled in the past and stored in the database and (ii) existing problem approaches from the state of the art. The problem instances solved in the original papers are the basis for the database of this problem. A total of 415 problem solutions are included, with different problem descriptions and objective function values. As a result of the similitude measure, a set of ranked solution approaches is proposed to the decision-maker. Finally, a solution agent uses the solution algorithms to reach the optimal solution for the problem instance. The solution agent also sends the problem solutions to the decision-maker and stores them in the database for future reasoning.
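One simple way to realize such a similitude measure is a distance in a feature space of problem descriptors. The sketch below assumes a two-feature description (number of batches, number of units) and a tiny hypothetical database; the real database holds 415 richer records:

```python
from math import dist

# Hypothetical database records: (feature vector, best-performing method).
past_problems = [
    ((5, 2), "MP"), ((8, 3), "MP"),
    ((40, 4), "GA"), ((55, 5), "GA"),
    ((30, 3), "HA"),
]

def rank_methods(features):
    """Rank solution approaches by Euclidean similarity to stored instances."""
    scored = sorted(past_problems, key=lambda rec: dist(features, rec[0]))
    seen, ranking = set(), []
    for _, method in scored:          # keep first (closest) occurrence per method
        if method not in seen:
            seen.add(method)
            ranking.append(method)
    return ranking

ranking = rank_methods((6, 2))        # small instance: MP-like problems are nearest
```

The ranked list, not a single winner, is what gets presented to the decision-maker.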

The framework was tested with new problem instances, and the results are shown in Table 7. The first column describes the problem size (number of batches for each problem). The second column specifies the objective function. The third column presents the selected solution implementation method, while the fourth column includes the objective function value. Finally, the fifth column reports the distance to the best solution found. For small problem instances, the rigorous mathematical programming approach is selected, whereas problem instances with many variables are solved using a genetic algorithm. The objective function also plays an essential role in the selection of the solution strategy; indeed, for productivity maximization, the hybrid approach is selected. In most cases, the solution proposed by the agent-based system is within 5% of the optimal solution. Overall, this framework stands for a systematic approach to scheduling model selection and solution implementation, thus supporting high-level decisions by engineers, who do not need a thorough understanding of advanced optimization techniques.

The different agents are programmed in Jython, which combines the Python programming language with Java APIs for communicating with the ontological models.


**Table 7.** Results for different problem instances from the agent-based framework at Polymer plant.

#### 3.1.2. Knowledge Model of the Polymer Plant

*L1. Conceptual knowledge*: A knowledge system harmonizes and manages sets of valuable information, making them accessible for specific purposes. In this case study, knowledge conceptualization uses the Enterprise Ontology Project (EOP) [22]. EOP is an ontology containing three active ontologies: the batch process ontology, the environmental ontology, and the enterprise ontology.

First, the batch process ontology (BaPrOn) tackles features such as the physical, procedural, recipe, and process models based on the ANSI/ISA 88 standard. It focuses on the production operation management of batch processes. Next, the environmental ontology (EVO) considers life cycle assessment and environmental impact category features, which allows tracing and calculating the environmental impact produced by products or process activities. Finally, the enterprise ontology project (EOP) is based on the ANSI/ISA 95 standard. It considers the integration of enterprise activities, such as quality, maintenance, and inventory management. Additionally, EOP also considers financial features to tackle supply chain management activities.

Besides, the EOP model takes into account and models the following key knowledge:


Finally, Figure 9 shows the first classes found in the taxonomy of the enterprise ontology project.

*L4. Procedural knowledge*: Mathematical programming has been chosen as the strategy for making optimization knowledge explicit and available. Thus, this case study makes use of the mathematical modeling ontology (MMO) [23,24] and the operations research ontology (ORO) [25].

On the one hand, MMO aims to represent knowledge of the mathematical domain based on mathematical structures comprising elements, terms, and operations. The mathematical term is the atomic part of a mathematical expression. Mathematical elements and expressions are related through mathematical operations, which can be of logic or algebraic types. An element or expression can define specific conceptual meanings, such as a processing time, the opening value of a valve, the effort calculation equation, etc. In the same manner, an element or expression has a behavior that is related to variables, constant values, etc. Finally, MMO allows the definition of object-oriented mathematical modeling, relating mathematical elements and expressions with concepts from other semantic representations. In this case study, MMO is integrated with EOP, which allows linking mathematical models and equations with instances of the acrylic fiber process. Figure 10 shows the first classes found in the taxonomy of the mathematical modeling ontology.

On the other hand, ORO aims to capture the knowledge of the operations research area, a branch of mathematics. This ontology structures the mathematical expressions fed by MMO in the form of mathematical programming, which allows a formal study and solution of complex problems for the decision-making activity. As a result, an enriched semantic structure is obtained. It considers the main parts of mathematical programming, such as the objective function and a set of constraints, both in the form of mathematical equations. In the same manner, logic and algebraic operations are supported by MMO. Figure 11 shows the first classes found in the taxonomy of the operations research ontology.
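The object-oriented modeling idea behind MMO and ORO can be sketched with plain classes. The class and attribute names below are hypothetical simplifications, not the actual MMO/ORO vocabularies; the point is that terms, expressions, and the program are objects, and terms carry links to process concepts (here, assumed EOP class names):

```python
# Hypothetical sketch of MMO-style terms/expressions and an ORO-style program.
class Term:
    def __init__(self, name, meaning=None):
        self.name = name          # mathematical symbol, e.g., "Ms"
        self.meaning = meaning    # link to a process concept, e.g., an EOP class

class Expression:
    def __init__(self, operation, operands):
        self.operation = operation  # algebraic or logic operation
        self.operands = operands    # Terms or nested Expressions

class MathematicalProgram:
    def __init__(self, objective, constraints):
        self.objective = objective      # objective function (an Expression)
        self.constraints = constraints  # list of constraint Expressions

makespan = Term("Ms", meaning="EOP:Makespan")
end_time = Term("Tf_i", meaning="EOP:BatchEndTime")
program = MathematicalProgram(
    objective=Expression("minimize", [makespan]),
    constraints=[Expression(">=", [makespan, end_time])],  # Ms >= Tf_i for all i
)
```

Because each `Term` keeps a `meaning` link, a reasoner can navigate from an equation symbol to the process instance it quantifies, which is what the MMO/EOP integration provides.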

**Figure 9.** First taxonomical representation of the Enterprise Ontology Project classes.

**Figure 10.** First taxonomical layer representation of the Mathematical Modeling Ontology classes.

*L7. Metacognitive knowledge*: This level aims to create an autonomous problem definition agent to construct a semantically enriched problem statement [26]. The agent works in a semantic environment where machines can access explicit knowledge codified in Python and Jython. The strategy comprises the following steps: (1) semantic definition of the system; (2) recognition of the current situation; (3) setting of the key process features and variables; (4) setting of the confidence intervals for the monitoring task; (5) searching for relations among key features; (6) definition of the problem statement.

First, the system's semantic definition refers to Tables 2–4 presented in Section 3.1.1. Based on the system instantiation, the current process is introduced semantically, and the indicators, related key features, and engineering metrics are set, such as resource availability, energy consumption, unfulfilled demand, and desired cleaning overtimes. Table 8 shows a brief example of the resulting system setup for monitoring. This table performs a SWOT analysis, classifying each item as a strength (S), weakness (W), opportunity (O), or threat (T). The following two rows show the indicators and related features coming from classes representing the process domain concepts. The next row shows the engineering metrics associated with the indicators. Finally, the last two rows refer to the upper and lower bound values defined for evaluating the current performance.
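The bound-checking part of the monitoring setup can be sketched directly. The indicator names, units, and interval values below are illustrative assumptions, not the plant's actual confidence intervals:

```python
# Hypothetical monitoring sketch: each indicator has lower/upper bounds
# (confidence interval); values outside them flag a weakness or threat.
bounds = {
    "energy_consumption": (50.0, 80.0),  # kWh per batch (illustrative)
    "cleaning_overtime": (0.0, 2.0),     # hours per batch (illustrative)
}

def evaluate(indicator, value):
    """Compare a current value against its confidence interval."""
    lower, upper = bounds[indicator]
    return "within bounds" if lower <= value <= upper else "out of bounds"

status = evaluate("energy_consumption", 92.0)  # exceeds the upper bound
```

Out-of-bounds indicators are what trigger the agent's search for related key features in step (5).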

**Table 8.** The semantic search of key features and confidence intervals, where SWOT refers to a strength (S), weakness (W), opportunity (O), or threat (T) and EOP refers to the Enterprise Ontology Project.


Next, the intelligent agent defines the optimization goal statements (maximization or minimization). Then, using the previous decision variable definitions, the system is ready to construct the semantic problem statement definition. Finally, the agent fills the problem statement template automatically and presents it in natural language, as follows:

Empty template: "Taking into account *-EOP classes found as key variables-* variables, and *-EOP classes found as key parameters-* parameters; *-Goal statement- -EOP class defined as decision variable-* related to *-EOP class defined as an indicator-* indicator."

Filled template: "Taking into account *Processing start time, Storage level* variables, and *Maximum storage capacity, Batch processing time, Batch due date* parameters; *Minimize Makespan* related to *Number of late jobs* indicator."
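The automatic filling step amounts to slotting the EOP classes found by the agent into the template. A minimal sketch, with the slot values reproducing the filled example above:

```python
# Sketch of the automatic template filling; slot names are illustrative.
TEMPLATE = ("Taking into account {variables} variables, and {parameters} "
            "parameters; {goal} {decision} related to {indicator} indicator.")

statement = TEMPLATE.format(
    variables="Processing start time, Storage level",
    parameters="Maximum storage capacity, Batch processing time, Batch due date",
    goal="Minimize",
    decision="Makespan",
    indicator="Number of late jobs",
)
```

In the actual agent, each slot value would come from a semantic query rather than a literal string.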

At this level, intelligent agents provide additional capabilities for reasoning using semantic technologies. The main task focuses on decision support for Industry 4.0 microenvironments.

**Figure 11.** First taxonomical layer representation of the Operations Research Ontology classes.

#### 3.1.3. The Data Model of the Polymer Plant

*L1. Data definition*: The data definition is concerned with data collection and the data system architecture. Accordingly, the transactional system needs to be verified. In this case study, the information related to the recipe, namely the energy and material flows, is stored in a Structured Query Language (SQL) database. All flows are listed, identified, and quantified in the database, and their sign (positive or negative) indicates whether they enter or leave the process boundaries. The data relating to the material and energy flows stem from the factory floor, and only process engineers have access and permission to modify the data.
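A minimal sketch of such a flow table is shown below using SQLite. The schema, sign convention (assumed here: positive quantities enter the process, negative quantities leave it), and quantities are all illustrative assumptions, not the plant's actual database:

```python
import sqlite3

# Hypothetical flow table: signed quantity indicates flow direction.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE flow (
    flow_id     TEXT PRIMARY KEY,
    name        TEXT,
    quantity_kg REAL   -- assumed: positive enters, negative leaves the boundaries
)""")
con.executemany("INSERT INTO flow VALUES (?, ?, ?)", [
    ("Input1_1",  "Acrylonitrile",  850.0),   # illustrative quantities
    ("Input1_4",  "Solvent-Acetone", 120.0),
    ("Output1_1", "AcrylicFiberA",  -900.0),
])

# A life cycle inventory query then separates inputs from outputs by sign.
inputs = con.execute(
    "SELECT name FROM flow WHERE quantity_kg > 0 ORDER BY flow_id").fetchall()
```

Listing, identifying, and signing every flow in this way is what makes the inventory step of the LCA a straightforward query.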

*L3. Data standardization*: This level aims to standardize the data properties, which comprise data metrics, data language, and data structure. In general, these properties are related to specifications within the processing system, and many systems are ruled by the technology implemented. In contrast, this methodology pursues defined, chosen, and agreed-upon data properties for the processing system. In this particular case study, ANSI/ISA standards define those properties, standardizing the data through the instantiation process. The data model provides specifications for the interfaces and software requirements to be developed. Table 9 presents the properties of the Boolean and Direction Type data from EOP (based on ANSI/ISA standards).

**Table 9.** ANSI/ISA standard properties for enumeration members.


Next, Table 10 presents data details from the polymer process.

**Table 10.** Data properties from EOP within the Polymer process system.


*L4. Data integration and feeding*: This level performs the definition of the data and data sets required by the optimization software or other software. We consider software specialized in mathematical programming and solving, which contains strict and non-strict approaches. Thus, Tables 11–13 show some structured data and data sets required by the polymer process plant's optimization activity. Besides, the optimization software can call for single data items at any time.


**Table 11.** Capacity data set of the Polymer plant, structuring two columns: Unit\_ID and Data value.

**Table 12.** Subtasks time of product A of the Polymer plant, structuring seven columns: Task number, Unit\_ID, Preparation time, Load time, Operation time, Unload time, and Cleaning time.


**Table 13.** Subtasks time of product B of the Polymer plant, structuring seven columns: Task number, Unit\_ID, Preparation time, Load time, Operation time, Unload time, and Cleaning time.


*L7. Data dynamics*: This level aims to develop an algorithm, based on Jython, capable of structuring data. For this specific case study, the algorithm was under construction. The strategy focuses on queries and the structuring of triples from the semantic models, which can dynamically define data sets as shown in L4 (data integration and feeding). Finally, we want to point out that the ontologies have a database structure but are semantically enriched and supported by knowledge.
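Since the algorithm itself was under construction, the intended idea can only be sketched. The snippet below assumes a plain triple representation of the semantic model and a hypothetical `hasCapacity` predicate; querying it yields a data set with the two-column shape of Table 11 (Unit\_ID, Data value). Unit names and quantities are illustrative:

```python
# Hypothetical sketch: query triples from the semantic model and structure
# them dynamically into an optimization data set (Unit_ID, Data value).
triples = [
    ("Unit_R101", "hasCapacity", 5000.0),
    ("Unit_R102", "hasCapacity", 3000.0),
    ("Unit_R101", "rdf:type", "Reactor"),
]

def build_data_set(kb, predicate):
    """Structure all triples matching a predicate into (Unit_ID, value) rows."""
    return [(s, o) for (s, p, o) in kb if p == predicate]

capacity_set = build_data_set(triples, "hasCapacity")
```

The same query mechanism can produce any of the data sets of Tables 11–13 on demand, which is what makes the data feeding dynamic rather than hard-coded.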

### **4. Discussion**

This work introduces the wide intelligent management architecture and its application to acrylic fiber production as a case study. As a result, the production process first creates a process definition (system characterization and flowsheet of the plant), a data definition (database based on the semantic model), and a knowledge conceptualization (a semantic model for representing the concepts and data of the process and system). The standardization of data and concepts has been done using the semantic model to represent the process (ANSI/ISA standards). Thus, using the architecture for data structuring and feeding facilitates the integration and improvement of the life cycle assessment approach. From this point, the acrylic fiber production plant has the basis for developing process automation, process digitalization, or intelligent agents for decision-making. Figure 12 shows the process activities performed in the case study regarding process definition, standardization, optimization, and intelligence (yellow boxes and blue arrows path). Moreover, the acrylic fiber company can perform process improvement, automation, or digitalization based on the current plant status (green arrows pointing to gray boxes). Finally, the comprehensive intelligent management architecture can be adapted to any company's needs by choosing how to evolve its processes.

**Figure 12.** Potential activities derived from the case study, where GMPs refers to Good Manufacturing Practices, SOPs refers to Standard Operating Procedures, and GUI refers to Graphical User Interface.
