**3. Results**

#### *3.1. Model Development: Design and Building*

*Model design* started with the creation of a detailed flow chart in CmapTools, resulting in a conceptual model. The flow chart includes all production steps and sub-steps, the operators, the average historical processing times, and the relevant machines. Connections and cross-references in the overall process become evident here. Information on merging multiple process steps, on eliminating or integrating process steps, and on the required model logic was collected. The correctness of the flow chart was approved by the process owner, the head of production. Altogether, this step of model design ensured deep process knowledge and identified all the dependencies and components necessary for the subsequent FlexSim models. Since the manufacturing process is confidential, this process flow chart cannot be shown. However, given the complexity of such production processes, it is highly advisable to start with such a depiction and to have it approved by the head of production.

After the basic information was determined, *model building* in FlexSim began (Figure 3) following the common guiding idea to include only crucial attributes in the model and to keep the model as simple as possible [29]. Additionally, in the models, we assumed that no machine breakdowns or other major deviations would occur.

**Figure 3.** Steps for building a process model using FlexSim: In the first third, simple facts established the foundation of the model. After inserting the model logic, the working schedule and the processing times became more complex and included the most critical attributes of the model to be verified (colored blue).

Initially, the floor plan (step 1) and the machines (step 2) were transferred out of official documents into FlexSim (Figure 4). The machines are represented as so-called processors. There are three processor types: one to process the item, one to combine multiple items, and one to separate an item into multiple items.

**Figure 4.** Simplified floor plan in FlexSim: The production area is depicted, including processors (blue boxes: numbers represent the chronological order of the process steps) and further details (grey boxes). Brown bars represent areas that are not used for the production of the investigated products.

The employees (step 3) are called operators. The headcount in the model, symbolized with the four males on the left, corresponds to the headcount of the original process.

Inserting the model logic (step 4) is one of the most complex parts of building the simulation model. Its basis is the batch documentation, which gives the order of the process steps. Some steps can be connected easily. More complex sequences occur when certain events condition other steps, e.g., the weighing of a new batch cannot start until certain steps of the previous batch are finished, even if the scales and operators are not occupied. Such conditions are considered in the model by triggers like opening and closing the processor ports, sending messages, or placing certain information on the items. It must also be determined which and how many operators fulfill each process step.
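Such conditional gating can be sketched in a few lines of plain Python. This is an illustrative analogue, not FlexSim trigger code; the step names and the "granulation must be finished" gate are assumptions for illustration only:

```python
# Illustrative analogue of a FlexSim-style gating trigger: weighing of
# batch n may only start once batch n-1 has cleared a defined step,
# even if the scales and operators are free. Step names are invented.

finished_steps = {}  # batch number -> set of completed step names

def record_finish(batch, step):
    """Trigger fired when a processor finishes a step for a batch."""
    finished_steps.setdefault(batch, set()).add(step)

def can_start_weighing(batch):
    """Batch 1 may always start; batch n waits for batch n-1."""
    if batch == 1:
        return True
    return "granulation" in finished_steps.get(batch - 1, set())

record_finish(1, "weighing")
print(can_start_weighing(2))   # False: batch 1 not yet granulated
record_finish(1, "granulation")
print(can_start_weighing(2))   # True: the gate is now open
```

In FlexSim itself, the same effect is achieved with triggers such as opening and closing processor ports or sending messages, as described above.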

Next, the operating schedule (step 5) and the times off are entered. Working hours are from 7:00 a.m. to 3:45 p.m., with a breakfast break from 9:15 to 9:30 a.m. and a lunch break from 12:15 to 12:45 p.m. For some processors, it must be guaranteed that the entire process can be finished in one go. Therefore, the operating schedules of the processors must be considered as well.
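The schedule above can be expressed as a small availability check; a minimal Python sketch using only the working hours and break times stated in the text:

```python
from datetime import time

# Working hours 07:00-15:45 with a breakfast break 09:15-09:30 and a
# lunch break 12:15-12:45, as stated in the text.
WORK_START, WORK_END = time(7, 0), time(15, 45)
BREAKS = [(time(9, 15), time(9, 30)), (time(12, 15), time(12, 45))]

def operator_available(t):
    """True if an operator is on duty (not off work, not on a break)."""
    if not (WORK_START <= t < WORK_END):
        return False
    return not any(start <= t < end for start, end in BREAKS)

print(operator_available(time(8, 0)))    # True
print(operator_available(time(9, 20)))   # False (breakfast break)
print(operator_available(time(16, 0)))   # False (after hours)
```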

Processing times (step 6) are then established as described in the statistical data processing. If a pairwise comparison revealed no statistical significance (Mann–Whitney test), the times for the different process steps of the two products, PINA and PEMB, were pooled (see Table S1 in Supplementary Materials: Weighing granulation liquid on the table and on the floor scale, dissolution of granulation liquid, mixing in a tumble blender, and weighing the coating on the table and on a floor scale).
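The pooling decision can be illustrated with a stdlib-only sketch. The Mann–Whitney implementation below uses the two-sided normal approximation with midranks for ties and no tie correction (adequate for samples of this size); the example times are invented, not the confidential study data:

```python
import math
from statistics import NormalDist

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test (normal approximation, midranks
    for tied values, no tie correction)."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):          # assign midranks to tied values
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        mid = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = mid
        i = j + 1
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    z = (u1 - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented step times (min) for the two products
pina_times = [12, 14, 13, 15, 12, 16, 13]
pemb_times = [13, 15, 14, 12, 16, 14, 13]
p = mann_whitney_u(pina_times, pemb_times)
# No significant difference -> pool the times of both products
pooled = pina_times + pemb_times if p >= 0.05 else None
print(f"p = {p:.3f}, pooled: {pooled is not None}")
```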

Some working steps and times are not explicitly part of the batch documentation, which only provides the daily routine. These times are instead based on employees' experiences and were also implemented in the model. As these times do not vary depending on the product, the implemented times were identical in all models (Table S2 in the Supplementary Material).

Not all process steps have the same importance. Some process steps can be stopped, while others cannot. Only some of the process steps require the attendance of an operator for the entire processing time. Therefore, the priorities (step 7) have to be defined.

Additionally, the scope (step 8) of the production must be implemented. The prevailing condition is campaign production with a certain number of batches. FlexSim also offers the implementation of actual dates.

In this case study, the most important model parameters to verify were the processing times, the model logic, and the operating schedules. Model verification is unique and strongly depends on the model itself; thus, finished components are rarely available. Sometimes, additional checkpoints and workarounds, such as labeling the so-called flow items with informational stickers, had to be implemented. FlexSim offers different statistical analysis modules that must be adapted to enable the verification of these parameters. Some of these analysis modules can be added via drag-and-drop and do not need further changes. To track the processed flow items and operators, however, each must be individually programmed in C++. The implementation of these elements for model verification is the last step of model building.

Testing of the models (step 9) can now be performed in single runs that are started and stopped manually. Alternatively, it is possible to use the FlexSim module Experimenter. The Experimenter offers the opportunity to predefine the number of replications (runs), the statistics for evaluation, and the variables to compare, as well as to subsequently perform the necessary replications.
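A stripped-down stand-in for what the Experimenter does can be written in Python under stated assumptions. The step names, the normal distributions, and their parameters are all invented for illustration; FlexSim fits its own distribution per step:

```python
import random
from statistics import mean, stdev

# Each step's processing time is drawn from its fitted distribution;
# one replication sums the sampled times, and the experiment repeats
# this a predefined number of times. All parameters are illustrative.
STEPS = {"weighing": (8, 2), "granulation": (45, 5), "compaction": (269, 20)}

def one_replication(rng):
    return sum(max(0.0, rng.gauss(mu, sd)) for mu, sd in STEPS.values())

def experiment(n_replications, seed=42):
    rng = random.Random(seed)            # fixed seed -> reproducible runs
    runs = [one_replication(rng) for _ in range(n_replications)]
    return mean(runs), stdev(runs)

m, s = experiment(25)  # e.g., 25 replications, matching the PINA batch count
print(f"mean duration {m:.1f} min, sd {s:.1f} min over 25 replications")
```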

#### *3.2. Model Verification*

The blue colored boxes (steps 4–6) in Figure 3 highlight the most important parameters to verify, including simple but productive model logic, an accurate operating schedule for operators and processors, and the correct processing times for each process step. To establish a sound foundation for the FlexSim-generated data, the Experimenter module was used. The number of model replications was set to be identical to the number of historical batches (PINA: 25 runs; PEMB: 45 runs). While using the Experimenter, FlexSim accesses the deposited statistical distribution of each process step and thereby generates different processing times for each run. Afterwards, an interactive report and a performance measure report can be exported. The interactive report includes data on the model logic in item-trace Gantt charts and all data on the working schedules in state charts. These charts are produced replication-wise. The performance measure report includes a statistical summary, a replication plot, a frequency histogram, and the single values of all replications for each process step.

#### 3.2.1. Model Logic

The verification of the model logic is complex to implement. The goal of this step is to prove that all process steps run in the correct order and that the conditions and dependencies between different process steps are correctly implemented.

Therefore, all items running through the model are tagged, and triggers are programmed for all processors to leave information on these tags. This type of information is best captured in item-trace Gantt charts. Figure 5 shows a schematic item-trace Gantt chart for one batch. One batch consists of six items, with two each for the granule, the granulation liquid, and the coating. Each item is pictured as one bar, and each process step is represented as one colored square of that bar. Following from top to bottom and left to right, the process order and the dependencies of the different process steps become evident. As an example, consider the processing step of coating (light blue squares). This step requires the previously formed tablets (green square) and the newly dissolved coating (light yellow squares). Coating then combines these two items into one bar for the coated tablets, which are packed in the subsequent step (light pink). Hence, the process order and the processor type of the coater, a combiner, are verified. During model building, a trigger was set to withhold weighing the coating until compaction is almost finished, to prevent long holding times for the liquid parts of the coating. The chart verifies the implementation of this trigger, since weighing (purple) starts shortly before the end of compaction (green). Overall, the item-trace Gantt charts were able to verify the overall model logic. Additionally, the overall processing time becomes evident, which is important for the subsequent comparison of different optimization scenarios.
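The kind of check that the item-trace Gantt chart supports can be mimicked programmatically; a small Python sketch in which the step names follow the text but the times and the helper `order_ok` are invented for illustration:

```python
# Each tagged item carries a trace of (step, start, end) tuples.
# Verifying the logic means confirming that the steps appear in the
# expected order and that consecutive steps do not overlap in time.
trace = [
    ("weighing_coating", 250, 262),
    ("dissolving_coating", 262, 300),
    ("coating", 310, 370),
    ("packing", 370, 400),
]

def order_ok(trace, expected):
    steps = [s for s, _, _ in trace]
    return steps == expected and all(
        a_end <= b_start
        for (_, _, a_end), (_, b_start, _) in zip(trace, trace[1:]))

print(order_ok(trace, ["weighing_coating", "dissolving_coating",
                       "coating", "packing"]))  # True
```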

**Figure 5.** Example of the FlexSim-generated item-trace Gantt chart for model verification: This chart represents the production of one batch, including the most important process steps. The granule, granulation liquid, and coating are necessary to produce film-coated tablets. Floor and table scales are used for each of these components, as different mass ranges are weighed. One bar symbolizes one of these components, depending on the initially used scale. Hence, one batch consists of six bars. The different colored subdivisions of the bars show the finished process steps of each component. This type of item-trace Gantt chart allows one to (i) control the correct logical order of the process steps; (ii) check whether the dependencies are correct (e.g., the weighing of the coating can only start shortly before the end of compaction to avoid long holding times); and (iii) determine the total campaign duration.

#### 3.2.2. Operating Schedules

The operating schedules are pictured in the state charts for the operators and processors. Each operator and each processor is represented by one bar, which is divided into several states, including utilize, idle, or scheduled down. The x-coordinate shows the duration in days.

Figure 6 shows a schematic operator state chart (top) and a schematic processor state chart (bottom). These charts enable the modeler to verify the break times (duration and fixed moments). Moreover, the capacity utilization of each operator and processor becomes evident. The process steps are arranged in chronological order. This way, one can follow a batch by starting with the utilized part of the first process step and watching it move downstream through the processors. The purple parts indicate times at which the process becomes stuck because of too few operators, while the light-yellow sections indicate blocked processors. FlexSim can export processing times and states in table form, which makes the export for further data analysis easy to handle. An evaluation of the state charts verified the break times and the process order.
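Because the state durations can be exported as a table, the capacity utilization is straightforward to compute downstream; a minimal Python sketch with invented operator names and durations:

```python
# Toy version of the state-chart evaluation: given exported state
# durations (minutes) per operator, compute capacity utilization.
# All names and numbers are invented for illustration.
states = {
    "operator_1": {"utilize": 310, "idle": 140, "scheduled_down": 75},
    "operator_2": {"utilize": 120, "idle": 330, "scheduled_down": 75},
}

def utilization(s):
    """Share of on-duty time spent working (scheduled-down excluded)."""
    on_duty = s["utilize"] + s["idle"]
    return s["utilize"] / on_duty

for name, s in states.items():
    print(f"{name}: {utilization(s):.0%}")
```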

**Figure 6.** Example of the FlexSim-generated operator state chart (**top**) and processor state chart (**bottom**): The production of two work days is depicted, including the most important states for the six operators and the most important states for the last process steps. Each operator is symbolized by one bar. The different-colored subdivisions of the bars represent the operators' states over time. The processors are also represented by bars divided into different-colored states. The major outcomes of both charts include verification of break, lunch, and after-work hours, as well as capacity utilization (idle/utilize). This enables one to (i) test the model (chronological order) and (ii) identify bottlenecks (capacity utilization of operators and processors).

#### 3.2.3. Processing Times

The last parameter to verify is the processing time. For this purpose, the performance measure report was used. Single FlexSim-generated values were transferred into Minitab®, where they were compared to the processing times of the historical batch data. Mann–Whitney tests at a confidence level of 0.95 were performed, as most data are not normally distributed. None of the generated processing times were significantly different from the historical processing times (Table 1).

**Table 1.** Statistical analysis: Results of the probability plots of each process step and of the Mann–Whitney tests comparing historical batch data to the FlexSim-generated data for the products PINA and PEMB. There was no significant difference (α = 0.05) between the historical batch data and the FlexSim-generated data.


#### *3.3. Model Validation*

To prove the reproducibility of the simulation models, a model validation was conducted. There are different options for validating computer models. Initially, it was chosen to prove the correctness and reasonability of our models by face validity. A predictive validation was added as well, for which future production campaigns of PINA and PEMB were picked and specifications for the process flow, the processing times, and the campaign durations were defined. The campaigns, covering four batches for PINA and ten for PEMB, were run under normal conditions. Non-standard conditions, changes, and deviations were additionally monitored and documented. Afterwards, the relevant data were collected from the batch documentation, transferred into Minitab®, and compared to the FlexSim-generated data. Analyzing the new data showed a valid process flow. The other parameters, the processing times and the campaign duration, however, did not meet the specifications because of severe deviations during both campaigns. For confidentiality reasons, detailed explanations are limited. Some of the deviations, such as machine breakdowns, were intentionally excluded during model design and therefore not considered in the FlexSim-generated data. Another very influential deviation was a personnel shortage; on none of the production days was the full headcount available. In addition, urgent, non-campaign work cut into the already reduced work capacity. Despite these issues, the predictive model validation gathered additional important information for the process owner and validated the model logic. The fact that the failure to meet the specifications was at least partly caused by a lack of the necessary personnel, moreover, at least indirectly validates the models.

#### *3.4. Model Application: Optimization and Evaluation of Fictive Shift Systems*

After the successful verification and partial validation of the as-is model, realistic and meaningful changes to the real production processes were discussed with the head of production. As Figure 6 (bottom) illustrates, the processors have long idle times with little utilization. This raised the question of whether the system could run more profitably under different shift systems. Profitability was investigated in terms of the total duration of one campaign and the labor costs. Labor costs included not only the salaries of the employees but also the costs of the machines' run times.
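As a hedged sketch of this profitability measure (the function name, rates, and durations below are invented illustration values, not the study's confidential figures):

```python
# Labor costs combine operator salaries and machine run-time costs,
# both scaling with the campaign duration, as described above.
def labor_costs(campaign_h, headcount, wage_per_h, machine_rate_per_h):
    return campaign_h * (headcount * wage_per_h + machine_rate_per_h)

# A halved campaign duration can offset a larger crew:
one_shift = labor_costs(campaign_h=96, headcount=4,
                        wage_per_h=40, machine_rate_per_h=25)
two_shift = labor_costs(campaign_h=48, headcount=6,
                        wage_per_h=40, machine_rate_per_h=25)
print(two_shift < one_shift)  # True
```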

3.4.1. Establishment of Models with Different Shift Systems

The shift systems of interest were one-shift (OS), one-and-a-half-shift (OHS), and two-shift (TS). The related operating schedules can be found in Table 2.

**Table 2.** Operating schedules: Operating schedules of the optimization scenarios classified into the shift models of one-shift (OS), one-and-a-half-shift (OHS), and two-shift (TS) systems. The working hours for the one-shift system were adjusted according to an in-company agreement. \* Different weekly hours for some operators in the past.


The parameters that differed between the scenarios were the operating schedule, the product type, and the number of operators. These variations were implemented manually without any optimization algorithm. In addition to simply changing the schedules and headcounts, other aspects must be considered. The campaign duration strongly depends on the weighing operations of the different batches. Therefore, it is important to identify the best time after the start of the campaign for weighing each batch. This is done by inspecting and evaluating the interactive report of the Experimenter module. The shift system and, for the OS system of PINA, the number of operators influence these times. The impact of this parameter on the overall campaign duration is visualized in Figure 7. The campaign of PINA includes four batches (left side), and the campaign of PEMB includes ten batches (right side). The last batch of PINA can be weighed in after 8 h under a TS system, compared to 48 h using an OS system. For PEMB, the time could be reduced from 9 d to only 3 d after campaign start. Hence, a significant reduction in campaign duration was already expected when building the optimization scenarios.

**Figure 7.** Summary of the optimal starts for weighing the granules of all batches after campaign start for PINA (**left side**) and PEMB (**right side**): The cylinders represent batches, the numbers indicate the batch number in the campaign. Therefore, the campaign of PINA includes four numbered cylinders and the one of PEMB includes ten cylinders. The start depends on the applied shift system and (only for the one-shift system of PINA) also on the number of operators. This indicates that the processors limit the weighing strategies for all other cases. The start times have a significant influence on the overall campaign time.

#### 3.4.2. Results of the Shift Systems

The implemented shift systems were evaluated by the number of successful replications, the utilization degree of the operators, and the campaign duration. A well-established model with suitable logic and processing times can complete a replication. Therefore, the presence of several successful replications indicates a harmonious model that can be used to evaluate the optimization. The predefined number of replications aligns with the number used during model verification and thus with the number of available historical batch documents. As already mentioned, the schematic operator state charts (Figure 6, top) visualize the utilization degree of the operators. The mean campaign duration and the headcounts were used as the basis for the labor cost calculations.

Originally, the overall headcount of the case study was always four operators. Hence, in the first step, the optimization scenarios for both products with OS, OHS, and TS systems were built. The four operators worked simultaneously in the OS system and at staggered intervals in the OHS and TS systems. For some combinations, some replications did not finish (Table 3). It was also impossible to build a running TS model with four operators for PEMB, since only two operators were available to monitor up to four simultaneously running processors. Obviously, such real-life limits of this process were also reflected by the computational models. The operator and processor state charts also indicated that too many processors needed an operator at the same time. As a result, alternative models were built featuring an increased headcount of six, with three operators always working simultaneously. A summary of the results for PINA and PEMB can be found in Table 3.

**Table 3.** Results of the different shift models for PINA and PEMB: Generally, the campaigns of PINA consisted of four batches, and the campaigns of PEMB consisted of ten batches, yielding the resulting model scope. The number of successful replications indicates whether the model is stable. The arrows symbolize the utilization degree of the operators (↓ = some idleness, ↓↓ = much idleness, ↓↓↓ = operator is barely working, ↑↑↑ = work overload, ✔ = appropriate workload). Additional metrics of interest are the campaign duration, including the standard error of the mean, and the labor costs. The best scenarios are highlighted with grey boxes.


The production processes of PINA and PEMB are comparable; however, the number of batches per campaign for PEMB is 2.5 times larger than that for PINA. This makes the PEMB models both more susceptible to disturbances and more relevant for choosing the optimal shift system.

The combination of six operators working in a TS yielded the shortest duration (PINA: −50%; PEMB: −53%) and the lowest labor costs (PINA: −9%; PEMB: −14%) for both products compared to the initial scenario. This is not surprising at first, since duration and headcount are linked: duration is one parameter of the labor cost calculation. As shown in Table 3, the durations of the OHS with four and with six operators differ only slightly for PINA due to the different capacity utilizations. There was also no difference in the duration of the TS with four operators. Ultimately, having six operators in a TS decreased the duration by about 30% compared to having four operators in a TS or six operators in an OHS. This decreased labor costs by 4% (vs. the OHS with four operators) and 13% (vs. the OHS with six operators) for the TS with six operators. This noteworthy difference in labor costs made the OHS the second-best option for PINA.

For PEMB, the TS with six operators was found to be 28% faster and 10% cheaper than the second-best option. This second-best option, the OHS, used six operators, as its campaign duration differed significantly (15%) from that of the OHS with four operators; the extra production day of the latter produced only marginally higher labor costs (2%).

As previously mentioned, the obtained results indicate that production with six operators in a TS is superior to all types of production with four operators for both products. Here, however, only four operators were fully qualified. Therefore, the best option under a headcount of four is of great importance. For PINA and PEMB, the OHS is the best such option due to its faster production (PINA: 25%; PEMB: 23%) and lower labor costs (PINA: 5%; PEMB: 3%) compared to the prevailing OS system. The head of production confirmed the superiority of the OHS compared to the OS post hoc based on his own experience.

### **4. Discussion**

Computer simulations enable the testing and evaluation of different production scenarios by changing relevant parameters in the corresponding model and running it. In this way, the best possible scenarios were found in this case study. The prevailing conditions of a small headcount and limited resources led to a user-friendly, practice-oriented simulation approach for optimizing two approved pharmaceutical production processes. The majority of computer simulation studies in pharmaceutical supply chains and manufacturing reported in the introduction [5,16,19,24], which plan or model the complex issue of a whole production process, were performed by experts in modeling and process design. In contrast, this study was conducted by non-computer experts, who are, however, experts in pharmaceutical technology and production. We deliberately modeled an already established process to show that there is still much untapped optimization potential. The determined potential savings of 50% in campaign duration and of up to 14% in labor costs highlight the significance of this approach. Less complex optimization attempts, such as the machine utilization evaluation published by Bähner et al., benefit from the fact that time-consuming modeling is not required [30]. However, only the processor utilization is mapped, and the operator utilization is disregarded. With modeling, especially with a small amount of historical data, a formal proof of validity is difficult; on the other hand, an implemented model offers far more application possibilities. With its still-feasible effort, our simulation approach intends to close the gap in published industrial case studies.

#### *4.1. Case Study Limitations*

While a case study allows one to obtain detailed and usually well-protected information on specific production processes, it is also limited to them. The deliberate exclusion of extraordinary events, such as breakdowns or process times with deviations, at the very beginning defined the simulations to represent standard processes without incidents. Data collection and analysis also highlighted two challenges in this case study. Time recording was performed manually with a resolution of one minute, which is disadvantageous for short processing times. Setting up the scales takes 8 min on average, which is only 3% of the compaction time, yet the time resolution for both process steps is the same. Additionally, each process step was started, and sometimes stopped, manually. The processing times therefore strongly depend on the availability of an operator. This produces high relative standard deviations for short processing times (PINA: 5–115%; PEMB: 6–75%), although the production process still stays within the necessary specifications. As an example, the process step of compulsory mixing takes 11–20 min in the historical batch documentation for PINA. This time must be split into the actual mixing time (10 min, fixed) and the manual setting-up/starting/stopping time (1–10 min, operator dependent). Here, a waiting time of 10 min has a much stronger impact on the relative standard deviation than the same waiting time has on a compaction process with a mean duration of 269 min. The impact of these limitations on the significance of this case study is nevertheless acceptable. Even though FlexSim offers the integration of breakdowns, a substantiated analysis of past quality issues would have been necessary and was beyond the scope of this work. Nevertheless, relevant assumptions can be made. The investigated solid production processes have a linear structure. A total breakdown of one processor stops the production of all subsequent batches. Moreover, the start of some batches will additionally be delayed, since intermediate products have limited shelf lives in the validated processes. Such factors prolong the campaign duration but do not influence the processing times of single process steps. The effects of the imprecise time recording and low time resolution on the overall model are also not critical, as short processing times have only a small impact on the duration of the entire campaign. The dependency on available operators likewise has a small impact on the campaign duration but a great impact on the model logic. This factor is negligible, since all employees are well trained to prioritize between different steps and since those priorities are implemented in the models as well.
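This argument can be reproduced with the numbers given in the text; only the 10 min fixed mixing time and the ~269 min mean compaction time come from the text, while the sampled operator overheads are illustrative:

```python
from statistics import mean, stdev

# A 1-10 min operator-dependent overhead on top of a 10 min fixed
# mixing time yields a large relative standard deviation, while the
# same overhead barely matters for a ~269 min compaction step.
overhead = [1, 3, 5, 7, 10]  # min, operator dependent (illustrative)

def rsd(times):
    """Relative standard deviation in percent."""
    return 100 * stdev(times) / mean(times)

mixing = [10 + o for o in overhead]       # 11-20 min, as in the text
compaction = [269 + o for o in overhead]  # same overhead, long step
print(f"mixing RSD: {rsd(mixing):.0f}%")
print(f"compaction RSD: {rsd(compaction):.1f}%")
```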

#### *4.2. Case Study Outcomes*

In addition to the main aim of this paper (the optimization of existing processes by discrete-event simulations), the implementation of this intuitive process can be evaluated. Initially, the steps from data collection up to model verification were very time-intensive but gave precious insights into the production processes. The as-is states of the actual production processes were scrutinized more intensively than during daily routines. The obtained results challenged the workflow and the dependencies between different working steps. During data analysis, process steps with significant economic potential were identified; thus, analyses were performed, and possible improvements were developed. Even without actually running any discrete-event simulations, significant knowledge was gained. It can therefore be assumed that process owners profit from such case studies regardless of the simulation outcomes.

Knowledge of the relevant processes and regulations (e.g., GMP, quality management, galenics, and marketing authorization) is important for successful model development in a pharmaceutical production setting. Model implementation is manageable and worth the effort, as the present case study demonstrates. Most elements of model building are performed using simple drag-and-drop options, apart from the implementation of complex logic. Creating possibilities to track processing times and process flow requires more complex logic and, therefore, low-level programming. It is, however, possible to overcome these obstacles, especially after gaining some experience. Consequently, the necessary efforts during model building and verification decrease strongly for any other comparable production process. The next step, model validation, has already been described as a crucial point for model-based debottlenecking approaches. Irregular or unsteady production steps are known to complicate or even preclude a successful model validation [30]. Unfortunately, unfavorable circumstances also led to a failed predictive model validation here. It is therefore of great importance to choose the best validation strategy. The efforts for a historical data validation or a predictive validation are easily manageable, since the data handling is identical to that during model verification. Establishing optimization scenarios is less time-intensive and challenging, even though the optimization module in FlexSim is not part of the applied student version. This means that all optimization scenarios had to be developed and implemented from scratch. While the results are clear on the superiority of six compared to four operators, more factors need to be considered. The examined work only covers bulk production, which is only one part of the entire production process and strongly depends on other departments, such as warehousing and quality control.
Whenever warehousing and quality control work in an OS system, sufficient cooperation with the bulk production working in the OS or OHS system is ensured. Modifications and adjustments only become necessary when a TS system is established. The establishment of a TS system in bulk production can also produce a more rigid structure, decrease spontaneity, and increase the headcount. The necessary financial and work inputs needed to qualify two more operators for the production of about 20 products must also be taken into consideration. The impact of these disadvantages could be tested with a pilot run of a TS system. It could also be further explored whether an increased headcount in other departments should be mandatory, and whether the pressure on the involved staff is bearable.

Besides analyzing the effects of different shift systems, further simulations could examine more extensive questions, such as a change of the process layout. The simulated processes are based on a long-established production site. Hence, the layout and, consequently, the equipment localization depend on the floor plan and the structural conditions, such as media supply systems and electrical equipment. As shown in Figure 4, the current floor plan does not allow a lean production flow; a corresponding spaghetti diagram would reveal inefficient transportation and employee movements. A re-layout, including a rebuilding of the manufacturing premises of the site, would enable an efficient and continuous workflow with significantly reduced non-value-added time (NVA), such as work-in-process inventory (WIP) or repeated pathways. Simulations could test the benefits of a re-layout and thereby also substantiate such far-reaching considerations.
