#### **1. Introduction**

The pharmaceutical industry is known to be prosperous but inflexible. Regulatory authorities demand ever-higher standards for medicinal products, e.g., during clinical trials or production [1–3], which makes the industry less prosperous and even more inflexible. Multinational pharmaceutical companies react with various strategies, such as outsourcing or mergers and acquisitions [4], while small- and medium-sized companies need to compensate for their losses differently. For both, addressing production costs is a promising strategy because equipment utilization in production is generally low [5] and production costs are high (production accounts for up to 30% of the overall costs) [6]. Instead of real-world experiments and tests, computer simulations enable one to test different scenarios without any interruptions of or threats to daily business.

The chosen processes for this case study involve the production of two film-coated tablets for the treatment of tuberculosis. As of 2020, about one-quarter of the world's population is infected with latent tuberculosis. Ending this epidemic by 2030 is one of
the health targets of the United Nations Sustainable Development Goals, although the incidence of the disease is decreasing by only 2% each year. The disease is mainly caused by the acid-fast, rod-shaped bacterium *Mycobacterium tuberculosis*. Most patients suffer from a curable lung infection that is treated with a combination of different agents [7]. According to the World Health Organization, isoniazid, rifampicin, pyrazinamide, and ethambutol are the most essential first-line anti-tuberculosis drugs [8]. The main active pharmaceutical ingredients of the two investigated products are isoniazid and ethambutol; thus, the products are abbreviated here as PINA and PEMB.

Background information about simulations can be found in Banks (2005), who defined them as the "imitation of the operation of a real-world process or system over time" [9], while a model can be described as a "representation of a [ ... ] process intended to enhance our ability to understand, predict, or control its behavior" [10]. The link between the two is that "the exercise or use of a model to produce a result" is a simulation [11]. Simulating different scenarios, i.e., changing the parameters of a model and running it, enables one to test and evaluate the effects of certain parameters. Hence, diverse fields of application are possible in a pharmaceutical context. Such applications can support decision making in pipeline management [12–14] and optimize supply chain management [15–17]. Simulations of production processes are also of interest. They can be divided into simulations of single process steps, such as computational fluid dynamics for mixing steps [18], and simulations of multiple production steps [19] or even of entire continuous production processes [20]. The models of Sundaramoorthy et al. [16] and Matsunami et al. [21] were used to investigate a mixture of the abovementioned factors. Their prospective models targeted the planning of production capacities while a product is still in its developmental stage. Matsunami et al. compared batch with continuous production, considering various uncertainties, prices, and market demands for one product. Sundaramoorthy et al., however, addressed the production capacities of multiple products. Habibifar et al. recently published a study on the optimization of an existing production line, including a sensitivity analysis, the design of multiple scenarios, and a data envelopment analysis; in that work, other comparable studies were also examined intensively, with 11 references from 2007 to 2019 investigated and compared [5]. The applied techniques (simulations, mathematical modeling, and statistical techniques) differed, as did the focuses of the studies. Some papers concentrated on the optimization of specific process steps [22], while others pursued a more holistic approach [23]. The high variability in this small population demonstrates that there are many different approaches and even more available software solutions for optimizing pharmaceutical production. One described approach is discrete-event simulation, in which the variables of a model change only at defined events. Recent work on discrete-event simulation has addressed room occupancies in a hospital [24], a flow shop [25], and manufacturing scheduling [26].
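As a minimal illustration of this event-driven principle, consider the following Python sketch (purely illustrative, not FlexSim's actual engine): the simulation clock jumps from one scheduled event to the next instead of advancing in fixed increments. Sieving is one step of the investigated processes; all event times here are invented.

```python
import heapq

# Event calendar as a priority queue of (time in minutes, description).
# "Sieving" is a real step of the studied processes; the times are invented.
events = []
heapq.heappush(events, (0.0, "batch 1 enters sieving"))
heapq.heappush(events, (45.0, "batch 1 leaves sieving"))
heapq.heappush(events, (45.0, "batch 1 enters the next step"))

clock = 0.0
while events:
    clock, event = heapq.heappop(events)  # jump straight to the next event
    print(f"t = {clock:5.1f} min: {event}")
```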

In contrast to the above-mentioned studies (i.e., prospective simulations for future medicinal products), this study addressed two already existing production processes. The motivation of this work was to optimize the validated and approved production processes of PINA and PEMB via discrete-event simulations without the need to interrupt or interfere with the ongoing production processes themselves. Initially, it was investigated whether a discrete-event simulation approach can be established with limited resources and simulation know-how. In the process, the three most critical steps in model building (implementing the model logic, the operating schedule, and the processing times) were determined, verified, and partly validated. Based on the generated as-is models of the PINA and PEMB production processes, bottlenecks in the production process were identified, and different production scenarios were designed to find the optimal one under existing conditions. The optimizations focused on process efficiency, not on pharmaceutical or validation questions. The market authorization of these products limited the possible changes to organizational ones only. Therefore, the shift systems of the created as-is models for PINA and PEMB were changed from the existing one-shift system to a one-and-a-half-shift system and a two-shift system to optimize employee utilization. The campaign duration, labor costs, and used resources were calculated to generate comparable outcomes, as sketched in the example below.
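The following sketch shows how such endpoints can be derived from simulated campaign durations for each shift system; every number (durations, staff levels, hours, rates) is an invented placeholder and not a result of this study.

```python
# Scenario name -> (simulated campaign duration [working days], staff on site).
# All values are hypothetical and only demonstrate the endpoint calculation.
scenarios = {
    "one-shift system": (20.0, 5),
    "one-and-a-half-shift system": (14.0, 7),
    "two-shift system": (11.0, 10),
}

HOURS_PER_DAY = 8.0     # placeholder working hours per person and day
HOURLY_RATE_EUR = 40.0  # placeholder labor rate

for name, (days, staff) in scenarios.items():
    labor_cost = days * HOURS_PER_DAY * staff * HOURLY_RATE_EUR
    print(f"{name:>28}: {days:5.1f} days, {labor_cost:10,.2f} EUR labor costs")
```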

#### **2. Materials and Methods**

For the discrete-event simulation of these pharmaceutical production processes, the software package FlexSim was chosen since it is easy to use and already widely implemented in various industrial sectors for logistics or production, as well as in national institutions. To the best of our knowledge, this is the first time that FlexSim has been used to optimize pharmaceutical bulk production in its entirety. This report suggests a possible implementation path for a batch production: it starts in a semi-automated facility to obtain data, continues with the creation of a representative simulation model, and ends with a case study optimizing the capacity utilization of the investigated batch production. Where the available FlexSim software was not sufficient, additional software, such as Cmap Tools (Florida Institute for Human and Machine Cognition, Pensacola, FL, USA), Microsoft® Excel (Microsoft, Redmond, WA, USA), and Minitab® (Minitab GmbH, Munich, Germany), was used.

Since our literature research yielded few standards for creating a discrete-event simulation in the context of pharmaceutical processes, an intuitive approach was pursued and implemented. At the beginning, basic decisions about the model were made as part of the model description. Afterwards, information about the investigated production processes was collected. All information was clustered into numerical and logical information. The numerical information covered the collection of historical processing times, their analysis, and the selection of the most representative distribution for each process step. The logical information was used for model building. The most meaningful information sources were historical batch data, official validation and qualification data of the process owner, instruction manuals, on-site observations, and work experience. The gathered process information was first depicted in a flow chart and afterwards transferred to a simulation model. These elements of the methodological approach were addressed separately, but simultaneously. Together, they resulted in an as-is model of the production process, which was later verified and partly validated. It furthermore served as a process analysis and optimization tool. This methodological approach is depicted in Figure 1.


**Figure 1.** Methodological approach of this case study: seven steps (blue boxes) were implemented, from the model description to its application as an optimization tool. The corresponding sub-steps are listed underneath and include further information, such as the applied software and cross references to other figures and tables.

#### *2.1. Employed Software*

Different software packages were used during this case study on a Windows (Microsoft, Redmond, WA, USA)-based laptop equipped with 12.0 GB RAM and a 64-bit processor. Initially, the historical processing times and their deviations, as well as the new data for model verification and application, were collected in Microsoft® Excel. Moreover, basic statistical analyses, such as the calculation of standard deviations and minimum and maximum values, were performed there.
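Outside of a spreadsheet, the same basic statistics could be computed, for example, as in the following sketch; the processing times shown are invented placeholders.

```python
import numpy as np

# Hypothetical processing times (minutes) for one process step, as transcribed
# from paper batch records; the values are invented.
times = np.array([42.0, 45.5, 44.0, 47.5, 43.0, 46.0, 44.5, 45.0])

print(f"n    = {times.size}")
print(f"mean = {times.mean():.2f} min")
print(f"sd   = {times.std(ddof=1):.2f} min")  # sample standard deviation
print(f"min  = {times.min():.1f} min, max = {times.max():.1f} min")
```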

An extensive flow chart was created in Cmap Tools, which is freely available from the Florida Institute for Human and Machine Cognition (Pensacola, FL, USA), to understand and take stock of the entire process. This program is ideal for creating flow charts or visualizing logical relations and features easy drag-and-drop handling. The knowledge gained from the flow chart facilitated the later model building in FlexSim [27].

Further statistical analyses were performed using Minitab® (Minitab GmbH, Munich, Germany, version 18.0), a commercial statistics package that is widely used in Lean Six Sigma projects [28]. Some basic Lean Six Sigma approaches were integrated into the data handling of this case study. Data analyses, such as hypothesis tests, and graphical evaluations, such as boxplots or individual moving range charts, can be performed easily in Minitab®.

Discrete-event simulations were conducted in FlexSim (FlexSim Deutschland, Ingenieurbüro für Simulationsdienstleistung Ralf Gruber, Kirchlengern, Germany), a commercially available 3D simulation software package designed for modeling production and logistics processes. FlexSim provides discrete-event simulations that are object-oriented, meaning that all components are implemented as objects and that specific attributes and methods are assigned to them to characterize them and to manipulate the overall system. In addition to a graphical 3D click-and-drag simulation environment, programming in C++ is offered. The user can choose between different views and methods for representing data [9]. The non-configured student version FlexSim 19.0.0 was applied here. With it, dynamic process flows can be captured on a functional level, plainly visualized, and extensively analyzed. This program gives decision makers the opportunity to forecast the outcomes of possible changes to their processes, such as changes in product flow, resource utilization (staff, money, and machinery), or plant design.

The production processes were captured by implementing all single components of the system and connecting them via their procedural functions and attributes. Additional critical parameters (set-up times, staff) and logic (priorities, random events) made the models as close to reality as possible. The 3D visualization of the process provided an intuitive understanding of the current state of the system and of future possibilities. The results of the simulations were analyzed via performance and output statistics; capacity utilization, transport time, and state statistics are examples of the metrics of interest. A simplified sketch of such a component-based model is given below.
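Because FlexSim is commercial and its models are built graphically, the following open-source sketch (using the SimPy library) only illustrates the component-based idea: batches flow through two connected "processors" with stochastic processing times. Sieving is one of the superordinate steps of the real processes; the second step and all distributions and times are invented placeholders.

```python
import random
import simpy  # open-source discrete-event library, standing in for FlexSim

random.seed(1)

def batch(env, name, sieve, mixer):
    """One batch passing through two processors; all times are placeholders."""
    with sieve.request() as req:  # wait until the sieving machine is free
        yield req
        yield env.timeout(random.triangular(40, 55, 45))  # sieving [min]
    with mixer.request() as req:
        yield req
        yield env.timeout(random.triangular(25, 40, 30))  # hypothetical step [min]
    print(f"{name} finished at t = {env.now:.1f} min")

env = simpy.Environment()
sieve = simpy.Resource(env, capacity=1)  # one sieving machine
mixer = simpy.Resource(env, capacity=1)  # one machine for a follow-up step
for i in range(4):  # a four-batch campaign, as simulated for PINA
    env.process(batch(env, f"batch {i + 1}", sieve, mixer))
env.run()
```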

#### *2.2. Production Processes*

Two approved coated tablet production processes were selected for this case study because they are similar overall but differ in their ingredients and product properties, such as tablet size. Their obviously different punches require different processing and cleaning times, but their other equipment and operations are similar. Since both processes are comparable, comparable results were expected; such results would enable the transfer of optimizations to other processes and thereby increase the impact of this study. The processes consisted of 47 sub-steps that were merged into the following 13 superordinate steps:


• Sieving


Most machines (referred to as "processors" in FlexSim) are about 25 years old and are largely still operated manually. The two products, PINA and PEMB, are produced batchwise in variable campaign sizes of 3 to 18 batches. The simulations were chosen to represent the average campaign sizes in order to identify bottlenecks and increase productivity. Therefore, the simulations for PINA covered four batches and those for PEMB covered ten batches.

Since the intention of this work was to describe standard production campaigns of these products, deviations such as machine breakdowns, personnel shortages, and other human failures were excluded. During data preparation, we assessed whether the observed deviations influenced any of the historical processing times. In addition, outlier tests were performed to exclude deviating historical batch data, so that the resulting data pool solely represented standard processing times (see the sketch below). Product-dependent cleaning times, as well as times for setting up, were also excluded. The cleaning and set-up efforts before a product campaign vary widely, since the production of analogous products containing the same APIs at different doses lowers the necessary effort dramatically. Thus, this study only includes product-independent daily routine cleaning times, as well as the daily times for setting up the machines.
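The study does not specify which outlier test was applied; as one common choice, a Grubbs-type screen on a step's processing times might look like the following sketch (all values invented).

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    """Index of the most extreme value if Grubbs' test flags it, else None."""
    x = np.asarray(x, dtype=float)
    n = x.size
    idx = int(np.argmax(np.abs(x - x.mean())))
    g = abs(x[idx] - x.mean()) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return idx if g > g_crit else None

# Invented times (minutes); 78.0 mimics a batch with a documented deviation.
times = [44.0, 45.5, 43.5, 46.0, 44.5, 78.0]
print(grubbs_outlier(times))  # -> 5, so this batch would be excluded
```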

#### *2.3. Statistical Data Processing*

The processing times are the most important data, since they determine the campaign duration, the primary endpoint in this study, which in turn influences a second endpoint: the labor costs. Therefore, the collection and handling of processing times are of great importance. We distinguished between the initial historical batch data and the FlexSim-generated verification data. The collection of historical batch data strongly depends on the production equipment. While data are easily available in automated production lines, most semi-automated production plants do not have automated tracking and data generation. For these two production processes, no digitally workable data were available. Hence, the processing times of each process step were collected manually and batch-wise before they were transferred into digital files. The overall statistical data process is depicted in Figure 2.

During *data preparation*, we investigated in Minitab® whether the data were under statistical control using individual moving range charts (I-MR charts). Even though I-MR charts are, strictly speaking, only to be used for normally distributed data, they nevertheless indicate shifts, trends, and process variations; therefore, a first impression of the process stability was obtained. Secondly, we tested whether the data were normally distributed using probability plots to guide further data handling. Additional statistical tests for outliers, data pooling possibilities, and visualization were also performed in Minitab®. The sketch below shows how these two checks can be reproduced outside Minitab®.
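A rough scipy-based analogue of the two checks (I chart control limits and a normality test), on invented processing times:

```python
import numpy as np
from scipy import stats

# Invented processing times (minutes) for one process step.
x = np.array([44.0, 45.5, 43.5, 46.0, 44.5, 45.0, 43.0, 46.5, 44.0, 45.5])

# I chart limits: the mean moving range divided by the d2 constant for
# subgroups of two (1.128) estimates the process standard deviation.
mr = np.abs(np.diff(x))
sigma_hat = mr.mean() / 1.128
ucl, lcl = x.mean() + 3 * sigma_hat, x.mean() - 3 * sigma_hat
print(f"I chart: CL = {x.mean():.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
print("points outside limits:", np.where((x < lcl) | (x > ucl))[0])

# Normality check, analogous to a Minitab probability plot evaluation.
ad = stats.anderson(x, dist="norm")
print(f"Anderson-Darling statistic = {ad.statistic:.3f}, "
      f"5% critical value = {ad.critical_values[2]:.3f}")
```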

FlexSim contains a tool named ExpertFit, which helps to find the best-fitting distribution based on raw data. FlexSim distinguishes two main types of probability distributions: discrete and continuous distributions. Continuous distributions are subdivided into non-negative, unbounded, and bounded distributions. Historical batch data were entered as raw data into ExpertFit during *model building*. The results were extracted, and the suitability of all distributions was evaluated. Afterwards, several graphical comparisons between different distributions were performed to select the best-fitting one. The last step in ExpertFit was to transfer the selected distribution and its parameters into the corresponding process step in the FlexSim model. If none of the available distributions provided a satisfying evaluation, the usage of an empirical table was recommended and implemented. The results of this analysis can be found in the Supplementary Materials (Table S1). Interestingly, even though some data were earlier shown to be normally distributed, ExpertFit never evaluated a normal distribution as the best choice. A simplified version of this fitting workflow is sketched below.
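As a rough open-source analogue of the ExpertFit workflow (scipy stands in for ExpertFit; the data are invented placeholders), several candidate distributions are fitted by maximum likelihood and ranked by a Kolmogorov–Smirnov statistic, with an empirical fallback mirroring the empirical-table option:

```python
import numpy as np
from scipy import stats

# Invented processing times (minutes) for one process step.
times = np.array([41.0, 44.5, 43.0, 47.5, 42.0, 45.0, 44.0, 46.5,
                  43.5, 45.5, 42.5, 44.0, 48.0, 43.0, 45.0])

# A few continuous candidates (ExpertFit evaluates over 20 distributions).
candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
              "gamma": stats.gamma, "Weibull": stats.weibull_min}

for name, dist in candidates.items():
    params = dist.fit(times)                          # maximum-likelihood fit
    ks = stats.kstest(times, dist.name, args=params)  # goodness-of-fit
    print(f"{name:>10}: KS = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

# Fallback mirroring the empirical-table option: resample the raw data.
rng = np.random.default_rng(1)
print("empirical draw:", rng.choice(times))
```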

**Figure 2.** Chronological workflow of statistical data processing: Minitab® was used during data preparation and model verification to analyze the historical batch data and to later compare them to the FlexSim-generated data. The ExpertFit tool of FlexSim performed the automated fitting of the historical batch data to over 20 distributions to identify the best representation for each process step during model building.

During *model verification*, the FlexSim-generated data were compared to the historical batch data. Since the probability tests during data preparation had shown that most processing times were not normally distributed, Mann–Whitney tests were performed in Minitab®, as illustrated by the sketch below.
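A scipy-based sketch of such a verification comparison (scipy standing in for Minitab®; both samples are invented placeholders):

```python
import numpy as np
from scipy import stats

historical = np.array([44.0, 45.5, 43.5, 46.0, 44.5, 45.0, 43.0, 46.5])
simulated = np.array([44.5, 45.0, 44.0, 46.5, 43.5, 45.5, 44.0, 45.0])

# Two-sided Mann-Whitney U test: are the two samples plausibly drawn from
# the same distribution of processing times?
u, p = stats.mannwhitneyu(historical, simulated, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")  # large p: no detectable difference
```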

Choosing the best way of *model validation* was challenging. Initially, a face validity check was performed: the head of production inspected the models, their behavior, and their logic. For a historical data validation, the data pool of historical processing times was statistically too small. Therefore, an additional predictive validation was attempted, in which FlexSim-generated data were compared to the processing times of new campaigns.
