Processes, Volume 5, Issue 4 (December 2017) – 35 articles

Cover Story: The study of radical copolymerization in water is challenging due to the influence of both solvent and ionic charges on reactivity. In this work, nuclear magnetic resonance spectroscopy is used to follow the individual consumption rates of both acrylamide and an anionic comonomer (SHMeMB) formed from saponification of a bio-sourced butyrolactone monomer. A model developed to represent the system is used to demonstrate that a reduced termination rate combines with slow and reversible SHMeMB propagation to control the copolymerization kinetics during synthesis of superabsorbent hydrogels.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
2152 KiB  
Article
Systematic and Model-Assisted Evaluation of Solvent Based- or Pressurized Hot Water Extraction for the Extraction of Artemisinin from Artemisia annua L.
by Maximilian Sixt and Jochen Strube
Processes 2017, 5(4), 86; https://doi.org/10.3390/pr5040086 - 20 Dec 2017
Cited by 33 | Viewed by 7669
Abstract
In this study, the solvent-based extraction of artemisinin from Artemisia annua L. using acetone in percolation mode is compared to pressurized hot water extraction. Both techniques are simulated by a physico-chemical process model. The model, as well as the determination of its parameters, including the thermal degradation of artemisinin, is shown and discussed. For the conventional extraction, a solvent screening is performed considering various organic solvents. A temperature screening is presented for the systematic design of the pressurized hot water extraction. The best temperature with regard to thermal decomposition and high productivity was found to be 80 °C. Both conventional percolation and pressurized hot water extraction (PHWE) are suitable for the extraction of artemisinin. The extraction curves show high conformity with the simulation results. Full article

2062 KiB  
Article
Efficient Control Discretization Based on Turnpike Theory for Dynamic Optimization
by Ali M. Sahlodin and Paul I. Barton
Processes 2017, 5(4), 85; https://doi.org/10.3390/pr5040085 - 18 Dec 2017
Cited by 7 | Viewed by 5615
Abstract
Dynamic optimization offers a great potential for maximizing performance of continuous processes from startup to shutdown by obtaining optimal trajectories for the control variables. However, numerical procedures for dynamic optimization can become prohibitively costly upon a sufficiently fine discretization of control trajectories, especially for large-scale dynamic process models. On the other hand, a coarse discretization of control trajectories is often incapable of representing the optimal solution, thereby leading to reduced performance. In this paper, a new control discretization approach for dynamic optimization of continuous processes is proposed. It builds upon turnpike theory in optimal control and exploits the solution structure for constructing the optimal trajectories and adaptively deciding the locations of the control discretization points. As a result, the proposed approach can potentially yield the same, or even improved, optimal solution with a coarser discretization than a conventional uniform discretization approach. It is shown via case studies that using the proposed approach can reduce the cost of dynamic optimization significantly, mainly due to introducing fewer optimization variables and cheaper sensitivity calculations during integration. Full article
(This article belongs to the Special Issue Combined Scheduling and Control)

804 KiB  
Article
Economic Benefit from Progressive Integration of Scheduling and Control for Continuous Chemical Processes
by Logan D. R. Beal, Damon Petersen, Guilherme Pila, Brady Davis, Sean Warnick and John D. Hedengren
Processes 2017, 5(4), 84; https://doi.org/10.3390/pr5040084 - 13 Dec 2017
Cited by 12 | Viewed by 5960
Abstract
Performance of integrated production scheduling and advanced process control with disturbances is summarized and reviewed with four progressive stages of scheduling and control integration and responsiveness to disturbances: open-loop segregated scheduling and control, closed-loop segregated scheduling and control, open-loop scheduling with consideration of process dynamics, and closed-loop integrated scheduling and control responsive to process disturbances and market fluctuations. Progressive economic benefit from dynamic rescheduling and integrating scheduling and control is shown on a continuously stirred tank reactor (CSTR) benchmark application in closed-loop simulations over 24 h. A fixed horizon integrated scheduling and control formulation for multi-product, continuous chemical processes is utilized, in which nonlinear model predictive control (NMPC) and continuous-time scheduling are combined. Full article
(This article belongs to the Special Issue Combined Scheduling and Control)

363 KiB  
Article
Combined Noncyclic Scheduling and Advanced Control for Continuous Chemical Processes
by Damon Petersen, Logan D. R. Beal, Derek Prestwich, Sean Warnick and John D. Hedengren
Processes 2017, 5(4), 83; https://doi.org/10.3390/pr5040083 - 13 Dec 2017
Cited by 8 | Viewed by 5188
Abstract
A novel formulation for combined scheduling and control of multi-product, continuous chemical processes is introduced in which nonlinear model predictive control (NMPC) and noncyclic continuous-time scheduling are efficiently combined. A decomposition into nonlinear programming (NLP) dynamic optimization problems and mixed-integer linear programming (MILP) problems, without iterative alternation, allows for a computationally light solution. An iterative method is introduced to determine the number of production slots for a noncyclic schedule during a prediction horizon. A filter method is introduced to reduce the number of MILP problems required. The formulation’s closed-loop performance with both process disturbances and updated market conditions is demonstrated through multiple scenarios on a benchmark continuously stirred tank reactor (CSTR) application with fluctuations in market demand and price for multiple products. Economic performance surpasses cyclic scheduling in all scenarios presented. Computational performance is sufficiently light to enable online operation in a dual-loop feedback structure. Full article
(This article belongs to the Special Issue Combined Scheduling and Control)

5013 KiB  
Article
Dry Reforming of Methane Using a Nickel Membrane Reactor
by Jonas M. Leimert, Jürgen Karl and Marius Dillig
Processes 2017, 5(4), 82; https://doi.org/10.3390/pr5040082 - 12 Dec 2017
Cited by 19 | Viewed by 8374
Abstract
Dry reforming is a very interesting process for synthesis gas generation from CH4 and CO2 but suffers from low hydrogen yields due to the reverse water–gas shift reaction (RWGS). For this reason, membranes are often used for hydrogen separation, which in turn leads to coke formation at the process temperatures suitable for the membranes. To avoid these problems, this work shows the possibility of using self-supported nickel membranes for hydrogen separation at a temperature of 800 °C. The higher temperature effectively suppresses coke formation. The paper features the analysis of the dry reforming reaction in a nickel membrane reactor without an additional catalyst. The measurement campaign targeted coke formation and conversion of the methane feedstock. In the nickel membrane reactor, methane conversion was approximately 50% without hydrogen separation. The hydrogen removal led to an increase in methane conversion to 60–90%. Full article
(This article belongs to the Special Issue Membrane Materials, Performance and Processes)
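For readers unfamiliar with the chemistry summarized above, the two competing reactions discussed in the abstract can be written as below; the stoichiometries and approximate standard reaction enthalpies are common literature values, not figures taken from the paper.

\[
\mathrm{CH_4 + CO_2 \rightleftharpoons 2\,CO + 2\,H_2}, \qquad \Delta H^{\circ}_{298} \approx +247\ \mathrm{kJ\,mol^{-1}} \quad \text{(dry reforming)}
\]
\[
\mathrm{CO_2 + H_2 \rightleftharpoons CO + H_2O}, \qquad \Delta H^{\circ}_{298} \approx +41\ \mathrm{kJ\,mol^{-1}} \quad \text{(reverse water--gas shift)}
\]

The reverse water–gas shift consumes part of the hydrogen produced by dry reforming, which is why in-situ hydrogen removal through the nickel membrane raises the methane conversion reported above.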

2120 KiB  
Article
Optimization of Stimulation Parameters for Targeted Activation of Multiple Neurons Using Closed-Loop Search Methods
by Michelle L. Kuykendal, Stephen P. DeWeerth and Martha A. Grover
Processes 2017, 5(4), 81; https://doi.org/10.3390/pr5040081 - 11 Dec 2017
Viewed by 4595
Abstract
Differential activation of neuronal populations can improve the efficacy of clinical devices such as sensory or cortical prostheses. Improving stimulus specificity will facilitate targeted neuronal activation to convey biologically realistic percepts. In order to deliver more complex stimuli to a neuronal population, stimulus optimization techniques must be developed that will enable a single electrode to activate subpopulations of neurons. However, determining the stimulus needed to evoke targeted neuronal activity is challenging. To find the most selective waveform for a particular population, we apply an optimization-based search routine, Powell’s conjugate direction method, to systematically search the stimulus waveform space. This routine utilizes a 1-D sigmoid activation model and a 2-D strength–duration curve to measure neuronal activation throughout the stimulus waveform space. We implement our search routine in both an experimental study and a simulation study to characterize potential stimulus-evoked populations and the associated selective stimulus waveform spaces. We found that for a population of five neurons, seven distinct sub-populations could be activated. The stimulus waveform space and evoked neuronal activation curves vary with each new combination of neuronal culture and electrode array, resulting in a unique selectivity space. The method presented here can be used to efficiently uncover the selectivity space, focusing experiments in regions with the desired activation pattern. Full article
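As an illustration of the kind of derivative-free search described in the abstract, the sketch below applies Powell's conjugate direction method (via SciPy) to choose a stimulus amplitude and duration that drive a toy sigmoid activation model toward a target activation pattern. The sigmoid parameters, strength–duration coupling, and target pattern are hypothetical and are not taken from the paper.

```python
# Hedged sketch: Powell's method searching a 2-D stimulus space (amplitude, duration)
# for a waveform that matches a desired activation pattern of a small neuron population.
# All model parameters and the target pattern below are made up for illustration only.
import numpy as np
from scipy.optimize import minimize

# Each neuron responds to "stimulus strength" (amplitude scaled by duration)
# through a sigmoid activation probability with its own threshold and slope.
thresholds = np.array([1.0, 2.0, 3.5])   # hypothetical per-neuron thresholds
slopes = np.array([4.0, 3.0, 5.0])       # hypothetical per-neuron slopes
target = np.array([1.0, 1.0, 0.0])       # desired pattern: activate neurons 1-2, spare neuron 3

def activation(stimulus):
    amplitude, duration = stimulus
    strength = amplitude * duration      # crude strength-duration coupling
    return 1.0 / (1.0 + np.exp(-slopes * (strength - thresholds)))

def selectivity_error(stimulus):
    # Sum of squared deviations from the target activation pattern
    return np.sum((activation(stimulus) - target) ** 2)

result = minimize(selectivity_error, x0=[1.0, 1.0], method="Powell")
print("best (amplitude, duration):", result.x)
print("activation probabilities:", activation(result.x).round(3))
```

In this toy setting the search converges quickly because the error surface is smooth; the experimental routine in the paper instead evaluates candidate stimuli against measured activation probabilities.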

2022 KiB  
Article
Mathematical Modeling of Tuberculosis Granuloma Activation
by Steve M. Ruggiero, Minu R. Pilvankar and Ashlee N. Ford Versypt
Processes 2017, 5(4), 79; https://doi.org/10.3390/pr5040079 - 11 Dec 2017
Cited by 8 | Viewed by 16810
Abstract
Tuberculosis (TB) is one of the most common infectious diseases worldwide. It is estimated that one-third of the world’s population is infected with TB. Most have the latent stage of the disease that can later transition to active TB disease. TB is spread by aerosol droplets containing Mycobacterium tuberculosis (Mtb). Mtb bacteria enter through the respiratory system and are attacked by the immune system in the lungs. The bacteria are clustered and contained by macrophages into cellular aggregates called granulomas. These granulomas can hold the bacteria dormant for long periods of time in latent TB. The bacteria can be perturbed from latency to active TB disease in a process called granuloma activation when the granulomas are compromised by other immune response events in a host, such as HIV, cancer, or aging. Dysregulation of matrix metalloproteinase 1 (MMP-1) has been recently implicated in granuloma activation through experimental studies, but the mechanism is not well understood. Animal and human studies currently cannot probe the dynamics of activation, so a computational model is developed to fill this gap. This dynamic mathematical model focuses specifically on the latent to active transition after the initial immune response has successfully formed a granuloma. Bacterial leakage from latent granulomas is successfully simulated in response to the MMP-1 dynamics under several scenarios for granuloma activation. Full article
(This article belongs to the Special Issue Biological Networks)

2870 KiB  
Article
Fuel Evaporation in an Atmospheric Premixed Burner: Sensitivity Analysis and Spray Vaporization
by Dávid Csemány and Viktor Józsa
Processes 2017, 5(4), 80; https://doi.org/10.3390/pr5040080 - 07 Dec 2017
Cited by 13 | Viewed by 8153
Abstract
Calculation of evaporation requires accurate thermophysical properties of the liquid. Such data are well known for conventional fossil fuels. In contrast, e.g., the thermal conductivity or dynamic viscosity of the fuel vapor are rarely available for modern liquid fuels. To overcome this problem, molecular models can be used. Here, the measurement-based properties of n-heptane and diesel oil are compared with estimated values, using state-of-the-art molecular models to derive the temperature-dependent material properties. Then their effect on droplet evaporation was evaluated. The critical parameters were liquid density, latent heat of vaporization, boiling temperature, and vapor thermal conductivity, where the estimation affected the evaporation time notably. Besides a general sensitivity analysis, evaporation modeling in a practical burner gave similar results. By calculating droplet motion, the evaporation number, i.e., the evaporation-to-residence time ratio, can be derived. An empirical cumulative distribution function is used for the spray of the analyzed burner to evaluate evaporation in the mixing tube. The evaporation number did not exceed 0.4, meaning full evaporation prior to reaching the burner lip in all cases. As droplet inertia depends upon its size, the residence time has a minimum value due to the phenomenon of overshooting. Full article
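As a hedged reminder of the quantity named in the abstract (the exact definitions used by the authors may differ), the evaporation number compares a droplet's evaporation time with its residence time in the mixing tube; under the classical d²-law, the evaporation time of a droplet with initial diameter d0 follows from the evaporation rate constant K:

\[
t_{\mathrm{evap}} = \frac{d_0^2}{K}, \qquad \mathrm{Ev} = \frac{t_{\mathrm{evap}}}{t_{\mathrm{res}}}
\]

A value of Ev below one then indicates that the droplet evaporates before leaving the mixing tube, consistent with the reported maximum of 0.4.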

2724 KiB  
Article
Dynamics of the Bacterial Community Associated with Phaeodactylum tricornutum Cultures
by Fiona Wanjiku Moejes, Antonella Succurro, Ovidiu Popa, Julie Maguire and Oliver Ebenhöh
Processes 2017, 5(4), 77; https://doi.org/10.3390/pr5040077 - 07 Dec 2017
Cited by 16 | Viewed by 8570
Abstract
The pennate diatom Phaeodactylum tricornutum is a model organism able to synthesize industrially-relevant molecules. Commercial-scale cultivation currently requires large monocultures, prone to bio-contamination. However, little is known about the identity of the invading organisms. To reduce the complexity of natural systems, we systematically investigated the microbiome of non-axenic P. tricornutum cultures from a culture collection in reproducible experiments. The results revealed a dynamic bacterial community that developed differently in “complete” and “minimal” media conditions. In complete media, we observed an accelerated “culture crash”, indicating a more stable culture in minimal media. The identification of only four bacterial families as major players within the microbiome suggests specific roles depending on environmental conditions. From our results we propose a network of putative interactions between P. tricornutum and these main bacterial factions. We demonstrate that, even with rather sparse data, a mathematical model can be reconstructed that qualitatively reproduces the observed population dynamics, thus indicating that our hypotheses regarding the molecular interactions are in agreement with experimental data. Whereas the model in its current state is only qualitative, we argue that it serves as a starting point to develop quantitative and predictive mathematical models, which may guide experimental efforts to synthetically construct and monitor stable communities required for robust upscaling strategies. Full article

2909 KiB  
Article
Effect of Moisture Content on Lignocellulosic Power Generation: Energy, Economic and Environmental Impacts
by Karthik Rajendran
Processes 2017, 5(4), 78; https://doi.org/10.3390/pr5040078 - 06 Dec 2017
Cited by 9 | Viewed by 6807
Abstract
The moisture content of biomass affects its processing for applications such as electricity or steam. In this study, the effects of variation in the moisture content of banagrass and energycane were evaluated using techno-economic analysis and life-cycle assessments. A 25% loss of moisture was assumed as a variation that was achieved by field drying the biomass. Techno-economic analysis revealed that high moisture in the biomass was not economically feasible. Comparing banagrass with energycane, the latter was more economically feasible, thanks to the low moisture and ash content in energycane. About 32 GWh/year of electricity was produced by field drying 60,000 dry MT/year of energycane. The investment for different scenarios ranged between $17 million and $22 million. Field-dried energycane was the only economically viable option that recovered the investment after 11 years of operation. This scenario was also more environmentally friendly, releasing 16 g CO2-equivalent/MJ of electricity produced. Full article

4728 KiB  
Article
A Validated Model for Design and Evaluation of Control Architectures for a Continuous Tablet Compaction Process
by Fernando Nunes de Barros, Aparajith Bhaskar and Ravendra Singh
Processes 2017, 5(4), 76; https://doi.org/10.3390/pr5040076 - 01 Dec 2017
Cited by 8 | Viewed by 7365
Abstract
The systematic design of an advanced and efficient control strategy for controlling critical quality attributes of the tablet compaction operation is necessary to increase the robustness of a continuous pharmaceutical manufacturing process and for real-time release. A process model plays a very important role in designing, evaluating and tuning the control system. However, much less attention has been paid to developing a validated, control-relevant model for the tablet compaction process that can be systematically applied for design, evaluation, tuning and thereby implementation of the control system. In this work, a dynamic tablet compaction model capable of predicting linear and nonlinear process responses has been successfully developed and validated. The nonlinear model is based on a series of transfer functions and static polynomial models. The model has been applied for control system design, tuning and evaluation, thereby facilitating the implementation of the control system in the pilot plant with less time and fewer resources. The best performing control algorithm was used in the implementation and evaluation of different strategies for control of tablet weight and breaking force. A characterization of the evaluated control strategies is presented and can serve as a guideline for the selection of an adequate control strategy for a given tablet compaction setup. A strategy based on a multiple-input multiple-output (MIMO) model predictive controller (MPC), developed using the simulation environment, has been implemented in a tablet press unit, verifying the relevance of the simulation tool. Full article
(This article belongs to the Special Issue Combined Scheduling and Control)

6211 KiB  
Article
RadViz Deluxe: An Attribute-Aware Display for Multivariate Data
by Shenghui Cheng, Wei Xu and Klaus Mueller
Processes 2017, 5(4), 75; https://doi.org/10.3390/pr5040075 - 22 Nov 2017
Cited by 15 | Viewed by 7490
Abstract
Modern data, such as those occurring in chemical engineering, typically entail large collections of samples with numerous dimensional components (or attributes). Visualizing the samples in relation to these components can bring valuable insight. For example, one may be able to see how a certain chemical property is expressed in the samples taken. This could reveal whether there are clusters and outliers that have specific distinguishing properties. Current multivariate visualization methods lack the ability to reveal these types of information at a sufficient degree of fidelity since they are not optimized to simultaneously present the relations of the samples as well as the relations of the samples to their attributes. We propose a display that is designed to reveal these multiple relations. Our scheme is based on the concept of RadViz, but enhances the layout with three stages of iterative refinement. These refinements reduce the layout error in terms of three essential relationships: sample to sample, attribute to attribute, and sample to attribute. We demonstrate the effectiveness of our method via various real-world examples in the domain of chemical process engineering. In addition, we formally derive the equivalence of RadViz to a popular multivariate interpolation method called generalized barycentric coordinates. Full article
(This article belongs to the Collection Process Data Analytics)
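To make the RadViz construction referred to above concrete, the sketch below computes the classical RadViz projection: each sample is placed at the attribute-weighted average of anchor points spread on the unit circle, which is also the generalized-barycentric-coordinates view mentioned in the abstract. The three refinement stages of RadViz Deluxe are not reproduced here, and the data are random and purely illustrative.

```python
# Hedged sketch of the classical RadViz mapping (not the RadViz Deluxe refinements):
# each attribute gets an anchor on the unit circle, and every sample is placed at the
# weighted average of those anchors using its (min-max normalized) attribute values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 5))      # 100 samples, 5 attributes (illustrative data)

def radviz(X):
    n_attr = X.shape[1]
    angles = 2 * np.pi * np.arange(n_attr) / n_attr
    anchors = np.column_stack([np.cos(angles), np.sin(angles)])  # anchors on the unit circle
    # Min-max normalize each attribute so the weights lie in [0, 1]
    W = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    # Barycentric combination: point_i = sum_j w_ij * anchor_j / sum_j w_ij
    return (W @ anchors) / W.sum(axis=1, keepdims=True)

points = radviz(X)
print(points[:5])   # 2-D layout coordinates of the first five samples
```

Because every point is a convex combination of the anchors, the layout always falls inside the circle; the refinements discussed in the paper then adjust this baseline to reduce sample-sample, attribute-attribute, and sample-attribute layout errors.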

2848 KiB  
Article
An Integrated Mathematical Model of Microbial Fuel Cell Processes: Bioelectrochemical and Microbiologic Aspects
by Andrea G. Capodaglio, Daniele Cecconet and Daniele Molognoni
Processes 2017, 5(4), 73; https://doi.org/10.3390/pr5040073 - 20 Nov 2017
Cited by 34 | Viewed by 7274
Abstract
Microbial Fuel Cells (MFCs) represent a still relatively new technology for liquid organic waste treatment and simultaneous recovery of energy and resources. Although the technology is quite appealing due to its potential benefits, its practical application is still hampered by several drawbacks, such as system instability (especially when attempting to scale up reactors from laboratory prototypes), internally competing microbial reactions, and limited power generation. This paper is an attempt to address some of the issues related to MFC application in wastewater treatment with a simulation model. Reactor configuration, operational schemes, electrochemical and microbiological characterization, optimization methods and modelling strategies were reviewed and have been included in a mathematical simulation model written with a multidisciplinary, multi-perspective approach, considering the possibility of feeding real substrates to an MFC system while dealing with a complex microbiological population. The conclusions drawn herein can be of practical interest for all MFC researchers dealing with domestic or industrial wastewater treatment. Full article

8046 KiB  
Article
Stochasticity in the Parasite-Driven Trait Evolution of Competing Species Masks the Distinctive Consequences of Distance Metrics
by Christian Alvin H. Buhat, Dylan Antonio S. J. Talabis, Anthony L. Cueno, Maica Krizna A. Gavina, Ariel L. Babierra, Genaro A. Cuaresma and Jomar F. Rabajante
Processes 2017, 5(4), 74; https://doi.org/10.3390/pr5040074 - 17 Nov 2017
Cited by 1 | Viewed by 4820
Abstract
Various distance metrics and their induced norms are employed in the quantitative modeling of evolutionary dynamics. Minimization of these distance metrics, when applied to evolutionary optimization, is hypothesized to result in different outcomes. Here, we apply the different distance metrics to the evolutionary trait dynamics brought about by the interaction between two competing species infected by parasites (exploiters). We present deterministic cases showing the distinctive selection outcomes under the Manhattan, Euclidean, and Chebyshev norms. Specifically, we show how they differ in the time of convergence to the desired optima (e.g., no disease), and in the egalitarian sharing of carrying capacity between the competing species. However, when randomness is introduced to the population dynamics of parasites and to the trait dynamics of the competing species, the distinctive characteristics of the outcomes under the three norms become indistinguishable. Our results provide theoretical cases in which evolutionary dynamics using different distance metrics exhibit similar outcomes. Full article

8110 KiB  
Article
Development of Molecularly Imprinted Polymers to Target Polyphenols Present in Plant Extracts
by Catarina Gomes, Gayane Sadoyan, Rolando C. S. Dias and Mário Rui P. F. N. Costa
Processes 2017, 5(4), 72; https://doi.org/10.3390/pr5040072 - 14 Nov 2017
Cited by 25 | Viewed by 6863
Abstract
The development of molecularly imprinted polymers (MIPs) to target polyphenols present in vegetable extracts is addressed here. Polydatin was selected as the template polyphenol due to its relatively large size and amphiphilic character. Different MIPs were synthesized to explore preferential interactions between the functional monomers and the template molecule. The effect of solvent polarity on the molecular imprinting efficiency, namely owing to hydrophobic interactions, was also assessed. Precipitation and suspension polymerization were examined as possible ways to change MIP morphology and performance. Solid-phase extraction and batch/continuous sorption processes were used to evaluate polyphenol uptake/release in individual/competitive assays. Among the prepared MIPs, a material synthesized by suspension polymerization, with 4-vinylpyridine as the functional monomer and water/methanol as the solvent, showed a superior performance. The underlying cause of this outcome is a likely surface imprinting process arising from the amphiphilic properties of polydatin. The uptake and subsequent selective release of polyphenols present in natural extracts was successfully demonstrated, considering a red wine solution as a case study. However, hydrophilic/hydrophobic interactions are inevitable (especially with complex natural extracts) and tuning the polarity of the solvents is an important issue for the isolation of the different polyphenols. Full article
(This article belongs to the Special Issue Water Soluble Polymers)

1937 KiB  
Article
Multistage Stochastic Programming Models for Pharmaceutical Clinical Trial Planning
by Zuo Zeng and Selen Cremaschi
Processes 2017, 5(4), 71; https://doi.org/10.3390/pr5040071 - 09 Nov 2017
Cited by 3 | Viewed by 5114
Abstract
Clinical trial planning of candidate drugs is an important task for pharmaceutical companies. In this paper, we propose two new multistage stochastic programming formulations (CM1 and CM2) to determine the optimal clinical trial plan under uncertainty. Decisions of a clinical trial plan include which clinical trials to start and their start times. The objective is to maximize the expected net present value of the entire clinical trial plan. The outcome of a clinical trial is uncertain, i.e., whether a potential drug successfully completes a clinical trial is not known until the clinical trial is completed. This uncertainty is modeled using an endogenous uncertain parameter in CM1 and CM2. The main difference between CM1 and CM2 is an additional binary variable, which tracks both start and end time points of clinical trials in CM2. We compare the sizes and solution times of CM1 and CM2 with each other and with a previously developed formulation (CM3) using different instances of the clinical trial planning problem. The results reveal that the solution times of CM1 and CM2 are similar to each other and are up to two orders of magnitude shorter compared to CM3 for all instances considered. In general, the root relaxation problems of CM1 and CM2 were solved more quickly, CM1 and CM2 yielded tight initial gaps, and the solver required fewer branches to converge to the optimum for CM1 and CM2. Full article
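As a toy illustration of the objective described above, namely the expected net present value over uncertain trial outcomes, the sketch below enumerates the success/failure scenarios of a fixed Phase I to Phase III sequence for a single drug. All probabilities, costs, durations, and the revenue figure are invented, and the sketch deliberately ignores the start-time decisions and resource constraints that the actual multistage formulations (CM1/CM2) optimize.

```python
# Hedged sketch: expected NPV of running Phase I -> II -> III trials for one candidate drug.
# All probabilities, costs, durations and the revenue below are invented for illustration;
# the real formulations also decide which trials to start and when, under resource limits.

phases = [  # (name, success probability, cost in $M, duration in years) -- hypothetical
    ("Phase I",   0.6, 10, 1.0),
    ("Phase II",  0.4, 30, 2.0),
    ("Phase III", 0.5, 80, 3.0),
]
revenue = 1000.0   # $M earned only if every phase succeeds (hypothetical)
discount = 0.10    # annual discount rate

def scenario(fail_at):
    """NPV and probability of the scenario whose first failure is at phase `fail_at`
    (fail_at=None means every phase succeeds)."""
    prob, npv, t = 1.0, 0.0, 0.0
    for k, (_, p, cost, dur) in enumerate(phases):
        npv -= cost / (1 + discount) ** t      # trial cost paid at its start time
        t += dur
        if k == fail_at:
            return npv, prob * (1 - p)         # this phase fails; later phases never run
        prob *= p
    return npv + revenue / (1 + discount) ** t, prob

expected_npv = sum(p * v for v, p in (scenario(s) for s in [None, 0, 1, 2]))
print(f"expected NPV: {expected_npv:.1f} $M")
```

The scenario probabilities sum to one, so the printed value is a proper expectation; a stochastic program would additionally choose start times and portfolios to maximize this quantity.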

2776 KiB  
Article
Organic Polymers as Porogenic Structure Matrices for Mesoporous Alumina and Magnesia
by Zimei Chen, Christian Weinberger, Michael Tiemann and Dirk Kuckling
Processes 2017, 5(4), 70; https://doi.org/10.3390/pr5040070 - 08 Nov 2017
Cited by 6 | Viewed by 6050
Abstract
Mesoporous alumina and magnesia were prepared using various polymers, poly(ethylene glycol) (PEG), poly(vinyl alcohol) (PVA), poly(N-(2-hydroxypropyl) methacrylamide) (PHPMA), and poly(dimethylacrylamide) (PDMAAm), as porogenic structure matrices. Mesoporous alumina exhibits large Brunauer–Emmett–Teller (BET) surface areas up to 365 m² g⁻¹, while mesoporous magnesium oxide possesses BET surface areas around 111 m² g⁻¹. Variation of the polymers has little impact on the structural properties of the products. The calcination of the polymer/metal oxide composite materials benefits from the fact that the polymer decomposition is catalyzed by the freshly formed metal oxide. Full article
(This article belongs to the Special Issue Water Soluble Polymers)
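For context on the BET surface areas quoted above, the specific surface area is conventionally obtained from gas physisorption data via the linearized BET equation in its standard textbook form (not specific to this paper), where v is the adsorbed quantity at relative pressure p/p0, v_m the monolayer capacity, and c the BET constant:

\[
\frac{1}{v\left[(p_0/p) - 1\right]} = \frac{c-1}{v_m c}\,\frac{p}{p_0} + \frac{1}{v_m c}
\]

Fitting this line over the usual low relative-pressure range gives v_m, from which the specific surface area follows using the cross-sectional area of the adsorbate molecule.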

1813 KiB  
Article
A General State-Space Formulation for Online Scheduling
by Dhruv Gupta and Christos T. Maravelias
Processes 2017, 5(4), 69; https://doi.org/10.3390/pr5040069 - 08 Nov 2017
Cited by 24 | Viewed by 6100
Abstract
We present a generalized state-space model formulation particularly motivated by an online scheduling perspective, which allows modeling (1) task delays and unit breakdowns; (2) fractional delays and unit downtimes when using a discrete-time grid; (3) variable batch sizes; (4) robust scheduling through the use of conservative yield estimates and processing times; (5) feedback on task-yield estimates before the task finishes; (6) task termination during its execution; (7) post-production storage of material in a unit; and (8) unit capacity degradation and maintenance. Through these proposed generalizations, we enable a natural way to handle routinely encountered disturbances and a rich set of corresponding counter-decisions, thereby greatly simplifying and extending the possible application of mathematical-programming-based online scheduling solutions to diverse settings. Finally, we demonstrate the effectiveness of this model on a case study from the field of bio-manufacturing. Full article
(This article belongs to the Special Issue Combined Scheduling and Control)

20401 KiB  
Article
Selected Phenomena of the In-Mold Nodularization Process of Cast Iron That Influence the Quality of Cast Machine Parts
by Marcin Stawarz, Krzysztof Janerka and Malwina Dojka
Processes 2017, 5(4), 68; https://doi.org/10.3390/pr5040068 - 06 Nov 2017
Cited by 4 | Viewed by 7280
Abstract
This paper discusses a problem connected with the production of ductile iron castings made using the in-mold method. Study results are presented showing that this method compromises the quality of the cast machine parts and of the equipment itself. The specifics of the nodularization process using the in-mold method do not provide proper conditions for removal of the chemical reaction products to the slag; i.e., the products stay in the mold cavity and also decrease the quality of the casting. In this work, corrosion-type defects were diagnosed, mostly on the surface of the casting, along with compounds in the near-surface layer, i.e., fayalite (Fe2SiO4) and forsterite (Mg2SiO4), which cause discontinuities in the metal matrix. The results presented here were selected based on experimental melts of ductile iron. The elements of the mold used in this study, the shape of the mixing chamber, the charge materials, the method of melting, the temperature of the liquid metal, etc., were directly related to the production conditions. An analysis of the chemical composition was conducted using a Leco GDS500A spectrometer and a Leco CS125 carbon and sulfur analyzer. Metallographic examinations were conducted using a Phenom-ProX scanning electron microscope with an EDS system. Full article

2436 KiB  
Article
Stop Smoking—Tube-In-Tube Helical System for Flameless Calcination of Minerals
by Nils Haneklaus, Yanhua Zheng and Hans-Josef Allelein
Processes 2017, 5(4), 67; https://doi.org/10.3390/pr5040067 - 03 Nov 2017
Cited by 11 | Viewed by 7718
Abstract
Mineral calcination worldwide accounts for some 5–10% of all anthropogenic carbon dioxide (CO2) emissions per year. Roughly half of the CO2 released results from burning fossil fuels for heat generation, while the other half is a product of the calcination reaction itself. Traditionally, the fuel combustion process and the calcination reaction take place together to enhance heat transfer. Systems have been proposed that separate fuel combustion and calcination to allow for the sequestration of pure CO2 from the calcination reaction for later storage/use and capture of the combustion gases. This work presents a new tube-in-tube helical system for the calcination of minerals that can use different heat transfer fluids (HTFs), employed or foreseen in concentrated solar power (CSP) plants. The system is labeled ‘flameless’ since the HTF can be heated by means other than burning fossil fuels. If CSP or high-temperature nuclear reactors are used, direct CO2 emissions can be cut in half. The technical feasibility of the system has been assessed with a brief parametric study. The results suggest that the introduced system is technically feasible given the parameters (total heat transfer coefficients, mass and volume flows, outer tube friction factors, and Nusselt numbers) that are examined. Further experimental work will be required to better understand the performance of the tube-in-tube helical system for the flameless calcination of minerals. Full article

5254 KiB  
Article
Using Simulation for Scheduling and Rescheduling of Batch Processes
by Girish Joglekar
Processes 2017, 5(4), 66; https://doi.org/10.3390/pr5040066 - 02 Nov 2017
Cited by 6 | Viewed by 6912
Abstract
The problem of scheduling multiproduct and multipurpose batch processes has been studied for more than 30 years using math programming and heuristics. In most formulations, the manufacturing recipes are represented by simplified models using state task network (STN) or resource task network (RTN), transfers of materials are assumed to be instantaneous, constraints due to shared utilities are often ignored, and scheduling horizons are kept small due to the limits on the problem size that can be handled by the solvers. These limitations often result in schedules that are not actionable. A simulation model, on the other hand, can represent a manufacturing recipe to the smallest level of detail. In addition, a simulator can provide a variety of built-in capabilities that model the assignment decisions, coordination logic and plant operation rules. The simulation based schedules are more realistic, verifiable, easy to adapt for changing plant conditions and can be generated in a short period of time. An easy-to-use simulator based framework can be developed to support scheduling decisions made by operations personnel. In this paper, first the complexities of batch recipes and operations are discussed, followed by examples of using the BATCHES simulator for off-line scheduling studies and for day-to-day scheduling. Full article
(This article belongs to the Special Issue Combined Scheduling and Control)

3848 KiB  
Article
Dispersal-Based Microbial Community Assembly Decreases Biogeochemical Function
by Emily B. Graham and James C. Stegen
Processes 2017, 5(4), 65; https://doi.org/10.3390/pr5040065 - 01 Nov 2017
Cited by 92 | Viewed by 10017
Abstract
Ecological mechanisms influence relationships among microbial communities, which in turn impact biogeochemistry. In particular, microbial communities are assembled by deterministic (e.g., selection) and stochastic (e.g., dispersal) processes, and the relative balance of these two process types is hypothesized to alter the influence of microbial communities over biogeochemical function. We used an ecological simulation model to evaluate this hypothesis, defining biogeochemical function generically to represent any biogeochemical reaction of interest. We assembled receiving communities under different levels of dispersal from a source community that was assembled purely by selection. The dispersal scenarios ranged from no dispersal (i.e., selection-only) to dispersal rates high enough to overwhelm selection (i.e., homogenizing dispersal). We used an aggregate measure of community fitness to infer a given community’s biogeochemical function relative to other communities. We also used ecological null models to further link the relative influence of deterministic assembly to function. We found that increasing rates of dispersal decrease biogeochemical function by increasing the proportion of maladapted taxa in a local community. Niche breadth was also a key determinant of biogeochemical function, suggesting a tradeoff between the function of generalist and specialist species. Finally, we show that microbial assembly processes exert greater influence over biogeochemical function when there is variation in the relative contributions of dispersal and selection among communities. Taken together, our results highlight the influence of spatial processes on biogeochemical function and indicate the need to account for such effects in models that aim to predict biogeochemical function under future environmental scenarios. Full article

1582 KiB  
Article
How to Generate Economic and Sustainability Reports from Big Data? Qualifications of Process Industry
by Esa Hämäläinen and Tommi Inkinen
Processes 2017, 5(4), 64; https://doi.org/10.3390/pr5040064 - 01 Nov 2017
Cited by 11 | Viewed by 6612
Abstract
Big Data may introduce new opportunities, and for this reason it has become a mantra among most industries. This paper focuses on examining how to develop cost and sustainability reporting by utilizing Big Data that covers economic values, production volumes, and emission information. We strongly assume that this use supports cleaner production, while at the same time offering more information for revenue and profitability development. We argue that Big Data brings company-wide business benefits if data queries and interfaces are built to be interactive, intuitive, and user-friendly. The amount of information related to operations, costs, emissions, and the supply chain would increase enormously if Big Data were used in various manufacturing industries. It is essential to expose the relevant correlations between different attributes and data fields. Proper algorithm design and programming are key to making the most of Big Data. This paper introduces ideas on how to refine raw data into valuable information, which can serve many types of end users, decision makers, and even external auditors. Concrete examples are given through an industrial paper mill case, which covers environmental aspects, cost-efficiency management, and process design. Full article
(This article belongs to the Collection Process Data Analytics)

351 KiB  
Article
Multi-Objective Optimization of Experiments Using Curvature and Fisher Information Matrix
by Erica Manesso, Srinath Sridharan and Rudiyanto Gunawan
Processes 2017, 5(4), 63; https://doi.org/10.3390/pr5040063 - 01 Nov 2017
Cited by 10 | Viewed by 6527
Abstract
The bottleneck in creating dynamic models of biological networks and processes often lies in estimating unknown kinetic model parameters from experimental data. In this regard, experimental conditions have a strong influence on parameter identifiability and should therefore be optimized to give the maximum information for parameter estimation. Existing model-based design of experiment (MBDOE) methods commonly rely on the Fisher information matrix (FIM) for defining a metric of data informativeness. When the model behavior is highly nonlinear, FIM-based criteria may lead to suboptimal designs, as the FIM only accounts for the linear variation in the model outputs with respect to the parameters. In this work, we developed a multi-objective optimization (MOO) MBDOE, for which the model nonlinearity was taken into consideration through the use of curvature. The proposed MOO MBDOE involved maximizing data informativeness using a FIM-based metric and at the same time minimizing the model curvature. We demonstrated the advantages of the MOO MBDOE over existing FIM-based and other curvature-based MBDOEs in an application to the kinetic modeling of fed-batch fermentation of baker’s yeast. Full article
(This article belongs to the Special Issue Biological Networks)
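To make the FIM-based part of the design metric concrete, the sketch below builds a Fisher information matrix from finite-difference output sensitivities of a toy exponential-decay model and evaluates the D-optimality criterion (log-determinant). The model, parameter values, sampling times, and noise level are hypothetical, and the curvature term of the proposed MOO MBDOE is not reproduced here.

```python
# Hedged sketch: FIM and D-optimality for a toy model y(t) = a * exp(-b * t),
# assuming independent Gaussian measurement noise with standard deviation sigma.
# The model, parameters, sampling times and sigma are illustrative, not from the paper.
import numpy as np

def model(params, t):
    a, b = params
    return a * np.exp(-b * t)

def sensitivities(params, t, h=1e-6):
    """Finite-difference sensitivities dy/dtheta, shape (n_times, n_params)."""
    base = model(params, t)
    S = np.empty((t.size, len(params)))
    for j in range(len(params)):
        perturbed = np.array(params, dtype=float)
        perturbed[j] += h
        S[:, j] = (model(perturbed, t) - base) / h
    return S

params = [2.0, 0.5]                            # hypothetical "true" parameter values
sigma = 0.05                                   # assumed measurement noise level
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # candidate sampling design

S = sensitivities(params, times)
FIM = (S.T @ S) / sigma**2                     # Fisher information for i.i.d. Gaussian noise
d_optimality = np.linalg.slogdet(FIM)[1]       # log det(FIM): larger = more informative
print("FIM:\n", FIM)
print("D-optimality (log det):", round(d_optimality, 3))
```

Comparing this log-determinant across candidate sampling schedules ranks their informativeness; the paper's multi-objective design additionally penalizes model curvature, which a purely FIM-based ranking ignores.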

2840 KiB  
Article
Optimization through Response Surface Methodology of a Reactor Producing Methanol by the Hydrogenation of Carbon Dioxide
by Grazia Leonzio
Processes 2017, 5(4), 62; https://doi.org/10.3390/pr5040062 - 23 Oct 2017
Cited by 22 | Viewed by 8766
Abstract
Carbon dioxide conversion and utilization is gaining significant attention worldwide, not only because carbon dioxide has an impact on global climate change, but also because it provides a source for potential fuels and chemicals. Methanol is an important fuel that can be obtained by the hydrogenation of carbon dioxide. In this research, the modeling of a reactor to produce methanol from carbon dioxide and hydrogen is carried out by way of an ANOVA and a central composite design. Reaction temperature, reaction pressure, H2/CO2 ratio, and recycling are the chosen factors, while methanol production and reactor volume are the studied responses. Results show that the interaction AC is common to the two responses and allows productivity to be improved while reducing the reactor volume. A mathematical model for methanol production and reactor volume is obtained with the significant factors. A central composite design is used to optimize the process. Results show that higher productivity is obtained with the temperature, CO2/H2 ratio, and recycle factors at higher, lower, and higher levels, respectively. The methanol production is equal to 33,540 kg/h, while the reactor volume is 6 m³. Future research should investigate the economic analysis of the process in order to improve productivity at lower cost. Full article
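For reference, the hydrogenation chemistry behind the reactor model is commonly written as the two reactions below; the stoichiometries and approximate standard enthalpies are textbook values, not figures taken from the paper.

\[
\mathrm{CO_2 + 3\,H_2 \rightleftharpoons CH_3OH + H_2O}, \qquad \Delta H^{\circ}_{298} \approx -49\ \mathrm{kJ\,mol^{-1}}
\]
\[
\mathrm{CO_2 + H_2 \rightleftharpoons CO + H_2O}, \qquad \Delta H^{\circ}_{298} \approx +41\ \mathrm{kJ\,mol^{-1}} \quad \text{(reverse water--gas shift side reaction)}
\]

The exothermic main reaction and the competing reverse water–gas shift help explain why temperature, pressure, and the H2/CO2 ratio appear as the dominant factors in the response surface study.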

3162 KiB  
Review
Improving Bioenergy Crops through Dynamic Metabolic Modeling
by Mojdeh Faraji and Eberhard O. Voit
Processes 2017, 5(4), 61; https://doi.org/10.3390/pr5040061 - 18 Oct 2017
Cited by 9 | Viewed by 5685
Abstract
Enormous advances in genetics and metabolic engineering have made it possible, in principle, to create new plants and crops with improved yield through targeted molecular alterations. However, while the potential is beyond doubt, the actual implementation of envisioned new strains is often difficult, due to the diverse and complex nature of plants. Indeed, the intrinsic complexity of plants makes intuitive predictions difficult and often unreliable. The hope for overcoming this challenge is that methods of data mining and computational systems biology may become powerful enough that they could serve as beneficial tools for guiding future experimentation. In the first part of this article, we review the complexities of plants, as well as some of the mathematical and computational methods that have been used in the recent past to deepen our understanding of crops and their potential yield improvements. In the second part, we present a specific case study that indicates how robust models may be employed for crop improvements. This case study focuses on the biosynthesis of lignin in switchgrass (Panicum virgatum). Switchgrass is considered one of the most promising candidates for the second generation of bioenergy production, which does not use edible plant parts. Lignin is important in this context, because it impedes the use of cellulose in such inedible plant materials. The dynamic model offers a platform for investigating the pathway behavior in transgenic lines. In particular, it allows predictions of lignin content and composition in numerous genetic perturbation scenarios. Full article
(This article belongs to the Special Issue Biological Networks)

3897 KiB  
Article
Minimizing the Effect of Substantial Perturbations in Military Water Systems for Increased Resilience and Efficiency
by Corey M. James, Michael E. Webber and Thomas F. Edgar
Processes 2017, 5(4), 60; https://doi.org/10.3390/pr5040060 - 18 Oct 2017
Cited by 1 | Viewed by 4583
Abstract
A model predictive control (MPC) framework, exploiting both feedforward and feedback control loops, is employed to minimize large disturbances that occur in military water networks. Military installations’ need for resilient and efficient water supplies is often challenged by large disturbances such as fires, terrorist activity, troop training rotations, and large-scale leaks. This work applies MPC to provide predictive capability and to compensate for vast geographical differences and varying phenomena time scales using computational software and actual system dimensions and parameters. The results show that large disturbances are rapidly minimized while chlorine concentration is maintained within legal limits at the point of demand and overall water usage is minimized. The control framework also ensures pumping is minimized during peak electricity hours, so costs are kept lower than with simple proportional control. The control structure implemented in this work is able to support resiliency and increased efficiency on military bases by minimizing tank holdup, effectively countering large disturbances, and efficiently managing pumping. Full article

1526 KiB  
Article
Thermal and Rheological Properties of Crude Tall Oil for Use in Biodiesel Production
by Peter Adewale and Lew P. Christopher
Processes 2017, 5(4), 59; https://doi.org/10.3390/pr5040059 - 15 Oct 2017
Cited by 9 | Viewed by 8835
Abstract
The primary objective of this work was to investigate the thermal and rheological properties of crude tall oil (CTO), a low-cost by-product of the Kraft pulping process, as a potential feedstock for biodiesel production. Adequate knowledge of CTO properties is a prerequisite for the optimal design of a cost-effective biodiesel process and related processing equipment. The study revealed the correlation between the physicochemical properties and the thermal and rheological behavior of CTO. It was established that the trans/esterification temperature for CTO was greater than the temperature at which the viscosity of CTO reached a steady state. This information is useful in the selection of appropriate agitation conditions for optimal biodiesel production from CTO. The point of intersection of the storage modulus (G′) and loss modulus (G′′) determined the glass transition temperature (40 °C) of CTO, which strongly correlated with its melting point (35.3 °C). The flow pattern of CTO was modeled as that of a non-Newtonian fluid. Furthermore, due to the high content of fatty acids (FA) in CTO, it is recommended to first reduce the FA level by acid-catalyzed methanolysis prior to alkali treatment, or alternatively to apply a one-step heterogeneous or enzymatic trans/esterification of CTO for high-yield biodiesel production. Full article

1598 KiB  
Article
A Reaction Database for Small Molecule Pharmaceutical Processes Integrated with Process Information
by Emmanouil Papadakis, Amata Anantpinijwatna, John M. Woodley and Rafiqul Gani
Processes 2017, 5(4), 58; https://doi.org/10.3390/pr5040058 - 12 Oct 2017
Cited by 12 | Viewed by 11409
Abstract
This article describes the development of a reaction database with the objective of collecting data for multiphase reactions involved in small-molecule pharmaceutical processes, with a search engine to retrieve the data needed in investigations of reaction-separation schemes, such as the role of organic solvents in improving reaction performance. The focus of this reaction database is to provide a data-rich environment with process information available to assist during the early-stage synthesis of pharmaceutical products. The database is structured in terms of classification of reaction types; compounds participating in the reaction; use of organic solvents and their function; information for single-step and multistep reactions; target products; reaction conditions and reaction data. Information for reactor scale-up, together with information for the separation and other relevant information for each reaction, and references are also available in the database. Additionally, the information retrieved from the database can be evaluated in terms of sustainability using well-known “green” metrics published in the scientific literature. The application of the database is illustrated through the synthesis of ibuprofen, for which data on different reaction pathways have been retrieved from the database and compared using “green” chemistry metrics. Full article

11476 KiB  
Article
Energy Optimization of Gas–Liquid Dispersion in Micronozzles Assisted by Design of Experiment
by Felix Reichmann, Fabian Varel and Norbert Kockmann
Processes 2017, 5(4), 57; https://doi.org/10.3390/pr5040057 - 12 Oct 2017
Cited by 12 | Viewed by 6181
Abstract
In recent years, gas–liquid flow in microchannels has drawn much attention in research fields ranging from analytics to applications such as oxidations or hydrogenations. Since surface forces become increasingly important on the small scale, bubble coalescence is detrimental and leads to Taylor bubble flow in microchannels with a low surface-to-volume ratio. To overcome this limitation, we have investigated the gas–liquid flow through micronozzles and, specifically, the bubble breakup behind the nozzle. Two different regimes of bubble breakup are identified, laminar and turbulent. Turbulent bubble breakup is characterized by small daughter bubbles and a narrow daughter bubble size distribution. Thus, a high interfacial area is generated for increased mass and heat transfer. However, the turbulent breakup mechanism is observed at high flow rates and increased pressure drops; hence, a large energy input into the system is essential. In this work, a Design of Experiment assisted evaluation of turbulent bubbly flow redispersion is carried out to investigate the effect and significance of the nozzle’s geometrical parameters with regard to bubble breakup and pressure drop. Here, the hydraulic diameter and length of the nozzle show the largest impacts. Finally, factor optimization leads to an optimized nozzle geometry for bubble redispersion via a micronozzle in terms of energy efficacy, attaining a high interfacial area and surface-to-volume ratio with rather low energy input. Full article
