Article

Twofold Machine-Learning and Molecular Dynamics: A Computational Framework

Christos Stavrogiannis, Filippos Sofos, Maria Sagri, Denis Vavougios and Theodoros E. Karakasidis
Department of Physics, University of Thessaly, 35100 Lamia, Greece
*
Author to whom correspondence should be addressed.
Computers 2024, 13(1), 2; https://doi.org/10.3390/computers13010002
Submission received: 3 November 2023 / Revised: 28 November 2023 / Accepted: 19 December 2023 / Published: 22 December 2023

Abstract

Data science and machine learning (ML) techniques are employed to shed light on the molecular mechanisms that affect fluid-transport properties at the nanoscale. Viscosity and thermal conductivity values of four basic monoatomic elements, namely, argon, krypton, nitrogen, and oxygen, are gathered from experimental and simulation data in the literature and constitute a primary database for further investigation. The data refers to a wide pressure-temperature (P-T) phase space, covering fluid states from gas to liquid and supercritical. The database is enriched with new simulation data extracted from our equilibrium molecular dynamics (MD) simulations. An ML framework with ensemble, classical, kernel-based, and stacked algorithmic techniques is also constructed to function in parallel with the MD model, trained on existing data and predicting the values of new phase space points. In terms of algorithmic performance, it is shown that the stacked and tree-based ML models give the most accurate results for all elements and are excellent choices for small to medium-sized datasets. In such a way, a twofold computational scheme is constructed, functioning as a computationally inexpensive route that achieves high accuracy, aiming to replace costly experiments and simulations, when feasible.

1. Introduction

Machine learning (ML) has been integrated into material-related fields over the past decade, offering a powerful statistical toolkit that can aid and/or replace costly simulations and hard-to-set-up experiments, and driving advances in a variety of areas. Accelerating simulations from quantum to continuum scales has become possible, since ML is exploited to identify the most important features of a system and create simplified models that can be simulated much more quickly. In this way, property calculation and prediction are feasible, even when experiments or simulations are hard to perform. Data science and ML can be used to learn from historical data and make predictions about the properties of materials [1,2,3], even those that have not yet been studied, and even to suggest symbolic equations to describe them [4,5,6].
The specific ML techniques that have been used in material-related fields include supervised learning, unsupervised learning, and reinforcement learning [7]. Supervised learning has been the most widely investigated, for regression or classification tasks. It relies on labeled data, i.e., data associated with known parameters that affect a property of interest, and tries to predict either interpolated or extrapolated values. Unsupervised ML aims to discover interconnections inside unlabeled data, either with clustering techniques (e.g., the k-means algorithm) or by applying dimensionality reduction to high-dimensional data (e.g., proper orthogonal decomposition, POD, and principal component analysis, PCA). Reinforcement learning is exploited in cases in which interaction with the environment is significant, and is based on an observation/rewarding scheme employed to find the best scenario for a process [8].
Depending on the number of computational layers included in an ML model, another categorization distinguishes between shallow learning (SL) and deep learning (DL). Widely used SL algorithms include linear (or multi-linear) regression, LASSO, ridge, support vector machines (SVM), and decision-based algorithms (e.g., Decision Trees and Random Forests) [9], among others. As these algorithms may perform better for specific parts of a dataset, ensemble and stacking methods have also emerged, in which groups of more than one algorithm are interconnected serially or in parallel, and in most cases these present enhanced output results [10]. However, in complex systems, such as turbulent fluid flows [11], more complex computational layers (neural networks, NNs) are exploited, constructing a deeper architecture better able to deal with the 'big data' in DL models [12].
In the field of molecular simulation, where the property extraction of materials is of paramount importance, ML has gained a central role [13]. Atomic-scale simulations, with molecular dynamics (MD) being the most popular method, involve costly (in time and hardware resources) computations which accurately calculate dynamical properties of materials in all phases (solid, liquid, and gas). They have often been used in place of experiments that are difficult to perform. Moreover, they have opened a new pathway for the calculation of properties that cannot be extracted by theoretical or numerical simulations at the macroscale. The transport properties of fluids, specifically shear viscosity and thermal conductivity, are two of the most computationally intensive properties to deal with in atomistic simulations, involving particle interactions, positions, and velocities over multiple time-frames [14]. Molecular dynamics simulation, either in an equilibrium (EMD) or non-equilibrium (NEMD) manner, has, to this point, been incorporated to provide transport property data. However, data-driven approaches have much to offer in this direction. Current research efforts have already reported methods that combine physics-based and data-driven modeling [15].
Therefore, it would be beneficial to embed novel ML methods into molecular simulations. In this paper, a combined MD simulation/ML prediction scheme has been constructed to accurately derive the transport properties, i.e., viscosity and thermal conductivity, of basic elements (argon, krypton, nitrogen, and oxygen) at the bulk state. The framework is embedded in a Jupyter Lab environment [16]. It starts by analyzing historical data from the literature [17], performs property prediction with the available data, and functions in parallel with an MD computational flow that calculates the transport properties at phase space points where no data is available. The ML model is further fed with MD-extracted points, and it is then retrained, achieving increased accuracy in most cases. The fast ML platform is capable of extracting new phase-state points while remaining linked to the MD simulations, ensuring that results are accurate and bound to physical laws. Of equal importance is the enrichment of the data in the current literature with new values of viscosity and thermal conductivity which can be further processed by the research community.

2. Simulation Model and Data Analysis

2.1. Computational Framework

A flow diagram of the computational model constructed is given in Figure 1. Historical data from the literature [17], after being pre-processed and normalized, enters the computational platform to feed a series of ML algorithms. The available data is divided into a training dataset (80%) and a test dataset (20%) through a 10-fold cross-validation technique that randomly partitions the data into ten portions. In every iteration, a different portion of the data is held out for testing (e.g., 2 out of 10 portions) while the remainder is used for training (the other 8 out of 10), a common practice used to minimize the possibility of overfitting [18].
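A minimal sketch of this splitting scheme is given below, assuming a scikit-learn implementation (the paper does not name its ML library) and using synthetic (P, T) data in place of the actual transport-property database; note that a plain 10-fold split holds out one portion (10%) per iteration, while the 80/20 split quoted above corresponds to grouping two portions into the test set.

```python
# Sketch of k-fold cross-validation over (P, T) -> property data.
# Assumptions: scikit-learn as the ML backend; the synthetic data below
# stands in for the normalized transport-property database.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.uniform([0.1, 86.0], [100.0, 500.0], size=(500, 2))       # toy (P, T) points
y = 2000.0 / X[:, 1] + 0.5 * X[:, 0] + rng.normal(0.0, 0.1, 500)  # toy property

kf = KFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in kf.split(X):
    model = ExtraTreesRegressor(n_estimators=100, random_state=42)
    model.fit(X[train_idx], y[train_idx])            # train on 9 portions
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))  # test on 1
print(f"mean R^2 over 10 folds: {np.mean(scores):.3f}")
```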
To find the algorithm that best fits the transport-property dataset, a number of different ML algorithms, based on kernel, ensemble, and stacking methods, have been tested for accuracy. In regression problems, it has been found that different algorithms achieve different metrics in scalability and predictive accuracy [19]. Here, the Gaussian Process Regressor (GPR), the Support Vector Regressor (SVR), the AdaBoost Regressor (ABR), and the Extra Trees Regressor (ETR) are employed. GPR and SVR are kernel-based methods, while ABR and ETR combine the decision-tree principle with an ensemble learning approach. Furthermore, a stacked method, employing various algorithms in a stacked manner, has also been created. All algorithms are set up after a hyperparameter tuning procedure which adjusts their parameters to the specific problem [20], ensuring that they run with optimal parameters to achieve maximum accuracy [21]. The accuracy of the computational procedure is validated by comparison to newly extracted EMD simulation data. The MD computational flow is executed in parallel in the open-source LAMMPS software [22]. New data coming from the MD flow enters the ML process to retrain and further validate the output.
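As an illustration of the tuning step, the sketch below runs a cross-validated grid search over an Extra Trees Regressor; the grid values are illustrative assumptions, not the tuned settings of the paper.

```python
# Hyperparameter tuning sketch: exhaustive grid search with cross-validation.
# Assumptions: scikit-learn backend; synthetic data; illustrative grid values.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform([0.1, 86.0], [100.0, 500.0], size=(500, 2))   # toy (P, T)
y = 2000.0 / X[:, 1] + 0.5 * X[:, 0] + rng.normal(0.0, 0.1, 500)

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 10, 20],
    "min_samples_split": [2, 5],
}
search = GridSearchCV(
    ExtraTreesRegressor(random_state=42),
    param_grid,
    cv=10,                               # same 10-fold scheme as above
    scoring="neg_mean_absolute_error",   # MAE, one of the metrics used later
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)   # best settings and their MAE
```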
More specifically, the calculations begin by feeding the initial elements database into the ML algorithm, and after the pre-processing stage, the algorithm is trained and validated on existing data through the cross-validation pipeline. This process is repeated for each algorithm and for each element incorporated here. Next, the predictive accuracy of each algorithm is assessed through a computational loop. At this point, an MD simulation is established for a new phase space point (P-T) that does not exist in the initial database, and the derived viscosity and thermal conductivity values are added to the database. The ML algorithm runs once more with the enriched database and its performance accuracy is tabulated. It becomes clear that the MD simulations provide new data for processing, at fluid states not reported in the literature, and, at the same time, an alternative ML model is trained on this data and is capable of predicting more state points in a future step, bypassing the MD simulations. This computational scheme exploits the accuracy of MD and combines it with the speed of ML to suggest a twofold procedure that could be employed in similar property-prediction applications.
A major advantage of the proposed framework is the fact that all computational subsystems are controlled by a Jupyter Lab Python environment. This means that MD and ML codes communicate and can interact with each other, while data is available for processing and analysis at every stage of the process.

2.2. Data Description and Pre-Processing

The database of the elements used in our study [17] includes values for the viscosity (η) and thermal conductivity (λ) of argon (Ar), krypton (Kr), nitrogen (N), and oxygen (O). The data refer to (P-T) phase-state points, i.e., the obtained values for viscosity and thermal conductivity at specific (P-T) values. This means that the ML models implemented accept two inputs, P and T, and output η and λ. In the available database, viscosity data for Ar and N were obtained using experimental methods such as capillary flow (CF) and the oscillating disc (OD). Thermal conductivity data were acquired by techniques such as the parallel-plate (PP) and concentric-cylinder (CC) apparatuses. For Kr, the principle of corresponding states has been used to calculate the transport properties, while for O, thermal conductivity data is more limited than for the other elements, owing to challenges in obtaining reliable values, particularly at higher temperatures.
Data is provided for temperatures (T) from the triple-point temperature up to 500 K, and for pressures up to P = 100 MPa. The (P-T) phase space for each element is presented in Figure 2. The critical and triple points define the regions where phase changes occur, and have different values for each element. Data statistics are given in Table 1. The amount of data provided by the literature is sufficient for an ML investigation, and it is further enriched by the MD simulations we perform. The phase space covers the range from ambient gas and liquid conditions to extreme, supercritical states. Solids are not investigated in the present work.
Moreover, in order to check for possible correlations within the data, we also present, in Figure 3, correlation maps for each element. The inputs P and T are clearly uncorrelated for all elements. For Ar, a medium negative correlation is observed between T and both transport properties (Figure 3a,b). Krypton's properties are mostly affected by temperature. Nitrogen's properties are only slightly correlated with T, while the properties of O are strongly correlated with T alone. In all cases, both thermal conductivity and viscosity present near-zero correlation with P.
Data is normalized in order to restrict the input value range and allow the ML algorithms to perform better, whereby a data point,  x , transforms to
$z = \frac{x - \bar{x}}{s}$,
where  x ¯  is the mean value of the samples and  s  the standard deviation.
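In practice, this z-score transformation can be performed as sketched below with scikit-learn's StandardScaler (an assumed implementation); fitting the scaler on the training portion only avoids leaking test-set statistics.

```python
# Equation (1) as a preprocessing step, sketched with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).uniform([0.1, 75.0], [100.0, 500.0], (200, 2))  # toy (P, T)
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

scaler = StandardScaler()                   # computes per-feature mean and std
X_train_z = scaler.fit_transform(X_train)   # z = (x - mean) / s, Equation (1)
X_test_z = scaler.transform(X_test)         # reuse the training statistics
```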

2.3. Machine-Learning Algorithms

2.3.1. Support Vector Regression

Support vector regression is a variation of support vector machines (SVM) that is specifically tailored for regression analysis. SVR aims to find a regression function that maps input data points to continuous output values while maximizing the margin between the predicted values and a defined margin of error. This margin is controlled by two parameters: the error tolerance (epsilon, ε) and a regularization parameter (C) which balances the trade-off between achieving a smaller margin and minimizing prediction errors. The implementation of the technique is summarized in five steps:
  • Data preparation: Input data is collected, including features (variables) and corresponding target values (continuous output).
  • Feature scaling: It is essential to scale the features to ensure that they have similar ranges. Common scaling methods include normalization (scaling to [0, 1]) or standardization (mean = 0, variance = 1).
  • Kernel selection: SVR uses kernels to map the data into a higher-dimensional space where it may be easier to find a linear separation. Common kernel functions include linear, polynomial, radial basis function (RBF), and sigmoid kernels.
  • Model training: SVR aims to find a boundary window that has the maximum margin on both sides of the data points, while allowing some points to fall within a margin of error ε. The training process involves solving an optimization problem to determine the support vectors (data points that influence the margin) and the model parameters.
  • Prediction: Once the SVR model is trained, it can predict the continuous output values for new unseen data points.
The SVR can be applied to various property estimation tasks in fluid mechanics, such as the determination of the minimum miscible pressure in CO2 flooding, the modeling of membrane separation systems, or the prediction of pan evaporation in hydrological and ecological settings [23,24,25], to mention a few.
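The five steps above can be condensed into a few lines, as in the following sketch (scikit-learn assumed; the RBF kernel and the C and ε values are illustrative):

```python
# SVR sketch: scaling + RBF kernel + epsilon-insensitive regression.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform([0.1, 86.0], [100.0, 500.0], (300, 2))    # step 1: (P, T) features
y = 50.0 + 2.0 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0.0, 1.0, 300)  # toy target

model = make_pipeline(
    StandardScaler(),                        # step 2: feature scaling
    SVR(kernel="rbf", C=10.0, epsilon=0.1),  # step 3: kernel; C and epsilon set the margin
)
model.fit(X, y)                              # step 4: solve the optimization problem
print(model.predict([[5.0, 150.0]]))         # step 5: predict an unseen (P, T) point
```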

2.3.2. Gaussian Process Regression

Gaussian process regression is a non-parametric ML algorithm used for regression tasks. It uses probabilistic modeling to make predictions and provide uncertainty estimates based on training data. GPR assumes that the relationship between input features and outputs follows a Gaussian process distribution, which allows it to capture complex patterns and provide confidence intervals for predictions. The equation for GPR is expressed as
y = µ + ε ,
where y is the predicted output, µ is the mean of the Gaussian process distribution, and ε is the random noise term. This method also uses kernels (or covariance functions) to determine the shape and characteristics of the Gaussian process distribution. Common kernel functions include both the Matérn and exponential forms of the radial basis function (RBF). The choice of kernel function influences how the GPR model captures correlations between data points. Here, we have applied a Matérn kernel, which is a common choice in the applied sciences and in engineering [26,27]. The algorithm has hyperparameters that need to be tuned, such as the parameters of the kernel function and the noise level. Properly tuning these hyperparameters is essential for the model's performance and for its ability to capture the underlying patterns in the data [28,29,30].
GPR not only provides point predictions but also estimates the uncertainty associated with each prediction. It computes prediction intervals that indicate the range within which the true output is likely to fall. This is particularly useful in fluid mechanics applications, for which understanding the uncertainty of predictions can be crucial. Examples of applications include property predictions such as fluid flow rate, pressure distribution, and heat-transfer coefficients. GPR also has the advantage of dealing effectively with limited and noisy data by further quantifying the uncertainty in the predictions [31,32,33].
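A compact GPR sketch with the Matérn kernel follows (scikit-learn assumed; the length scale, ν, and noise level α are illustrative); note how the same call returns both the prediction and its uncertainty:

```python
# GPR sketch: Matérn kernel, point prediction plus predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)
X = rng.uniform(100.0, 500.0, (50, 1))            # toy 1D input (e.g., T)
y = 1000.0 / X[:, 0] + rng.normal(0.0, 0.05, 50)  # toy property values

gpr = GaussianProcessRegressor(
    kernel=Matern(length_scale=50.0, nu=2.5),     # Matérn covariance function
    alpha=1e-3,                                   # assumed observation noise
    normalize_y=True,
)
gpr.fit(X, y)
mean, std = gpr.predict([[250.0]], return_std=True)      # prediction + std
print(f"prediction {mean[0]:.3f} +/- {2 * std[0]:.3f}")  # ~95% interval
```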

2.3.3. The Extra Trees Regressor

The Extra Trees Regressor is an ensemble learning algorithm that belongs to the family of decision tree algorithms, functioning as an extension of the Random Forest (RF) algorithm [34]. ETR builds multiple decision trees and combines their predictions to provide more robust and accurate results. Each decision tree predicts the target variable for a given input, and the final prediction is a combination of predictions from all trees. However, unlike RF, the ETR does not consider bootstrap sampling, and it selects random features at each split point without using any specific criteria. The general prediction equation for the Extra Trees Regressor is given by Equation (3):
$\hat{y} = \frac{1}{N} \sum_{i=1}^{N} f_i(X)$,
where $\hat{y}$ is the predicted target value, N is the total number of decision trees in the ensemble, and $f_i(X)$ is the prediction of the ith decision tree for the input X. The randomness introduced during the tree-building process can reduce overfitting. Additionally, the ETR is computationally efficient due to its parallelism in training multiple trees. However, it is essential to note that the choice of hyperparameters, such as the number of trees and their maximum depth, can impact the algorithm's performance. Cross-validation is also important here. In fluid mechanics applications, ETR has been employed to predict various fluid-related properties, such as molecular separation in membrane systems, Vibrio fischeri toxicities, properties of ionic liquids, droplet velocity profiles in cell printing, and drug particle solubility optimization [10,35,36,37,38].
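A minimal ETR sketch in the sense of Equation (3) (scikit-learn assumed; hyperparameters illustrative):

```python
# Extra Trees sketch: average the predictions of N randomized trees.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(3)
X = rng.uniform([0.1, 75.0], [100.0, 500.0], (400, 2))                 # toy (P, T)
y = 20.0 + 0.3 * X[:, 0] + 2000.0 / X[:, 1] + rng.normal(0.0, 0.5, 400)

etr = ExtraTreesRegressor(
    n_estimators=200,    # N trees in Equation (3)
    max_depth=None,      # grow trees fully; tune via cross-validation
    random_state=42,
)
etr.fit(X, y)
print(etr.predict([[10.0, 300.0]]))   # ensemble average over the 200 trees
```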

2.3.4. AdaBoost Regressor

The AdaBoost Regressor combines the predictions of multiple weak learners, often decision trees, to create a strong ensemble model. The final prediction is a weighted average of the predictions made by each weak learner. The general prediction equation can be expressed by
$\hat{y} = \sum_{i=1}^{N} w_i f_i(X)$,
where $\hat{y}$ is the predicted target value, N is the total number of weak learners in the ensemble, $w_i$ is the weight assigned to the prediction of the ith weak learner, and $f_i(X)$ is the prediction of the ith weak learner for the input X. In each iteration, the weights of poorly predicted instances are increased, forcing the algorithm to focus more on those instances. The weak learners are trained sequentially, with each one adjusting its predictions based on the weighted errors of the previous iterations. ABR is particularly useful when dealing with complex relationships in data, capturing non-linear patterns. It is effective in reducing bias and improving the overall accuracy of predictions. However, it can be sensitive to noisy data and outliers, as these can negatively impact the performance of the weak learners. In fluid mechanics applications, the AdaBoost Regressor has been successfully employed to predict fluid flow rates, pressure drop, concentration in water purification via porous membranes, droplet volume in printing technologies, and biodiesel yield and viscosity [39,40,41,42].
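The corresponding ABR sketch, with shallow decision trees as the weak learners of Equation (4) (scikit-learn assumed; hyperparameters illustrative):

```python
# AdaBoost sketch: sequentially trained weak learners, weighted combination.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.uniform([0.1, 75.0], [100.0, 500.0], (400, 2))                 # toy (P, T)
y = 20.0 + 0.3 * X[:, 0] + 2000.0 / X[:, 1] + rng.normal(0.0, 0.5, 400)

abr = AdaBoostRegressor(
    estimator=DecisionTreeRegressor(max_depth=4),   # weak learner f_i(X)
    n_estimators=100,                               # N weak learners
    learning_rate=0.5,
    random_state=42,
)
abr.fit(X, y)    # each boosting round reweights poorly fitted samples
print(abr.predict([[10.0, 300.0]]))
```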

2.3.5. Stacked Models

Stacking, also known as stacked generalization, is an ML ensemble algorithm that combines multiple models (learners) to create a more accurate and robust predictive model. The basic mechanism behind stacking is to train several diverse base models and then use a meta-model to combine their predictions to make the final prediction. The stacking algorithm involves two main steps: training base models and training a meta-model. For stacking, multiple base models are trained on the same dataset. Each base model generates predictions for the target variable based on the input features. The predictions from these base models are then used as features for the next step. A meta-model, also known as a blending or aggregator model, is trained using predictions generated by the base models as features. The target variable is the same as the original target variable. The meta-model learns how to weigh or combine the predictions of the base models to produce the final prediction,
$\hat{y}_{final} = \text{MetaModel}\left(\hat{y}_{base_1}, \hat{y}_{base_2}, \ldots, \hat{y}_{base_N}\right)$,
where $\hat{y}_{final}$ is the final predicted target value, $\hat{y}_{base_i}$ is the predicted target value from the ith base model, and MetaModel is the function that combines the predictions of the base models. Stacking is powerful because it leverages the diversity of predictions from multiple models. By combining the strengths of different models, it can achieve higher predictive accuracy than individual models. However, it can be computationally intensive and may require thoughtful tuning to avoid overfitting. In fluid mechanics applications, stacking can be used to predict various fluid properties, like flows in membranes. Soil science, seismology, and geophysics have also been major users of this method [43,44,45,46,47,48].
In this paper, we have employed the predictive power of ensemble, tree-based algorithms to create the stacked model. More specifically, the ABR, ETR, and Random Forest Regressor (RFR) algorithms have been used for the base calculations, and their outcomes are combined by a Decision Tree (DT) meta-model. A general flowchart is embedded in the ML side of Figure 1.
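A sketch of this exact configuration, with ABR, ETR, and RFR as base learners and a Decision Tree meta-model (scikit-learn's StackingRegressor assumed; hyperparameters illustrative):

```python
# Stacking sketch: three tree-based base learners, Decision Tree meta-model.
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, ExtraTreesRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
X = rng.uniform([0.1, 75.0], [100.0, 500.0], (400, 2))                 # toy (P, T)
y = 20.0 + 0.3 * X[:, 0] + 2000.0 / X[:, 1] + rng.normal(0.0, 0.5, 400)

stack = StackingRegressor(
    estimators=[
        ("abr", AdaBoostRegressor(random_state=42)),
        ("etr", ExtraTreesRegressor(n_estimators=100, random_state=42)),
        ("rfr", RandomForestRegressor(n_estimators=100, random_state=42)),
    ],
    final_estimator=DecisionTreeRegressor(max_depth=5),  # MetaModel of Eq. (5)
    cv=10,   # the meta-model is fit on cross-validated base predictions
)
stack.fit(X, y)
print(stack.predict([[10.0, 300.0]]))
```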

2.4. Molecular Dynamics Model

Molecular Dynamics (MD) simulation is a computational technique widely used to investigate the behavior of systems at the atomic level [49,50,51]. For our calculations, we have employed the LAMMPS software to simulate the elements at the bulk state. The Lennard-Jones (LJ) potential is employed to model the interatomic interactions, and the output of the simulation provides the two thermophysical properties of interest, viscosity and thermal conductivity, η = f(P, T) and λ = f(P, T), respectively, over a wide range of pressure (P) and temperature (T) values.
The 12-6 LJ potential is given by
$u_{LJ} = 4\varepsilon \left[ \left( \frac{\sigma}{r_{ij}} \right)^{12} - \left( \frac{\sigma}{r_{ij}} \right)^{6} \right]$,
with a cut-off radius rc = 2.5σ. The values of the LJ parameters σ and ε and the masses of the particles, m, are presented in Table 2.
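For reference, Equation (6) with the Table 2 parameters reads as follows in plain Python (a sketch; σ is taken in Å and the energy is returned in Joules):

```python
# 12-6 Lennard-Jones pair potential, Equation (6), truncated at r_cut.
def lj_potential(r, epsilon, sigma, r_cut=None):
    """LJ energy at separation r (same length unit as sigma)."""
    if r_cut is not None and r > r_cut:
        return 0.0                       # truncated beyond the cut-off radius
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

k_B = 1.380649e-23                       # J/K
eps_ar = 152.8 * k_B                     # Table 2: epsilon/k = 152.8 K for Ar
sigma_ar = 3.297                         # Angstrom (Table 2)
r_min = 2.0 ** (1.0 / 6.0) * sigma_ar    # potential minimum at 2^(1/6) sigma
print(lj_potential(r_min, eps_ar, sigma_ar, r_cut=2.5 * sigma_ar))  # = -eps_ar
```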
The simulation box that encloses the bulk fluids is a cube with dimensions 10 × 10 × 10 LJ units (σ) in the x, y, and z directions. Fluid particles are initially placed in a face-centered cubic (fcc) lattice and left to attain their final positions through an initialization stage. After an equilibration phase of 10^6 MD timesteps (the timestep is Δt = 1 fs), in which temperature and pressure stabilize at constant values, the system reaches a stable state and parallel production runs (at least 10 for every case, for statistical accuracy) begin for 10^7 MD steps. Fluid density, pressure, and temperature are the properties that control the simulations. Different combinations produce new values for our MD database. Simulations evolve in the NPT (isothermal-isobaric) ensemble, with a Nose-Hoover thermostat used to control the fluid temperature and a barostat used to control the pressure [52,53,54,55,56].

2.4.1. Thermal Conductivity Calculation

The heat flux is calculated for each atom in the system. In an fcc lattice, the heat flux vectors represent heat flow along the face diagonals of the cubic cells. Heat flux quantifies the rate at which heat energy is transferred through the system. It is computed for each particle by considering the particle’s kinetic energy, potential energy, and stress contributions. It is a vector quantity, representing heat flow along each Cartesian direction (x, y, z), with components defined as ‘Jx’, ‘Jy’, and ‘Jz’. The ‘fix ave/correlate’ LAMMPS command is used to calculate the correlation of the heat flux components over time, and the correlation function is then integrated to obtain the thermal conductivity components λxx, λyy, and λzz.
The heat flux Jx(t) in the x-direction for each atom is computed using Equation (7), as
$J_x(t) = \frac{KE(t)}{V} + \frac{PE(t)}{V} + \frac{\sigma_{xx}(t)}{V}$,
where KE(t) and PE(t) are the kinetic and potential energies, respectively, and $\sigma_{xx}(t)$ is the stress tensor component along the x-direction. The Green-Kubo (GK) method is employed to calculate the thermal conductivity λ, as
$\lambda = \frac{V}{3 k_B T^2} \int_0^{\infty} \langle J_x(t) \cdot J_x(0) \rangle \, dt$,
with V the system volume, T the temperature, and $k_B$ the Boltzmann constant [57,58].
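The post-processing step behind Equation (8) is sketched below: integrate the heat-flux autocorrelation function (HCACF) over time. A synthetic exponentially decaying HCACF stands in for the output of the 'fix ave/correlate' command; the volume, temperature, and decay constants are assumed values.

```python
# Green-Kubo sketch for Equation (8): thermal conductivity from the HCACF.
import numpy as np

k_B = 1.380649e-23            # J/K
V = 1.0e-26                   # m^3, assumed system volume
T = 150.0                     # K, assumed temperature
dt = 1.0e-15                  # s, correlation sampling interval (1 fs)

t = np.arange(5000) * dt
hcacf = 1.0e20 * np.exp(-t / 2.0e-13)   # toy <J_x(t) J_x(0)>, (W/m^2)^2

# trapezoidal integration of the HCACF over time
integral = np.sum(0.5 * (hcacf[1:] + hcacf[:-1]) * np.diff(t))
lam = V / (3.0 * k_B * T ** 2) * integral   # Equation (8)
print(f"thermal conductivity ~ {lam:.3f} W/(m K)")
```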

2.4.2. Viscosity Calculation

Viscosity characterizes a fluid's resistance to flow. In this MD model, viscosity, η, is calculated with the GK method from the off-diagonal component $\sigma_{xy}$ of the stress tensor [59], in a manner similar to thermal conductivity, as
$\eta = \frac{V}{k_B T} \int_0^{\infty} \langle \sigma_{xy}(t) \cdot \sigma_{xy}(0) \rangle \, dt$.
The stress autocorrelation function in Equation (9), $\langle \sigma_{xy}(t) \cdot \sigma_{xy}(0) \rangle$, is calculated to determine how stress (pressure) fluctuations influence the fluid's flow behavior.
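The analogous sketch for Equation (9), with a synthetic stress autocorrelation function (SACF) in place of the MD output:

```python
# Green-Kubo sketch for Equation (9): shear viscosity from the SACF.
import numpy as np

k_B = 1.380649e-23            # J/K
V = 1.0e-26                   # m^3, assumed system volume
T = 150.0                     # K, assumed temperature
dt = 1.0e-15                  # s

t = np.arange(5000) * dt
sacf = 1.0e14 * np.exp(-t / 2.0e-13)    # toy <sigma_xy(t) sigma_xy(0)>, Pa^2

integral = np.sum(0.5 * (sacf[1:] + sacf[:-1]) * np.diff(t))
eta = V / (k_B * T) * integral          # Equation (9)
print(f"shear viscosity ~ {eta:.3e} Pa s")
```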

3. Results and Discussion

With a series of parallel MD simulations, viscosity and thermal conductivity values for (P-T) state points missing from the elements database have been calculated. We note here that the base MD program has been verified against the specific database values. To validate the results of the simulations, we first calculated the values of η and λ provided in the database (i.e., the existing data points), obtaining the same or nearly the same values within statistical accuracy, and next we proceeded to calculate the unknown data points.
Next, characteristic algorithmic implementations concerning the MD code are described; all calculated transport property values are given in the respective tables. All of this data is embedded in the initial elements database, and the predictive ability of our ML algorithms is then evaluated.

3.1. MD Programming

Both thermal conductivity and viscosity components are reported as running averages over the whole simulation time [60]. Due to the sophisticated computational techniques needed to automatically extract MD-calculated values for viscosity and thermal conductivity, a high-performance computing (HPC) system has been deployed. The computational techniques incorporated to manage data files for each element, for every (P-T) pair, and for every simulation instance (as discussed above, we have performed 10 parallel simulations to extract one average value for statistical reasons) are embedded in a Jupyter Lab Python environment.
First, we prepare the input files and place them in separate folders, one for each case that runs on LAMMPS, covering each element (4 elements), temperature (20 different temperature values), and pressure (10 different pressure values). This is performed automatically for the 4 × 20 × 10 = 800 simulation instances; the procedure is shown in Algorithm 1, and a Python sketch of the loop follows the listing.
Algorithm 1. Prepare and run MD simulations
1: Open generic LAMMPS bulk fluid simulation file
2: Define element masses list, m = [·]
3:   Create file path for every element
4: Define temperature range list, T = [·]
5:   Create file path for every temperature value
6: Define pressure range list, P = [·]
7:   Create file path for every pressure value
8: for i, j, k in m_i = [·], T_j = [·], P_k = [·]
9:   Create LAMMPS input files
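The Python sketch of Algorithm 1 is given below; the template file name, its placeholders, and the folder layout are illustrative assumptions, while the parameter counts mirror the 4 × 20 × 10 grid described above.

```python
# Algorithm 1 sketch: generate one LAMMPS input per (element, T, P) case.
from pathlib import Path

template = Path("in.bulk_fluid.template").read_text()   # generic LAMMPS script
masses = {"Ar": 39.948, "Kr": 83.798, "N": 14.0067, "O": 15.999}  # Table 2
temperatures = [122 + 5 * i for i in range(20)]         # 20 T values, K
pressures = [2.5 + 0.5 * k for k in range(10)]          # 10 P values, MPa

for element, m in masses.items():
    for T in temperatures:
        for P in pressures:
            run_dir = Path(element) / f"T{T}" / f"P{P}"
            run_dir.mkdir(parents=True, exist_ok=True)
            script = template.format(mass=m, temperature=T, pressure=P)
            (run_dir / "in.lammps").write_text(script)  # 4*20*10 = 800 inputs
```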
Algorithm 2 depicts the MD program flow in LAMMPS. After proper initialization, the code runs in 10 parallel instances to achieve better statistical accuracy and calculates the properties of interest. This is performed for every case constructed by Algorithm 1. In the final stage of a simulation, calculated values for viscosity and thermal conductivity are stored in a Pandas DataFrame and, finally, in .csv files. All values are embedded in the initial elements database.
Algorithm 2. Backbone of a bulk fluid MD simulation
1: Define simulation box, units, atom_style
2: Setup simulation variables (P, T, ρ, rc, lattice constant)
3: Setup LAMMPS computes: temperature and pressure
4: Initialization run for t = 10^6 timesteps
5:   Give random initial velocity values to particles
6:   Define pair styles and pair coefficients
7:   Begin NPT simulation
8: for i = 1:10
9:   ith production run:
10:    Calculate kinetic and potential energy per atom
11:    Calculate stress tensors and heat flux components
12:    Perform time auto-correlations
13:    Calculate pair correlation function
14:    Calculate viscosity and thermal conductivity
15: Statistical averaging of the results
16: Store calculated values to pandas dataframe
17: Store calculated values to .csv in the respective folder
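The final storage step of Algorithm 2 can be sketched as follows; the file names and column labels are illustrative assumptions.

```python
# Sketch of steps 15-17 of Algorithm 2: average the 10 parallel runs and store.
import numpy as np
import pandas as pd

records = []
for element, T, P in [("Ar", 152, 5.0)]:    # loop over all finished cases
    etas = np.loadtxt(f"{element}/T{T}/P{P}/eta_runs.dat")      # 10 run values
    lams = np.loadtxt(f"{element}/T{T}/P{P}/lambda_runs.dat")   # 10 run values
    records.append({"element": element, "T": T, "P": P,
                    "eta": etas.mean(), "lambda": lams.mean()})

df = pd.DataFrame(records)                             # step 16
df.to_csv("md_transport_properties.csv", index=False)  # step 17, joins the database
```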

3.2. MD Simulations

Turning now to the transport properties extracted from the MD simulations, we present calculated viscosity and thermal conductivity values for Ar in Table 3 and Table 4, for Kr in Table 5 and Table 6, for N in Table 7 and Table 8, and for O in Table 9 and Table 10, respectively. These values are embedded in the initial property database (taken from the literature) and employed for transport property prediction in the ML model constructed. We note that the tabulated values shown here are taken far from the critical points, and they are close to neighboring, validated results found in the literature-only database.
The initial database for Kr is significantly smaller than that for Ar (see Table 1). The Kr MD simulations have given many outlier values; in order to ensure that our ML model performs equally well, we have kept only the simulation values close to neighboring ones. A similar strategy has been followed for the N and O elements.

3.3. Machine-Learning Predictions

The ability of the employed ML algorithms to predict the transport properties of the elements is captured by the accuracy measures shown in Table 11. The mean absolute error (MAE) is given by
$MAE = \frac{1}{n} \sum_{i=1}^{n} |Y_i|$,
where $Y_i = y^*_{real,i} - y^*_{pred,i}$ for the ith data point, with the index real denoting the database value and pred the ML-predicted value.
The root mean squared error (RMSE) is
$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} Y_i^2}$,
the mean absolute percentage error (MAPE) is
$MAPE = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{Y_i}{y^*_{real,i}} \right|$,
and, finally, the coefficient of determination, R2, is given by
$R^2 = 1 - \frac{\sum_{i=1}^{N} Y_i^2}{\sum_{i=1}^{N} \left( y^*_{real,i} - \bar{y}^*_{real} \right)^2}$.
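For completeness, the four measures of Equations (10)-(13) can be computed as sketched below (scikit-learn and NumPy assumed, with toy vectors in place of the real predictions):

```python
# Accuracy metrics sketch: MAE, RMSE, MAPE, and R^2 of Equations (10)-(13).
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

y_real = np.array([915.9, 856.1, 739.5, 560.6])   # toy database values
y_pred = np.array([905.2, 867.4, 741.0, 548.3])   # toy ML predictions

mae = mean_absolute_error(y_real, y_pred)
rmse = np.sqrt(mean_squared_error(y_real, y_pred))
mape = mean_absolute_percentage_error(y_real, y_pred)
r2 = r2_score(y_real, y_pred)
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  MAPE={mape:.3f}  R2={r2:.3f}")
```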
In Table 11, only the initial database (without MD-extracted values) has been considered for the calculations. In general, the algorithms incorporated here are capable of providing accurate predictions for all element transport properties. As expected, the stacked algorithm (SA) has given the most accurate results for all elements. However, the ETR has given almost similar, and in some cases even more accurate, results than the SA. The algorithms that exploit kernels, such as SVR and GPR, are also good choices, although they may incur some increased computational cost as compared to the tree-based algorithms. Nevertheless, this overhead is not important for small and medium-sized datasets, such as the one incorporated in this paper.
Furthermore, the initial database contains Kr values that have been calculated based on the principle of corresponding states [61]. This is an indirect method used to obtain the properties of the elements, and we do not expect it to be of the same accuracy as an experimental or a proper simulation technique. Taking also into consideration the small number of Kr data points (see Table 1), we believe that these are the two reasons why the accuracy measures for Kr are poorer, as compared to all other elements.
In Table 12, the MD viscosity and thermal conductivity data have also been employed in the training and validation process, and every ML algorithm has been retrained. The accuracy of each algorithm is similar to that shown in Table 11, although some error metrics have slightly increased. This deviation is small and falls within the range of statistical accuracy. The new MD data points may be considered accurate, but they come from a different, computational rather than experimental, method, and this may introduce some deviation from the real values. In any case, the deviations are small for Ar and N, and slightly larger for O and Kr.

4. Conclusions

The transport properties of Ar, Kr, N, and O are examined in this paper. The main objectives are to calculate and predict viscosity and thermal conductivity. Experimental and theoretical data from the literature, after being pre-processed, has been initially exploited to train various machine-learning algorithms.
Ensemble, classical, kernel-based, and stacked algorithmic techniques have proven effective in predicting the transport properties of fluid elements, achieving high levels of accuracy and low levels of error. Beyond their standalone performance, we have shown that they can be used in parallel with classical molecular dynamics simulations, in a twofold framework capable of exchanging information between the atomistic simulation and the machine-learning statistical backbone. The molecular dynamics framework incorporated is capable of producing training data automatically over a broad range of simulation conditions. Considering also the enhanced accuracy observed for the developed machine-learning methods, we expect that this twofold computational model will lead to substantial improvements in simulation applications, where possible.
This could open new directions in dealing with and calculating material properties. In cases where molecular dynamics (or other material-focused) simulations are too time-consuming or expensive, machine learning can be incorporated as a faster and cost-effective alternative. Therefore, we believe that the bulk element simulation presented here can be upscaled in the future to deal with more complex fluids and confined geometries, even within wide ranges of temperature and pressure conditions.
While the purpose of machine learning is not to create new physics, it can be exploited to accelerate traditional computational methods. The future challenge is to make these models more interpretable, transparent, and explainable. This will allow scientists to better understand how the models work, and it will make these processes easier to apply in new and innovative ways. Incorporating machine learning into the field of fluid mechanics still has much to offer, and there is great potential for this technology to revolutionize the way that fluids are studied, modeled, and manufactured.

Author Contributions

Conceptualization, F.S.; methodology, F.S. and C.S.; software, C.S. and M.S.; validation, F.S., T.E.K., and D.V.; formal analysis, F.S. and T.E.K.; investigation, C.S. and M.S.; resources, F.S. and T.E.K.; writing—original draft preparation, C.S. and M.S.; writing—review and editing, F.S., T.E.K., and D.V.; visualization, C.S. and M.S.; funding acquisition, F.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Center of Research Innovation and Excellence of the University of Thessaly, and funded by the Special Account for Research Grants of the University of Thessaly (Grant: 5600.03.0803).

Data Availability Statement

Data used in this work is available on github: https://github.com/FilSofos/ComputersPaper (accessed on 18 December 2023).

Acknowledgments

This work was supported by computational time granted by the National Infrastructures for Research and Technology S.A. (GRNET S.A.) in the National HPC facility—ARIS.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Allers, J.P.; Garzon, F.H.; Alam, T.M. Artificial Neural Network Prediction of Self-Diffusion in Pure Compounds over Multiple Phase Regimes. Phys. Chem. Chem. Phys. 2021, 23, 4615–4623. [Google Scholar] [CrossRef] [PubMed]
  2. Desgranges, C.; Delhommelle, J. Towards a Machine Learned Thermodynamics: Exploration of Free Energy Landscapes in Molecular Fluids, Biological Systems and for Gas Storage and Separation in Metal–Organic Frameworks. Mol. Syst. Des. Eng. 2021, 6, 52–65. [Google Scholar] [CrossRef]
  3. Yang, B.; Zhu, X.; Wei, B.; Liu, M.; Li, Y.; Lv, Z.; Wang, F. Computer Vision and Machine Learning Methods for Heat Transfer and Fluid Flow in Complex Structural Microchannels: A Review. Energies 2023, 16, 1500. [Google Scholar] [CrossRef]
  4. Sofos, F.; Charakopoulos, A.; Papastamatiou, K.; Karakasidis, T.E. A Combined Clustering/Symbolic Regression Framework for Fluid Property Prediction. Phys. Fluids 2022, 34, 062004. [Google Scholar] [CrossRef]
  5. Sanjuán, E.L.; Parra, M.I.; Pizarro, M.M. Development of Models for Surface Tension of Alcohols through Symbolic Regression. J. Mol. Liq. 2020, 298, 111971. [Google Scholar] [CrossRef]
  6. El Hasadi, Y.M.F.; Padding, J.T. Solving Fluid Flow Problems Using Semi-Supervised Symbolic Regression on Sparse Data. AIP Adv. 2019, 9, 115218. [Google Scholar] [CrossRef]
  7. Brunton, S.L.; Noack, B.R.; Koumoutsakos, P. Machine Learning for Fluid Mechanics. Annu. Rev. Fluid Mech. 2020, 52, 477–508. [Google Scholar] [CrossRef]
  8. Garnier, P.; Viquerat, J.; Rabault, J.; Larcher, A.; Kuhnle, A.; Hachem, E. A Review on Deep Reinforcement Learning for Fluid Mechanics. Comput. Fluids 2021, 225, 104973. [Google Scholar] [CrossRef]
  9. Stergiou, K.; Ntakolia, C.; Varytis, P.; Koumoulos, E.; Karlsson, P.; Moustakidis, S. Enhancing Property Prediction and Process Optimization in Building Materials through Machine Learning: A Review. Comput. Mater. Sci. 2023, 220, 112031. [Google Scholar] [CrossRef]
  10. Sagi, O.; Rokach, L. Ensemble Learning: A Survey. WIREs Data Min. Knowl. Discov. 2018, 8, e1249. [Google Scholar] [CrossRef]
  11. Drikakis, D.; Sofos, F. Can Artificial Intelligence Accelerate Fluid Mechanics Research? Fluids 2023, 8, 212. [Google Scholar] [CrossRef]
  12. Callaham, J.L.; Maeda, K.; Brunton, S.L. Robust Flow Reconstruction from Limited Measurements via Sparse Representation. Phys. Rev. Fluids 2019, 4, 103907. [Google Scholar] [CrossRef]
  13. Jirasek, F.; Hasse, H. Perspective: Machine Learning of Thermophysical Properties. Fluid Phase Equilibria 2021, 549, 113206. [Google Scholar] [CrossRef]
  14. Karniadakis, G.; Beşkök, A.; Aluru, N. Microflows and Nanoflows: Fundamentals and Simulation; Springer: Berlin/Heidelberg, Germany, 2005; ISBN 978-0-387-22197-7. [Google Scholar]
  15. Agarwal, A.; Arya, V.; Golani, B.; Bakli, C.; Chakraborty, S. Mapping Fluid Structuration to Flow Enhancement in Nanofluidic Channels. J. Chem. Phys. 2023, 158, 214701. [Google Scholar] [CrossRef] [PubMed]
  16. Kluyver, T.; Ragan-Kelley, B.; Perez, F.; Granger, B.; Bussonnier, M.; Frederic, J.; Kelley, K.; Hamrick, J.; Grout, J.; Corlay, S.; et al. Jupyter Notebooks—A Publishing Format for Reproducible Computational Workflows. In Positioning and Power in Academic Publishing: Players, Agents and Agendas; IOS Press: Amsterdam, The Netherlands, 2016; pp. 87–90. [Google Scholar]
  17. Hanley, H.J.M.; McCarty, R.D.; Haynes, W.M. The Viscosity and Thermal Conductivity Coefficients for Dense Gaseous and Liquid Argon, Krypton, Xenon, Nitrogen, and Oxygen. J. Phys. Chem. Ref. Data 1974, 3, 979–1017. [Google Scholar] [CrossRef]
  18. Mendez, M.A.; Ianiro, A.; Noack, B.R.; Brunton, S.L. (Eds.) Data-Driven Fluid Mechanics: Combining First Principles and Machine Learning; Cambridge University Press: Cambridge, UK, 2023; ISBN 978-1-108-84214-3. [Google Scholar]
  19. Huang, J.-C.; Ko, K.-M.; Shu, M.-H.; Hsu, B.-M. Application and Comparison of Several Machine Learning Algorithms and Their Integration Models in Regression Problems. Neural Comput. Applic. 2020, 32, 5461–5469. [Google Scholar] [CrossRef]
  20. Yang, L.; Shami, A. On Hyperparameter Optimization of Machine Learning Algorithms: Theory and Practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
  21. Shahhosseini, M.; Hu, G.; Pham, H. Optimizing Ensemble Weights and Hyperparameters of Machine Learning Models for Regression Problems. Mach. Learn. Appl. 2022, 7, 100251. [Google Scholar] [CrossRef]
  22. Plimpton, S. Fast Parallel Algorithms for Short-Range Molecular Dynamics. J. Comput. Phys. 1995, 117, 1–19. [Google Scholar] [CrossRef]
  23. El Bilali, A.; Abdeslam, T.; Ayoub, N.; Lamane, H.; Ezzaouini, M.A.; Elbeltagi, A. An Interpretable Machine Learning Approach Based on DNN, SVR, Extra Tree, and XGBoost Models for Predicting Daily Pan Evaporation. J. Environ. Manag. 2023, 327, 116890. [Google Scholar] [CrossRef]
  24. Wang, X.; Ping, W.; Al-Shati, A.S. Numerical Simulation of Ozonation in Hollow-Fiber Membranes for Wastewater Treatment. Eng. Appl. Artif. Intell. 2023, 123, 106380. [Google Scholar] [CrossRef]
  25. Al-Khafaji, H.F.; Meng, Q.; Hussain, W.; Khudhair Mohammed, R.; Harash, F.; Alshareef AlFakey, S. Predicting Minimum Miscible Pressure in Pure CO2 Flooding Using Machine Learning: Method Comparison and Sensitivity Analysis. Fuel 2023, 354, 129263. [Google Scholar] [CrossRef]
  26. Palar, P.S.; Parussini, L.; Bregant, L.; Shimoyama, K.; Zuhal, L.R. On Kernel Functions for Bi-Fidelity Gaussian Process Regressions. Struct. Multidisc. Optim. 2023, 66, 37. [Google Scholar] [CrossRef]
  27. Pang, G.; Perdikaris, P.; Cai, W.; Karniadakis, G.E. Discovering Variable Fractional Orders of Advection–Dispersion Equations from Field Data Using Multi-Fidelity Bayesian Optimization. J. Comput. Phys. 2017, 348, 694–714. [Google Scholar] [CrossRef]
  28. Traverso, T.; Coletti, F.; Magri, L.; Karayiannis, T.G.; Matar, O.K. A Machine Learning Approach to the Prediction of Heat-Transfer Coefficients in Micro-Channels 2023. In Proceedings of the 17th International Heat Transfer Conference, Cape Town, South Africa, 14–18 August 2023. [Google Scholar]
  29. Zhu, K.; Müller, E.A. Generating a Machine-Learned Equation of State for Fluid Properties. J. Phys. Chem. B 2020, 124, 8628–8639. [Google Scholar] [CrossRef] [PubMed]
  30. Dai, X.; Andani, H.T.; Alizadeh, A.; Abed, A.M.; Smaisim, G.F.; Hadrawi, S.K.; Karimi, M.; Shamsborhan, M.; Toghraie, D. Using Gaussian Process Regression (GPR) Models with the Matérn Covariance Function to Predict the Dynamic Viscosity and Torque of SiO 2 /Ethylene Glycol Nanofluid: A Machine Learning Approach. Eng. Appl. Artif. Intell. 2023, 122, 106107. [Google Scholar] [CrossRef]
  31. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; Adaptive Computation and Machine Learning; MIT Press: Cambridge, MI, USA, 2006; ISBN 978-0-262-18253-9. [Google Scholar]
  32. Sepehrnia, M.; Davoodabadi Farahani, S.; Hamidi Arani, A.; Taghavi, A.; Golmohammadi, H. Laboratory Investigation of GO-SA-MWCNTs Ternary Hybrid Nanoparticles Efficacy on Dynamic Viscosity and Wear Properties of Oil (5W30) and Modeling Based on Machine Learning. Sci. Rep. 2023, 13, 10537. [Google Scholar] [CrossRef] [PubMed]
  33. Shahsavar, A.; Sepehrnia, M.; Maleki, H.; Darabi, R. Thermal Conductivity of Hydraulic Oil-GO/Fe3O4/TiO2 Ternary Hybrid Nanofluid: Experimental Study, RSM Analysis, and Development of Optimized GPR Model. J. Mol. Liq. 2023, 385, 122338. [Google Scholar] [CrossRef]
  34. Sofos, F.; Stavrogiannis, C.; Exarchou-Kouveli, K.K.; Akabua, D.; Charilas, G.; Karakasidis, T.E. Current Trends in Fluid Research in the Era of Artificial Intelligence: A Review. Fluids 2022, 7, 116. [Google Scholar] [CrossRef]
  35. Zhou, T.; Tian, Y.; Liao, H.; Zhuo, Z. Computational Simulation of Molecular Separation in Liquid Phase Using Membrane Systems: Combination of Computational Fluid Dynamics and Machine Learning. Case Stud. Therm. Eng. 2023, 44, 102845. [Google Scholar] [CrossRef]
  36. Tabaaza, G.A.; Tackie-Otoo, B.N.; Zaini, D.B.; Otchere, D.A.; Lal, B. Application of Machine Learning Models to Predict Cytotoxicity of Ionic Liquids Using VolSurf Principal Properties. Comput. Toxicol. 2023, 26, 100266. [Google Scholar] [CrossRef]
  37. Huang, X.; Ng, W.L.; Yeong, W.Y. Predicting the Number of Printed Cells during Inkjet-Based Bioprinting Process Based on Droplet Velocity Profile Using Machine Learning Approaches. J. Intell. Manuf. 2023. [Google Scholar] [CrossRef]
  38. Alanazi, M.; Huwaimel, B.; Alanazi, J.; Alharby, T.N. Development of a Novel Machine Learning Approach to Optimize Important Parameters for Improving the Solubility of an Anti-Cancer Drug within Green Chemistry Solvent. Case Stud. Therm. Eng. 2023, 49, 103273. [Google Scholar] [CrossRef]
  39. Yang, Y.; Gao, L.; Abbas, M.; Elkamchouchi, D.H.; Alkhalifah, T.; Alturise, F.; Ponnore, J.J. Innovative Composite Machine Learning Approach for Biodiesel Production in Public Vehicles. Adv. Eng. Softw. 2023, 184, 103501. [Google Scholar] [CrossRef]
  40. Almohana, A.I.; Ali Bu Sinnah, Z.; Al-Musawi, T.J. Combination of CFD and Machine Learning for Improving Simulation Accuracy in Water Purification Process via Porous Membranes. J. Mol. Liq. 2023, 386, 122456. [Google Scholar] [CrossRef]
  41. Shanmugasundar, G.; Vanitha, M.; Čep, R.; Kumar, V.; Kalita, K.; Ramachandran, M. A Comparative Study of Linear, Random Forest and AdaBoost Regressions for Modeling Non-Traditional Machining. Processes 2021, 9, 2015. [Google Scholar] [CrossRef]
  42. Pan, F.; Hu, C.; Lei, C.; Chen, J. 8.3: A Method for Measuring Droplet Volume of Electrospray Deposition Based on AdaBoost Regression. Symp. Dig. Tech. Pap. 2023, 54, 84–89. [Google Scholar] [CrossRef]
  43. Roshankhah, R.; Pelton, R.; Ghosh, R. Optimization of Fluid Flow in Membrane Chromatography Devices Using Computational Fluid Dynamic Simulations. J. Chromatogr. A 2023, 1699, 464030. [Google Scholar] [CrossRef]
  44. Tavakoli, H.; Correa, J.; Sabetizade, M.; Vogel, S. Predicting Key Soil Properties from Vis-NIR Spectra by Applying Dual-Wavelength Indices Transformations and Stacking Machine Learning Approaches. Soil Tillage Res. 2023, 229, 105684. [Google Scholar] [CrossRef]
  45. Ghavidel, A.; Ghousi, R.; Atashi, A. An Ensemble Data Mining Approach to Discover Medical Patterns and Provide a System to Predict the Mortality in the ICU of Cardiac Surgery Based on Stacking Machine Learning Method. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2023, 11, 1316–1326. [Google Scholar] [CrossRef]
  46. Taghizadeh-Mehrjardi, R.; Schmidt, K.; Amirian-Chakan, A.; Rentschler, T.; Zeraatpisheh, M.; Sarmadian, F.; Valavi, R.; Davatgar, N.; Behrens, T.; Scholten, T. Improving the Spatial Prediction of Soil Organic Carbon Content in Two Contrasting Climatic Regions by Stacking Machine Learning Models and Rescanning Covariate Space. Remote Sens. 2020, 12, 1095. [Google Scholar] [CrossRef]
  47. Koopialipoor, M.; Asteris, P.G.; Salih Mohammed, A.; Alexakis, D.E.; Mamou, A.; Armaghani, D.J. Introducing Stacking Machine Learning Approaches for the Prediction of Rock Deformation. Transp. Geotech. 2022, 34, 100756. [Google Scholar] [CrossRef]
  48. Saikia, P.; Baruah, R.D. Investigating Stacked Ensemble Model for Oil Reservoir Characterisation. In Proceedings of the 2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 13–20. [Google Scholar] [CrossRef]
  49. Bedrov, D.; Piquemal, J.-P.; Borodin, O.; MacKerell, A.D.; Roux, B.; Schröder, C. Molecular Dynamics Simulations of Ionic Liquids and Electrolytes Using Polarizable Force Fields. Chem. Rev. 2019, 119, 7940–7995. [Google Scholar] [CrossRef] [PubMed]
  50. Hansson, T.; Oostenbrink, C.; van Gunsteren, W.F. Molecular Dynamics Simulations. Curr. Opin. Struct. Biol. 2002, 12, 190–196. [Google Scholar] [CrossRef] [PubMed]
  51. Travis, K.P.; Gubbins, K.E. Poiseuille flow of Lennard-Jones fluids in narrow slit pores. J. Chem. Phys. 1999, 112, 1984–1994. [Google Scholar] [CrossRef]
  52. Martyna, G.J.; Tobias, D.J.; Klein, M.L. Constant Pressure Molecular Dynamics Algorithms. J. Chem. Phys. 1994, 101, 4177–4189. [Google Scholar] [CrossRef]
  53. Parrinello, M.; Rahman, A. Polymorphic Transitions in Single Crystals: A New Molecular Dynamics Method. J. Appl. Phys. 1981, 52, 7182–7190. [Google Scholar] [CrossRef]
  54. Tuckerman, M.E.; Alejandre, J.; López-Rendón, R.; Jochim, A.L.; Martyna, G.J. A Liouville-Operator Derived Measure-Preserving Integrator for Molecular Dynamics Simulations in the Isothermal–Isobaric Ensemble. J. Phys. A Math. Gen. 2006, 39, 5629–5651. [Google Scholar] [CrossRef]
  55. Shinoda, W.; Shiga, M.; Mikami, M. Rapid Estimation of Elastic Constants by Molecular Dynamics Simulation under Constant Stress. Phys. Rev. B 2004, 69, 134103. [Google Scholar] [CrossRef]
  56. Dullweber, A.; Leimkuhler, B.; McLachlan, R. Symplectic Splitting Methods for Rigid Body Molecular Dynamics. J. Chem. Phys. 1997, 107, 5840–5851. [Google Scholar] [CrossRef]
  57. Ikeshoji, T.; Hafskjold, B. Non-Equilibrium Molecular Dynamics Calculation of Heat Conduction in Liquid and through Liquid-Gas Interface. Mol. Phys. 1994, 81, 251–261. [Google Scholar] [CrossRef]
  58. Wirnsberger, P.; Frenkel, D.; Dellago, C. An Enhanced Version of the Heat Exchange Algorithm with Excellent Energy Conservation Properties. J. Chem. Phys. 2015, 143, 124104. [Google Scholar] [CrossRef] [PubMed]
  59. Sofos, F.; Karakasidis, T.E.; Liakopoulos, A. Transport Properties of Liquid Argon in Krypton Nanochannels: Anisotropy and Non-Homogeneity Introduced by the Solid Walls. Int. J. Heat Mass Transf. 2009, 52, 735–743. [Google Scholar] [CrossRef]
  60. Hess, B. Determining the Shear Viscosity of Model Liquids from Molecular Dynamics Simulations. J. Chem. Phys. 2002, 116, 209–217. [Google Scholar] [CrossRef]
  61. Pitzer, K.S. Corresponding States for Perfect Liquids. J. Chem. Phys. 1939, 7, 583–590. [Google Scholar] [CrossRef]
Figure 1. The twofold MD/ML computational framework for calculation of transport properties.
Figure 2. The (P-T) phase space, taken from the available databases, for (a) Ar, (b) Kr, (c) N, and (d) O. Dotted lines denote regions of phase change.
Figure 3. Estimating the correlation of P and T inputs with thermal conductivity and viscosity, respectively, for (a,b) Ar, (c,d) Kr, (e,f) N, and (g,h) O. Maximum correlation corresponds to 1, and minimum to −1.
Table 1. Element data statistics. In addition to the literature data, phase-state points calculated with MD have been incorporated in the analysis. Critical points for T (Tc) and P (Pc) are shown.

| Element | η points [17] | λ points [17] | η points (from MD) | λ points (from MD) | T (K) Range | Tc (K) | P (MPa) Range | Pc (MPa) |
| Ar | 1219 | 1259 | 200 | 200 | 86–500 | 150.68 | 0.1–100 | 4.86 |
| Kr | 506 | 506 | 30 | 5 | 125–500 | 209.41 | 0.1–20 | 5.5 |
| N | 1275 | 1253 | 20 | 40 | 75–500 | 126.20 | 0.1–100 | 3.39 |
| O | 814 | 814 | 18 | 14 | 75–500 | 154.60 | 0.1–35 | 5.04 |
Table 2. LJ parameters of the elements incorporated in the MD simulations.

| Element | ε/k (K) | σ (Å) | m (a.u.) |
| Argon (Ar) | 152.8 | 3.297 | 39.948 |
| Krypton (Kr) | 215.8 | 3.513 | 83.798 |
| Nitrogen (N) | 118.0 | 3.54 | 14.0067 |
| Oxygen (O) | 113.0 | 3.437 | 15.999 |
Table 3. MD-extracted Ar viscosity values, in μg/(cm·s).

| T, K \ P, MPa | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 | 5.5 | 6 | 6.5 | 7 |
| 122 | 915.92 | 898.93 | 984.93 | 1031.25 | 918.71 | 966.45 | 1073.30 | 999.74 | 1038.84 | 1056.03 |
| 127 | 856.09 | 870.47 | 846.76 | 902.07 | 825.29 | 885.12 | 822.00 | 840.84 | 991.45 | 899.43 |
| 132 | 739.48 | 668.68 | 712.28 | 738.41 | 772.78 | 722.18 | 743.04 | 761.02 | 703.95 | 848.49 |
| 137 | 560.58 | 597.45 | 601.79 | 659.04 | 585.75 | 664.99 | 651.53 | 660.57 | 702.26 | 693.07 |
| 142 | 117.64 | 147.14 | 86.62 | 481.58 | 487.14 | 555.57 | 580.50 | 604.51 | 632.08 | 643.75 |
| 147 | 126.46 | 122.85 | 136.91 | 141.95 | 177.16 | 426.69 | 464.30 | 477.40 | 504.92 | 570.44 |
| 152 | 136.84 | 140.36 | 138.27 | 150.60 | 172.48 | 179.28 | 229.33 | 318.64 | 405.40 | 432.61 |
| 157 | 146.23 | 144.80 | 158.39 | 152.49 | 153.81 | 174.39 | 181.17 | 216.45 | 229.40 | 328.56 |
| 162 | 146.32 | 148.01 | 155.79 | 151.34 | 162.92 | 143.66 | 185.92 | 193.01 | 205.05 | 222.15 |
| 167 | 144.75 | 151.00 | 142.66 | 150.15 | 174.37 | 157.78 | 168.46 | 190.36 | 196.99 | 202.15 |
| 172 | 154.98 | 138.00 | 149.39 | 162.98 | 162.99 | 178.80 | 186.19 | 191.40 | 192.16 | 196.04 |
| 177 | 155.70 | 160.01 | 157.61 | 163.30 | 155.16 | 187.89 | 192.83 | 184.26 | 197.63 | 205.62 |
| 182 | 162.06 | 151.75 | 174.93 | 177.68 | 191.47 | 185.64 | 177.72 | 180.83 | 199.93 | 185.79 |
| 187 | 146.68 | 151.42 | 177.72 | 172.29 | 171.85 | 185.45 | 191.96 | 187.61 | 198.33 | 229.21 |
| 192 | 162.91 | 173.95 | 173.98 | 190.22 | 172.68 | 184.91 | 177.29 | 187.52 | 208.48 | 193.12 |
| 197 | 163.13 | 164.06 | 164.92 | 185.37 | 165.11 | 169.06 | 168.62 | 184.49 | 184.17 | 191.25 |
| 202 | 181.09 | 183.14 | 190.11 | 175.38 | 180.24 | 181.74 | 188.42 | 184.45 | 200.43 | 208.96 |
| 207 | 173.99 | 190.19 | 185.76 | 181.99 | 183.47 | 170.59 | 198.19 | 210.16 | 178.74 | 220.90 |
| 212 | 160.56 | 165.03 | 193.46 | 192.82 | 201.60 | 223.12 | 175.56 | 202.08 | 205.86 | 217.05 |
| 217 | 177.43 | 183.18 | 179.69 | 198.42 | 183.86 | 189.87 | 198.23 | 205.96 | 191.67 | 194.21 |
Table 4. MD-extracted Ar thermal conductivity values, in μW/(K·m).

| T, K \ P, MPa | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 | 5.5 | 6 | 6.5 | 7 |
| 122 | 73.01 | 75.49 | 75.33 | 80.60 | 75.38 | 80.08 | 79.08 | 81.27 | 80.28 | 76.70 |
| 127 | 70.15 | 71.65 | 69.81 | 70.66 | 68.74 | 75.77 | 68.74 | 76.76 | 72.10 | 79.16 |
| 132 | 60.03 | 54.50 | 64.09 | 61.41 | 66.34 | 64.24 | 66.48 | 67.14 | 68.02 | 68.29 |
| 137 | 55.45 | 56.01 | 55.10 | 54.23 | 58.52 | 62.14 | 63.14 | 59.00 | 64.63 | 60.67 |
| 142 | 11.67 | 13.90 | 41.27 | 45.68 | 44.05 | 49.00 | 56.23 | 55.53 | 51.21 | 54.05 |
| 147 | 12.15 | 13.38 | 14.27 | 17.60 | 25.02 | 43.41 | 41.95 | 47.16 | 48.43 | 47.37 |
| 152 | 10.60 | 13.16 | 13.59 | 14.24 | 15.50 | 18.73 | 27.72 | 37.90 | 42.08 | 42.11 |
| 157 | 11.79 | 13.52 | 11.81 | 14.62 | 15.62 | 17.45 | 21.61 | 24.00 | 28.83 | 31.95 |
| 162 | 11.09 | 12.87 | 13.66 | 15.46 | 16.29 | 15.85 | 18.25 | 20.91 | 23.91 | 26.93 |
| 167 | 11.33 | 12.33 | 12.41 | 13.80 | 14.55 | 17.83 | 16.66 | 18.88 | 23.75 | 22.84 |
| 172 | 11.31 | 12.04 | 14.30 | 14.84 | 14.97 | 16.51 | 15.68 | 18.25 | 19.41 | 22.22 |
| 177 | 12.83 | 13.69 | 12.74 | 13.01 | 14.54 | 15.99 | 13.68 | 18.77 | 18.71 | 18.90 |
| 182 | 13.25 | 13.60 | 13.36 | 13.72 | 14.70 | 16.79 | 15.49 | 17.32 | 18.46 | 19.23 |
| 187 | 12.23 | 12.72 | 13.80 | 14.91 | 15.73 | 15.95 | 16.18 | 17.75 | 18.02 | 16.17 |
| 192 | 12.50 | 13.32 | 14.78 | 13.14 | 15.07 | 15.02 | 15.34 | 16.42 | 17.57 | 19.08 |
| 197 | 12.25 | 13.18 | 14.29 | 14.67 | 15.63 | 15.92 | 18.09 | 18.02 | 18.31 | 16.61 |
| 202 | 12.65 | 11.92 | 14.41 | 15.18 | 15.43 | 16.39 | 16.80 | 16.39 | 19.76 | 17.96 |
| 207 | 12.26 | 12.18 | 13.90 | 14.91 | 14.45 | 17.36 | 15.49 | 17.00 | 17.20 | 19.44 |
| 212 | 13.41 | 13.86 | 14.22 | 14.74 | 15.62 | 17.12 | 17.11 | 17.27 | 18.23 | 19.74 |
| 217 | 13.09 | 14.85 | 14.30 | 14.23 | 16.53 | 15.45 | 16.87 | 17.78 | 19.61 | 19.38 |
Table 5. MD-extracted Kr viscosity values, in μg/(cm·s).

| T, K | P, MPa | η, μg/(cm·s) | T, K | P, MPa | η, μg/(cm·s) |
| 177 | 0.5 | 157.74 | 152 | 6.0 | 2485.56 |
| 212 | 0.5 | 186.81 | 162 | 6.0 | 2143.70 |
| 152 | 1.0 | 2184.46 | 177 | 6.0 | 1647.74 |
| 157 | 1.0 | 1998.47 | 187 | 6.0 | 1314.75 |
| 162 | 1.0 | 171.38 | 192 | 6.0 | 1183.46 |
| 207 | 1.0 | 185.60 | 197 | 6.0 | 1151.88 |
| 147 | 2.0 | 2579.93 | 202 | 6.0 | 982.18 |
| 157 | 2.0 | 2136.94 | 207 | 6.0 | 830.09 |
| 162 | 2.0 | 1881.72 | 212 | 6.0 | 590.38 |
| 177 | 2.0 | 182.64 | 217 | 6.0 | 351.66 |
| 147 | 4.0 | 2511.72 | 162 | 10.0 | 2190.60 |
| 167 | 4.0 | 1868.86 | 167 | 10.0 | 2004.87 |
| 187 | 4.0 | 1223.95 | 172 | 10.0 | 1814.65 |
| 202 | 4.0 | 386.81 | 182 | 10.0 | 1548.39 |
| 217 | 4.0 | 223.73 | 197 | 10.0 | 1213.54 |
Table 6. MD-extracted Kr thermal conductivity values, in μW/(K·m).

| T, K | P, MPa | λ, μW/(K·m) |
| 122 | 2.0 | 96.48 |
| 127 | 2.0 | 90.52 |
| 137 | 2.0 | 82.57 |
| 147 | 2.0 | 75.04 |
| 152 | 2.0 | 72.04 |
Table 7. MD-extracted N viscosity values, in μg/(cm·s).

| T, K | P, MPa | η, μg/(cm·s) | T, K | P, MPa | η, μg/(cm·s) |
| 122 | 0.1 | 82.73 | 127 | 4.0 | 188.13 |
| 122 | 0.5 | 84.40 | 132 | 4.0 | 158.94 |
| 132 | 1.5 | 95.71 | 137 | 4.0 | 124.86 |
| 142 | 1.5 | 100.94 | 152 | 4.0 | 123.76 |
| 147 | 1.5 | 105.32 | 132 | 5.0 | 196.40 |
| 122 | 2.0 | 95.71 | 152 | 5.0 | 135.21 |
| 157 | 2.5 | 115.14 | 137 | 6.0 | 227.88 |
| 122 | 3.0 | 120.58 | 142 | 6.0 | 176.06 |
| 137 | 3.0 | 110.32 | 147 | 6.0 | 165.24 |
| 127 | 3.5 | 137.57 | 217 | 6.0 | 157.40 |
Table 8. MD-extracted N thermal conductivity values, in μW/(K·m).

| T, K | P, MPa | λ, μW/(K·m) | T, K | P, MPa | λ, μW/(K·m) |
| 122 | 2.5 | 19.88 | 157 | 5 | 24.40 |
| 127 | 2.5 | 18.98 | 187 | 5 | 23.27 |
| 157 | 2.5 | 19.37 | 217 | 5 | 24.42 |
| 192 | 2.5 | 20.25 | 132 | 6 | 60.90 |
| 197 | 2.5 | 20.95 | 137 | 6 | 49.37 |
| 207 | 2.5 | 21.24 | 152 | 6 | 30.34 |
| 127 | 3 | 24.18 | 217 | 6 | 25.43 |
| 132 | 3 | 20.55 | 137 | 7 | 56.31 |
| 192 | 3 | 20.90 | 142 | 7 | 44.29 |
| 207 | 3 | 21.78 | 157 | 7 | 32.55 |
| 142 | 3.5 | 23.66 | 167 | 7 | 28.00 |
| 182 | 3.5 | 21.14 | 172 | 7 | 27.51 |
| 192 | 3.5 | 21.43 | 182 | 7 | 26.56 |
| 132 | 4 | 33.04 | 187 | 7 | 26.39 |
| 137 | 4 | 27.81 | 217 | 7 | 26.47 |
| 147 | 4 | 22.42 | 132 | 8 | 57.70 |
| 192 | 4 | 22.08 | 142 | 8 | 49.84 |
| 127 | 5 | 48.07 | 152 | 8 | 37.44 |
| 142 | 5 | 32.75 | 157 | 8 | 34.85 |
| 152 | 5 | 26.26 | 182 | 8 | 28.34 |
Table 9. MD-extracted O viscosity values, in μg/(cm·s).

| T, K | P, MPa | η, μg/(cm·s) | T, K | P, MPa | η, μg/(cm·s) |
| 132 | 1.0 | 101.48 | 137 | 2.0 | 112.02 |
| 137 | 1.0 | 106.91 | 152 | 2.0 | 121.22 |
| 147 | 1.0 | 112.02 | 162 | 2.0 | 127.15 |
| 157 | 1.0 | 121.22 | 167 | 2.0 | 130.26 |
| 172 | 1.0 | 130.26 | 142 | 2.5 | 118.88 |
| 127 | 1.5 | 100.63 | 142 | 3.0 | 124.90 |
| 142 | 1.5 | 112.02 | 147 | 3.0 | 126.80 |
| 157 | 1.5 | 122.00 | 152 | 3.0 | 128.88 |
| 162 | 1.5 | 129.85 | 147 | 3.5 | 128.59 |
Table 10. MD-extracted O thermal conductivity values, in μW/(K·m).

| T, K | P, MPa | λ, μW/(K·m) | T, K | P, MPa | λ, μW/(K·m) |
| 207 | 0.1 | 19.09 | 172 | 5 | 25.94 |
| 212 | 0.1 | 19.65 | 177 | 5 | 25.23 |
| 217 | 0.1 | 19.94 | 192 | 5 | 24.11 |
| 127 | 1.5 | 27.39 | 182 | 6 | 27.39 |
| 152 | 4 | 26.74 | 187 | 6 | 26.74 |
| 157 | 4 | 25.23 | 192 | 6 | 25.94 |
| 162 | 4 | 23.89 | 197 | 6 | 25.74 |
Table 11. Metrics of accuracy for viscosity (η) and thermal conductivity (λ) coefficients, as achieved by each ML algorithm, for the initial database.

| Algorithm | Metric | Ar η | Ar λ | Kr η | Kr λ | N η | N λ | O η | O λ |
| SVR | MAE | 17.73 | 1.33 | 68.19 | 0.94 | 9.89 | 1.49 | 24.69 | 2.15 |
| SVR | RMSE | 69.41 | 4.01 | 292.16 | 4.08 | 41.07 | 7.87 | 83.94 | 7.36 |
| SVR | MAPE | 0.09 | 0.05 | 0.38 | 0.08 | 0.07 | 0.04 | 0.13 | 0.08 |
| SVR | R² | 0.99 | 0.99 | 0.95 | 0.98 | 0.99 | 0.97 | 0.99 | 0.98 |
| GPR | MAE | 27.22 | 2.76 | 55.15 | 1.45 | 19.30 | 2.14 | 33.43 | 3.48 |
| GPR | RMSE | 68.49 | 6.65 | 182.73 | 4.49 | 77.77 | 7.89 | 136.80 | 10.12 |
| GPR | MAPE | 0.08 | 0.07 | 0.19 | 0.13 | 0.07 | 0.04 | 0.11 | 0.09 |
| GPR | R² | 0.99 | 0.98 | 0.98 | 0.98 | 0.95 | 0.97 | 0.97 | 0.97 |
| ABR | MAE | 19.93 | 1.46 | 39.57 | 1.47 | 12.33 | 2.00 | 27.08 | 1.91 |
| ABR | RMSE | 84.54 | 4.79 | 72.67 | 6.39 | 50.62 | 8.61 | 99.60 | 9.05 |
| ABR | MAPE | 0.09 | 0.05 | 0.04 | 0.13 | 0.07 | 0.06 | 0.10 | 0.05 |
| ABR | R² | 0.98 | 0.98 | 0.99 | 0.97 | 0.98 | 0.96 | 0.98 | 0.98 |
| ETR | MAE | 10.20 | 1.03 | 41.75 | 0.80 | 8.3 | 1.24 | 14.28 | 1.15 |
| ETR | RMSE | 56.68 | 3.36 | 201.33 | 3.17 | 47.14 | 7.05 | 63.80 | 5.92 |
| ETR | MAPE | 0.06 | 0.04 | 0.22 | 0.09 | 0.05 | 0.04 | 0.08 | 0.06 |
| ETR | R² | 0.99 | 0.99 | 0.98 | 0.99 | 0.98 | 0.98 | 0.99 | 0.99 |
| SA | MAE | 12.38 | 1.01 | 48.25 | 1.09 | 9.33 | 0.98 | 13.29 | 0.88 |
| SA | RMSE | 61.96 | 2.87 | 205.11 | 4.03 | 44.80 | 4.18 | 50.49 | 3.38 |
| SA | MAPE | 0.06 | 0.04 | 0.20 | 0.07 | 0.05 | 0.04 | 0.06 | 0.03 |
| SA | R² | 0.99 | 0.99 | 0.98 | 0.99 | 0.98 | 0.99 | 0.99 | 0.99 |
Table 12. Metrics of accuracy for viscosity (η) and thermal conductivity (λ) coefficients, as achieved by each ML algorithm, for the initial database with the addition of new MD-calculated data.

| Algorithm | Metric | Ar η | Ar λ | Kr η | Kr λ | N η | N λ | O η | O λ |
| SVR | MAE | 20.24 | 1.35 | 69.20 | 1.16 | 6.92 | 1.17 | 37.63 | 2.80 |
| SVR | RMSE | 58.84 | 4.03 | 255.79 | 4.44 | 28.04 | 5.15 | 128.81 | 12.41 |
| SVR | MAPE | 0.06 | 0.05 | 0.33 | 0.09 | 0.04 | 0.04 | 0.20 | 0.08 |
| SVR | R² | 0.99 | 0.99 | 0.95 | 0.98 | 0.99 | 0.99 | 0.98 | 0.96 |
| GPR | MAE | 42.11 | 3.12 | 57.24 | 1.53 | 17.34 | 2.06 | 50.39 | 3.22 |
| GPR | RMSE | 128.80 | 8.71 | 186.75 | 4.56 | 58.04 | 5.28 | 135.68 | 11.65 |
| GPR | MAPE | 0.08 | 0.08 | 0.16 | 0.13 | 0.04 | 0.05 | 0.23 | 0.06 |
| GPR | R² | 0.97 | 0.96 | 0.97 | 0.98 | 0.98 | 0.99 | 0.98 | 0.97 |
| ABR | MAE | 18.68 | 1.38 | 27.10 | 1.33 | 3.69 | 1.53 | 22.89 | 2.77 |
| ABR | RMSE | 54.57 | 4.64 | 52.32 | 6.11 | 27.92 | 6.68 | 71.20 | 13.07 |
| ABR | MAPE | 0.04 | 0.05 | 0.03 | 0.12 | 0.02 | 0.05 | 0.09 | 0.04 |
| ABR | R² | 0.99 | 0.99 | 0.99 | 0.97 | 0.99 | 0.97 | 0.99 | 0.96 |
| ETR | MAE | 15.21 | 0.96 | 27.06 | 1.33 | 9.10 | 1.53 | 22.89 | 2.77 |
| ETR | RMSE | 66.62 | 3.28 | 52.32 | 6.11 | 27.92 | 6.68 | 71.20 | 13.70 |
| ETR | MAPE | 0.04 | 0.04 | 0.03 | 0.12 | 0.02 | 0.05 | 0.09 | 0.04 |
| ETR | R² | 0.99 | 0.99 | 0.99 | 0.97 | 0.99 | 0.98 | 0.99 | 0.96 |
| SA | MAE | 17.38 | 0.83 | 46.91 | 1.05 | 6.47 | 0.86 | 13.41 | 2.09 |
| SA | RMSE | 76.91 | 2.62 | 172.13 | 4.51 | 32.16 | 3.90 | 35.01 | 8.53 |
| SA | MAPE | 0.05 | 0.02 | 0.16 | 0.09 | 0.02 | 0.04 | 0.05 | 0.07 |
| SA | R² | 0.99 | 0.99 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |