1. Introduction
Elastohydrodynamic lubrication (EHL) is a subset of hydrodynamic lubrication (HL), wherein the contacting surfaces remain fully separated, but the forces involved lead to substantial deformation of the contacting surfaces. Critically, this surface deformation is at least the same magnitude as the surface topography and the thin film separating the contacting surfaces [
1]. As a result of the similarity in scale, surface topography has a substantial effect on the performance of the contacting surfaces. Etsion [
2] showed that the surface features can reduce the contact friction for an EHL contact. Even now, the effect of microscale surface topography on friction between lubricated contacts is difficult to predict, and effective micro-texturing strategies are largely derived through trial and error [
3]. Despite advances in modern computing, representative simulations are often computationally non-viable because these simulations generally require a prohibitively large mesh.
Forming an average flow model by applying flow factors to the Reynolds equation is commonly used to model the EHL regime quickly. This approach was first employed by Patir and Cheng who used the flow factors to model homogenized surface topography [
4], but subsequent authors have expanded on their work. Some average flow models focus on generating tables of flow factors from practical data [
5], while others use flow factors calculated from simulations [
6,
7]. This simulation approach is referred to as homogenization, wherein periodic microscale lubrication is integrated into unit cells and coupled back to the macroscale [
8]. Flow factor methods are fast and reliable for predicting global scale parameters; however, these methods cannot predict local scale parameters (such as pressure or asperity deformations) with the required degree of accuracy. The average flow models are only fast if there is an existing body of calculated flow factors. If this is not the case, then another method would likely be more appropriate. This means that the average flow factor approaches are not suitable for many uses, such as for innovative or original designs. Despite this, flow factors remain commonly employed in contemporary research into full film lubrication [
9,
10].
It must be noted that flow factors remain an effective tool for modeling the mixed lubrication regime. For example, König et al. [
11] have developed a flow factors method that avoids this flaw through the implementation of a deterministic asperity contact model coupled with experimental data. Until new methods are developed, flow factors will remain one of the most popular methods for mixed lubrication modeling [
4,
12]. However, the use of a flow factors model without modifications to explore the impact of non-negligible surface topography on EHL is questionable [
3]. This paper represents the first publication in a course of research that will culminate in the HMM being expanded to model mixed lubrication; until then, flow factors remain a viable option for mixed lubrication modeling. Within this work, the analysis is focused upon EHL modeling.
The generalized Reynolds method [
13] or the Navier–Stokes equations [
14] offer a more complete description of lubricant flow through the inclusion of the inertial forces within the fluid. de Kraker et al. [
15] developed a flow factors model that would enable more complex descriptions of lubricant flow; however, this approach can only be applied to the contacting region, not across the entire macroscale domain. Computational fluid dynamics (CFD) has also been extensively applied to model smooth geometries with inertial lubricant effects [
16,
17]. CFD represents an undeniably useful tool for modeling lubricant phenomena, but it is notably weak at modeling the effect of surface topography on the lubricant flow. This is not due to an inherent mathematical flaw but simply because the mesh resolution required to map the microscale phenomena is so fine that the resulting mesh is extremely computationally expensive to use.
Deterministic modeling strategies, together with flow factor methods, dominate the modeling landscape, with Patir and Cheng’s 1976 paper alone receiving hundreds of citations since 2020, demonstrating the prevalence of these methods to this day. Within a deterministic modeling strategy, the microscale topography is fully defined across the macroscale. As with CFD, the mesh resolution required to fully define the microscale geometry will lead to an impractically large mesh. Running such a model can take weeks, even on a high-performance computing (HPC) cluster. Whilst the resulting outputs are undeniably accurate at both the local and global scale, the sheer computational load often renders deterministic modeling computationally prohibitive, thus preventing the widespread adoption of these methods.
Homogenization is an approach that has the potential to rival the accuracy of the deterministic approach whilst ensuring a more practical execution time. Within homogenization, the microscale surface features are characterized as variables and subsequently coupled into a macroscale lubricant flow model. A periodicity constraint ensures that the whole framework is computationally inexpensive and that the results are characteristic of the mechanics at both the microscale and macroscale. Patir and Cheng and their average flow model can be considered precursors to this approach [
4], as can Tzeng and Saibel [
18], and Christensen [
19], due to their stochastic two-scale approach to modeling surface roughness. Elrod [
20] was among the first to use a homogenized Reynolds-type equation through their work on multiscale analysis. Homogenization methods have been developed for modeling all lubrication regimes and a variety of contacting surfaces [
15,
19,
20,
21,
22].
More advanced homogenization methods have been derived to model cavitation, lubricant starvation, and surface deformation [
21,
23,
24,
25]. One such approach that has been extensively applied to HL and EHL is the family of heterogeneous multiscale methods (HMM). The HMM were first derived by Engquist and E [
26], and they facilitate the coupling of disparate models. Gao and Hewson were the first to apply HMM in a tribological context [
27]; within their work, a rough microscale domain was coupled to a macroscale model that covered the entire solution domain. The separation of scale between the microscale and macroscale traditionally renders modeling the effect of microscale surface roughness on the macroscale difficult. However, through the HMM, the local scale variables associated with microscale surface roughness can be predicted accurately in a macroscale model, as has been extensively validated [
24,
25,
28].
The HMM framework can be best understood through
Figure 1, a simplified diagram that explains the flow of data between the various component models for one-dimensional flow.
The first step within the HMM framework is to run an initial solution to seed the data for the other models. In the case of tribological simulation, this initial solution does not include the surface topography. For every node within the smooth solution, a corresponding microscale model is generated, which models surface topography. The outputs of these microscale models are then used to create a metamodel that will guide the convergence of another macroscale solution, therefore accounting for the impact of microscale topography. If convergence has not yet been achieved, new microscale models are run, and the metamodel is updated before the textured macroscale model is run again. This step repeats until convergence is achieved.
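To make this data flow concrete, the following sketch outlines the iteration loop just described; every function name and signature is an illustrative placeholder supplied by the caller, not part of the implementation described in this paper.

```python
def run_hmm(nodes, solve_smooth, solve_microscale, build_metamodel,
            solve_textured, converged, needs_new_microscale, max_iterations=50):
    """Schematic of the HMM data flow in Figure 1.  Every callable is supplied
    by the caller; the names and signatures here are illustrative only."""
    macro = solve_smooth(nodes)                        # topography-free seed solution
    doe = [solve_microscale(n, macro) for n in nodes]  # one microscale model per macroscale node
    for _ in range(max_iterations):
        metamodel = build_metamodel(doe)               # metamodel of homogenized microscale outputs
        new_macro = solve_textured(metamodel)          # macroscale solve including topography effects
        if converged(new_macro, macro):                # stop once the macroscale pressures converge
            return new_macro
        doe += [solve_microscale(n, new_macro)         # run further microscale models only where needed
                for n in nodes if needs_new_microscale(n, new_macro, doe)]
        macro = new_macro
    return macro
```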
Surface topography is an inherently multiscale phenomenon, with surface features occurring across the microscale and macroscale domains. Microscale roughness is largely formed during the manufacturing process, whereas macroscale surface roughness is generally caused by the operation and handling of the contacting surfaces, which causes scratches or pitting. HMM can be deployed to model roughness across domains of both sizes. However, experimental measurements suggest that microscale surface topography varies substantially across the contacting surfaces [
29]. To date, HMM have only accepted microscale domains with identical topography across the surface. Within this work, the microscale models will vary so that more representative surface topography can be modeled. This is desirable as it will facilitate greater accuracy in outputs and ensure that the HMM have full parity with a deterministic approach for modeling complex geometry. Although the models operate within an EHL regime and there will be no wear-in operation, there is already substantial variation in topography across the surface when measured. Tilted pad bearings are modeled within this work as a specific illustration, but the presented HMM can be applied to a range of lubricated contacts, provided that the underpinning assumptions of scale separation and periodicity are maintained. The central aim of this paper is to produce a novel HMM framework that can model representative surface roughness in a computationally effective way.
2. Materials and Methods
Within this section, the methods and formulation of this research will be presented. This will include descriptions of the macroscale, microscales, and the mathematical framework used to couple these two domains.
The HMM framework can be applied to model most lubricated contacts, assuming that the underlying constraints are maintained. In this work, tilted pad bearings are modeled. This is because the HMM framework has already been extensively validated for tilted pad bearings [
24,
28], and components as ubiquitous as tilted pad bearings are readily available from manufacturers.
2.1. Macroscale Model
Within this section, the equations and parameters that define the macroscale are presented.
Figure 2 defines the geometry of the model, and the numerical values for these geometrical parameters are presented in
Table 1, alongside the parameters that define the lubrication scenario.
Previous work has proven that it is not necessary to model the entire bearing to get accurate results; modeling the pad will provide the same degree of accuracy with reduced computational expense [
24]. Hence, within this work, the collar will be neglected, and the pad alone will be modeled.
2.1.1. Macroscale Fluid Mechanics
Lubricant flow within the macroscale is defined through the homogenized microscale flow; it inherently employs the same assumptions as the Reynolds equation and therefore conserves the mass and momentum of the fluid transport. Moving least squares (MLS) weighting coefficients are included within the Reynolds formulation and are used to incorporate the effect of the homogenized surface topography into the macroscale solution. Effectively, these coefficients allow the microscale models to inform the solution of the macroscale domain. It is important to note that these MLS weighting coefficients are entirely informed by the microscale models; hence, this approach will not result in the loss of local scale accuracy, which occurs when a conventional Patir and Cheng flow factor approach is employed. This will be explained further in
Section 2.5.1. The equations that define the macroscale fluid mechanics and allow the lubricating pressure $P$ to be calculated are Equations (1)–(3), where $Q_X$ and $Q_Y$ are the coordinate-direction mass fluxes in the lubricating region; $C_1$, $C_2$, $C_3$, and $C_4$ are coefficients that are functions of the macroscale variables, using the data that comes from the homogenization of the microscale variables; $\rho$ is the macroscale fluid density; $H$ is the film thickness; $\eta$ is the macroscale fluid viscosity; and $U$ and $V$ are the velocity components due to rotation in the $X$ and $Y$ directions, respectively. The macroscale variable set on which these coefficients depend is explained fully in Section 2.5. The HMM can be validated by setting the microscale surface topography to be perfectly smooth; the governing equations then collapse to the standard Reynolds equation.
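For illustration, a homogenized Reynolds-type system consistent with these definitions can be sketched as follows; the precise placement of the weighting coefficients $C_1$–$C_4$ is assumed here and may differ from the exact form of the published Equations (1)–(3):
\[
\frac{\partial Q_X}{\partial X} + \frac{\partial Q_Y}{\partial Y} = 0, \qquad
Q_X = -C_1\,\frac{\rho H^{3}}{12\eta}\frac{\partial P}{\partial X} + C_2\,\frac{\rho U H}{2}, \qquad
Q_Y = -C_3\,\frac{\rho H^{3}}{12\eta}\frac{\partial P}{\partial Y} + C_4\,\frac{\rho V H}{2}.
\]
With all coefficients equal to unity, this reduces to the conventional Reynolds equation, which is the smooth-surface validation case mentioned above.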
Lubrication is modeled across the boundary of the pad defined by the points “abcd” in
Figure 2. This means that macroscale fluid flow is two-dimensional, defined by just $X$ and $Y$, as can be seen in Equations (1)–(3). This is an appropriate assumption within the EHL regime as the surfaces are sufficiently close together. However, the effects of lubricant flow (deformation, internal pressures) are modeled across the full volume of the pad. Hence, fluid flow occurs on the lubricated pad surface, but later calculations occur over the full pad volume.
It is important to note that macroscale density and viscosity are not necessarily identical to the microscale values, as the cavitation modeling approach (detailed in
Section 2.2.2) that is applied to the microscale requires density and viscosity to vary.
A cylindrical coordinate system is defined at the origin, with its axis vertical to the Cartesian system, such that $X = R\cos\theta$ and $Y = R\sin\theta$. Assuming an angular velocity of $\omega$ around the vertical axis, this allows the rotational speed components to be specified as $U = -\omega Y$ and $V = \omega X$.
Because of the multiscale approach, the load-per-unit-area $P^{*}$ and the pressure $P$ will not necessarily be equal. Within the macroscale, $P^{*}$ drives the deformation and load capacity, whereas $P$ is used within the hydrodynamic solution. These two variables are coupled via the MLS weighting coefficient $C_5$ in Equation (4); these coefficients are explained more fully in the metamodeling section. Macroscale load-carrying capacity is also modeled in terms of the specified load-carrying capacity $W$ and the pad surface area $A$.
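In sketch form (notation assumed here rather than reproduced from the published equations), the coupling and the load-carrying capacity take the form:
\[
P^{*} = C_5\,P, \qquad W = \int_{A} P^{*}\,\mathrm{d}A .
\]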
As the tilted pad bearings are operating within the EHL regime, there will be no contact between the surfaces or their asperities, meaning that there will be negligible temperature gradients, and the peak pressures will not be sufficient to cause a significant variation in lubricant density. The lubricant is, therefore, assumed to be isoviscous and incompressible within the macroscale model.
The boundary conditions of the macroscale require that the pressure along the outer perimeter of the contacting surface is gauge pressure ($P = 0$).
2.1.2. Film Thickness
The film thickness specific to the macroscale, $H_M$, is calculated from the contact separation $H_0$, a contribution $H_G$ determined through the pad geometry, and the normal mechanical deformation of the pad $\delta_Z$; solid deformation is defined subsequently in Equation (12). $H_G$ is calculated from the pad geometry, where $\theta_p$ is the arc length of the pad in radians. Finally, a further equation is used to couple the scales of the problem: $H$ is the film thickness used in the coupling, and the microscale deformation term is defined within Section 2.2 as Equation (16).
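The implied additive structure can be sketched as follows, with $H_0$, $H_G$, and $\delta_Z$ denoting the contact separation, geometric contribution, and normal deformation introduced above; the exact published expressions, including the dependence on $\theta_p$ and the scale-coupling relation involving Equation (16), are not reproduced here:
\[
H_M = H_0 + H_G(X, Y) + \delta_Z(X, Y) .
\]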
2.1.3. Solid Mechanics
Localized deformation of the contacting surfaces is considered within the macroscale domain using a linear elastic model based on infinitesimal strain theory. The governing equations are the equilibrium and constitutive relations of linear elasticity, in which $\nabla$ is the gradient operator for the $X$, $Y$, $Z$ Cartesian coordinate directions; $\boldsymbol{\sigma}$ is the stress tensor; $G$ is the shear modulus, defined by $G = E/\left[2(1+\nu)\right]$; $\boldsymbol{\varepsilon}$ is the strain tensor; $\lambda$ is Lamé's first parameter, defined by $\lambda = E\nu/\left[(1+\nu)(1-2\nu)\right]$; $\mathbf{I}$ is the identity tensor; and $\mathbf{u}$ is the vector of deformation. It is important to note that both the shear modulus $G$ and Lamé's first parameter $\lambda$ are related to the material properties of the contacting surfaces, specifically their Young's modulus $E$ and Poisson's ratio $\nu$; these values must be specified for all the different materials employed within the model.
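Written out in the assumed tensor notation above, the standard linear elastic system these definitions describe is:
\[
\nabla\cdot\boldsymbol{\sigma} = \mathbf{0}, \qquad
\boldsymbol{\sigma} = 2G\,\boldsymbol{\varepsilon} + \lambda\,\mathrm{tr}(\boldsymbol{\varepsilon})\,\mathbf{I}, \qquad
\boldsymbol{\varepsilon} = \tfrac{1}{2}\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf{T}}\right).
\]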
Inherent to the tilted pad bearing problem, various deformation constraints are applied, as specified within
Figure 3, where $\mathbf{n}$ is the surface normal vector. The lubricated surface (abcd) will experience deformation because of the applied load, and the rear face of the pad (efgh) cannot deform vertically, as it would be in direct contact with the pad backing. The pad backing will not appreciably deform because of the loading.
2.2. Microscale Model
Within this section, the mathematics that governs the microscale will be presented. It is important to note that the solution domains (macroscale and microscale) are disparate, hence values must be passed between the two domains. This means that the variables for these two domains must be clearly separated. By convention, this is done using capital letters for the macroscale variables and lower-case letters for the microscale variables. Within
Figure 4, we can see the relationship and scales of the two domains. As the names suggest, the key difference between the domains is the scale separation, defined by the ratio $\epsilon = l/L$, where $l$ is the microscale length, $L$ is the macroscale length, and $\epsilon$ is the ratio of the microscale length to the macroscale length. For the HMM to hold true, $\epsilon$ must be less than 0.1 [27]. Effectively, this constraint on $\epsilon$ sets the minimum permissible scale difference between the domains to ensure mathematical viability. As the microscale length decreases, the microscale model will tend towards a point, and the solution will tend towards the Reynolds equation.
Within the microscale, the deformation is purely elastic and vertical. Hence, the loading within the microscale will cause the surface to move in the
z direction, but the individual surface features will not deform. This is referred to as modeling the deformation like a spring. These assumptions are valid because, when operating within the EHL regime, deformation is purely elastic. The microscale deformation is calculated from Equation (16), in which $k$ is the stiffness-per-unit-area.
2.2.1. Microscale Fluid Mechanics
Similar to the macroscale, lubricant flow in the microscale is governed by the Reynolds equation, given by Equations (17)–(19), where $q_x$ and $q_y$ are the microscale mass fluxes (per unit depth), $\rho$ is the microscale fluid density, $h$ is the microscale film thickness, $\eta$ is the microscale fluid viscosity, and $p$ is the microscale pressure. Again, the Reynolds equation is used to conserve the mass and momentum of the lubricant.
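In sketch form, and assuming sliding velocity components (here written $u_s$ and $v_s$) inherited from the macroscale, the microscale system has the familiar Reynolds structure; the exact published form of Equations (17)–(19) is not reproduced here:
\[
q_x = -\frac{\rho h^{3}}{12\eta}\frac{\partial p}{\partial x} + \frac{\rho u_s h}{2}, \qquad
q_y = -\frac{\rho h^{3}}{12\eta}\frac{\partial p}{\partial y} + \frac{\rho v_s h}{2}, \qquad
\frac{\partial q_x}{\partial x} + \frac{\partial q_y}{\partial y} = 0 .
\]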
The solution of Equations (17)–(19) necessitates the implementation of three boundary conditions on the microscale domain.
Microscale film thickness $h$ is calculated from the function $g(x, y)$ that defines the microscale surface topography and the microscale deformation, as defined in Equation (16). The topography describing function $g$ will vary between the models analyzed within this work; hence, the specific functions will be presented in Section 2.3. However, all these functions must meet a periodicity constraint on the microscale boundaries, formulated for $g$ as $g(x, y) = g(x + l_x, y) = g(x, y + l_y)$, where $l_x$ is the microscale length in the $x$ direction and $l_y$ is the microscale length in the $y$ direction. It is important to note that within this work $l_x = l_y$; hence, the microscale domain is square. The origin is defined in Figure 4.
Within the microscale domain, the surface topography can take any value so long as the average value over the domain is zero. This is because the surface topography within the microscale is used to determine the effect of surface topography on the fluid flow. The results are then used as inputs for the macroscale model so that the initially smooth macroscale model will account for the surface topography. As this microscale model is used to seed the convergence of the macroscale, it is mathematically necessary that the average value of $g$ is equal to zero, meaning that the average initial film thickness is equal to the macroscale film thickness and the scales can be related. This is expressed mathematically by subtracting the mean from the raw topography, $g = g' - \bar{g}'$, where $g'$ is the topography relative to a non-zero average value.
2.2.2. Cavitation
The macroscale models the full extent of a tilted pad bearing; hence, the contact shape is fully convergent, and cavitation cannot occur within the macroscale. In contrast, the surface topography of the microscale will converge and diverge. Low-pressure conditions within the diverging sections will lead to negative pressures. Cavitation must be implemented to ensure that pressure cannot be negative, and that mass and momentum are conserved. Within this work, the cavitation model of Söderfjäll et al. [
30] has been implemented. The model is written in terms of the degree of saturation of the fluid, $s$; the threshold pressure $p_t$ (the point at which cavitation would occur); a threshold constant $c$; the initial values of lubricant density and viscosity, $\rho_0$ and $\eta_0$, respectively; and the values of lubricant density and viscosity that are allowed to reduce and therefore account for cavitation. The values of $p_t$ and $c$ are specified in Table 2 and are as given in the work of Söderfjäll et al. [30]. It is notable that, if cavitation occurs, a parametric sweep for one of these cavitation parameters can be required to achieve convergence.
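Purely as an illustration of how a saturation variable can be used to let density and viscosity fall in low-pressure regions (the exact expressions of [30] are not reproduced here), a smooth-switch form would look like:
\[
s = \tfrac{1}{2}\left[1 + \tanh\!\left(\frac{p - p_t}{c}\right)\right], \qquad
\rho = s\,\rho_0, \qquad \eta = s\,\eta_0 ,
\]
so that the fluid properties recover their full values well above the threshold pressure and reduce smoothly in cavitated regions.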
2.3. Surface Topography Modeling
Within this work, the HMM framework is expanded to facilitate modeling varying topography, and the various techniques for representing measured topography are analyzed. The input parameters required for each model are defined in
Table 3.
The first roughness parameter is the average asperity height above the mean line and can be calculated using
\[
\frac{1}{A}\iint_{A} \left| z(x, y) \right| \,\mathrm{d}x\,\mathrm{d}y ,
\]
where $A$ is the area of the measurement region and $z(x, y)$ is the function that defines how the topography varies with $x$ and $y$. The second roughness parameter is the average distance between adjacent asperities and can be calculated as the mean of the distances between the $i$th and $(i+1)$th asperities, where $m$ is the number of asperities on a cross-section. This spacing value can then be converted into a spatial frequency for inclusion in the metamodel.
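To make the two parameters concrete, the following minimal sketch estimates them from a sampled height map; the function name, the peak-detection rule, and the use of a single central cross-section are illustrative assumptions, not the measurement procedure used with the NPFLEX.

```python
import numpy as np

def roughness_parameters(z, dx):
    """Estimate the average roughness and mean asperity spacing of a measured
    height map z (2-D array, metres), sampled at pitch dx (metres).
    Hypothetical helper sketching the parameter definitions in the text."""
    z = z - z.mean()                      # reference heights to the mean line
    sa = np.abs(z).mean()                 # average asperity height above the mean line

    # Mean spacing from one central cross-section: distance between
    # successive local maxima (asperity peaks).
    profile = z[z.shape[0] // 2, :]
    peaks = np.where((profile[1:-1] > profile[:-2]) &
                     (profile[1:-1] > profile[2:]))[0] + 1
    if len(peaks) > 1:
        spacing = np.diff(peaks).mean() * dx
    else:
        spacing = np.nan                  # not enough asperities to estimate spacing
    frequency = 1.0 / spacing if spacing > 0 else np.nan  # spatial frequency for the metamodel
    return sa, spacing, frequency
```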
Definitions of the parameters used to define Gaussian roughness will not be provided within this work. This is because it is a conventional way to define the Gaussian topography, and it has been explained in great detail within the work of Almqvist [
31]. It is also worth noting that these parameters did not appreciably vary across the pad.
The most representative topography for the microscale is, of course, the actual measured microscale topography that is applied in the third model. However, the acquisition of these measurements requires expensive equipment and is time-consuming; hence, it is worth exploring whether the same accuracy can be achieved with approximations.
2.3.1. Topography Measurements
Modeling the representative surface topography first requires that the topography is measured. Within this work, the Bruker NPFLEX was employed [
32]. The NPFLEX is an optical profilometer and was used to measure a full map of the topography within the sampling region. Key performance characteristics were extracted from this map at each sampling point, and the map was applied to the microscale domain to provide the most representative depiction of the topography.
Two pads of identical dimensions were used within this work, as can be seen in
Figure 5. Both were sourced from the same company and were manufactured using the same techniques. Pads 1 and 2 are both presented within this work because, when the results were analyzed (presented in Section 3), it was discovered that the microscale surface topography varies substantially between the two nominally identical pads. This serves to demonstrate the importance of modeling representative microscale topography.
Pads 1 and 2 were both sampled at 25 points across their surfaces. These points were distributed evenly and can be seen in
Figure 6. Three total measurements were taken at each sampling point so that an average could be formed. It is important to note that these sampling points do not correlate to microscale modeling locations; they are merely the points at which topography was measured. This data is then interpolated to generate the requisite topography data for the individual microscales.
A sensitivity study was conducted for the 5 by 5 sampling pattern. At lower resolutions (4 by 4), the variations in the topography were not fully captured. At higher resolutions (7 by 7), there was no appreciable change in the generated topography map.
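As a concrete illustration of the interpolation step, the sketch below maps the 5 by 5 sampled parameter values to arbitrary microscale locations using bilinear interpolation; the array names, the placeholder data, and the use of SciPy's RegularGridInterpolator are assumptions for illustration, not a description of the actual processing pipeline.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# 5 x 5 grid of sampling locations across the pad face (normalised coordinates),
# with an averaged roughness parameter at each point.  'sa_measured' holds
# placeholder values standing in for the real measurements.
xs = np.linspace(0.0, 1.0, 5)
ys = np.linspace(0.0, 1.0, 5)
sa_measured = np.random.default_rng(0).uniform(0.2e-6, 0.8e-6, size=(5, 5))

# Bilinear interpolation gives a roughness value at any microscale model location.
sa_map = RegularGridInterpolator((xs, ys), sa_measured, method="linear")

microscale_points = np.array([[0.13, 0.40], [0.62, 0.85]])  # example macroscale node positions
print(sa_map(microscale_points))
```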
2.3.2. Idealized Sinusoidal Topography
Using parameters measured from real topography, an idealized sinusoidal topography map can be applied on the microscale, where $g$ is the microscale surface topography, $a$ is the asperity height, and $f$ is the asperity spatial frequency.
To ensure that the periodicity constraint is met, the number of repeats for the surface topography must always be an integer. For example, in the case of the idealized sinusoidal roughness, the topography will contain only complete oscillations (an integer quantity of cycles). As explained in more detail in later sections, the non-idealized topography requires mirroring to ensure that the periodicity constraint is upheld.
Through this implementation, representative roughness can be modeled with reduced computational costs and with simple-to-measure parameters.
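A minimal sketch of such a periodic, zero-mean sinusoidal map is shown below; the exact functional form used in the paper is not reproduced, and the product-of-sines construction is an assumption made purely to illustrate the integer-cycle requirement.

```python
import numpy as np

def sinusoidal_topography(a, f, l, n=64):
    """Illustrative zero-mean sinusoidal topography for a square microscale
    cell of side l, amplitude a and target spatial frequency f.  An integer
    number of periods is enforced so the cell remains periodic."""
    cycles = max(1, round(f * l))          # integer number of complete oscillations
    x = np.linspace(0.0, l, n, endpoint=False)
    xx, yy = np.meshgrid(x, x)
    g = a * np.sin(2 * np.pi * cycles * xx / l) * np.sin(2 * np.pi * cycles * yy / l)
    return g - g.mean()                    # enforce the zero-mean constraint
```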
2.3.3. Gaussian Topography
Gaussian roughness modeling occupies a middle ground between the alternative geometry implementations in that it requires more measured data than the idealized sinusoidal profiles, but this data is still far less than the complete map required for the measured topography. The full list of required inputs is provided above in
Table 3. These parameters are then used within Almqvist’s fractal surface generator [
33] to generate the topography used in this paper. Finally, the average value of the generated topography map is subtracted from every point, such that the average value is 0. This topography is now ready to be used in microscale models.
This topography map is then “stretched” so that the asperity height and frequency and the Gaussian roughness are representative of the topography at the location of each microscale model.
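The "stretching" step can be pictured with the following sketch; the scaling rules used here (rescaling heights to a target average roughness and resampling laterally by a frequency ratio) are assumptions for illustration, as the paper does not state the exact transformation.

```python
import numpy as np

def stretch_topography(g, target_sa, target_freq, base_freq):
    """Hypothetical 'stretching' of a generated (or measured) topography map so
    that its amplitude and dominant spatial frequency match the values at a
    given microscale location."""
    g = g - g.mean()                          # enforce zero mean
    g = g * (target_sa / np.abs(g).mean())    # scale heights to the target roughness

    # Lateral "stretch": resample so the dominant frequency moves from
    # base_freq to target_freq (nearest-neighbour resampling for brevity).
    n = g.shape[0]
    scale = base_freq / target_freq
    idx = (np.arange(n) * scale).astype(int) % n
    return g[np.ix_(idx, idx)]
```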
2.3.4. Measured Topography
Like the idealized and Gaussian topography, measured values of the asperity height and spatial frequency are used to alter the microscale topography. The key difference is simply that the measured topography map is applied to the microscale domain as the topography, rather than using data extracted from the topography map to generate new topographies. This topography map is the same for all microscale models, but it is then "stretched" according to the locally measured asperity height and spatial frequency.
Applying the measured topography directly to the microscale does lead to the most accurate representation, but the acquisition of a topography map requires a far more expensive piece of equipment (in this work, the Bruker NPFLEX, Billerica, MA, USA) than measuring a few parameters. That is why multiple topography representations are applied within this work.
2.4. Homogenization
This work presents multiple models so that the effect of more realistic, but less efficient, topography modeling can be determined. This will allow conclusions to be drawn about when the additional accuracy gained through greater realism (and complexity) no longer offsets the increased computational cost. The variables passed into the metamodel are the eight macroscale parameters that define the metamodel space; these macroscale variables are all functions of the microscale. Coupling the macroscale and microscale domains requires that the variables to be compared are homogenized. The homogenized variables are functions of the eight parameters used to define the metamodel and of the microscale model, and they are obtained by averaging the corresponding microscale quantities over the periodic microscale cell.
The homogenized variables are then expressed at the macroscale through the moving least squares (MLS) weighting coefficients of Equations (1), (2) and (4).
Through this method, we can include the effects of surface topography and cavitation in the macroscale by using the coefficients to describe the variations in load-per-unit-area and the mass flux calculated in the microscale.
2.5. Metamodel
The MLS weighting coefficients present within Equations (1), (2), and (4) are coupled to the microscale through the use of a metamodel. Within the subsequent sections, a full mathematical explanation of the metamodel will be provided, with a summary given in the following paragraph.
A multidimensional metamodel is constructed over the macroscale parameters, using the moving least squares (MLS) method to determine the weighting coefficients. Within the first iteration of the model, a microscale model is run for every node within the macroscale. This microscale data is required to initialize the metamodel. The weighting coefficients are then updated within the metamodel, and the macroscale is run again. If convergence in the macroscale pressure on all grid nodes is achieved, the model ends. Otherwise, the calculated macroscale data from the previous iteration is used to determine the input parameters for a new batch of microscale models. A design of experiments (DOE) approach means that any microscale model with negligible variation in input parameters compared to previous microscale models does not need to be rerun, as the model output values are already known. This will be explained in
Section 2.5.3. As a rule of thumb, the number of microscale models required to be computed will decrease as iterations increase; thus, each subsequent iteration will typically require less computation than the previous. This elegant coupling framework allows highly loaded line contacts to be solved with greater efficiency and accuracy. The efficiency of this approach facilitates the key novelty of this paper, modeling representative surface topography efficiently.
2.5.1. Moving Least Squares (MLS)
The coefficients within Equations (1), (2), and (4) are functions of the macroscale variables. These equations are then used as the basis functions for the metamodel. Once an initial set of data is calculated from the microscale models, a regression analysis can be performed with these weighting coefficients as the unknowns.
Moving least squares (MLS) is the least squares regression technique that is employed within the metamodel. When using MLS, the coefficients become functions of the location in the metamodel space at which the metamodel is assessed. A weighting parameter is required to adjust the influence of individual data points. In this paper, a Gaussian decay function of the form $w_i = \exp\!\left(-d_i^{2}/\theta\right)$ is used so that the weight diminishes with increasing Euclidean distance from the assessment location, where $\theta$ is the closeness of fit, $d_i^{2}$ is the square of the normalized Euclidean distance between the assessment location and the $i$th known experiment ($i = 1, \dots, N$), and $N$ is the size of the DOE.
The normalized Euclidean distance is calculated via $d_i^{2} = \sum_{j}\left(\hat{\psi}_{j,i} - \hat{\psi}_{j}\right)^{2}$, where $\hat{\psi}_{j,i}$ is the normalized $j$th component dimension of the metamodel variables for the $i$th known experiment and $\hat{\psi}_{j}$ is the normalized $j$th component dimension of the assessment location. The normalized variables are obtained by rescaling each component of the metamodel variables by its range within the DOE.
An example of the MLS metamodel can be written for any one of the homogenized variables; in it, all variables are as previously defined, except that the addition of subscripts indicates the position within the DOE. The metamodel is then assessed numerically to calculate the MLS weighting coefficients as a function of the space described by the macroscale variables.
2.5.2. Cross Validation (CV)
Put simply, the metamodel takes the known data and uses it to generate predictions. In this application, that means using the results of the last round of models to predict a new set of inputs that, when solved, will reduce the overall error.
This is achieved through the closeness of fit moderating the rate of decay within the MLS weighting functions; the parameter $\theta$ is used to tune the metamodel to minimize the error between predicted and known values. The exact impact of $\theta$ is best seen in Equation (39), wherein a very large value of $\theta$ would result in a weight of unity, reducing the MLS metamodel to a conventional least squares regression. In contrast, when $\theta$ tends to zero, the metamodel will not provide a numerical response.
Cross-validation (CV) is employed to determine the value of $\theta$. In this paper, the specific CV procedure used is the leave-one-out method, wherein a range of $\theta$ values is initially selected. Next, one node from the design of experiments (DOE) is removed, and the value of the missing node is predicted using a metamodel constructed from the remaining nodes. This is repeated for every node in the DOE. The difference between the predicted and known value for every node is then calculated, and the mean absolute error is found for that closeness-of-fit value. The whole process is then repeated for the full range of $\theta$ values, and the value which results in the lowest mean error is selected. This final value is then used to generate the metamodel and calculate the weighting coefficients.
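The combination of Gaussian-weighted MLS and leave-one-out cross-validation can be sketched as follows; the linear basis, function names, and data layout are illustrative assumptions rather than the basis functions of Equations (1), (2), and (4).

```python
import numpy as np

def mls_predict(x, X, Y, theta):
    """Weighted (moving) least squares prediction at point x, given DOE inputs
    X (N x d, already normalised) and outputs Y (N,).  A linear basis is used
    here purely for illustration."""
    d2 = np.sum((X - x) ** 2, axis=1)             # squared normalised distances
    w = np.exp(-d2 / theta)                       # Gaussian decay weights
    B = np.hstack([np.ones((X.shape[0], 1)), X])  # linear basis [1, x1, ..., xd]
    W = np.diag(w)
    coeffs, *_ = np.linalg.lstsq(B.T @ W @ B, B.T @ W @ Y, rcond=None)
    return np.concatenate(([1.0], x)) @ coeffs

def choose_theta(X, Y, candidates):
    """Leave-one-out cross-validation: pick the closeness-of-fit value that
    minimises the mean absolute prediction error over the DOE."""
    errors = []
    for theta in candidates:
        err = [abs(mls_predict(X[i], np.delete(X, i, 0), np.delete(Y, i, 0), theta) - Y[i])
               for i in range(len(Y))]
        errors.append(np.mean(err))
    return candidates[int(np.argmin(errors))]
```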
2.5.3. Design of Experiments (DOE)
One of the key advantages of the metamodeling approach employed within this work is that microscale models are not unnecessarily run. This is achieved through the DOE approach, wherein the outputs from all the microscale models are combined into one dataset.
During the first iteration, an initial batch of microscale models is computed. These models are then added to the DOE and used to generate the initial metamodel. In turn, this metamodel is used to generate the weighting coefficients for the interim macroscale solution. The outputs of this interim macroscale solution are then compared to the DOE to see if additional microscale solutions are required for the next iteration. Specifically, this means that if the new solution is outside the existing bounds of data within the DOE, more microscale models must be solved to expand the DOE. This ensures that the weighting coefficients can be accurately described within that region.
Within the macroscale domain, all nodes and their variables will be homogenized as a curve in an eight-dimensional space defined by the metamodel variables. The curve is parameterized by its arc length.
A discretized curve can then be created, and the Euclidean distance between each point on the curve and the existing DOE is calculated. If this distance is greater than 1%, a microscale model is run and added to the DOE. The first iteration will, of course, require that the entire curve is added to the DOE; hence, a microscale model is run for every macroscale node. Subsequent iterations will require fewer additions to the DOE, therefore reducing the required number of microscale models and saving computational resources.
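The DOE filtering step described above amounts to a simple distance check, sketched below with hypothetical names; the 1% threshold follows the text, while the data layout (normalized metamodel coordinates) is assumed.

```python
import numpy as np

def models_to_run(curve_points, doe_points, tol=0.01):
    """Return the indices of discretized-curve points (normalised metamodel
    coordinates) lying further than `tol` (1%) from every existing DOE point,
    and therefore requiring a new microscale model."""
    if len(doe_points) == 0:
        return list(range(len(curve_points)))   # first iteration: run everything
    new_indices = []
    for i, c in enumerate(curve_points):
        distances = np.linalg.norm(doe_points - c, axis=1)
        if distances.min() > tol:
            new_indices.append(i)
    return new_indices
```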
Initially, the range of each metamodel variable is prescribed. Over subsequent iterations, the range of each variable is determined by the known value increased or decreased by 50%.
The MLS and DOE components of this metamodel were based on the work of de Boer et al. [
24]. The key novelty is that additional variables are added to the metamodel, extending the curve to eight dimensions.
2.6. Deterministic Model
The HMM framework can be validated by comparing it with a deterministic model. This is because the deterministic approach models the entire solution domain, resulting in outputs that are accurate at all scales. Deterministic models are often too slow to be practical (as will be discussed further in
Section 3), but they serve as a useful point of comparison. Within this work, the deterministic model implemented matches the macroscale models as defined in
Section 2.1, but with all the weighting coefficients equal to one. The surface topography is, therefore, defined across the entirety of the macroscale pad.
Initially, a deterministic model was created to model the surface topography as measured, allowing for a direct comparison to the HMM results. The mesh resolution required to model the microscale surface topography resulted in a mesh of 8,348,364 elements, which proved intractable. When this model was run, the high-performance computing (HPC) cluster reported that 193 GB of random access memory (RAM) were requested just to generate the initial matrices within COMSOL. Such a deterministic modeling approach is far from novel, so it was not considered worthwhile to seek additional funding for access to a high-throughput computing cluster.
Instead, an idealized sinusoidal topography of reduced complexity was implemented; therefore, the mesh resolution could be reduced to a practical level. Asperity height was kept constant (at the average value measured), but the asperities were increased in size by lengthening the sinusoidal wavelength, so that the asperity peaks were spaced much further apart.
To facilitate direct comparison, the HMM model with idealized sinusoidal microscale topography is adjusted to model non-varying asperities of the same magnitude. This requires that only 25 asperities are modeled per microscale.
2.7. Solution Procedure
2.7.1. Microscale Solution Procedure
The parameterized variables from the macroscale are inherited and solved within the microscale, such that values of the homogenized mass flux and load-per-unit-area can be determined for any combination of the parametrized variables. Within this work, the microscale is solved within the COMSOL Multiphysics software version 5.5 [34] through the application of the finite element method, and an initial pressure was prescribed within the microscale domain.
A mesh independence study was undertaken for the microscale to confirm that the resolution was appropriate. It was found that increasing the mesh from 400 to 900 elements had a negligible (less than 1%) impact on outputs but increased execution time. Reducing the mesh resolution would often lead to the model failing to solve.
2.7.2. Macroscale Solution Procedure
COMSOL Multiphysics [
34] was used to solve the macroscale domain using the finite element method. The initial macroscale model was solved without topography to find the initial estimates for the later models that would include the effects of topography. As topography was not included initially, all weighting coefficients were equal to unity. An initial estimate of film thickness was also required; this value would be subsequently changed to balance the required load capacity, through an update expressed in terms of the solution time $t$ and a positive constant equal to 0.001.
When $t = 0$, an initial solution of zero is assumed; for every subsequent time increment, the initial solution is equal to the converged solution from the previous time increment. The total solution time is arbitrarily assigned, and the remaining parameters are assigned such that convergence will be achieved over this total solution time.
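The load-balancing idea can be illustrated with a generic proportional update; the paper's actual update equation, which involves the solution time and the constant 0.001, is not reproduced here.

```python
def update_film_thickness(h0, load_computed, load_target, xi=0.001):
    """Generic proportional load-balancing update, shown only to illustrate the
    idea of adjusting the rigid-body film thickness until the computed load
    capacity matches the specified one."""
    # If the pad carries too much load, increase the separation; if too little, reduce it.
    return h0 * (1.0 + xi * (load_computed - load_target) / load_target)
```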
Again, a mesh independence study was performed, but this time for the macroscale. The conclusions were the same. With 6923 mesh elements, the macroscale would reliably converge. Increasing incrementally to 27,220 elements did not affect the outputs by more than a percent, but computation time increased linearly with the number of elements. This is especially relevant because every node on the macroscale boundary face requires at least one microscale model.
4. Discussion
Within this work, the HMM framework for EHL has been expanded to allow modeling varying surface topography. This is critical, as real surfaces will vary significantly, as evidenced by the optical profilometer measurements of tilted pad bearings. In this work, the surfaces are operating under EHL conditions and wear will not occur. Regardless, surface topography will vary across the surface due to the manufacturing techniques employed. The key novelty (and challenge) to this development was adding additional variables to the metamodel and ensuring reliable convergence. Ultimately, the primary advantage of the HMM approach for lubrication modeling is the ability to accurately account for microscale surface topography in a practical amount of time. Therefore, it was necessary to maintain the metamodeling approach when modeling more representative surface topography to ensure that efficiency is not compromised.
It is noteworthy that the two pads investigated had such extensive differences in their measured asperity height and spacing values and that these values varied significantly (>60%) across the surface. These two pads were nominally identical, having been manufactured at the same time in the same place. Yet, pad 1 had considerable variation in surface finish, whereas pad 2 had more homogeneous topography. This shows the need to measure topography across the pads and to include this variation in modeling, demonstrating that the ability to model varying microscale topography across the surface, and therefore more representative topography, is a worthwhile novelty to introduce to the HMM.
Upon observing the two pads in
Figure 9 and
Figure 10, it is interesting that both pads have similar topography at the geometric center. If only a single measurement were to be taken, this would be the logical point to do so. This could lead to the incorrect conclusion that the surface topography of the two pads is alike, further showing the need for measurements across the surface.
Another point of interest is that, of the three methods for modeling microscale topography, the representative approach (literally importing and applying the measured topography map) resulted in the fastest solution. Logic would suggest that, as the idealized sinusoidal profile is the least geometrically complicated, it would lead to the fastest solution time. This discrepancy is best explained as the result of the variables chosen to define the geometry. When comparing
Figure 11 and
Figure 13, there are clear differences in the implemented microscale topography despite the asperity height and spacing values being identical. Most notable is the comparatively reduced maximum asperity height of the measured topography. This suggests that using the asperity height and spacing alone is not sufficient to fully define the variations in topography.
The macroscale outputs were exceptionally close, regardless of which topography representation was used. Even when compared directly with the smooth solution, the initial and final macroscale differences are similar in magnitude across the three topography modeling approaches. This suggests that speed of execution should be the key factor in designing a topography modeling approach. This is convenient as the idealized sinusoidal and representative topography depictions were the fastest to execute and the simplest to implement. Interestingly, although the initial and converged solutions are similar in magnitude for the three topography modeling approaches, the actual distribution is significantly different. It is well established that microscale topography can substantially impact macroscale operation, but it is remarkable that there is such variation when the topography is nominally analogous.
The application of the HMM framework to model lubricated contacts is well-established and validated [
24,
27,
28,
36,
37]. Within this work, the maximum and minimum microscale film thicknesses are compared with the deterministic approach. It is mathematically proven that the maximum and minimum microscale film thicknesses and pressures will bound the deterministic solution [
36]. As demonstrated in
Figure 19, the HMM framework, as applied in this model, is validated. Furthermore, the specific implementation is validated because, when the microscale surface topography is set to zero, the output exactly matches the Reynolds equation solution. This demonstrates that the expansions to the HMM framework have not compromised the core mathematical principles.
The difficulties in comparing the HMM framework to a deterministic model serve as an excellent demonstration of the inadequacies of the deterministic approach. For the deterministic model to be feasibly solvable, the topography had to be vastly simplified (the number of asperities was reduced by four orders of magnitude), and still, the deterministic model took nearly 40 GB of RAM and several hours to solve.
This paper is the first publication in a course of research that will culminate in the expansion of the HMM framework to model mixed lubrication. The accuracy and speed of the expanded framework presented in this paper are beyond the capabilities of contemporary methods of EHL modeling (deterministic and flow factors); however, work is still required before the HMM framework can model mixed lubrication. Hence, the incumbent methods of lubrication modeling cannot be dismissed entirely.
5. Conclusions
The aim of this work was to expand the HMM framework to facilitate the modeling of more representative surface topography. This has been achieved by altering the metamodeling approach such that individual microscale models can have differing topographies that vary over the contacting interface.
Three different methods of modeling microscale topography have been implemented and discussed within this work. The outputs of these were in close agreement, suggesting that the microscale topography modeling approach should be chosen to optimize speed. The model with the measured topography directly applied solved the problem fastest; however, as this data is relatively expensive and time-consuming to acquire, there are strong arguments for using the idealized sinusoidal model.
This expanded HMM framework was validated against a deterministic model. Ultimately, the inefficiency of the deterministic approach required that the topography be simplified to facilitate comparison, and it still took far more processing power and time to solve, thereby demonstrating the advantages of the expanded HMM modeling approach.
Variation in surface topography demonstrably affects macroscale solutions when operating within the EHL regime. However, this impact is far greater when contact occurs within the mixed lubrication regime. Regardless of how homogeneous the topography was initially, contact will lead to plastic deformations and tribofilm build-up. That is why this work was essential as a prerequisite to modeling mixed lubrication within the HMM framework. Within this framework, it would also be possible to model both surfaces (collar and pad) as rough. This means that this work will enable future research on modeling mixed lubrication in a far more computationally effective way. The next step in this stream of research is to model mixed lubrication within the HMM.