**1. Introduction**

This article concerns enhancing traditional partial least squares structural equation modeling (PLS-SEM) techniques with both artificial neural network (ANN) analysis and Importance-Performance Matrix Analysis (IPMA) when analyzing the maturity stage of the acceptance of enterprise resource planning (ERP) systems.

A review of the literature reveals that traditional PLS-SEM has been a powerful tool in researching business information solutions for the past 40 years [1–9]. As business information solutions have become increasingly advanced and complex, the PLS-SEM approach has also been used more widely. However, advanced techniques that improve the usability of PLS-SEM results, such as combining the analysis with ANN analysis, are rarely reported.

The use of the PLS-SEM technique is very common in various fields of research where the researcher is interested in identifying statistically significant influencing factors for the dependent variables of the model. The limitation of the PLS-SEM technique is reflected, in particular, in the assumption of linear relationships between the model variables [10,11].

**Citation:** Sternad Zabukovšek, S.; Bobek, S.; Zabukovšek, U.; Kalinić, Z.; Tominc, P. Enhancing PLS-SEM-Enabled Research with ANN and IPMA: Research Study of Enterprise Resource Planning (ERP) Systems' Acceptance Based on the Technology Acceptance Model (TAM). *Mathematics* **2022**, *10*, 1379. https://doi.org/10.3390/math10091379

Academic Editors: María del Carmen Valls Martínez, José-María Montero and Pedro Antonio Martín Cervantes

Received: 16 March 2022 Accepted: 18 April 2022 Published: 20 April 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Although this limitation is important for a number of areas where this approach is applied, its importance is even more pronounced in management, which deals with human decisions that are multidimensional and complex by nature [12,13].

One possible approach is to supplement the results achieved by the PLS-SEM technique with the ANN method, which is one of the most valuable and most commonly implemented artificial intelligence tools. The need to implement advanced data analysis methods in the discipline of management and business sciences in general is growing. In such cases, ANN can be an effective option for solving complex prediction problems. In management, these approaches are particularly successful in modeling nonlinear relationships between a dependent variable (or variables) and input data. Therefore, the previously mentioned disadvantage of PLS-SEM, namely that it recognizes only linear relationships, can be overcome precisely by using ANN, which recognizes nonlinear relationships [10,14]. In addition, traditional statistical methods and models assume that consumer decisions are linear and compensatory, which means that the deficiency of one factor may be compensated for by improving another predictor factor [15]. This is often not the case in acceptance studies; i.e., consumer assessment as well as the decision-making process may not be compensatory, and applied linear models might be inaccurate, so ANNs could be used to successfully resolve this issue [16]. Next, ANN techniques can achieve higher prediction accuracy than linear techniques [11,17], and they are also more robust and flexible [13]. Finally, the introduction of another research method helps to verify and reinforce the results obtained by PLS-SEM, thus improving the validity and reliability of these results [18]. However, because the ANN approach is not aimed at testing hypotheses or studying the impact of factors on the dependent variable(s) [10,14,18,19], the PLS-SEM results from the first part of our research study are used to form an ANN model that includes the statistically significant factors identified by the PLS-SEM.
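To make this step concrete, the sketch below trains a minimal one-hidden-layer ANN of the kind typically layered on top of the significant PLS-SEM factors. It is a hedged, numpy-only illustration: the simulated data, network size, learning rate, and nonlinear target are all illustrative assumptions, not the models or data used in this study.

```python
import numpy as np

# Simulated latent-variable scores X stand in for the significant
# PLS-SEM factors; y has a deliberately nonlinear relationship to X.
rng = np.random.default_rng(0)
n, k, h = 200, 3, 8                       # observations, predictors, hidden units
X = rng.normal(size=(n, k))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

# One hidden layer with tanh activation, trained by batch gradient descent.
W1 = rng.normal(scale=0.5, size=(k, h)); b1 = np.zeros(h)
w2 = rng.normal(scale=0.5, size=h);      b2 = 0.0
lr = 0.05
losses = []
for _ in range(500):
    Z = np.tanh(X @ W1 + b1)              # hidden-layer activations
    pred = Z @ w2 + b2                    # network output
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backpropagation of the mean-squared-error gradient
    g2 = Z.T @ err / n
    gb2 = err.mean()
    dZ = np.outer(err, w2) * (1 - Z ** 2)  # tanh derivative
    g1 = X.T @ dZ / n
    gb1 = dZ.mean(axis=0)
    W1 -= lr * g1; b1 -= lr * gb1; w2 -= lr * g2; b2 -= lr * gb2
# the training error should decrease, reflecting the captured nonlinearity
```

In an acceptance study, the trained network's normalized input weights (or a sensitivity analysis) would then be compared against the PLS-SEM path coefficients.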

IPMA is used in the last part of the research to assess the performance of key factors influencing the key dependent variables of the model. The implementation of IPMA provides additional results and important information that add value to the PLS-SEM findings. The analysis of path coefficients, which captures the importance dimension of an individual factor, is enriched in IPMA by considering the performance dimension through the average values of the latent variables and their indicators [20].
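As a rough sketch of the performance dimension described above, latent-variable scores measured on a Likert-type scale can be rescaled to 0–100 and paired with the total effects (importance) on the target construct. All names and numbers below are illustrative, not the study's data.

```python
def performance(scores, lo=1, hi=7):
    """Rescale the mean latent-variable score from scale [lo, hi] to 0-100."""
    mean = sum(scores) / len(scores)
    return (mean - lo) / (hi - lo) * 100

# illustrative 7-point-scale scores and total effects for two hypothetical factors
factors = {
    "perceived_usefulness":  ([5, 6, 5, 7, 6], 0.42),
    "perceived_ease_of_use": ([4, 4, 5, 3, 4], 0.18),
}
ipma = {name: (effect, performance(scores))
        for name, (scores, effect) in factors.items()}
for name, (imp, perf) in ipma.items():
    print(f"{name}: importance={imp:.2f}, performance={perf:.1f}")
```

Plotting these (importance, performance) pairs gives the importance-performance map: factors with high importance but low performance are the prime candidates for managerial action.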

The research and the methodological approach we propose are highly relevant for managerial practice, as illustrated in this paper by the study of the acceptance of ERP systems based on a theoretical conceptual model, the technology acceptance model (TAM). It should be emphasized that with the digitalization of business, the importance of information systems in companies is enormous. Nowadays, in the time of so-called digital transformation, the use of ERP systems in companies is necessary, as they represent the central (main) information system supporting almost all business processes at the operational level of a company [21]. In addition, the main characteristics of ERP systems are enterprise-wide integration, modular design (including business modules such as accounting and finance, purchasing, sales, manufacturing, services, human resources, etc.), a central common database where each piece of data is written only once, real-time operations, integration with other information systems, best business practices, a consistent user interface, strategic planning, automatic functions, etc. [21–28]. Although ERP systems were first mentioned by Gartner in 1990, they remain the most important standard software to support business processes in companies; the underlying technology has changed several times over the decades, and the functionalities of these solutions are constantly being added to and expanded [29]. Since ERP systems are standard information systems built according to best practice, companies are expected to adopt the business processes embedded in ERP systems when they are implemented, which often leads to changes in business processes within the company and, among other things, to a different way of working for users.
The successful implementation of ERP systems increases a company's competitive advantage, since research has shown that the effective use of ERP systems can notably decrease the time required to complete business processes and improve the effective sharing of information in companies [22,23]. On the other hand, ERP system implementations very frequently fail, leading to expected benefits remaining unachieved [24,25], especially in the stage of use (also called the mature stage) of the ERP lifecycle in the company. In this stage, theoretically, users gain in-depth knowledge of how to utilize the ERP system and therefore adopt the system such that usage itself begins to be a constant, daily activity. Several studies (i.e., [26,27,30,31]) have identified that users' unwillingness and their negative attitudes toward adopting and using the implemented ERP system may be a common reason for ERP system implementation failures. Our research provides in-depth insights into the importance of the external and internal factors of the business information system that shape and influence the effective mature use of ERP systems in companies.

The empirical study presented in this paper was conducted in an automotive multinational corporation consisting of several subsidiary companies across several countries. In this industry, great emphasis is placed on the use of ERP systems across the entire manufacturing supply chain, especially in terms of reducing costs, speeding up and automating production, and improving product quality [28], so the acceptance of ERP systems is very important for the studied corporation. The corporation implemented an SAP ERP solution provided by SAP AG in all subsidiary companies in the past; after many years of use, the solution is now in its maturity stage and should therefore be used at an advanced level by its SAP ERP users. While the acceptance of newly implemented ERP systems in companies is often studied (the so-called stabilization phase in the five-year period after the introduction of the ERP system into the company), there are far fewer studies regarding the maturity stage, which refers to advanced and therefore different usage and acceptance issues. In this paper, we therefore present a large corporation that has been using its ERP system long enough for its mature, advanced use to be analyzed, while at the same time being multinational and diverse. Better knowledge of the factors that shape user acceptance of the ERP system in the mature stage is needed for successful ERP application and use [30,32–35]. For this reason, the main purpose of this research is to enrich the results of PLS-SEM with the advanced data analysis methods of ANN and IPMA, thus creating the basis for evidence-based, grounded business decisions to support the development of the mature use of ERP systems in companies.

The structure of the paper is as follows: Section 2, Materials and Methods, introduces the methodological techniques implemented in the paper, followed by a brief description of ERP systems and the theoretical model, TAM, as the basis of our case research. The last two subsections of this section detail the research model and research approach. The results, described in subsections according to the defined stages of the research design, are given in Section 3. Then, the research model, the results obtained, the practical value for evidence-based decision making, and some ideas for future research are discussed in Section 4. Section 5 describes the main conclusions.

#### **2. Materials and Methods**

#### *2.1. PLS-SEM*

PLS path modeling is an SEM technique based on the analysis of variance. It has become an accepted technique for analyzing path models consisting of latent variables (the measurement model) and the relationships between them (the structural model) [36]. It is often used in the social sciences, especially in economics and business science [37–39].

Within SEM, a set of relationships between independent and dependent variables (one or more of both) can be modeled, where the variables can be constructs as well as measured variables. The aim of SEM can be to examine (test) the model, to test specific model hypotheses, to refine the model, or to test two or more interrelated models [40]. Covariance-based SEM (such as AMOS, EQS, LISREL) and component-based SEM (such as PLS) are the two types of SEM. SEM allows researchers flexibility in modeling relationships between several endogenous (*η*) and exogenous (*ξ*) latent variables or constructs. There are two types of linear relationships: inner and outer relationships [41], as presented through the example in Figure 1.

**Figure 1.** Example of the SEM model.

The inner model (also called the structural model) determines the relationships between the unobserved constructs, whereas the outer models (also called the measurement models) determine the relationships between each construct and its observed or measured indicators [42]. The relationships between measured variables and constructs originate from outer relations that can be defined either as reflective or as formative. Reflective items designate the effects of the investigated construct; formative items compose the studied construct. The Bentler–Weeks method is used for component-based SEM specifications [40], where each latent or measured variable in the model is either a dependent or an independent variable. The SEM specification is expressed by the following set of equations ([40], p. 743):

$$
\eta = B\eta + \Gamma\xi + \zeta \tag{1}
$$

where *η* is the vector of dependent (endogenous) variables, *ξ* is the vector of independent (exogenous) variables, *B* is the matrix of coefficients relating the endogenous variables to one another, *Γ* is the matrix of coefficients relating the exogenous variables to the endogenous variables, and *ζ* is the vector of residuals.

The measurement model is assessed first. The evaluation of reflective measurement models consists of composite reliability (*CR*) for the assessment of the reliability of each indicator, internal consistency, and average variance extracted (*AVE*) to assess convergent validity [43]. The internal consistency reliability measure used for a specific construct is Cronbach's *α*, where *M* represents the number of indicators (*i* = 1, 2, ... , *M*) by which the specific construct is measured, *s<sub>i</sub><sup>2</sup>* is the variance of the *i*-th indicator, and *s<sub>t</sub><sup>2</sup>* is the variance of the construct's total score [43]:

$$\text{Cronbach's } \alpha = \left(\frac{M}{M-1}\right) \left(1 - \frac{\sum_{i=1}^{M} s_i^2}{s_t^2}\right) \tag{2}$$
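Equation (2) can be transcribed directly; the minimal sketch below treats each indicator as a list of respondent scores and uses sample variances, an assumption about the estimator rather than a detail stated in the text.

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(items):
    """items: M equally long lists, one per indicator of the construct."""
    M = len(items)
    totals = [sum(vals) for vals in zip(*items)]    # total score per respondent
    s_i2 = sum(variance(item) for item in items)    # sum of indicator variances
    s_t2 = variance(totals)                         # variance of the total score
    return (M / (M - 1)) * (1 - s_i2 / s_t2)
```

For two perfectly correlated indicators, the function returns 1.0, the theoretical maximum.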

Due to the limitations of Cronbach's *α* (i.e., it assumes that all indicators are equally reliable, and as the number of items increases, Cronbach's *α* may increase even when the additional items do not contribute to greater reliability of the measurement scale), an additional measure of internal consistency reliability was used, the *CR* measure, defined as [43]:

$$CR = \frac{\left(\sum_{i=1}^{M} l_i\right)^2}{\left(\sum_{i=1}^{M} l_i\right)^2 + \sum_{i=1}^{M} var(e_i)} \tag{3}$$

Here *l<sub>i</sub>* denotes the standardized outer loading of the *i*-th indicator (*i* = 1, 2, ... , *M*) of the specific construct, and *var*(*e<sub>i</sub>*) denotes the variance of the measurement error of the *i*-th indicator. Because *CR* tends to overestimate the internal consistency reliability, we report both criteria.

To assess the convergent validity of constructs, the *AVE* measure was used [40], representing the communality of a specific construct:

$$AVE = \frac{\sum_{i=1}^{M} l_i^2}{M} \tag{4}$$
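Equations (3) and (4) follow directly from the standardized outer loadings. The sketch below additionally assumes standardized indicators, so that *var*(*e<sub>i</sub>*) = 1 − *l<sub>i</sub>*²; the loadings themselves are illustrative.

```python
def composite_reliability(loadings):
    """Equation (3), with var(e_i) = 1 - l_i^2 for standardized indicators."""
    num = sum(loadings) ** 2
    err = sum(1 - l ** 2 for l in loadings)
    return num / (num + err)

def ave(loadings):
    """Equation (4): mean of the squared standardized outer loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

loads = [0.8, 0.85, 0.9]   # illustrative loadings for a three-indicator construct
```

With these loadings, both measures exceed the usual thresholds (*CR* > 0.7, *AVE* > 0.5).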

The Fornell–Larcker criterion, cross-loadings, and the HTMT (heterotrait–monotrait) ratio of correlations may be implemented to assess discriminant validity [37–39]. The Fornell–Larcker criterion [44,45] requires the construct to share more variance with its associated indicators than with any other construct; therefore, the *AVE* should be larger than the squared correlation with any other construct. Garson [38] pointed out that cross-loadings are an alternative to *AVE* and that, at a minimum, each indicator should have a higher correlation with its own construct than with any other construct. The HTMT ratio compares the average heterotrait–heteromethod correlations with the geometric mean of the average monotrait–heteromethod correlations, as defined by Henseler et al. [37], who suggest that the HTMT value should not exceed 0.90, while Garson [38] sets the threshold at 1.0.
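Under the definition above, the HTMT ratio for a pair of constructs can be sketched from the item correlation matrix; `idx_a` and `idx_b` are the (illustrative) indicator positions of the two constructs, and the matrix used in the example is constructed, not empirical.

```python
import numpy as np

def htmt(R, idx_a, idx_b):
    """HTMT for two constructs: mean heterotrait-heteromethod correlation
    divided by the geometric mean of the mean within-block correlations."""
    hetero = R[np.ix_(idx_a, idx_b)].mean()
    def mono(idx):  # mean of the off-diagonal within-block correlations
        sub = R[np.ix_(idx, idx)]
        return sub[np.triu_indices(len(idx), k=1)].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))
```

For instance, with within-construct correlations of 0.8 and between-construct correlations of 0.4, HTMT = 0.4 / 0.8 = 0.5, comfortably below the 0.90 threshold.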

The next stage of the research focuses on the structural model analysis, i.e., on hypothesis testing, which consists of assessing the significance of the standardized path coefficients and the level of the *R*<sup>2</sup> values. Garson [35] pointed out that including additional predictors in the model is likely to increase *R*<sup>2</sup>, even if the added predictors have only an insignificant impact on the dependent variable; therefore, it is necessary to use the adjusted *R*<sup>2</sup>, which can be computed by the formula:

$$Adjusted\ R^2 = 1 - \frac{\left(1 - R^2\right)(n - 1)}{n - k - 1} \tag{5}$$

where *R*<sup>2</sup> is the unadjusted coefficient of determination, *n* is the sample size, and *k* is the number of predictors.
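Equation (5) reduces to a one-line computation; the sample size and predictor count below are illustrative.

```python
def adjusted_r2(r2, n, k):
    """Equation (5): adjusted R^2 penalizes each added predictor."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# e.g. with R^2 = 0.50, n = 101 observations and k = 4 predictors,
# the adjusted value falls slightly below the raw 0.50
```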

The statistical significance of the path coefficients was calculated using the bootstrap resampling method with five thousand sub-samples [46]. The bootstrap method allows testing the null hypothesis that the standardized path coefficient equals 0 in the population. Using the standard error of the bootstrap-derived distribution, a *t*-test is used to test whether a path coefficient (for example, *β*<sub>1</sub>) is significantly different from 0, as follows:

$$t = \frac{\beta_1}{se^{*}_{\beta_1}} \tag{6}$$

Here *se*<sup>∗</sup><sub>*β*1</sub> represents the standard error of the bootstrap-derived distribution for *β*<sub>1</sub>, while *β*<sub>1</sub> is the path coefficient estimated from the original model and empirical data.
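The bootstrap test of Equation (6) can be sketched as follows. A simple bivariate regression slope stands in for a PLS path coefficient, and the simulated data and random seed are illustrative; the study itself ran the bootstrap (5000 sub-samples) inside the PLS-SEM software.

```python
import random
from statistics import mean, stdev

random.seed(1)

def slope(pairs):
    """Ordinary least-squares slope, standing in for a path coefficient."""
    xs, ys = zip(*pairs)
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# simulated data with a true slope of 0.6 plus noise
data = [(x, 0.6 * x + random.gauss(0, 0.5)) for x in range(30)]
beta_1 = slope(data)                                        # original estimate
# resample pairs with replacement and re-estimate 5000 times
boot = [slope(random.choices(data, k=len(data))) for _ in range(5000)]
se_star = stdev(boot)                                       # bootstrap standard error
t = beta_1 / se_star                                        # Equation (6)
```

A |t| value above roughly 1.96 rejects the null hypothesis that the coefficient is 0 at the 5% level.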

The coefficient of determination (*R*<sup>2</sup>), as defined above, describes "the amount of variance of the dependent construct explained by all of the explanatory constructs affecting it. Its values are from 0 to 1. The higher the value, the better the predictive capacity of the model" [47]. Chin determined that "0.19 is weak, 0.33 is moderate, and 0.67 is substantial explanatory power of the model" [48]. In addition to the *R*<sup>2</sup> values, the effect size *f*<sup>2</sup> is used, which is defined as the change in the coefficient of determination when an individual independent construct is excluded from the model. Its equation is as follows [49]:

$$f^2 = \frac{R_{included}^2 - R_{excluded}^2}{1 - R_{included}^2} \tag{7}$$

Here *R*<sup>2</sup><sub>included</sub> and *R*<sup>2</sup><sub>excluded</sub> are the coefficient of determination values for the dependent variable when an individual independent "construct is included in or excluded from the model" [49].
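Equation (7) is likewise a one-liner; the example values are illustrative.

```python
def effect_size_f2(r2_included, r2_excluded):
    """Equation (7): effect size of omitting one independent construct."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# e.g. dropping a construct that moves R^2 from 0.50 to 0.40 gives f^2 = 0.20
```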

A mediation effect is generated if a certain construct or variable intervenes between two existing constructs. An arrow between two constructs represents a direct relationship or effect, while indirect effects involve a set of relationships in which one or more constructs intervene. Hair et al. [49] pointed out that mediation effects are often present in models but are often not analyzed. There can be two types of non-mediation in a model: (1) "direct-only non-mediation", with a significant direct effect only, and (2) "no-effect non-mediation", where there is no significant effect; and three mediation types: (i) "complementary", where both the direct and indirect effects are significant and point in the same direction, (ii) "competitive", where both effects are significant but point in opposite directions, and (iii) "indirect-only mediation", where only the indirect effect is significant [43,50]. Hair et al. [45] pointed out the importance of bootstrapping the sampling distribution of the indirect effects. They added that bootstrapping requires no assumptions about the sampling distribution of the statistics or the form of the variables' distribution and can be applied to small sample sizes with a higher level of confidence.
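The taxonomy above maps directly onto a small decision rule; in the sketch below, the significance flags would come from bootstrap tests of the respective effects, and the function name is ours, not part of any PLS-SEM package.

```python
def classify_mediation(direct, indirect, direct_sig, indirect_sig):
    """Classify the mediation type from the signs and significance of the
    direct and indirect effects, following the five-way taxonomy."""
    if indirect_sig and direct_sig:
        # both significant: same sign -> complementary, opposite -> competitive
        return "complementary" if direct * indirect > 0 else "competitive"
    if indirect_sig:
        return "indirect-only mediation"
    if direct_sig:
        return "direct-only non-mediation"
    return "no-effect non-mediation"
```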

The next step includes the blindfolding procedure, whose objective is to assess the model's predictive accuracy. The blindfolding approach introduced by Wold [51] was implemented; it is based on a cross-validation (cv) strategy and includes the calculation of cv-redundancy and cv-communality for constructs and indicators. The cv-redundancy index (i.e., Stone–Geisser's *Q*<sup>2</sup>) "measures the quality of the structural model, where the cv-communality (*H*<sup>2</sup>) measures the quality of the measurement model" [38]. *H*<sup>2</sup> uses only the measurement model; it measures the capacity of the path model to predict the manifest variables directly from their own latent variable by cv. The mean values of the *Q*<sup>2</sup> that refer to the dependent constructs are used to assess the overall quality of the structural model if they are positive for all dependent constructs' subparts. *H*<sup>2</sup> and *Q*<sup>2</sup> values greater than 0 indicate the relevance and predictive power of the structural and measurement models [38].
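As a heavily simplified illustration of the predictive-relevance idea behind *Q*<sup>2</sup> (the full blindfolding procedure, which omits every *D*-th data point and predicts it back, is not reproduced here), the index compares the squared prediction errors against those of a mean-value baseline, so any positive value signals predictive relevance.

```python
from statistics import mean

def q_squared(observed, predicted):
    """Q^2 = 1 - SSE/SSO: prediction error relative to a mean-value baseline.
    Positive values indicate predictive relevance."""
    m = mean(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sso = sum((o - m) ** 2 for o in observed)
    return 1 - sse / sso
```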
