*3.2. Measurement Scales*

Our measurement scales are the same as in [7] and are widely used in empirical research on technology acceptance. Table 2 shows the questionnaire, outlining the constructs, the items, and the theoretical foundation of each one. Responses were collected on a Likert scale from 0 to 10.

**Table 2.** Constructs, items and their theoretical foundation.




Source: [7].

*3.3. Quantitative Methodology*

The research used the following sequential process:

Stage 1. Measurement model analysis.

To explore the potential dimensionality of the scales, we ran a principal component analysis with Varimax rotation. Subsequently, we assessed the reliability and the convergent and discriminant validity of the scales.
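For illustration, this stage can be sketched in Python; a minimal sketch, assuming the item responses sit in a hypothetical file `responses.csv` (one column per questionnaire item) and using the third-party `factor_analyzer` package as one possible tool for a Varimax-rotated principal component analysis:

```python
# Minimal sketch of Stage 1; `responses.csv` is a hypothetical file holding
# the 0-10 Likert responses, one column per questionnaire item.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("responses.csv")

# Principal-component extraction with Varimax rotation; seven factors,
# matching the seven variables (IU, PE, EE, SI, FC, PR, FL).
fa = FactorAnalyzer(n_factors=7, rotation="varimax", method="principal")
fa.fit(items)

# Rotated loadings, used to inspect the dimensionality of each scale.
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

# Factor scores f_ij, reused later in Step 1 of the fsQCA procedure.
scores = fa.transform(items)
```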

Stage 2. Test of hypotheses H1–H6.

In [7] we used PLS regression for this analysis, which involves calculating R², Q², the path coefficients, and their statistical significance. In this paper, by contrast, we use Qualitative Comparative Analysis (QCA) [24] and fuzzy-set QCA (fsQCA) [23] to evaluate these hypotheses.

There is a great deal of applications of fsQCA in management and marketing [25]. In fields similar to ours, fsQCA has been applied instead of PLS in the assessment of new technology acceptance [26,27] and also as a complementary method to PLS [28,29].

Correlational methods in general, and PLS in particular, assume symmetrical relations between variables and measure the net effect of each variable on the assessed output. fsQCA, on the other hand, allows discovering the combinatorial effects of variables on the output, while taking into account that these interactions may be asymmetrical [28]. Thus, to test the hypotheses in Section 2, PLS and fsQCA proceed in different ways. With PLS we find an average value (a coefficient) for the influence of each factor on the output variables and then test its statistical significance through its t-ratio. When applying fsQCA, we use Boolean logic to find the logical implicants, combining the presence or absence of input variables, that best fit the output results. Subsequently, consistency and coverage measures inform about the relevance of the discovered implicants, which are compared with the initial hypotheses. Figures 3 and 4 provide a graphical comparison of how PLS and fsQCA test the hypotheses under the framework presented in Section 2.

**Figure 3.** Hypotheses tested and analytical methodology with PLS in [7]. Source: Arias-Oliva, M.; Pelegrín-Borondo, J.; Matías-Clavero, G. Variables Influencing Cryptocurrency Use: A Technology Acceptance Model in Spain. *Frontiers in Psychology* **2019**, *10*, 475.

**Figure 4.** Hypotheses tested and analytical methodology with fsQCA.

Figure 3 represents how PLS works. The coefficients a1, a2, ..., a6 quantify the sign and average magnitude of each factor's individual influence, and their t-ratios allow testing their statistical significance. The goodness of the whole model is assessed through R² and Q². Figure 4 shows how fsQCA processes the empirical data. After Boolean minimization, the logical implicants Z1, Z2, ..., Zr for the outcome and Z′1, Z′2, ..., Z′s for its negation are found. Each implicant combines the presence or absence of at least one input variable. Consistency and coverage measures summarize the significance and empirical relevance of every implicant. To test the hypotheses, the configuration of these implicants must be interpreted accordingly.

Therefore, two measures are of interest for a given logical implicant:

- Consistency (cons), which measures the degree to which the cases containing the recipe also exhibit the outcome (see Equation (7) below).
- Coverage (cov), which measures the proportion of the outcome explained by each recipe.

As is usually advised in the literature [26], we analyze the influence of the input variables not only on the outcome (IU) but also on its negation (~IU). In our case, a facilitating condition may consist of having a powerful PC and a good internet connection. The influence of this condition on cryptocurrency use may be great, small, or negligible; conversely, it is reasonable that the absence of such a condition prevents operating with cryptocurrencies. So, if we denote the negation of a variable by "~", we evaluate:

$$\text{IU} = \text{f}(\text{PE, EE, SI, FC, PR, FL}), \tag{1}$$

$$\sim\text{IU} = \text{f}(\text{PE, EE, SI, FC, PR, FL}), \tag{2}$$

where f(·) denotes a Boolean function. fsQCA is implemented using the fsQCA 3.1 software [71] and proceeds in the following steps:

Step 1. Obtain the factor score of the *j*th observation (*j* = 1, 2, ..., 402 in our sample) on the *i*th variable (IU, PE, EE, SI, FC, PR, and FL). We denote these values by *fij*.

Step 2. Build the membership functions for the input variables PE, EE, SI, FC, PR, and FL and the output variable IU by normalizing within [0,1] the standardized factor scores from Step 1. So, for a variable *i*, the membership value of the *j*th observation *Xij* is:

$$m\_{X\_{ij}} = \frac{f\_{ij} - \min\_{j} \{ f\_{ij} \}}{\max\_{j} \{ f\_{ij} \} - \min\_{j} \{ f\_{ij} \}} \tag{3}$$

The membership degree in the negated fuzzy set ~*Xi* is then defined as *m*~*Xij* = 1 − *mXij*.
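This calibration translates directly into code; a minimal sketch in Python (NumPy), assuming `f_PE` holds hypothetical factor scores *fij* of one variable from Step 1:

```python
import numpy as np

def calibrate(f):
    """Min-max normalization of factor scores into [0, 1] memberships, Eq. (3)."""
    f = np.asarray(f, dtype=float)
    return (f - f.min()) / (f.max() - f.min())

# Hypothetical factor scores for one variable, e.g. PE:
f_PE = np.array([-1.2, 0.3, 0.9, 2.1, -0.4])

m_PE = calibrate(f_PE)      # memberships in the fuzzy set PE
m_not_PE = 1.0 - m_PE       # memberships in the negated set ~PE
```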

Step 3. Build a Boolean truth table composed of so-called "minterms". For the value of the *j*th observation on the *i*th variable we admit only two possible values: true (*m'Xij* = 1) and false (*m'Xij* = 0). So:

$$m'\_{X\_{ij}} = \begin{cases} 1 & m\_{X\_{ij}} \ge 0.5 \\ 0 & \text{otherwise} \end{cases} \tag{4}$$

Step 4. Retain only those minterms whose consistency in producing the output is at least a threshold ε. Ragin [71] suggests ε = 0.8.
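Steps 3 and 4 can be sketched together; a minimal sketch, assuming `M` is the matrix of input memberships from Step 2 and `y` the outcome memberships (the row consistency used for filtering anticipates Equation (7) below):

```python
import numpy as np

def truth_table(M, y, eps=0.8):
    """Steps 3-4: dichotomize memberships (Eq. (4)) and keep only the
    configurations whose consistency with the outcome is at least eps.

    M : (n_obs, n_vars) array of input-condition memberships from Step 2.
    y : (n_obs,) array of outcome (IU) memberships.
    """
    rows = (M >= 0.5).astype(int)    # Eq. (4): true iff membership >= 0.5
    kept = {}
    for row in np.unique(rows, axis=0):
        # Fuzzy membership of every case in this configuration: pick X or ~X
        # according to the row, then take the minimum (Boolean product).
        m_conf = np.where(row == 1, M, 1.0 - M).min(axis=1)
        # Row consistency, anticipating Eq. (7).
        cons = np.minimum(m_conf, y).sum() / m_conf.sum()
        if cons >= eps:
            kept[tuple(row)] = cons
    return kept
```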

Step 5. Apply the Quine-McCluskey algorithm [72] to find the essential prime implicants of the truth table. These implicants make up the so-called complex solution (QCA-CS). The algorithm involves the following steps:

1. Iteratively merge every pair of terms that differ in exactly one literal, replacing that literal with a "don't care".
2. Collect the terms that cannot be merged any further; these are the prime implicants.
3. Build the prime implicant chart and select the essential prime implicants, i.e., those that are the only cover of some minterm.

Let us remark that essential prime implicants are Boolean products whose factors may be *Xi* or ~*Xi*.
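The merging stage above can be illustrated with a compact sketch, assuming minterms are encoded as strings over {'0', '1'} with one character per input variable; fsQCA 3.1 performs this minimization internally, so the code is only illustrative:

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (strings over '0', '1', '-') differing in exactly
    one resolved bit, e.g. '110' + '111' -> '11-'; return None otherwise."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) != 1 or '-' in (a[diff[0]], b[diff[0]]):
        return None
    i = diff[0]
    return a[:i] + '-' + a[i + 1:]

def prime_implicants(minterms):
    """Iteratively merge terms until no merge applies; unmerged terms are prime."""
    terms, primes = set(minterms), set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used
        terms = merged
    return primes

def to_expr(implicant, names):
    """Render an implicant as a Boolean product of X / ~X literals."""
    lits = [n if b == '1' else '~' + n
            for b, n in zip(implicant, names) if b != '-']
    return '*'.join(lits)

# Illustrative three-variable example (PE, EE, SI): the minterms
# PE*EE*~SI, PE*EE*SI and ~PE*EE*SI reduce to PE*EE + EE*SI.
primes = prime_implicants({'110', '111', '011'})
print([to_expr(p, ['PE', 'EE', 'SI']) for p in primes])
```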

Step 6. QCA-CS is usually hard to interpret, since it is built with no assumptions beyond the data. Therefore, fsQCA 3.1 also offers a parsimonious solution (QCA-PS). It is obtained by using any remainder (non-observed configuration of variables) that makes the solution as simple as possible, regardless of whether it constitutes an "easy" or a "difficult" counterfactual case [71].

Step 7. To continue the minimization process, [23] proposes using simplifying assumptions, which should be theoretically well founded, about how a given condition is causally related to the outcome. For the non-observed configurations, it must be assumed whether an input variable contributes to the output exclusively when present, exclusively when absent, or in both cases. This step yields the so-called intermediate solution (QCA-IS).

Step 8. Let *Z* be a possible prime implicant (configuration or recipe) that, without loss of generality, we write as:

$$Z = X\_1 \* X\_2 \* \dots \* X\_r \tag{5}$$

where 1 ≤ *r* ≤ *n*, *n* is the number of input variables (six in our case), and "∗" stands for the Boolean product. So, we can obtain for the *j*th observation:

$$m\_{Z\_j} = \min \left\{ m\_{X\_{1,j}}; m\_{X\_{2,j}}; \dots; m\_{X\_{r,j}} \right\} \tag{6}$$

 So, the consistency of recipe *Z* in producing output *Y* is:

$$\text{Cons}\_{Z \to Y} = \frac{\sum\_{j} \min \left\{ m\_{Z\_j}; m\_{Y\_j} \right\}}{\sum\_{j} m\_{Z\_j}} \tag{7}$$

where *Yj* stands for the value of the *j*th observation in the output variable (Intention to Use in our case). Consistency may be understood as analogous to a statistical significance measure [73]. It is widely accepted that, to consider *Z* a sufficient condition, its consistency must be above 0.8.

Subsequently, the coverage of recipe *Z* in producing *Y* is:

$$Cov\_{Z \to Y} = \frac{\sum\_{j} \min \left\{ m\_{Z\_j}; m\_{Y\_j} \right\}}{\sum\_{j} m\_{Y\_j}} \tag{8}$$

Coverage provides a measure of empirical relevance; its statistical analogue is R². A consistency above 0.8 for a recipe (Equation (5)) implies that the combination is "almost always" necessary or sufficient [23]. A condition can be considered completely sufficient when cons > 0.9 and cov > 0.5 (see [74]).
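Equations (6)–(8) translate directly into code; a minimal sketch, assuming the membership arrays from Step 2:

```python
import numpy as np

def consistency(m_Z, m_Y):
    """Eq. (7): sufficiency consistency of recipe Z for outcome Y."""
    return np.minimum(m_Z, m_Y).sum() / m_Z.sum()

def coverage(m_Z, m_Y):
    """Eq. (8): proportion of the outcome covered by recipe Z."""
    return np.minimum(m_Z, m_Y).sum() / m_Y.sum()

# Recipe membership, Eq. (6): the minimum over the memberships of its factors.
# Hypothetical recipe Z = PE*EE*~FC, with arrays m_PE, m_EE, m_FC from Step 2:
# m_Z = np.minimum.reduce([m_PE, m_EE, 1.0 - m_FC])
```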

Step 9. Interpret the intermediate and parsimonious solutions. At this point there is no unified rule about which solution must be interpreted. Ragin [23] suggests using QCA-IS, since it represents a compromise between the simplicity of QCA-PS and the complexity of QCA-CS. As pointed out by Thiem [75], empirical studies are usually carried out over this solution. However, [75] also advises searching for causal relations using exclusively QCA-PS instead of the intermediate solution. That paper argues that QCA-CS and QCA-IS introduce counterfactual data with which QCA supplements the empirical observations; these artificial data may induce inferences that violate the actual causal structure that generated the empirical data in the first place and that QCA is meant to uncover.
