1. Introduction
There is rising awareness of the significant role that innovation can play in both productivity growth and economic wellbeing [1,2,3,4,5].
Small and Medium-sized Enterprises (SMEs) are the cornerstone of the European economy, representing 99% of all enterprises and providing around 100 million jobs in the European Union (EU). Since they account for more than half of the EU’s Gross Domestic Product (GDP), any improvement in their innovation capacity can have an important impact on the EU Research and Innovation (R&I) landscape [6].
The quest for solutions to the challenges involved in enhancing enterprises’ innovation capacity is not new [7]. This is particularly pertinent since a company’s success is believed to be determined by its capacity to innovate [8,9,10]. Decades of research exploring the elements that drive this process have led to a wide range of perspectives on innovation capacity [11]. Overall, three main sets of components particularly affect innovation capacity in enterprises [11]: human resources, internal dimensions, and external dimensions.
Open innovation represents a paradigm shift in how innovation can be viewed, and open innovation in SMEs has drawn the attention of many scholars over the years [12]. Through this form of innovation, firms may use knowledge inflows and outflows to improve their innovation capabilities [13]. Several scholars have further developed this concept, observing that data and knowledge interactions can occur in three forms [14,15,16]: (a) incoming (outside-in), associated with collecting knowledge from the external environment to generate innovations within the organization; (b) combined, associated with co-generation; and (c) outgoing (inside-out), represented by knowledge exchange. The study of open innovation generally considers distinct rationales, including the triple helix framework of collaboration linkages between universities, industries, and governments [17,18,19], governance models [20,21], and absorptive capacity [22,23].
Thomä and Zimmermann [24] proposed a valuable framework for organising the ways SMEs innovate, differentiating two types of innovation. The first is based on Research and Development (R&D) carried out by specific departments or individuals within a company to generate new scientific knowledge, supplemented as necessary by external collaborations or scientific organizations. This method, known as the ‘Science, Technology, and Innovation’ (STI) mode, requires financing to establish a connection between profitability and innovation. The second approach is based on experience and learning by Doing, Using, and Interacting (DUI). Here, informal modes of learning and comprehension predominate: innovations are generated by employees working closely together and by the company’s interaction with customers, suppliers, and other stakeholders. In practice, businesses do not operate entirely in either mode but rather use a combination of the two, with the balance dictated by the type of business and the hurdles it faces.
SMEs are confronted with several practical hurdles when it comes to innovation. Access to external capital may be difficult for SMEs, especially when risky projects are at stake [25,26,27]. The degree to which this is a barrier varies according to the age of the company, business size, research intensity, growth orientation [28] and, in many cases, geography [29]. Other barriers may involve difficulty in employing highly skilled workers [30,31,32,33,34], leadership problems [35], reduced absorptive capacity [36], and difficulties in capturing value [37]. Nevertheless, financial obstacles are seen as the most significant impediments to innovation [38].
SMEs have a much lower propensity and capability to carry out internal R&D than bigger corporations. They are, in principle, significantly more reliant on public R&D support to promote their in-house R&I capacity [39]. In this context, public policy has increased both financial and non-financial assistance to SMEs in response to the hurdles they face when attempting to innovate.
The framework and basis for EU policy on SMEs started with the European Charter for Small Enterprises, proposed by the General Affairs Council in 2000 and endorsed by the EU Council at Feira. It was subsequently backed by a comprehensive SME policy with the implementation of the Small Business Act (SBA) in 2008 [40]. The SBA established ten principles that include initiatives to strengthen SMEs, such as enabling funding, improving access to procurement procedures, and fostering female-led start-ups. The Europe 2020 Strategy, published in 2010, contains seven Flagship Initiatives, one of which is the achievement of the Innovation Union. Since then (for the 2014–2020 multiannual financial framework), a broad range of SME programmes has been developed, many of which are aimed at fostering innovation.
The European Regional Development Fund (ERDF) offered about 66 billion euros of EU financial support to promote innovation and productivity in European enterprises during the 2014–2020 programming period [41]. SMEs, specifically, were at the core of ERDF financial assistance to enterprises.
As of 2014, Member States (MS) were required to assess the efficacy, efficiency, and impacts of these financial assistance programmes. Despite the existing abundance of publications assessing them (see Section 2), some issues still lack scholarly attention, especially assessments carried out during the planning periods. Ortiz and Fernandez [42] highlighted that the design and implementation of R&I policies still present significant obstacles for policymakers. This is particularly true in the monitoring and evaluation stages, mainly because of the lack of relevant data, benchmarking assessment studies, and administrative skills/capabilities. Indeed, monitoring processes during the 2014–2020 programming period placed a strong focus on assessing process-oriented outcomes (called herein “procedural efficiency”), lacking metrics to evaluate the direct impacts of the investments supported [42].
Additionally, in the case of R&I policies, the evaluation procedure plays a prominent role in helping countries and/or regions improve future policy instruments by recognizing the accomplishments and failures of the preceding policy stage [43].
In this framework, the Data Envelopment Analysis (DEA) approach can help policymakers make the necessary adjustments while conducting this type of evaluation, yielding insightful information that allows correcting possible deviations from best practices. Compared to other approaches (for example, macroeconomic evaluations, microeconomic studies that use control groups, and case study evaluations), the DEA model used herein can be especially useful for management authorities (MA): it identifies the benchmarks and the modifications that must be made to enhance the execution of this type of funding, while also capturing the disparities between the different categories of regions.
Since the EU provides significant funding for R&I activities in European MS, and SMEs are the basis of the European economy, we have devoted our evaluation to the programmes subsidised by the ERDF, mainly dedicated to R&I in SMEs.
In this regard, our work is novel in four ways: (1) it proposes the usage of the Network Slack-Based Measure (SBM) DEA model in conjunction with cluster analysis to appraise 53 Operational Programmes (OPs) from 19 EU countries dedicated to fostering R&I in SMEs; (2) it separates the efficiency analysis into two stages: procedural efficiency, mainly concerned with the OPs’ implementation, and the potential R&I capacity generation obtained with the funds spent; (3) it makes it possible to assess this sort of funding across the programming period, allowing for the implementation of policy actions to address detected inefficiencies, thus offering supplementary evidence that can be useful in the intermediate monitoring phases of the OPs; and (4) it addresses the diverse regional characteristics of the OPs, according to the most recent regional development NUTS2 categorization, which was used to allocate the ERDF for the 2014–2020 programming period. Therefore, it considers less developed regions (GDP lower than 75% of EU GDP), transition regions (GDP between 75% and 90% of EU GDP), and more developed regions (GDP higher than 90% of EU GDP), which are categorized into three separate clusters.
All in all, the main research questions posed in this paper are:
RQ1. “Does a higher level of procedural efficiency necessarily lead to a higher R&I potential capacity generation?”
RQ2. “Which performance framework indicators preclude the efficient application of funds committed to boosting R&I in EU SMEs?”
RQ3. “Which performance framework indicators showed higher resilience of the efficiency classification in the face of their potential shifts?”
RQ4. “Which OPs were most frequently considered as a reference of best practices over the programming period under scrutiny?”
RQ5. “Which type of regions showed a higher performance in supporting R&I in SMEs?”
Following this Introduction, the paper is structured as follows. Section 2 provides a literature review on cohesion policy assessments of R&I in SMEs and other enterprises. Section 3 defines the underpinning assumptions of the methodology applied to assess the execution and outcomes of the OPs under consideration. Section 4 covers the key considerations behind the choice of inputs and outputs used in the efficiency assessment of the OPs, together with some statistics on the data that instantiated the model. Section 5 analyses the findings and considers possible policy suggestions. Section 6 delivers the main conclusions and suggests potential recommendations based on the findings obtained.
2. Literature Review
More than 1000 evaluations have been performed by EU MS since 2015, focusing on specific funds, topics, and regions, monitoring the implementation progress, and/or evaluating the effect of initiatives, both for the 2007–2013 and 2014–2020 programming periods [40].
The frequency of assessments performed by MS varies greatly [40], owing to significant differences in the quantity and type of investment financing, the number of OPs in each MS, and the methodology proposed in the evaluation plans [40]. Furthermore, some countries choose to undertake many minor assessments, whilst others prefer aggregate assessments.
Most of these evaluations are usually centred on implementation issues and assess target accomplishment, with the primary concern being the consistency of projects and activities with programmes’ objectives, as well as the effectiveness and efficiency of their execution. They also examine whether current funds are being spent and whether the specified objectives, particularly those of the performance framework, are being met. The impact evaluations are carried out later in the programme cycle when most activities have already taken place.
Since our study assesses the efficiency of OP implementation within the context of Thematic Objective (TO) 1, we have outlined the OP evaluations completed by EU MS from 2015 to date that explicitly mention R&I in their abstract description (see Table A1 in Appendix A).
Even though the examination of Table A1 indicates that MS use systematic methods in their evaluations of TO 1, just a few (primarily dedicated to impact assessments) consider more robust methods (such as statistical techniques). Therefore, despite MS’ efforts to improve cohesion policy evaluation, there is still room for improvement.
A search of the Google Scholar database covering 2017 to 2022 was conducted to obtain a clear idea of the several approaches usually used to assess Cohesion Policy programmes specifically dedicated to innovation in SMEs. The keywords “EU innovation SMEs *” and “EU fund innovation assessment *” were used to locate relevant papers. In the end, only renowned journals, highly cited books, and the Joint Research Centre’s publications on the subject were retained.
The literature review ascertained that there are several approaches to assessing cohesion policy, each with its advantages and disadvantages [44]. The two methodologies most utilised to evaluate the impact of cohesion policy are macroeconomic and econometric models. For R&I macroeconomic impact assessment, Computable General Equilibrium models combined with input-output analysis and econometric approaches are usually used (e.g., [45,46,47]). Despite enabling the study of the main effects of EU funding on economic development, these models cannot convey specific information on management performance [48]. Furthermore, they disregard the distribution of EU funds among distinct thematic objectives and activity sectors in each region or country. The dominant research stream is built on econometric analyses (e.g., [4,49,50,51,52,53]). However, it also yields conflicting results [54], leading various scholars to question its usage [54,55,56]. Other approaches can also be used, but with similar inherent limitations (e.g., [57,58]).
An alternative is to explore ‘microeconomic studies’ that apply ‘control groups’ to compare the efficacy of Structural Fund (SF) beneficiaries against otherwise homogeneous control groups (see Table A1 in Appendix A). Finally, evaluations relying on case studies employ a blend of data from surveys, OPs’ monitoring reports, and quantitative approaches to study the effects of SFs at distinct levels [59].
In general, the assessment methods adopted do not enable contrasting a unit under assessment (a country, region, or OP) with its counterparts, nor do they identify the changes that must take place to make an inefficient country, region, or programme efficient [60]. Furthermore, these approaches usually require the fulfilment of statistical assumptions (e.g., normality, lack of multicollinearity, and homoscedasticity). As a result, nonparametric approaches can be especially helpful and suitable, particularly because the performance framework indicators may be employed directly with nonparametric techniques. These indicators form a group in each OP according to which the European Commission, in collaboration with the MS, assesses the performance of the OPs in each MS.
In general, stochastic approaches are utilised to obtain the frontier production function [61]. However, the vast majority can only handle a single output [60]. Nonparametric approaches such as DEA can readily tackle several inputs and outputs and are also used to obtain the efficiency frontier. Moreover, unlike stochastic approaches, DEA does not depend on any functional form of the production function or on estimating an error term. DEA assumes that the greater a unit’s deviation from the efficient frontier, the greater its inefficiency.
Furthermore, DEA can aid in identifying the main factors that preclude efficiency, offering decision-makers relevant insights on how to address them. In this context, [62] utilised DEA to examine the relative spatial disadvantages of the EU’s Level II areas. Gómez-García et al. [61] used employment and productivity rates as outputs, together with the Stochastic Frontier and DEA approaches, to assess the pure and global technical efficiency of Objective 1 in the application of SFs in EU regions from 2000 to 2006. Anderson and Stejskal [63] used DEA to appraise the efficiency of innovation diffusion across EU MS with regard to their European Innovation Scoreboard rankings. Also, [60] used the Value-Based DEA methodology, which couples DEA with Multiple Criteria Decision Aiding, and considered the main factors that may impact the implementation efficiency of SFs in distinct EU regions and countries. More recently, [64] employed the SBM model coupled with cluster analysis to evaluate 102 OPs from 22 EU countries dedicated to promoting a low-carbon economy in SMEs.
Despite its usefulness, the DEA method remains underexplored in the assessment of the European cohesion policy devoted to R&I (see Table 1).
Moreover, none of the reports examined in Table A1 (Appendix A) employ the DEA approach in their evaluations. When conducting an efficiency assessment using the DEA method, MA can identify the OPs’ benchmarks in accordance with the desired practices. This enables identifying the required changes to the set of performance framework indicators that will allow inefficient OPs to become efficient across the programming period.
Also, as far as we are aware, the Network SBM methodology has not hitherto been employed in the assessment of the execution of OPs under the R&I theme (or in the assessment of any other TO). As a result, we will especially address the efficiency evaluation of R&I OPs using the Network SBM approach in conjunction with cluster analysis, as well as undertake a sensitivity analysis of the results produced.
3. Methodology
In DEA, there are two sorts of models: radial and non-radial. The Charnes–Cooper–Rhodes (CCR) [68] and Banker–Charnes–Cooper (BCC) [69] models represent radial models. These models cope with proportionate variations in inputs or outputs; the CCR and BCC efficiency scores thus show the highest proportionate input (output) decrease (increase) rate that is equal for all inputs (outputs). Nevertheless, in real-world enterprises, this assumption is not realistic. For instance, if labour, raw materials, and capital are used as inputs, they may substitute for one another and need not change proportionately. Besides, these models only allow input- or output-oriented versions. If the model is input-oriented, it is (unrealistically) assumed that the inefficiency linked to the usage of a particular input (resource) is necessarily linked to the inefficiency related to the use of another input by the same Decision-Making Unit (DMU) under evaluation. Alternatively, if the model is output-oriented, it is presumed that a DMU can generate different outputs (outcomes) simultaneously with the same generation capability, thus ignoring that the generation efficiency of distinct outputs can differ. As a result, in this paper we employ the Network DEA model based on the SBM model [70]. As opposed to the CCR and BCC models, the SBM model offers a more comprehensive examination of efficiency because it is non-radial (presuming that inputs and outputs can change non-proportionally) and can be input-, output-, or non-oriented. Contrary to radial models, which ignore slacks, the SBM model offers information on the modifications needed in each inefficient DMU’s input and output values to become efficient. Moreover, the SBM model possesses certain desirable characteristics, in particular monotonicity (the efficiency score is monotone decreasing in each input and output slack) and unit invariance (the efficiency score holds irrespective of the units of the data) [70].
In addition, unlike the additive model [71], which is also non-radial, the SBM model enables calculating an efficiency score based on the computed slacks. Furthermore, the SBM model may be combined with cluster analysis by considering groups of DMUs that fit specific criteria, appraising each DMU’s efficiency against its cluster-frontier and thereby mitigating the influence of DMU heterogeneity on efficiency [72,73,74].
3.1. The SBM Model
Let the set of n DMUs be given as J = {1, 2, …, n}. We define the (m × n) matrix of inputs (resources) and the (s × n) matrix of outputs (outcomes), respectively, as X = [x_ij, i = 1, 2, …, m, j = 1, 2, …, n] and Y = [y_rj, r = 1, 2, …, s, j = 1, 2, …, n]. The columns of these matrices for each DMU_k are x_k = (x_1k, …, x_mk)^T and y_k = (y_1k, …, y_sk)^T, for its inputs and outputs, respectively, with T signalling the transposition of a vector.
The production possibility set for the n DMUs can be given as:

P = {(x, y) : x ≥ Xλ, y ≤ Yλ, λ ≥ 0},

where λ = (λ_1, …, λ_n)^T is an intensity vector.
The inequalities of this production possibility set can be written as equalities by introducing slack variables as follows:

x_k = Xλ + s^−,  y_k = Yλ − s^+,  λ ≥ 0, s^− ≥ 0, s^+ ≥ 0,

where s^− ∈ R^m and s^+ ∈ R^s correspond to the input and output slacks, respectively.
Therefore, the non-oriented version of the SBM model can be written as [70]:

ρ* = min over (λ, s^−, s^+) of  [1 − (1/m) Σ_{i=1..m} s_i^−/x_ik] / [1 + (1/s) Σ_{r=1..s} s_r^+/y_rk]
subject to  x_k = Xλ + s^−,  y_k = Yλ − s^+,  λ ≥ 0, s^− ≥ 0, s^+ ≥ 0.  (1)

An increase in s_i^− or s_r^+, ceteris paribus, will reduce the value of ρ, with 0 < ρ ≤ 1. The value of ρ in (1) can additionally be written as:

ρ = [(1/m) Σ_{i=1..m} (x_ik − s_i^−)/x_ik] / [(1/s) Σ_{r=1..s} (y_rk + s_r^+)/y_rk].  (2)
The value given by s_i^−/x_ik provides the percentage decrease of input i, while (1/m) Σ_{i=1..m} s_i^−/x_ik corresponds to the average percentage decrease of inputs. Analogously, s_r^+/y_rk evaluates the percentage increase of output r, while (1/s) Σ_{r=1..s} s_r^+/y_rk gives the average percentage increase in outputs. Hence, ρ corresponds to the ratio of the average inefficiencies of inputs and outputs.
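As a small numeric illustration of this ratio interpretation (all figures invented for the example): a DMU using two inputs x_k = (100, 50) with input slacks (10, 5) and one output y_k = 20 with output slack 4 has an average input reduction of 10% and an average output increase of 20%, so ρ = 0.9/1.2 = 0.75. A minimal sketch:

```python
# rho as the ratio of average input inefficiency to average output inefficiency
# (hypothetical DMU data and slacks, purely for illustration)
x = [100.0, 50.0]; s_minus = [10.0, 5.0]   # inputs and their optimal slacks
y = [20.0];        s_plus  = [4.0]         # outputs and their optimal slacks

num = 1 - sum(si / xi for si, xi in zip(s_minus, x)) / len(x)   # 1 - average % decrease = 0.9
den = 1 + sum(sr / yr for sr, yr in zip(s_plus, y)) / len(y)    # 1 + average % increase = 1.2
rho = num / den                                                 # 0.9 / 1.2 = 0.75
```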
Model (1) can be transformed into model (3) by employing the Charnes–Cooper transformation through a positive scalar variable t:

τ = min over (t, λ, s^−, s^+) of  t − (1/m) Σ_{i=1..m} t·s_i^−/x_ik
subject to  1 = t + (1/s) Σ_{r=1..s} t·s_r^+/y_rk,  x_k = Xλ + s^−,  y_k = Yλ − s^+,  λ ≥ 0, s^− ≥ 0, s^+ ≥ 0, t > 0.  (3)
Let S^− = t·s^−, S^+ = t·s^+, and Λ = t·λ. Then problem (3) turns out to be:

τ* = min  t − (1/m) Σ_{i=1..m} S_i^−/x_ik
subject to  1 = t + (1/s) Σ_{r=1..s} S_r^+/y_rk,  t·x_k = XΛ + S^−,  t·y_k = YΛ − S^+,  Λ ≥ 0, S^− ≥ 0, S^+ ≥ 0, t > 0.  (4)
The optimal solution to problem (4) yields the optimal solution of problem (1) as:

ρ* = τ*,  λ* = Λ*/t*,  s^−* = S^−*/t*,  s^+* = S^+*/t*.  (5)
Definition 1. A DMU_k is called SBM-efficient if ρ* = 1, i.e., if s^−* = 0 and s^+* = 0.
Definition 2. An SBM-inefficient DMU_k has a set of efficient reference DMUs, which is obtained by choosing the indices of the DMUs with λ_j* > 0.
Consider the reference set of the SBM-inefficient DMU_k as E_k = {j : λ_j* > 0, j = 1, …, n}. The point of the efficiency frontier that can be seen as a reference of best practices for the SBM-inefficient DMU_k is given as:

(x̂_k, ŷ_k) = (Σ_{j∈E_k} λ_j* x_j, Σ_{j∈E_k} λ_j* y_j) = (x_k − s^−*, y_k + s^+*).
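Model (4) is a linear programme and can be solved with any LP solver. The sketch below is a minimal illustration, not the implementation used in this study: it assumes constant returns to scale, the non-oriented version of the SBM model, and a hypothetical function name `sbm_efficiency`, and it relies on SciPy's `linprog`.

```python
import numpy as np
from scipy.optimize import linprog

def sbm_efficiency(X, Y, k):
    """Non-oriented SBM efficiency (CRS) of DMU k via the
    Charnes-Cooper linearisation (model (4)).
    X: (m, n) input matrix, Y: (s, n) output matrix; columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    xk, yk = X[:, k], Y[:, k]
    nv = 1 + n + m + s                      # variables: [t, Lambda (n), S- (m), S+ (s)]
    c = np.zeros(nv)
    c[0] = 1.0                              # objective: t ...
    c[1 + n:1 + n + m] = -1.0 / (m * xk)    # ... - (1/m) sum_i S-_i / x_ik
    A_eq, b_eq = [], []
    # normalisation: t + (1/s) sum_r S+_r / y_rk = 1
    row = np.zeros(nv)
    row[0] = 1.0
    row[1 + n + m:] = 1.0 / (s * yk)
    A_eq.append(row); b_eq.append(1.0)
    # input constraints: t*x_ik - X[i,:] @ Lambda - S-_i = 0
    for i in range(m):
        row = np.zeros(nv)
        row[0] = xk[i]; row[1:1 + n] = -X[i, :]; row[1 + n + i] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    # output constraints: t*y_rk - Y[r,:] @ Lambda + S+_r = 0
    for r in range(s):
        row = np.zeros(nv)
        row[0] = yk[r]; row[1:1 + n] = -Y[r, :]; row[1 + n + m + r] = 1.0
        A_eq.append(row); b_eq.append(0.0)
    # all variables >= 0 (linprog default bounds); rho* = tau* (Equation (5))
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq))
    return res.fun
```

For example, with one input and one output, X = [[1, 2]] and Y = [[1, 1]], the first DMU lies on the frontier (score 1), while the second uses twice the input for the same output and scores 0.5.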
3.1.1. The Network SBM Model
Traditional DEA techniques see DMUs as a “black box” that converts inputs into outputs. Nevertheless, there are also DEA techniques, named Network DEA, that view the entire system as being made of distinct stages (i.e., sub-DMUs) with linkages across them corresponding to intermediate outputs created and then consumed inside the system. Network DEA techniques provide a finer-grained evaluation and increase the discriminating capability of the assessments performed [75]. Besides, through this approach, it is possible to assess the impact of stage-specific inefficiencies on the global efficiency of the DMU.
Tone and Tsutsui [75] suggested the network slacks-based measure (NSBM), which relies on the SBM [70] and weighted SBM [76] models. This method evaluates the overall and stage efficiencies of the DMUs under scrutiny, considering a global production possibility set that specifies the links between the multiple stage processes. Specifically, a DMU is assumed to be made up of P sub-processes (p = 1, …, P), with each sub-process consuming external inputs to create certain outputs. Besides external inputs and outputs, there are also intermediate factors linking processes. The generalized non-oriented NSBM model of [75], considering Variable Returns to Scale (VRS) and m_p external inputs and s_p external outputs per process, can be given as:

ρ_k* = min  [Σ_{p=1..P} w^p (1 − (1/m_p) Σ_{i=1..m_p} s_i^{p−}/x_ik^p)] / [Σ_{p=1..P} w^p (1 + (1/s_p) Σ_{r=1..s_p} s_r^{p+}/y_rk^p)]
subject to  x_k^p = X^p λ^p + s^{p−},  y_k^p = Y^p λ^p − s^{p+},  e·λ^p = 1,  λ^p ≥ 0, s^{p−} ≥ 0, s^{p+} ≥ 0  (p = 1, …, P),
plus the intermediate linking constraints (7) or (8),  (6)

where ρ_k* is the overall efficiency score of DMU_k; λ^p is the intensity vector related to process p; x_ik^p is the value of external input i used by process p of DMU_k, such that x_ik = Σ_{p∈P_I(i)} x_ik^p, where P_I(i) is the set of processes that use external input i; y_rk^p is the value of external output r generated by process p of DMU_k, such that y_rk = Σ_{p∈P_O(r)} y_rk^p, where P_O(r) is the set of processes that produce external output r; and w^p is the relative weight of process p, which is obtained according to its significance, with Σ_{p=1..P} w^p = 1 and w^p ≥ 0.
The intermediate linking constraints can be represented according to the “free” link or the “fixed” value cases [75].

In the first case, the linking factors are determined freely (i.e., viewed as discretionary factors) whilst maintaining continuity between inputs and outputs:

Z^{(p,h)} λ^h = Z^{(p,h)} λ^p  (for all links (p,h)),  (7)

where Z^{(p,h)} = [z_{(p,h)j}] and z_{(p,h)j} is the value of the intermediate factors linking process p to process h for DMU_j. Therefore, the linking flow may expand or contract in the optimal solution to (6).

In the second case, the linking activities keep their observed values (i.e., are viewed as non-discretionary factors):

z_{(p,h)k} = Z^{(p,h)} λ^h,  z_{(p,h)k} = Z^{(p,h)} λ^p  (for all links (p,h)).  (8)
Definition 3. A DMU_k is overall SBM-efficient if ρ_k* = 1, where ρ_k* is the optimal value of problem (6).
Definition 4. The non-oriented stage efficiency score, within [0, 1], is given as:

ρ_k^p = [1 − (1/m_p) Σ_{i=1..m_p} s_i^{p−*}/x_ik^p] / [1 + (1/s_p) Σ_{r=1..s_p} s_r^{p+*}/y_rk^p],

where s^{p−*} and s^{p+*} are the optimal input and output slacks of problem (6).
Theorem 1. A DMU is overall efficient if and only if it is efficient in all stages [75].

Model (6) can be linearized by employing a method similar to that used to convert problem (1) into a linear problem.
3.1.2. The Network SBM Model with Cluster Analysis
Conventional DEA models presume that all DMUs are homogeneous; all DMUs are thus assumed to form the reference set for building meta-frontiers. In reality, DMUs are not always homogeneous, which impacts the correctness of the DEA results obtained [77]. Therefore, cluster benchmarking might be particularly suitable when dealing with DMUs with diverse characteristics [73,74]. Cluster benchmarking is a methodology for categorizing a group of DMUs into clusters based on shared features. The clusters can be formed by applying a statistical clustering approach suitable to the problem under consideration, given exogenously, utilizing expert information, or based on the level of scale efficiency [75]. DMUs belonging to the same cluster are more similar to each other than to those belonging to other clusters [74].
The major objective is to exploit the similarity of DMUs in the same cluster as well as the discrepancy of DMUs in distinct clusters. The production frontiers must be established independently to complete the efficiency evaluation considering the best practices of the appropriate clusters. The technology gap ratio (TGR) may be calculated by contrasting the results of the DEA model that considers all DMUs in the same group with the results obtained after clustering. The non-oriented version of the SBM model is used to calculate the meta- and cluster-frontiers (see Problem (6)). Generically, the overall TGR_k of DMU_k is then calculated as follows [78]:

TGR_k = ρ_k^meta / ρ_k^cluster,

where ρ_k^meta is the efficiency value of DMU_k according to the SBM non-oriented model based on the meta-frontier considering VRS, and ρ_k^cluster is the efficiency value of DMU_k computed with the SBM non-oriented model using the cluster-frontier and considering VRS. The TGR for each process can be obtained analogously.
The TGR value indicates the gap between the cluster- and meta-frontier. It is applied to assess the technological efficiency gap of the same DMU based on distinct frontiers. Furthermore, the TGR might signal the necessity of separating the various groups [78]: the lower the TGR value, the greater the need for clustering, and vice-versa.
The value of TGR_k should range between 0 and 1 [73,74]. Nevertheless, the TGRs of some DMUs in different stages (i.e., divisions) may be bigger than one when using the concave frontier because it might include unfeasible input-output arrangements, consequently leading to erroneous and unreasonable assessments. Therefore, [79] suggested a non-concave meta-frontier that envelopes those input-output combinations belonging to the technology set of at least one of the technologies, removing the area with unfeasible input-output combinations. They proposed a two-step methodology to obtain a non-concave meta-frontier, making it possible to attain a TGR within [0, 1] for the distinct processes of the network structure. First, the technical efficiency of a DMU is measured against its own cluster technology. Then, the technical efficiency of the DMU is measured against the other cluster technologies. The technical efficiency score thus obtained is lower if another cluster technology enables the DMU to generate a higher output level; in that situation, the other cluster technology characterizes the meta-frontier for this DMU.
A TGR_k closer to one indicates a narrow gap between the meta- and the cluster-frontier. The meta-frontier indicates the underlying efficiency level of the entire assessed group of DMUs, whereas the cluster-frontier depicts the true efficiency level within each cluster.
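As a toy illustration of how the TGR is read (the OP names and scores below are hypothetical, not results from this study), suppose two OPs obtain the following SBM scores against the meta- and cluster-frontiers:

```python
# TGR_k = meta-frontier efficiency / cluster-frontier efficiency
# (hypothetical SBM scores, for illustration only)
rho_meta    = {"OP_A": 0.48, "OP_B": 0.90}   # scores against the meta-frontier
rho_cluster = {"OP_A": 0.80, "OP_B": 0.92}   # scores against each OP's cluster-frontier

tgr = {op: rho_meta[op] / rho_cluster[op] for op in rho_meta}
# OP_A: 0.48 / 0.80 = 0.60 -> wide technology gap, clustering matters for this OP
# OP_B: 0.90 / 0.92 ~ 0.98 -> cluster-frontier nearly coincides with the meta-frontier
```

OP_A's low TGR signals that its cluster operates far from the overall best practice, whereas OP_B's cluster technology is close to the meta-frontier.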
4. Data and Assumptions
TO 1 covers innovation in global terms. The entire budget devoted to TO 1 is roughly 66 billion euros, with the ERDF (national and EU contributions) covering 94% [41]. TO 1 receives 22% of the entire (national and EU) ERDF funding, corresponding to 62.2 billion euros supported by the ERDF, of which 20.6 billion are national and 41.6 billion are EU funds. TO 1 is divided into two main investment priorities (IP) [41]: IP (1a), strengthening R&I infrastructure and capability to foster R&I excellence, as well as deploying centres of competence, predominantly those of European prominence; and IP (1b), encouraging corporate investment in R&I; constructing connections and co-benefits between enterprises, research centres, and the higher education sector; stimulating incentives to invest in new products and services, transfer of technology, social innovation, eco-innovation, public service tenders, demand stimulus, networking, clusters, and open innovation via smart specialisation; and sponsoring technological and applied research, pilot lines, initial product validation, high-tech manufacturing capacities, and initial production, with a focus on essential technological solutions and all-purpose technology dissemination. Since our study focuses on enterprises, in particular SMEs (most firms supported by the ERDF), we address priority (1b) in our assessment and consider the corresponding dimensions of intervention given in Table 2.
We engaged several stakeholders in the selection of the factors used in the assessment by hosting a guided workshop with various policymakers and MA on the theme “Evaluating Co-financed Intervention Policies in Enterprises.” Furthermore, we conducted separate examinations of regional and national programmes, classified into clusters based on the categories of the regions under consideration (i.e., less developed regions, transition regions, and more developed regions). Whenever the regional categories assigned to an OP simultaneously belonged to distinct types of regions, the OP was classified as a transition region. Because these are the most up-to-date statistics for the accomplishment indicators, the figures evaluated are cumulative values from multiple years, released on 19 November 2021. Only the OPs with no missing data were analysed, which resulted in the study of 53 programmes from 19 countries.
The input and output factors considered for the efficiency assessment of implementing the European Structural Investment (ESI) funds devoted to R&I interventions in SMEs were chosen from a list of common indicators legally required by the EU [80]. These were the most often available and are described below (see also Table 3).
In addition, according to [41], at the TO level for 2014–2020, common R&I indicators concentrate on financial aspects (e.g., financial resources assigned), which are always present at any stage, and on procedural aspects (e.g., the ‘number of enterprises receiving support’). Nevertheless, there are other indicators, referred to as real outputs, such as the ‘number of researchers working in improved research infrastructure facilities’ and the ‘networks created between enterprises and research institutions facilitating technological and knowledge transfer’ [41]. Therefore, we have considered an NSBM model with the structure given in Figure 1.
4.1. The Pace of Programmes’ Implementation
The EU has been pursuing an R&I-led strategy relying on a paradigm in which R&D investment is a key source of higher economic achievements [
67]. Under this strategy, the EU has attempted to increase R&D expenditure, with a target of 3% of its GDP. Hence, the financial implementation of SFs, which is an essential prerequisite for efficient policy execution, is included in this study, with special attention paid to the rate of programme implementation [
60,
81]. Costs that do not meet the applicable eligibility conditions are not reimbursed under this framework. Eligible costs must be confirmed by a certified Controller, the entity or person responsible for verifying, at the national level, that the co-financed products and services were purchased, that the related expenditure was paid, and that the applicable EU programme and national regulations were followed.
On the one hand, the “total eligible spending” refers to the eligible expenses submitted by the supported projects and certified by this Controller. The larger its value, the higher each project’s financial execution. Consequently, it is employed as an output at the first stage of the analysis, which assesses procedural aspects only, and as an input at the second stage, which evaluates the actual R&I capacity generated with the funding used. On the other hand, the “eligible costs decided” are those with financial resources allotted to the projects selected for financing (the project pipeline), and are thus considered an external input.
4.2. Enterprises Supported
This indicator refers to the number of firms that have received ERDF assistance (whether or not the assistance is considered state aid). According to [
41], this factor should be used together with indicators referring to the number of enterprises supported in introducing new-to-the-firm and new-to-the-market products when evaluating innovation in enterprises. The more firms are supported, the higher the possibility of enhancing their R&I capacity and their adoption of innovative technologies; this indicator is therefore regarded as a procedural external output.
4.3. Enterprises Working with Research Institutions
Previous publications emphasise the necessity of efficient cooperation across organizations to build trust-based channels, which help promote knowledge transfer [
82]. Indeed, areas with well-coordinated institutional frameworks relying on the triple helix, but also on the creation of novel cooperation mechanisms [
83] are inextricably related to strong local innovation processes [
18]. As a result, we employ indicator CO26, ‘Number of firms interacting with research institutions,’ which evaluates network engagement and is regarded as a proxy for potential technology and knowledge transfer. This indicator was obtained from the set of common output indicators on ‘Productive Investment’ available from [
41] and corresponds to the number of enterprises that collaborate with research institutions on R&D initiatives. Each initiative involves at least one enterprise and one research institution. Support may be provided to one or more of the collaborating parties (research institution or enterprise), but it must be conditional on the cooperation. The collaboration may be new or pre-existing and should last at least until the end of the project. This indicator is viewed as a real external output at the second stage of the analysis.
4.4. Number of Enterprises Supported to Introduce New-to-the-Market Products
The launch of a new-to-the-market product is acknowledged as a sign of innovation. Besides measuring the outcomes in terms of innovation, it also shows its successful market introduction. There are several concepts and metrics that can be employed for assessing innovation [
84]. Out of these, a few consider expenditure as an input, patents as outputs, and the launch of new-to-the-market products/services as a way of measuring market acceptance (also as an output). All measures have their own limitations: investing in R&D does not guarantee innovation, and patents do not necessarily generate added value for the organization, nor do they account for the degree of market acceptance. Moreover, as mentioned earlier, one of the limitations of the performance framework indicators used during the 2014–2020 programming period is the lack of data regarding the direct impact of the investments supported. Hence, as a proxy of the potential created for introducing a new-to-the-market product, we use the ‘number of enterprises supported to introduce new-to-the-market products,’ which measures the assistance provided to businesses in developing a ‘new-to-the-market’ product in all its markets. This is the CO29 indicator from the list of common indicators obtainable from [
41].
In this indicator, process innovation may also be included if the process is associated with the creation of the ‘new-to-the-market’ product. Applications that do not seek to generate a new-to-the-market product are not eligible.
A product is new-to-the-market if it is not offered on a market (i.e., it does not have to be new to all markets) that delivers the same functions, or the technology used by the new product is radically different from the technology used by already existing products [
41]. Products may be both tangible and immaterial (including services and processes). To avoid double-counting, we have not used the indicator measuring the number of new-to-the-firm products. This indicator is regarded as an external procedural output during the first stage of evaluation.
4.5. Number of Researchers Working in Improved Research Infrastructure Facilities
Previous studies have found that firms with increased accessibility to more advanced technology and highly skilled workers outperform firms devoid of these (see, e.g., [
84,
85]). Therefore, we have considered indicator CO25 from the list of indicators presented in [
41], ‘the number of researchers working in improved research infrastructure facilities,’ as a proxy that allows measuring both highly skilled workers and the access of SMEs to more advanced technology.
This measure of research activity is contemplated in the framework of research infrastructure facilities (either physical or non-physical) improved due to ERDF funding. The indicator counts existing positions in research infrastructure facilities that (1) directly carry out R&I activities and (2) are directly affected by the assistance provided by the OP. The positions must be filled (vacant positions are not counted) and are measured in full-time equivalents. The R&D project has to enhance the facilities or the quality of the equipment, i.e., maintenance or replacement without quality improvement is not counted. This indicator is employed as a real external output at the second stage of the analysis.
From the analysis of
Table 4, the average overall financial execution rate (i.e., the ratio between the total eligible spending and the total eligible cost) is low (43.33%). It is even lower for the less developed regions (37.13%), presenting higher values for the regions of transition (48.84%) and more developed regions (50.86%). Additionally, the highest average R&I generation potential might be found in the regions of transition (with the highest average number of researchers working in improved infrastructures of 1407). In contrast, the average number of enterprises supported for working with research institutions is higher in the less developed regions (722).
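As a minimal sketch of the computation behind these figures (with hypothetical amounts, not the values reported for any OP), the financial execution rate is simply the ratio between total eligible spending and total eligible cost:

```python
def execution_rate(total_eligible_spending, total_eligible_cost):
    """Financial execution rate, in percent."""
    return 100.0 * total_eligible_spending / total_eligible_cost

# Hypothetical OP: 120 M EUR of certified spending out of 300 M EUR of eligible cost.
print(round(execution_rate(120e6, 300e6), 2))  # -> 40.0
```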
A DEA requirement is that inputs and outputs have an isotonic relation, which can be corroborated using correlation analysis [
86]. This means that the link between inputs and outputs should be consistent, i.e., increasing the value of any input ceteris paribus should not decrease any output but should instead lead to an increase in the value of at least one output. The factors support an isotonic relation if the correlation between inputs and outputs is positive (and significant). Because the normality assumption for the application of the tests for the significance of Pearson’s correlation was not validated in this situation, we chose to obtain the Spearman correlation coefficients and the associated significance tests for both P1 and P2 processes—see
Table 5.
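A minimal sketch of the isotonicity check (with hypothetical input/output values, not those of our dataset): the Spearman coefficient is the Pearson correlation of the ranks, which for tie-free data reduces to the classical formula used below; the actual study also relied on the associated significance tests:

```python
def spearman_rho(x, y):
    """Spearman rank correlation (assumes no ties, for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order, start=1):
            r[i] = pos
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical input/output pairs: a monotonically increasing relation yields rho = 1.
inputs = [10, 25, 40, 55, 70]
outputs = [3, 8, 9, 14, 20]
print(spearman_rho(inputs, outputs))  # -> 1.0
```

A positive and significant coefficient for every input/output pair supports the isotonicity assumption.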
5. Analysis and Discussion of Results
Results were computed with the MaxDEA 8 Ultra software, assuming that the linking factor considered in the analysis is free (i.e., its value is determined by the model). The values of the TGR for the distinct region categories and the corresponding meta- and cluster-frontiers were computed for all R&I OPs under scrutiny. We applied the Kruskal–Wallis test to validate the existence of significant differences in the efficiency scores of the three clusters for both processes.
Table 6 reports the results of the Kruskal–Wallis test, which are significant at the 5% level for both procedural efficiency and R&I efficiency. From this test, we can conclude that clustering the OPs by region category is adequate.
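For illustration, the Kruskal–Wallis H statistic underlying this test can be sketched as follows (with hypothetical scores for three clusters; our analysis used the corresponding significance tests rather than this bare statistic):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction, for brevity)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    n = len(pooled)
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Hypothetical efficiency scores for three region clusters; H is compared with a
# chi-squared critical value with (k - 1) degrees of freedom (5.99 at the 5% level, k = 3).
h = kruskal_h([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]])
print(round(h, 2))  # -> 7.2
```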
Basic descriptive statistics for these measures are shown in
Table 7 for both processes under evaluation.
Figure 2 illustrates the efficiency scores based on the meta- and cluster-frontiers for procedural efficiency, R&I efficiency, and the overall network efficiency, respectively.
It should be noted that more developed regions had their cluster frontiers tangent to their meta-frontiers, meaning that the TGR for these regions is one and that these regions are already producing at the same level of efficiency of the meta-frontier regardless of the process under evaluation (
Figure 2a–c). The mean TGR values for the less developed regions and the regions of transition were 0.40 and 0.57, respectively, in terms of procedural efficiency, with the situation worsening for R&I efficiency: the TGR values obtained for the latter process are 0.37 and 0.33 for the less developed and transition regions, respectively (the median TGR values for both processes and for these two region types corroborate these conclusions). These outcomes appear to confirm the ‘European paradox’, since the OPs reveal higher levels of TGR for procedural efficiency than for R&I potential capacity development, specifically in the less developed regions and regions of transition. This paradox refers to the incapacity to transform the results of technological research and skills into actual innovations and competitive advantage [
49]. In a similar vein, [
87] and [
49] found that the European Innovation System is more successful at doing pure research, which is primarily motivated by public R&D financing, than at producing innovation outputs. In summary, there is an ‘innovation gap’ in that encouraging innovation inputs through public support does not always result in innovation outputs.
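A minimal sketch of the TGR computation (with hypothetical scores, not those reported above): the TGR of an OP is the ratio between its efficiency measured against the meta-frontier and its efficiency measured against its own cluster-frontier, so a value well below one signals a wide technology gap:

```python
def technology_gap_ratio(meta_score, cluster_score):
    """TGR = meta-frontier efficiency / cluster-frontier efficiency (0 < TGR <= 1)."""
    if not 0 < meta_score <= cluster_score <= 1:
        raise ValueError("meta-frontier score cannot exceed the cluster-frontier score")
    return meta_score / cluster_score

# Hypothetical OP: efficient against its own cluster, but far from the meta-frontier.
print(technology_gap_ratio(0.40, 1.00))  # -> 0.4
```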
The overall network TGR is even lower for these regions, with values ranging between 0.18 and 0.24. These findings indicate a wide gap between the two frontiers, particularly for the less developed regions: when moving from the meta-frontier to the cluster-frontier, the number of efficient OPs (score equal to 1) increased from 0 to 7 and from 0 to 4 for procedural and R&I efficiencies, respectively—see
Figure 2a,b). In the regions of transition, the cluster-frontier revealed only one additional efficient OP in terms of procedural efficiency (reaching four efficient OPs), whereas in terms of R&I efficiency three additional OPs became efficient (reaching six efficient OPs) (
Figure 2a,b).
Figure 3 depicts the number of OPs at various efficiency score subintervals based on the cluster- and meta-frontiers, respectively. The number of OPs classified as efficient regarding procedural efficiency expanded from 7 to 15 (
Figure 3a,b), whereas in the case of R&I efficiency, it only increased from 7 to 14 efficient OPs (
Figure 3c,d). The overall network efficiency shows an increase from 5 to 10 efficient OPs under the cluster-frontier (
Figure 3e,f).
These findings are heavily driven by the TGR achieved (both for Procedural efficiency and R&I efficiency) by R&I OPs in less developed regions. These generate only about 40% of the possible efficiency given the technology available for this type of OP, according to the average European R&I OPs considered (19 countries are represented), as shown in
Table 7. The regions of transition, in turn, attain on average 57% of the potential efficiency in terms of procedural efficiency (with a median of 69%, meaning that 50% of the OPs have a potential efficiency smaller than or equal to 69%), but only 33% of the potential efficiency in terms of R&I efficiency (with a median of 19%) (
Table 7).
According to the cluster-frontiers, 15 and 14 out of the 53 OPs were relatively efficient in terms of procedural and R&I efficiency, respectively. Nevertheless, the overall network efficiency only showed 10 efficient OPs, implying that only about 19% of the OPs considered were efficient. This indicates that the majority of the OPs assessed were simultaneously inefficient in both procedural and R&I terms.
The four OPs most frequently viewed as benchmarks are “Aragón” (22), “Brussels Capital Region” (19), “Cohesion Policy Funding—EE” (19), and “Competitiveness Entrepreneurship and Innovation—GR” (18)—see
Table 8. Out of these, two belong to more developed regions, one belongs to a less developed region, and another one belongs to a region of transition. By cross-checking these findings with the Regional Innovation Scoreboard in 2021 [
88], “Brussels Capital Region” is ranked 14th in the Top list of regional “innovation leaders,” whereas “Aragón” is viewed as a “moderate innovator.” The OP “Cohesion Policy Funding—EE,” a national programme, belongs to a country classified as a “Strong Innovator”. Finally, the OP “Competitiveness Entrepreneurship and Innovation” belongs to Greece, which is considered a “moderate innovator”.
Curiously, the less developed regions, which are both efficient and “emerging innovators,” manage to attain high values both for the number of researchers working in improved infrastructures and the number of enterprises working with research institutions.
Finally, only the efficient OPs coming from regions either classified as “Innovation leaders” or “Strong Innovators” scored in all the indicators considered in the analysis—see
Table 8. These findings might be driven by the fact that these regions are better equipped in terms of innovative institutions, as well as advanced education and knowledge transfer, also suggesting that these ended up profiting the most from the EU funds awarded [
19].
Globally, more developed regions show a higher potential for improvement (85%), whereas transition and less developed regions show smaller potentials for improvement of 67% and 65%, respectively—see
Table 7. These findings are in line with [
63], who concluded that the most innovative EU MS had much lower innovation diffusion efficiency scores when compared with some apparently weakly innovating MS.
Figure 4 shows that the number of enterprises supported for developing new-to-the-market products in both efficient and inefficient OPs is not as significant as the other factors that help explain the overall network efficiency (
Figure 4c). The mean funds dedicated to the eligible cost of efficient OPs (1,205,112,294 €) were significantly higher than that of inefficient ones (205,853,193 €). The same can also be concluded regarding the mean eligible spending of efficient OPs (492,283,983 €) and that of inefficient ones (96,062,570 €)—see
Figure 4c). Moreover, the mean values of the number of researchers working in improved R&I infrastructures and the number of enterprises working with R&I institutions of efficient OPs are particularly high for those OPs that can generate R&I capacity efficiently. This is evident compared to the values attained for efficient OPs showing higher procedural efficiency (
Figure 4a,b). In addition, the number of enterprises supported is slightly higher for efficient Procedural OPs than for OPs that manage to reach higher R&I capacity efficiently (
Figure 4a,b). This suggests that supporting a higher number of enterprises does not necessarily lead to the generation of technology transfer between R&I institutions and enterprises or to the improvement of R&I conditions. It seems that the continuous focus on public R&D spending to ramp up R&I capacity in SMEs may well hinder the adoption of new means of stimulating R&I, many of which are better suited to helping EU SMEs [
67]. SMEs’ innovation capacity is frequently influenced by immaterial variables, such as the strength of the local and regional R&I mechanisms [
65]. Cooperation and networking, either at the firm or organisational levels, are critical for the development and knowledge transfer, which is at the heart of SMEs’ R&I [
67]. R&I often occurs in SMEs as a result of a variety of partnerships, including with scientific R&D institutions, along with those based on DUI [
24]. As a result, [
89] highlight the need to create improved interactions between regional innovation and development policies and funding programmes at both strategic and implementation levels.
5.1. Potential Improvements
The SBM model identifies the changes that inputs and outputs should undergo for inefficient OPs to become efficient (
Table 9). These results are depicted in
Figure 5 and
Figure 6, both by OP and by region type, where OPs are represented from left to right in decreasing order of efficiency.
The number of researchers working in improved infrastructures shows the largest improvement potential of about 481% (i.e., it should increase on average from 1136 to 6603 researchers)—see
Table 9 and
Figure 5. The more developed and the less developed regions show the highest improvement potentials, of 881% and 393%, respectively. In contrast, the lowest improvement potential belongs to the regions of transition (32%)—see
Table 9. These results are consistent with those of [
29], who concluded that this sort of barrier had the largest restraining effect in more developed regions. Additionally, other studies also highlight the importance of the lack of skills as a hurdle to innovation (e.g., [
32,
33,
34]).
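The improvement potentials discussed above can be sketched as the relative distance between an observed output and its SBM target; using the average number of researchers reported above (1136 observed, 6603 targeted) reproduces the roughly 481% figure:

```python
def improvement_potential(observed, target):
    """Required relative change, in percent, for an output to reach its SBM target."""
    return 100.0 * (target - observed) / observed

# Average number of researchers working in improved infrastructures (Section 5.1).
print(round(improvement_potential(1136, 6603)))  # -> 481
```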
The number of enterprises working with research institutions also presents an overall high potential for improvement (49%), particularly for the less developed regions (72%), followed by more developed regions (47%)—see
Table 9 and
Figure 5. These outcomes highlight the need to pursue more effective and efficient cooperation with research institutions (see [
90]). Under the current output levels, the more developed regions show lower efficiency in terms of the use of funding as they require a reduction of 73% and 63% of eligible costs decided and total eligible spending, respectively, to become efficient—see
Table 9 and
Figure 6. The number of enterprises supported requires further improvements, particularly for more developed regions (57%) and regions of transition (23%)—see
Table 9 and
Figure 5. The number of enterprises supported for new-to-the-market products is the factor requiring the smallest adjustments (ranging from 4% to 7%)—see
Table 9 and
Figure 5. Hence, to foster R&I in enterprises, MA should focus on further increasing the number of researchers working in improved infrastructures and the number of enterprises working with research institutions, particularly in the less developed and more developed regions. Besides, MA should channel their funding towards less developed regions, since these perform better in employing their funding, both in terms of eligible costs decided (−11%) and eligible spending (−12%), than more developed regions (−63%)—see
Table 9 and
Figure 6. These findings are consistent with the European Commission’s Smart Specialisation Strategies for 2021–2027, which identify the major obstacles and next measures required to promote innovation-led growth, particularly in less developed and industrial transition regions [
43].
Overall, further support should be given to SMEs in less developed regions regarding the information provided, namely concerning existing funding options. Actions should be promoted to raise awareness of how markets (both national and international), new technologies, and new regulations operate [
43,
91,
92]. Additional efforts should also involve the expansion of the target group enterprises by attracting companies that are not currently innovating but are innovation-oriented. Finally, MA should strengthen the support of partnerships between SMEs and R&I institutions, particularly in less developed regions.
5.2. Sensitivity Analysis
Because the SBM-DEA technique is nonparametric, a commonly employed way of performing sensitivity analysis involves removing one evaluation factor (input or output) at a time and assessing the resulting shifts in efficiency [
93].
We estimated four regression models, with the dependent variable being the scores obtained by removing each evaluation factor in turn and the independent variable being the original score. The slope and the related coefficient of determination (R-squared) are presented for each model. The sensitivity of efficiency to changes in a given factor may thus be evaluated by the gap between the value one and the slope of the regression function: the larger this gap, the higher the sensitivity [
73,
94].
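A minimal sketch of this procedure (with hypothetical scores; the `sensitivity` helper name is ours, not tied to any software used in the study): regress the scores obtained after omitting a factor on the original scores and take |1 − slope| as the sensitivity measure:

```python
def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def sensitivity(original_scores, reduced_scores):
    """|1 - slope|: the larger the gap, the more the omitted factor matters."""
    return abs(1.0 - ols_slope(original_scores, reduced_scores))

# Hypothetical scores: dropping the factor halves every score -> slope 0.5, gap 0.5.
orig = [0.2, 0.4, 0.6, 0.8, 1.0]
reduced = [0.1, 0.2, 0.3, 0.4, 0.5]
print(round(sensitivity(orig, reduced), 6))  # -> 0.5
```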
The sensitivity analysis results are depicted in
Figure 7 and
Table 10. The factor with the highest impact on efficiency is the “number of enterprises working with research institutions”, since the omission of this variable leads to the highest value of |1 − slope|. The “number of enterprises supported” and the “number of researchers working in improved infrastructures” have an analogous effect on efficiency. The “number of enterprises supported for new-to-the-market products” has the smallest impact on efficiency (
Figure 7b). Then again, results suggest that further measures should be adopted to promote cooperation between enterprises and research institutions.
6. Conclusions and Policy Implications
The main goal of this paper was to assess the efficiency of the implementation of SFs by 53 OPs from 19 EU countries in the framework of TO 1, considering procedural and R&I potential capacity generation. The data employed are the most up-to-date and refer to the 2014–2020 programming period. These data are highly suitable for instantiating nonparametric DEA models. The key advantage of this sort of assessment methodology lies in the fine-grained information that it may provide to MA on the inefficiency of the OPs when compared with their benchmarks. In this sense, by identifying the benchmarks of inefficient OPs, the DEA methodology offers particularly relevant information on the best practices that these OPs should adopt to become efficient.
Even though DEA has undeniable advantages over other conventional methodologies (e.g., microeconomic studies with control groups and case study evaluation), there is still a dearth of academic interest in its usage in the context of SFs’ efficiency evaluation. As a result, one of the innovative aspects of our work is the use of the Network SBM-DEA model in conjunction with cluster analysis to evaluate the efficiency of the implementation of SFs allocated to foster R&I in enterprises, mainly SMEs. The Network SBM-DEA model makes it possible to identify the benchmarks (in terms of best practices) and the changes that the factors considered in the evaluation need to undergo to enhance the OPs’ implementation. Additionally, it encompasses a two-stage analysis: a procedural efficiency analysis and an evaluation of the potential R&I capacity generation of the programmes under scrutiny. This sort of evaluation is especially important while programmes are still in progress, since it allows MA to predict, during the monitoring stages, the influence that prospective changes in indicators may have on the efficiency scores of the OPs. Furthermore, this technique entails a sensitivity analysis of the computed results, enabling an understanding of which factors have the greatest influence on efficiency changes.
Unlike other approaches and methodologies employed during the ex-post or ex-ante assessments of cohesion policies, the DEA methodology enables the efficiency appraisal of OPs’ implementation throughout the programming period, making it possible to adopt the necessary policies to reach efficiency during the period underway. Additionally, DEA can provide actual support in the design of future cohesion policy instruments, because it allows an understanding of the main accomplishments and failures of the preceding programming phases and delivers valuable information on what can be done to overcome the identified failures.
In summary, the main conclusions with respect to our research questions are provided.
RQ1. “Does a higher level of procedural efficiency necessarily lead to a higher R&I potential capacity generation?”
- (i)
OPs show higher levels of TGR for procedural efficiency than R&I potential capacity generation, particularly for less developed regions and regions of transition.
- (ii)
Our findings seem to corroborate the ‘European paradox,’ since there is an ‘innovation gap’ to the extent that fostering innovation inputs through public funding does not necessarily lead to innovation outputs.
RQ2. “Which performance framework indicators preclude the efficient application of funds committed to boosting R&I in EU SMEs?”
- (i)
Overall, the two main factors that preclude R&I OPs from becoming efficient are the number of researchers working in improved R&I infrastructures and the number of enterprises working with R&I institutions.
- (ii)
These results highlight the need to tackle the problem of a lack of skills, a factor that has been often identified as one of the biggest hurdles to innovation. MA should also strengthen the support of partnerships between SMEs and R&I institutions, particularly in less developed regions.
RQ3. “Which performance framework indicators showed higher resilience to efficiency classification in face of their potential shifts”?
From the results of the sensitivity analysis, it can be established that:
- (i)
The “number of firms that are supported for new-to-the-market products” has the least influence on efficiency.
- (ii)
More steps should be taken to enhance collaboration and networking between firms and research organizations, because this indicator has the highest influence on efficiency.
RQ4. “Which OPs were most frequently considered as a reference of best practices over the programming period under scrutiny?”
“Aragón” (22), “Brussels Capital Region” (19), “Cohesion Policy Funding—EE” (19), and “Competitiveness Entrepreneurship and Innovation—GR” (18) were the four OPs most frequently considered as a reference for best practices. Two of them belong to more developed regions, one to a less developed region, and one to a region of transition.
RQ5. “Which type of regions showed a higher performance in supporting R&I in SMEs?”
- (i)
In general, more developed regions exhibit greater room for improvement (85%), while transition and less developed regions show smaller improvement potentials of 67% and 65%, respectively. Therefore, our findings show that successful policies should reflect not only the SMEs’ R&I actions towards enhancing their innovation capacity, but also the region in which they are located.