Article

Making Use of Evaluations to Support a Transition towards a More Sustainable Energy System and Society—An Assessment of Current and Potential Use among Swedish State Agencies

The International Institute for Industrial Environmental Economics (IIIEE), Lund University, P.O. Box 196, 22100 Lund, Sweden
Sustainability 2020, 12(19), 8241; https://doi.org/10.3390/su12198241
Submission received: 20 August 2020 / Revised: 28 September 2020 / Accepted: 3 October 2020 / Published: 7 October 2020

Abstract

Evaluations hold the potential to support decision-making so that current global challenges related to climate and energy can be addressed; however, as these challenges become increasingly large and complex, new and transformative evaluation approaches are called for. Such transformative evaluation in turn builds on an extended and more deliberate use of evaluations. This study focuses on the current evaluation use practices among Swedish state agencies that commission and/or conduct evaluations within climate and energy-related areas. Building on focus group sessions with four agencies and a structured interview questionnaire answered by representatives of five state agencies, the results shed light on how informants perceive the current practices of using evaluations, following the models of use presented in the evaluation literature. The results show perceived use as mainly instrumental or conceptual, along with an overall emphasis on models of use that are deemed constructive for moving towards transformative evaluations. The results also outline key benefits and challenges related to the adoption of a transformative evaluation approach: benefits include a more structured planning and use of evaluations, while challenges relate to institutional barriers and mandates for coordinating evaluations on a transformative scale.

1. Introduction

The projections and trajectories of the current development process show that advancements towards addressing global climate challenges are too slow [1,2,3]. Thus, calls are becoming urgent for changes at larger scales [3] and for sustainability transitions that shift entire socio-technical configurations [4] towards a society that is built on more sustainable practices [5,6,7]. In order to achieve such sustainability transitions, deliberate and well-designed research initiatives and policy instruments are needed [1,2,8].
This paper focuses on facilitating a transition towards a more sustainable energy system and society. To ensure that time and effort are indeed invested in policy instruments that will support such a transition, there is a need to equip decision-makers with reliable and informed knowledge about how different instruments perform. Evaluations of both research efforts and policy instruments are abundant: ex-post evaluation of past experiences as well as prospective ex-ante assessments are integrated into the seventh EU Environment Action Program and in the EU’s Better Regulation Agenda [9], and previous studies have shown that multiple evaluations are undertaken in the fields of climate and energy policy, as well as in other sectors [10,11,12,13,14].
Thus, it is key to ensure that evaluations of research and policy instruments are used properly. The use of evaluations is a frequently discussed and studied area among evaluation scholars [15,16,17,18,19,20,21]. Although the theory is rich in describing reasons for impeded use and how to address these [16,18,21,22,23,24], there are still concerns among evaluation scholars that evaluation findings are misused or are not used to their full potential [19,25,26]. The purposes for evaluating are closely related to the different types of uses [21], which have been conceptualized as models of use. These models represent a vast range of uses that take place at various times of an implementation: spanning, for example, instrumental improvement, tactical or political use, enlightenment and process-related learning [27,28,29,30].
Previous studies on evaluation use focus on the drivers and requirements for an enhanced use of evaluation findings [10,31,32,33], but attempts have also been made at assessing which models of use are prevalent in practice [17]. The results of these studies show that many models of (perceived) use can exist in parallel [10,17], and that a high-quality and clear evaluation is a prerequisite for usefulness [10], while not being a guarantee for actual use [17,31]. Different models of use are noted in these studies, including instrumental, conceptual, legitimizing and ritual models [17,31], but it is also noted that the context in which an evaluation is performed matters for its use [10,33].
However, evaluation use studies tend to focus on the use of individual policy evaluations and not on the use of evaluations to acquire systemic insights about how evaluands support systemic change and transition. Likewise, evaluations often omit a deeper analysis of the interplay between instruments [34], and as a consequence, the usefulness of evaluations of individual policies has been questioned, as the effects of one single instrument may be difficult to discern from the impacts of the policy instrument mix it is embedded in [35]. Thus, a more holistic approach to evaluating—and to using evaluations—has been called for [34,36]. Neij et al. [37] propose that such a transformative evaluation approach needs to build on insights drawn from various evaluations to deliberately synthesize a broad, holistic, transformative perspective of how different research initiatives and policy instruments together build momentum towards a sustainability transition. A framework that accounts for the key aspects to consider when performing such transformative evaluations includes drivers for a transition, such as visioning, experimentation and learning, and accounts for a systems and multi-actor perspective, as well as considering reflexivity and the construction of value judgements [37,38]. How such evaluations can actually be facilitated and integrated into current evaluation commissioning and use practices is, however, largely unknown, particularly in the context of facilitating a transition towards a more efficient energy system and society.
The objective of this study is thus to provide an exploratory assessment of how evaluations of research and policy instruments aimed at climate and energy-related issues are used currently, how enhanced use is supported today, or can be supported, and which benefits and challenges that stakeholders perceive as resulting from the adoption of a transformative evaluation approach. For this, the case of Sweden is used, focusing on the readiness among Swedish state agencies to adopt transformative evaluation and integrate it into their current evaluation practices, and how the current reported modes of using evaluations fare in relation to the outlined need for supporting a transition.
The study is conducted by engaging four Swedish state agencies in focus groups and is complemented by a structured interview questionnaire sent out to seven Swedish state agencies. The selection of agencies is based on their working with the commissioning and/or evaluation of instruments related to climate and energy. The theoretical framework that guides the study and analysis builds on evaluation theory, particularly concerning different models of evaluation use proposed by evaluation scholars, in order to categorize different kinds of use and means to promote enhanced use.
This paper is organized as follows: the case used for the study—i.e., the proposed transformative evaluation approach—is presented in Section 1.1. This is followed by a description of the theoretical framework of the study, which is founded in evaluation theory, mainly focusing on the use of evaluations and different models of use, in Section 2. Section 3 presents the material and methods, comprising focus group sessions with four Swedish state agencies and a structured interview questionnaire answered by representatives from five Swedish agencies. Results from the focus group sessions and questionnaire are presented in Section 4, followed by a discussion in Section 5 and some concluding remarks in Section 6.

1.1. Case Description—Transformative Evaluation

In order to address the need for a more holistic approach to using evaluations to generate insights about contributions on a transformative scale, a broader transformative evaluation approach was proposed by Neij et al. [37]. In general, there is no generic method to apply in an evaluation to secure its reliability or utility. Instead, different methods are called for to address different aspects and purposes and should thus be selected in order to appropriately address the needs that the evaluation is intended to meet [24]. Next, syntheses of evaluations, or meta-analyses, are promoted by evaluation scholars in order to provide broader evaluation results that span beyond the individual evaluand. Such syntheses are not promoted only for the sake of broadening the application of findings but also to increase the credibility of the findings to users [27,39]. Then, to address environmental challenges in particular, there is not only a need for strong knowledge production and evaluation practices but also for collaboration between knowledge producers and policy makers [40]. Together, these insights support the adoption of a transformative evaluation approach that builds on a variety of methods, that promotes the deliberate synthesizing of findings to support the credibility and utility of the findings among users and that has an articulated purpose of supporting a transition towards a more sustainable energy system and society.
The transformative evaluation approach was developed in an interdisciplinary research project between 2016 and 2019, intended to describe how evaluations of different individual research and policy programs and initiatives can be thoughtfully planned and conducted to become part of a more holistic perspective of transformative change. The proposed approach for transformative evaluation, hereafter called “transformative evaluation”, targets both research initiatives and policy instruments. It promotes making individual evaluations build on key insights from different but relevant disciplines, addressing different areas of focus at different scales. Together, when properly coordinated and synthesized, these evaluations can provide an overview of how different instruments support change at a larger scale [37].
A cornerstone of the transformative evaluation approach is that it should be an extension of current evaluation practices, which gears evaluations towards a more deliberate integration of relevant viewpoints and concepts from evaluation theory and transition research, for example. Another cornerstone is that the approach should build on an extended use and synthesis of evaluation findings that go beyond the evaluand itself. Thus, the applicability of the proposed approach depends on current practices for conducting and using evaluations. In order to shed light on such current practices, as well as the overall attitudes towards transformative evaluation, this study focuses on Swedish state agencies that work with commissioning and/or conducting evaluations within areas related to climate and energy to assess their readiness to adopt a transformative evaluation approach.

2. Theoretical Framework

The point of departure for this study is that, to succeed in piecing together a more holistic picture of how research and policy instruments support a transition towards a more sustainable energy system and society, the use of their evaluations needs to be facilitated and extended. This requires that evaluations are conducted in a way that allows them to be accepted as carriers of information that can support a transition, and it requires an extended and clear strategy for securing the actual and deliberate use of evaluations beyond the program level—a key part of transformative evaluation (see Section 1.1). As such, this contrasts with the more traditional perception of the use of evaluations as improving a program’s effectiveness and implementation [18]. Instead, evaluations are proposed to make a broader promise to users: to be applicable beyond the evaluated program in terms of providing pieces of a more holistic perspective. In order to address these issues, this study draws upon the evaluation literature, mainly surrounding the use and utility of evaluations to understand why use is impeded and how it can be supported, and surrounding the conceptualization of different models of use.

2.1. Use of Evaluations

The use patterns and identification of intended users depend on the purposes of the evaluation [18,27], which can be diverse, multiple, interrelated, parallel or even simultaneous. In general, two main domains of use are outlined—process use and findings use—with the former referring to use or actions taken prior to the finalization of the evaluation report and the latter concerning the use of actual evaluation findings [41].
The multitude of works within evaluation theory that refer to use bears witness to the fact that the actual use of evaluations may be substandard [17,18,22,23,32]. In order to support a broader and more active use of evaluations—a cornerstone of transformative evaluation—it is essential to understand why use is impeded. Evaluation theory scholars have identified various causes for this, including political resistance, where findings do not align with actors’ agendas [22,27]; scepticism or cynicism towards findings when users, for example, do not see the value or benefits of an evaluation; fear of judgement; a lack of resources to act on the evaluation; organizational cultures of non-use [18]; and other organizational tensions such as a lack of coordination, uneven power balances and competing interests between different actors [42]. Other barriers may stem from the friction between institutions and systems, which may occur when an evaluation topic crosses over different areas, subjects and agencies. Further systemic barriers include how the acceptability of evaluation findings is affected by user preferences in terms of methodology and modes of communication and knowledge when spanning different systems and actors [22]. Thus, the use of evaluation findings hinges on the users, who form a heterogeneous group with different interests and agendas [18,22,29]. Circumstances of non-use can, however, be deemed appropriate, for example, when an evaluation is not delivered on time or is poorly executed [18,26].
What are then the countermeasures for non-use? Two main areas for addressing non-use concern (I) the intended use of the evaluation and (II) the conduct and processes of the evaluation. Starting with the former, scholars have emphasized the need for a clear and realistic expectation of the intended use [18,27,28]. Next, it is paramount that users are defined and are involved in the evaluation process to support the intended use. Thus, receivers of the evaluation need to be integrated from the start in the determination of the questions of inquiry to make the evaluation relevant to them and to increase their sense of ownership and investment in the results [18,28,29,32]. The evaluation process should also be shaped according to input from users and key stakeholders, and should cater for the needs of users by being timely and performed at different stages of an implementation [23,28,29].
To connect this with transformative evaluation, the purpose and use of an evaluation may be twofold: in part, the purpose of evaluation is to gain knowledge that can improve or guide the implementation, but importantly, the purpose should also be to assist in the assessment of whether and how the implementation is supporting a transition. This calls for a framing of transition in terms of the purpose of the evaluation but also a consideration of who the users of such knowledge are; this may be both decision-makers on a state level and on program levels.
The second key area for addressing non-use focuses on the actual conduct of the evaluation and the reports that are delivered to users. Apart from the prerequisites of a good evaluation, such as being delivered on time, clarity and accessibility to users [10,22,28], evaluations also need to be performed using sound and robust methodologies [22,27], and they should seek to consider the political context and the decision-making context in which the evaluation is intended to be used [10,16,24,27,33,43]. Relatedly, an evaluation also gains credibility by being performed by a broad but deliberate evaluation team, whose methods and competencies gain trust among users, and through clearly communicated methods and competencies [22,23].
Lastly, a key for increasing the use of an evaluation is to carefully consider how it is disseminated. Audiences need to be aware of the existence of an evaluation report. This can be done by securing a dialogue between the evaluator and users, which should be maintained over the duration of the evaluation process, including the use phase [18,22,27,28,29].

2.2. Models of Use

As has been outlined in the previous section, there are numerous ways to use evaluations and numerous approaches to increase their use and utility. Table 1 presents an overview of 11 different models of use of evaluations—a conceptualization that builds on the work of scholars in the evaluation theory field. While this list may not be exhaustive, it provides an overview of the main approaches to the use of evaluations, which is essential to understand how the purpose of an evaluation is translated into some kind of desired action or effect, or how evaluations are received and acted upon by users.

3. Materials and Methods

The data collection for this study was conducted following two steps: the arranging of focus groups with representatives from selected state agencies, and a structured interview questionnaire sent out to state agencies to be filled in anonymously. As such, the focus groups allowed an initial scoping of general perceptions among agencies about taking on a transformative evaluation approach (as described in Section 1.1), while the structured and anonymous interview questionnaire allowed a deeper probing about current evaluation use practices and the possibilities and challenges that may come with moving current evaluation practices towards transformative evaluation.

3.1. Focus Groups

Using a focus group as a method for data collection allows an engaging and dynamic way of acquiring insights about participants’ opinions and experiences about a topic, allowing various and conflicting viewpoints to be compared, discussed and considered [44,45]. Apart from allowing a more inclusive and diverse discussion among participants compared to individual interviews [45], focus groups were also deemed appropriate for this initial stage of the research, in that interactive discussions about evaluation practices among colleagues of a department generated interest within departments to join in.
Four focus groups were arranged with Swedish state agencies in April and May 2019. The agencies were selected based on their working with evaluation by commissioning and/or conducting evaluations of research initiatives and/or policy instruments within energy and climate-related areas, with overarching mandates and responsibilities to support a more sustainable energy system and society. Requests to visit were made to five agencies, of which four agreed to partake in a focus group session. These were the Swedish Energy Agency, the Swedish Environment Protection Agency, Vinnova and the Swedish Agency for Growth Policy Analysis (the Swedish Energy Agency (Energimyndigheten) oversees the work towards an energy transition for sustainability; the Swedish Environment Protection Agency (Naturvårdsverket) is responsible for climate-related issues and goals; Vinnova is responsible for innovation for sustainable growth; and the Swedish Agency for Growth Policy Analysis (Tillväxtanalys) analyses and evaluates political effects on growth). The attendees in the focus groups were invited by the agencies, based on the description that the session targeted employees working with tasks related to evaluation. The focus group sessions were carried out as workshops, held individually at each agency, which typically lasted for two hours; the number of participants ranged from three to approximately 15 (some participants were unable to join for the full duration of the workshop due to other duties, hence the approximate number of attendees). When the groups were large, they were split into sub-groups of up to eight participants. The aim of these workshops was to present the proposed evaluation approach and to gain an understanding of how different agencies perceived the possibilities and challenges of applying such an evaluation approach.
The format of the workshops was the same for each: the researchers started by presenting the proposed evaluation approach, then discussions followed related to the application of such an evaluation approach in their respective departments. At the last phase of the workshops, researchers particularly asked for the participants’ perceptions of the benefits and challenges that they foresaw when applying such an evaluation approach in their departments. Notes were taken by two researchers, which were compiled and organized into sub-categories according to content.

3.2. Structured Interview Questionnaire

In order to address the limitations of focus groups, particularly in relation to participants’ adjustments to any perceived expectations of views at the workplace or any self-censorship due to social relationships [44], the focus group sessions were complemented with a structured interview questionnaire that was sent out to seven state agencies, four of which were represented in the focus groups. The interview questionnaire was filled in online individually by informants. The purpose of the structured interview questionnaire was thus twofold: to triangulate (i.e., crosscheck results using various methods [46]) and validate the initial scoping of perceptions among agencies towards transformative evaluation, and to acquire deeper insights about the current evaluation practices within state agencies, based on individual and anonymous responses uninfluenced by peer expectations.
Sending out the questionnaire online was deemed a convenient means to reach the informants on their terms, giving them the flexibility to fill in the questionnaire whenever it suited them. Moreover, the interview guide contained multiple blocks of similar questions and multiple statements to rate, and as such it was deemed to perform better in a self-administered format [44,47]. The structured interview questionnaire was primarily founded on the theoretical framework (as presented in Section 2) but also built on the categories of benefits and challenges drawn from the initial scoping of perceptions in the focus group sessions. The questionnaire was originally in Swedish (see Appendix A for an English translation).
The selection of state agencies to receive the interview questionnaire was, like the focus groups, delimited to informants at state agencies working with evaluation, either by commissioning and/or conducting evaluations of research initiatives and/or policy instruments within energy and climate-related areas. Thus, the selection of potential informants was rather narrowly delimited from the beginning, which was deemed necessary to secure relevant responses. Requests to participate were made via e-mail to seven representatives at seven different agencies, who were asked to disseminate the request to relevant colleagues at relevant departments. The selection of informants at each agency was made exclusively within each department in order to secure anonymity among the informants as far as possible. The questionnaire was open between 18 October and 15 November 2019 and received 11 responses in total from five different agencies: the Swedish Energy Agency, the Swedish Environment Protection Agency, Vinnova, the Swedish Agency for Growth Policy Analysis, and the Swedish Agency for Economic and Regional Growth (the Swedish Agency for Economic and Regional Growth (Tillväxtverket) supports regions and businesses in sustainable growth related to climate and other areas).

4. Results

The results from this exploratory study are presented in two main categories: results concerning the current use of evaluations and the measures taken that may facilitate further use (Section 4.1), and the benefits and challenges raised by adopting and integrating a broader evaluation approach with a transformative focus at the Swedish agencies represented in this study (Section 4.2).
Given the exploratory nature of this study, and given that no inferences can be made regarding the reported answers, the results are presented as indications of the current evaluation practices among five Swedish agencies. While the results do provide valuable insights on evaluation use processes and practices, no claims are made to generalize beyond what the study at hand entails.

4.1. Design, Conduct and Use of Evaluations and Facilitating Measures for Use

4.1.1. Design and Conduct

Evaluation theory provides some key measures for facilitating and supporting the use of an evaluation (see Section 2). One key strategy is to ensure robust evaluation approaches and methods, which build on involving relevant actors who can provide relevant knowledge for conducting the evaluation. In order to assess the Swedish evaluation practices in this regard, the questionnaire sent to the agencies asked informants which actors were commonly involved in the design of the evaluation; which actors conducted the evaluation; and which particular areas of knowledge were sought when commissioning actors to engage in the conduct of an evaluation.
The results concerning involvement in the design of an evaluation are illustrated in Figure 1 (the graph to the left) and show a clear emphasis on placing responsibility for the design of the evaluation on the staff at the agencies themselves: 10/11 reported that this was always the case. Expertise from external consultancies was reported to be used occasionally, with 6/11 reporting that this was sometimes or often the case. Actors from other state agencies or from academia were less frequently engaged in the design phase: both categories received the lowest total score among the informants. Open-ended reflections from informants provided additional clarifications, with one mentioning that expertise from their own agency was often selected to represent different administrative units of the agency, including both specialist knowledge on the implementation and those with evaluation expertise. Another informant mentioned that organizations with an interest in the implementation also occasionally partook in the design of certain evaluations. Thus, it seems that while agencies that commission evaluations take a large responsibility in their design, other actors may be invited to partake in the design process as well. The involvement of key stakeholders in an evaluation is outlined as crucial for increasing its credibility and utility to users [23,28,29], and while there is seemingly an emphasis among the informants on in-house participation in the design of evaluations, this may mirror the actual involvement of intended users, as will be discussed later. Actor involvement in the design of evaluations nevertheless warrants consideration in transformative evaluation, since users may then be found outside of the agency or the immediate program boundary.
Next, informants were asked about which actors were commissioned to perform evaluations (Figure 1, the graph to the right). In this regard, the informants reported the main tendencies to be both hiring consultants and conducting the evaluation in-house at the agency, while commissioning researchers from academia was less frequently done. As may be suspected, different evaluations call for different kinds of expertise, and one informant clarified that certain types of evaluations were mainly conducted at the agency (e.g., enquiries and official reports), whereas evaluations of implementations were commissioned to external consultants. The expertise most called for when commissioning an evaluation from external actors was reported to be evaluation expertise, which 10 informants (the 11th refrained from answering) assigned as always or often being the case, followed by expertise in economics, which was reported to be the case often or sometimes. The least sought-after expertise, as reported by informants, was modeling expertise. Two informants added that specialist knowledge for the topic at hand was also sought after, including an understanding of how innovation systems and policy work. On the topic of which kind of expertise is commissioned, there are differences to be noted between different agencies. For example, informants working at the same state agency consistently reported that certain kinds of expertise were seldom commissioned, while the same kinds of expertise were reported as sought-after by informants at other agencies. This can be explained by the fact that agencies have different core areas and consequently host different key competencies, or possibly that they (traditionally) approach the evaluation of implementations with certain predominant methods and kinds of expertise.
Putting these results in the context of evaluation theory, it is apparent that the contextual dimension is of importance when it comes to selecting the methods, key people and kinds of expertise to be involved. The implications for increasing use may thus be inherent either to the agency’s general practices for commissioning and conducting evaluations—e.g., related to certain traditions or key competencies that decree how an evaluation is designed and conducted—or to political or bureaucratic conditions, such as political interests or procurement/master agreements.
The next key aspect for increasing use, as outlined by evaluation scholars, is the communication of evaluation results. According to the evaluation literature, mechanisms for a successful dissemination and uptake of evaluation results include presenting results in formats and channels that are tailored for the specific target users and leveraging public channels that can increase the accessibility and awareness among broader audiences [24,27]. The questionnaire posed open-ended questions regarding who informants considered to be the most important users of the evaluations commissioned or conducted at the agency, and to whom evaluation results were communicated. The informants indicated a range of users, commonly including the agencies themselves, but also state departments such as the Swedish Government Offices and the Ministry of Enterprise, Energy and Communications, as well as actors at regional levels. Regarding the channels used for the dissemination of evaluation results, the informants again indicated a variety of approaches. Among the different approaches to dissemination suggested in the questionnaire (Appendix A), none received a clear answer of “never” or “always”. Channels that received a score that suggested a more frequent practice included the publishing of evaluation reports on the agency’s webpage, an active communication of results to those involved in the evaluation (e.g., interviewees) and the presentation of the evaluation at meetings where those affected by the evaluation were invited to take part physically or digitally. Approaches to communication that informants reported as being less frequently applied mainly included internal dissemination at the agency only or dissemination to local or regional agencies. Nevertheless, informants also elaborated on the communication of evaluation results, where some clarified that the means of dissemination and the selection of receivers depended on the purpose of the evaluation, again linking back to the context. Additional measures of communication that were raised by informants included making the evaluation results accessible on online web-portals related to the implementation, and using films published online. Relating back to evaluation theory, the reported efforts from the informants showcased attempts to promote evaluation results using different outlets, providing avenues for an increased access and use of the evaluation results.

4.1.2. Models of Use

Turning to the use of evaluations, the questionnaire sought to gain insights regarding the current perceptions of the intended use of evaluations. Informants were asked to rate statements referring to the 11 different models of use outlined in Section 2.2 on a scale from 1 to 5, according to how well they were perceived to apply to the informant’s agency, where 1 was “never” and 5 was “always” (see Appendix A for a full list of statements as presented in the questionnaire). The results from the questionnaire are shown in Figure 2, where each model of use is presented as the aggregate ratings from all informants of the three statements underpinning the model (see Appendix B for a list of statements representing each model of use). In cases in which a statement was a reverse positive, the rating has been inverted. The number of responses per model of use could be up to 30 (10 informants answered these questions), with the actual number ranging from 27 to 30 per model of use. By using a Likert scale with a neutral midpoint, a division could be discerned between models of use that were rated predominantly on the “never” side of the spectrum (1–3) and those predominantly found on the “always” side (3–5).
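As an illustration of the scoring procedure described above, the following minimal sketch (with hypothetical placeholder data; the study’s own analysis scripts are not published, and all names and values below are invented for illustration) shows how ratings of the three statements underpinning a model of use can be pooled, inverting reverse-positive statements before summation:

```python
# Minimal sketch of the aggregation described above, using hypothetical data.
# Each model of use is backed by three statements rated on a 1-5 Likert scale;
# reverse-positive statements are inverted (rating -> 6 - rating) before
# ratings are summed across informants.

LIKERT_MAX = 5

# Hypothetical pooled responses: model -> list of (rating, is_reverse_positive),
# covering all informants and the three statements per model.
responses = {
    "instrumental": [(5, False), (4, False), (2, True)],
    "ritual": [(2, False), (1, False), (4, True)],
}

def aggregate_score(ratings):
    """Sum ratings for one model of use, inverting reverse-positive items."""
    return sum(
        (LIKERT_MAX + 1 - rating) if is_reverse else rating
        for rating, is_reverse in ratings
    )

for model, ratings in responses.items():
    # In the study, the number of responses per model ranged from 27 to 30.
    print(f"{model}: aggregate = {aggregate_score(ratings)} over {len(ratings)} ratings")
```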
First, the results suggest that statements referring to all models of use were rated as high by some informants, and thus that the informants did not altogether denounce any model of use. Nevertheless, the results also suggest that some models were recognized more frequently than others overall. The models of use that informants rated as least conforming with their perception of the evaluation practices at their agencies were mainly ritual and mobilizing use, followed by tactical use and overuse (Figure 2). Some of these models are deemed by scholars to be inappropriate or destructive [18,29], and considered from the perspective of supporting a transition, they are not deemed very helpful in building a constructive and transparent knowledge base: either because they promote certain (covert) agendas (e.g., mobilizing or tactical use), which would require significant transparency if synthesized into a holistic perspective; because they lack an active concern for evaluation results (e.g., ritual use); or because the findings are not properly put into context or perspective (e.g., overuse). Thus, while some informants gave these statements high ratings, the majority of ratings were found on the “never” side of the spectrum, and as such, these models were deemed of less importance for current practices in terms of the use of evaluations to support a transition towards a more sustainable energy system and society.
The next category that could be discerned from the results included the models spanning the entire spectrum, balancing between high and low levels of agreement. These models included constitutive and legitimizing use. In order to further analyze the wide reported span of these models, the underpinning statements were assessed individually. For legitimizing use, the statement “give trust and support for decisions that concern a program” received a high total score (41) and thus largely represented the right side of the spectrum, whereas the other statements—“contribute to showing that we are doing things correctly” and “confirm what we know about a research program or policy instrument”—conversely received lower scores (27 and 22, respectively). For constitutive use, the statements were more evenly rated, with the lowest being “realize effects as early as possible, before the evaluation in itself is done” (23) and the highest being “spur improvements in a program by communicating that an evaluation is to be done to those whom the evaluation concerns” (36). Contrasting these models of use from the perspective of supporting a transition, they can be argued to be of more or less importance. Constitutive use—spurring action and willingness to perform well by informing evaluands that an evaluation is coming—can probably only generate minor effects on a transformative scale, at least compared to other, more targeted efforts for supporting a transition. Legitimizing use, on the other hand, can be powerful from a transformative perspective, both for supporting and for disfavoring such efforts. Vedung [29] argues that parties using legitimizing use to support their own (political) agenda cannot be faulted, and depending on the agenda for adopting legitimizing use, it can be both detrimental and constructive for a given cause. In terms of supporting a transition, such discussions, however, call for further insights concerning justice, equity and power relations, for example.
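To make the within-model breakdown above concrete, the short sketch below (hypothetical helper code, not taken from the study) computes per-statement totals and the span between them, using the legitimizing-use totals reported above:

```python
# Per-statement totals for legitimizing use, as reported above. A wide span
# between statement totals signals that a model's aggregate rating hides
# divergence between its underpinning statements.
legitimizing_totals = {
    "give trust and support for decisions that concern a program": 41,
    "contribute to showing that we are doing things correctly": 27,
    "confirm what we know about a research program or policy instrument": 22,
}

span = max(legitimizing_totals.values()) - min(legitimizing_totals.values())
print(f"Span between highest and lowest statement totals: {span}")  # prints 19
```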
The last category to be detailed includes those models of use that received ratings that suggested that they were perceived to be commonly adopted in the informants’ departments. In this category, statements related to instrumental use were rated as the most agreed with, along with statements for enlightenment. Nevertheless, there were informants that tended not to agree with the statements as well; for example, the statement saying that an evaluation is intended to “lead to immediate changes in the program that is being evaluated”, which five informants rated with a 2 or 3.
The last model of use to elaborate further on is unintended use. This model was illustrated through statements that placed emphasis on widening evaluation application areas beyond the evaluand itself, either for using the knowledge in discussions not immediately related to the evaluand or for highlighting new areas for further investigation, and the informants rated unintended use more towards the right side of the spectrum. The statement “spur further discussions about other programs” predominantly received ratings of 4 and 5. This model of use is of particular interest when seeking to support the use of evaluations for a transition, since it refers to making evaluations wider in their approach in order to also cater for effects outside of the program or to capitalize on learning that can be valuable to apply in other circumstances. Such use can support a transition in that evaluations are seen as carriers of knowledge and learning that are intended to be used beyond the sphere of the evaluand itself. That informants indicated unintended use as balancing between neutral and positive suggests an opening for applying transformative evaluation in Swedish agencies. The next section focuses on the perceptions of the benefits and challenges of conducting transformative evaluation.
Lastly, the questionnaire provided opportunities for informants to elaborate on the most important benefits and limitations of their current evaluation practices. These responses showed that some informants emphasized an independent evaluation practice, where the selection of methods, theories and any delimitations were to a large degree at the discretion of the agencies, and where this independence also included the freedom to arrive at any conclusion, however inconvenient it may be for the evaluand. Others emphasized that the evaluation practice was recurring, mainly conducted externally and that evaluations were planned already at the design phase of an implementation. Conversely, the limitations of current evaluation practices included access to data, a lack of resources and notably a lack of time, requiring a balance between the ambition to conduct an evaluation in a certain way and the given deadlines for delivering the final product. Others emphasized limitations in the commissions of evaluations, which called for the application of indicators that may not be relevant, or a lack of systematization in evaluation processes, e.g., concerning routines for addressing and following up measures suggested by an evaluation. Another limitation mentioned by one informant was the current emphasis on quantitative evaluations, which limits the ability to perform evaluations that create understanding and learning.

4.2. Benefits and Challenges of Transformative Evaluation

As described in Section 1.1, an approach for transformative evaluation has been proposed. In order to acquire insights about how such an approach may be adopted and integrated by Swedish state agencies, an initial scoping of benefits and challenges was conducted using focus groups in which four state agencies took part: the Swedish Energy Agency, the Swedish Environment Protection Agency, Vinnova and Growth Analysis. The input shared at these focus group sessions was grouped and categorized according to content.
The comprehensive findings from this initial scoping are presented in Table 2. Generally, the results show that representatives from the agencies articulated more challenges than benefits in terms of adopting transformative evaluation, but also that many of the challenges and benefits were related, as benefits tended to relate to the envisioned outcome of transformative evaluation, whereas the challenges were related to the processes of adopting and executing such an approach. Thus, the agencies that took part in the focus group sessions seemingly saw a value in the proposed evaluation approach from the perspective of what it may deliver in terms of useful evaluation processes and results, but also expressed concerns about practicalities related to realizing these benefits.
Building on this initial scoping, the questionnaire sent to the agencies asked informants to rate statements concerning the challenges and benefits related to adopting transformative evaluation. Since the number of informants was rather low, these results should be seen as indicative and should be regarded as complementing the focus group discussions regarding benefits and challenges. Nevertheless, the results from the questionnaire suggest that informants at all agencies represented believed that the current evaluation practices can be strengthened by collaborating with external actors, and that current practices can be revised and altered to cater for a transformative focus (see Appendix C for ratings of statements).
Open-ended questions concerning other benefits or challenges that were not captured in the abovementioned statements yielded some additional valuable comments. Regarding benefits, informants stated that the broadening of evaluation approaches was beneficial, and that such a broadening potentially could be done incrementally to ensure that different actors could adapt. Moreover, it was also emphasized that there should be a clearer and more deliberate focus on the use of evaluation, making it an integrated and interactive tool among different stakeholders in planning an intervention to strengthen its implementation and flexibility in a changing context.
As for the challenges, responses to a large degree corresponded to the challenges outlined in evaluation theory related to the use of evaluations. Organizational resistance, as described by Weiss [27], includes challenges related to transforming current practices and routines, the money and time needed to do so and the acquiring of new skills, as well as the protection of the organization. Chelimsky [22] highlights issues inherent to the mandates and abilities to act within or across administrative units or departments, for example. While these concerns focus on the use of (challenging) evaluation results, they are arguably translatable to the challenges outlined in this study for transforming the evaluation practices, as well as for the use of evaluation results for supporting a sustainability transition. Results from the questionnaire show that coordinating an evaluation between multiple stakeholders and agencies is challenging, particularly when there are conflicting goals between different departments and when findings require actions within another actor’s jurisdiction. One informant highlighted that there is a lack of political priorities and coherence between different implementations of both research and policy, which further complicates the adoption of transformative evaluation. Moreover, it was emphasized that transformative evaluation needs to be clearly anchored in the evaluation processes in order to prevent it from becoming reliant on certain key people and thus hinging on their remaining in the process. Other issues highlighted in evaluation theory include access to relevant and reliable data for conducting an evaluation [27,40], which was also voiced by informants, notably for performing evaluations with a transformative focus. It was specifically mentioned that it is rare that implementations are designed to be readily evaluated, which makes data collection difficult.
On a slightly different note, one informant mentioned that transformative evaluation should not be adopted “for the sake of it”; instead, the need for knowledge should guide the evaluation inquiry, rather than an enforced intention of placing focus on a certain area. This is clearly a valuable point, which the author argues should be accommodated in the proposed evaluation approach, in that it builds on a variety of different evaluations of different scopes and foci. Thus, the inquiry should guide the design of the evaluation, and the transformative focus can then be adopted to support deeper learning.
Lastly, the questionnaire sought to shed light on whether a clear transformative focus in evaluations was perceived as being able to support a more active use of evaluations. The answers were varied, with four answering “yes”, five answering “do not know” and one answering “no”. Clearly, the issue of increasing the use of evaluations is not as easily addressed as simply re-framing the utility of evaluations; however, when including the open-ended answers in which informants were asked about their thoughts on adopting transformative evaluation, five said that it would be interesting and positive. One informant elaborated, saying that it would indeed support a more active use and increased learning from evaluations. Others elaborated on the challenges, including competence development, and how such an approach may divert from the inquiries that need to be made in evaluations.

5. Discussion

Building on insights gained from focus group sessions and the structured interview questionnaire, a key indication is that informants at Swedish state agencies express both a concern that evaluations are not currently used to their full potential and—importantly—that there is an interest in discussing and increasing their use. Another key indication is that participants at the focus group sessions and in the questionnaire expressed a curiosity and general openness to adopting an evaluation approach that seeks to provide insights on transformative efforts, with some claiming that it can strengthen evaluation practices and use. While the number of informants was rather low, particularly for the structured interview questionnaire, the combination of insights from the focus group sessions and the questionnaire still allows the study to provide an indication about issues that often otherwise remain theoretical.

5.1. Potential of Different Models of Use

The analysis of which models of use were perceived as most agreed with shows models that seem quite supportive of a transformative evaluation approach, as they largely represent a constructive use in which knowledge is attained and used. These models include enlightening use, instrumental use, process use, interactive use and even unintended use, which in itself represents the conveyance of knowledge or insights from an evaluation beyond the evaluand: e.g., providing input to other evaluations from a transformative perspective. In theory, these models of use are arguably more supportive of a transformative evaluation approach than their counterparts of overuse or ritual use, which are less prone to promoting knowledge production and use for a transformative cause. Regardless of the model of use, the most important aspect is, however, that evaluations are used not only to improve a program—this kind of use is clearly also important—but that they are also used to synthesize results into a more holistic picture.
A word is warranted on instrumental use in particular, as it received one of the highest overall ratings among the 11 models in this study. While this model is noted by scholars as not very prominent in reality [24], due to its rather direct approach to use, which contrasts with the complexity of a real policy process [29,48], it has been shown to be applied in the European Commission, for example, in decision-making based on legislative evaluations, even under political opposition [31], or indirectly, as evaluation findings inform policy development ex-ante [49]. Thus, while the results of this study at first glance may seem idealistic and to showcase a mere vision of what the use of evaluation could be, they should be weighed against the fact that instrumental use is not altogether disregarded in practice.
On this note, it also needs to be acknowledged that it is difficult to fully assess whether informants rated statements in relation to their perception of the actual practice in their department or from the perspective of what the intended evaluation use is. Some individual statements underpinning the models of use may be argued to be difficult to disagree with from the perspective of what evaluation may be perceived to be in an ideal case. Thus, there may be a discrepancy in the reported uses, between the ideal case of what evaluation is envisioned to accomplish and the use in reality. Another factor contributing to this discrepancy can be rooted in the informant’s insights regarding the entire evaluation process in the department in question. For example, while some may conduct evaluations, they may not be responsible for making use of the results and may thus be uninformed of what comes after an evaluation is finalized.
Finally, while the 11 models of use applied in this study are treated as separate, it should also be noted that the lines between them may become vague, as the statements underpinning them become subjected to interpretation by the informants. Furthermore, the differences between the models of use are at times fine; for example, between legitimizing and mobilizing use, which both refer to use intended to serve a stated viewpoint but with slightly different intended reactions of generating acceptance or activation among actors. Thus, the models of use should perhaps not be solely viewed as independent entities, but rather as parts of a spectrum and at times linked and interlaced with each other.

5.2. Strengthening the Use of Evaluations

Turning to the means for strengthening the further use of evaluations in order to support a transition, the results from the questionnaire suggest that the modes of communicating the results, and the intended main users, depend on the purpose of conducting the evaluations. While this may seem fairly obvious, it does raise further questions regarding whether the purpose, the main users, and the channels used for communication are indeed prepared to support a transition. A previous study of current Swedish evaluation practices implied that there is rarely a stated objective within an evaluation to provide knowledge to support a transition [38], and the focus of evaluations is often geared towards a program level. Therefore, while the purpose and reported uses are still seemingly not yet fully aligned for conducting evaluations with a transformative focus, this study indicates a rather robust and varied approach to disseminating results. Communication is emphasized in evaluation theory as paramount for the successful use of evaluations, and provided that the purpose and intended use are focused towards supporting transitions, there is a large potential in capitalizing on current communication practices in terms of targeting various actors and using tailored communication formats (reports, seminars, films, etc.) and channels (interpersonal, physical and digital).
Lastly, the Swedish state agencies that took part in this study—both in the focus group sessions and in the questionnaire—expressed both excitement and concern regarding adopting transformative evaluation. On the one hand, it was acknowledged that evaluations today are not always used to their full potential, and taking a transformative approach is perceived as bringing many benefits, both for enhanced use and for supporting a more holistic approach. Based on the insights gained in this study, it seems that Swedish agencies as a minimum perceive evaluations to be used in constructive ways, supporting the adoption of transformative evaluation. The incorporation of actors outside of an agency is not uncommon and could likely become more deliberate and possibly even extended if the agencies were to adopt a transformative evaluation approach that builds on cooperation across departmental boundaries. Subsequently, when the focus of the evaluation moves towards transformative contributions, the intended users will also move from a mainly program-oriented level to actors in other parts of the system.
On the other hand, concerns included the costs in terms of the time and resources required to adopt such an approach and the difficulties that may come with crossing institutional boundaries and mandates between various state departments. Another concern is that any evaluation approach taken needs to be guided by what kind of knowledge is sought, and to impose a certain approach would risk a skewed evaluation. These are all valuable concerns, which call for future research, notably in terms of testing how transformative evaluation can be applied in a real evaluation setting in order to ensure that it does not contribute to a misleading outcome.

5.3. The Role of Framing Evaluations for a Transition

Another issue to consider for further research is the role of framing in evaluations. Evaluation theory states that users need to be involved in evaluations to enhance their use; however, to support a transition, users must also see the value in evaluations supporting such transitions. Thus, it can be argued that framing theory may provide key insights for ensuring a more extensive use of evaluations, in that evaluations are presented and accepted as carriers of information that can support a transition. On the one hand, framing theory may support the promotion of a framing that places evaluation findings and their synthesis in a transition context, emphasizing the importance of this knowledge-production method for aligning and strengthening transformative efforts. On the other hand, it can also easily be argued that evaluations themselves are subjected to, and conducted under, the influence of frames, which lead the design and the focus towards certain endpoints—another reason for further uncovering the role of frames in evaluations.
There may be a potential in framing evaluations as carriers of information that can support a transition, not for the sake of misleading or imposing decisions but for shifting focus away from the program level and towards harnessing the potential of evaluations. Such a framing needs to be communicative, meaning that the commissioning, designing, conducting and communication of an evaluation should uphold this frame in order to promote cognitive frames used by receivers to address and act on evaluations [50,51,52]. It is important to note, however, that frames are crafted by both including and excluding information and viewpoints, which may affect how receivers respond to them [52]. Thus, frames that seek to support evaluations’ transformative potential need to be carefully crafted, to be transparent and to be made accessible to users. Transformative evaluation advocates an ongoing synthesis and alignment of multiple evaluations, which balances findings guided by different perspectives (and frames) and thus represents a more versatile and holistic depiction of the effects that research initiatives and policy instruments provide in terms of realizing a transition.

6. Conclusions

This study shows that the current practices among the represented Swedish state agencies for using evaluation findings are varied but, importantly, that the reported modes of use are generally constructive and arguably supportive of learning and use on a transformative scale. The reported processes for designing and conducting evaluations show collaboration between actors in both the public and private sectors, which can support a wider approach to evaluation. The informants recognize value in moving towards transformative evaluation approaches, related, for example, to collaboration and increased evaluation use. Challenges outlined in relation to adopting a transformative evaluation approach include overcoming institutional boundaries to coordinate and collaborate on evaluations, and balancing the need for different evaluation scales and foci. To capitalize on the potential for increasing the use and usefulness of evaluations, the time seems ripe to deliberately address how to improve the communication of evaluations, how to support cross-agency coordination and the articulation of intended goals, and how to develop extended collaboration with the relevant actors and stakeholders of a transition towards a more sustainable energy system and society.

Funding

This work was funded by the International Institute for Industrial Environmental Economics (IIIEE) at Lund University. This work builds on previous work conducted in the Transition Governance project (2016-2019) supported by the Swedish Energy Agency [grant number 39938-1]. The APC was funded by Lund University.

Acknowledgments

The author is grateful for the valuable feedback and support provided by Lena Neij in terms of the conduct of this study and for the input provided by Maria Johansson on the design of the structured interview questionnaire. The author is also thankful to Evert Vedung for reading the manuscript and for providing input and inspiring discussion.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

A structured interview questionnaire was sent to seven Swedish agencies that work with the evaluation of research initiatives and policy instruments in energy- and/or climate-related fields. The original questionnaire was sent out in Swedish; an English translation is provided below.
Part 1: Background
1. At which state agency are you working?
- The Swedish Energy Agency
- The Swedish Environment Protection Agency
- Growth Analysis
- Vinnova
- Formas
- National Board of Housing, Building and Planning
- The Swedish Agency for Economic and Regional Growth
- Other (open answer)
2. What are your main tasks concerning evaluation?
- I work strategically with issues related to evaluation
- I commission evaluations
- I conduct evaluations
- Other (open answer)
3. What is your experience in working with evaluation-related tasks?
Rate on a scale from 0 to 10.
Part 2: Evaluation Today
4. Who takes part in the design of the evaluations that you either commission or conduct at your agency? (Select one of the following: never, sometimes, often, always)
- Staff at our agency
- Staff at other agencies
- External consultants
- Researchers at universities
- Other (open answer)
5. Which actors do you hire when you commission an evaluation? (Select one of the following: never, sometimes, often, always)
- We hire consultants
- We hire researchers at universities
- We hire expertise at our own agency
- We hire others (open answer)
6. What specific knowledge do you seek when you commission an evaluation, or look for expertise, from an external actor? (Select one of the following: never, sometimes, often, always)
- Evaluation expertise
- Economics
- Organizational theory
- Behavioral expertise
- Modeling expertise
- We have the expertise needed within our own agency
- Other (open answer)
7. Who do you believe is/are the most important user(s) of the evaluations you commission or conduct at your agency?
Open answer
8. To whom are the evaluation results communicated?
Open answer
9. How are the evaluation results communicated? (Select one of the following: never, sometimes, often, always)
- The report is only distributed internally at our agency
- The report is published on our agency’s webpage
- The report is sent to other state agencies
- The report is sent to municipal or regional agencies
- The report is sent to those taking part in the evaluations, e.g., through interviews
- We invite parties to an open seminar to discuss the evaluation
- We broadcast seminars online where the evaluation is discussed
- We arrange physical or digital meetings with those affected by the evaluation
- Other (open answer)
10. The following questions refer to the purpose and the intended use of the evaluations that you commission or conduct at your agency. The questions follow the same format: you are asked to rate on a scale from 1 to 5 how well the following statements apply to your department, where 1 means “never” and 5 means “always”.
Evaluations of research initiatives or policy instruments that we either commission or conduct at my department are intended to…
Lead to immediate changes in the program that is being evaluated
Provide general knowledge about how a certain type of program works
Give trust and support for decisions that concern a program
Show that “something is being done”
Be used to create support among others for the evaluated program
Be used as a knowledge base for issues that are outside of the evaluated program
Evaluations of research initiatives or policy instruments that we either commission or conduct at my department are intended to…
Not put emphasis on describing why the results show what they show
Lead to important insights during the evaluation process
Prevent hasty decisions, by allowing the evaluation process to take time
Buy additional time in a decision-making process
Be performed because it is expected that an evaluation is performed
Contribute to knowledge development that creates immediate measures for improvement in the evaluated program
Evaluations of research initiatives or policy instruments that we either commission or conduct at my department are intended to…
Be broad and robust enough to be the only basis for decisions regarding the evaluated program
Be conducted to show that the program at hand has been followed-up on
Influence actors to consider what needs to be done to meet the expectations of the planned evaluation
Increase the understanding of how a program should be implemented, rather than lead to concrete measures in the program
Spur improvements in a program by communicating that an evaluation is to be done
Indicate areas outside of the evaluation boundaries that need further investigation
Contribute to showing that we are doing things correctly
Evaluations of research initiatives or policy instruments that we either commission or conduct at my department are intended to…
Provide an overview of which aspects (e.g., administration, behaviour, economy, market) are affected by a program
Be used in combination with other material in decision-making
Be conducted as per usual, so that actors affected by the evaluation will know what to expect
Confirm what we know about a research program or policy instrument
Show how the evaluated program should be changed
Be used for decisions about the program that are entirely based on what the evaluation shows
Spur further discussions about other programs
Evaluations of research initiatives or policy instruments that we either commission or conduct at my department are intended to…
Focus on pre-determined criteria and indicators, regardless of whether the program or the situation has changed
Primarily contribute with learning and knowledge during the evaluation process, through interactions between different actors
Be used to convince opponents
Realize effects as early as possible, before the evaluation itself is done
Only be one part of the knowledge base in a decision process about a program
Spur change and improvement already during the evaluation process, through dialogues with different actors
Be used to encourage actors to support a program or a viewpoint
11. What do you consider to be the largest benefits of the current evaluation practices at your agency?
Open answer
12. What do you consider to be the largest limitations of the current evaluation practices at your agency?
Open answer
Part 3: Evaluation with a transformative focus
13. Are you familiar with the concept of “transition” (transformative change)?
Yes/No
14. Do you believe that a deliberate focus on capturing and supporting transformative change for a more sustainable society may spur a more active use of evaluations?
Yes/No/Do not know
15. What are your thoughts on allowing evaluations that are commissioned or conducted at your agency to take a transformative focus?
Open answer
[Info page about the proposed broader approach to evaluation with a transformative focus]
16. Have you previously had the opportunity to read or acquaint yourself with ‘Vägledning för utvärdering av transformativ omställning’ (Guidance for the evaluation of a transformative transition)?
Yes/No
17. Do you have any viewpoints you wish to share regarding the ‘Vägledning för utvärdering av transformativ omställning’?
Open answer
Part 4: Possibilities and delimitations
18. Rate on a scale from 1 to 5 the extent to which you agree with each statement, where 1 means “do not agree” and 5 means “absolutely agree”. (“Do not know” was also available.)
Our evaluation-related work can be strengthened through cooperation with other agencies
It is difficult to work with a broad transformative evaluation approach because there may be disagreements between different actors on what a transition should entail
We can change our specifications of requirements when we commission or conduct an evaluation to include a transformative focus
A broad transformative evaluation approach is limited by the agency’s budget for evaluation
Our evaluation-related work can be strengthened through cooperation with actors from academia and external consultants
To coordinate different governmental agencies and other actors under the umbrella of a broad evaluation approach with a transformative focus is difficult
A broader evaluation approach with a transformative focus would contribute to better planning of evaluations than is the case today
19. Rate on a scale from 1 to 5 the extent to which you agree with each statement, where 1 means “do not agree” and 5 means “absolutely agree”. (“Do not know” was also available.)
Objectives and purposes of evaluations may be in conflict with each other when they are brought together—this limits the utility of a broad transformative evaluation approach
The evaluation expertise available at our agency, as well as externally, is sufficient for a broader evaluation approach with a transformative focus
It is difficult to maintain the systematic documentation and knowledge transfer between different actors that is required for a broad transformative evaluation approach
The competencies needed among those commissioning and those conducting evaluations for a broader evaluation approach with a transformative focus can easily be acquired
A broader evaluation approach with a transformative focus is hindered by current routines for procurement and framework agreements for evaluations
We dare to try, and are supported in trying, new ways of doing things in my department
We are already using our evaluations in a way that supports transformative change in the energy system and in society
It is difficult to apportion ownership, responsibilities and mandate to make decisions about steps of transformative processes between different actors
20. Do you see any other possibilities connected to applying a broader evaluation approach with a transformative focus that are not mentioned in the above statements?
21. Do you see any other delimitations connected to applying a broader evaluation approach with a transformative focus that are not mentioned in the above statements?

Appendix B

Eleven models of use were identified in evaluation theory. Based on the characteristics of each model, three statements were designed to capture the essence of what the model of use entails. This appendix shows the statements used to illustrate each model of use. These statements were scrambled and presented as stand-alone statements in the self-administered questionnaire sent out to Swedish state agencies, as presented in Appendix A. A minimal code sketch illustrating how ratings of these statements can be pooled per model of use is provided after Table A1.
Table A1. Eleven models of use were identified in evaluation theory. Based on the characteristics of each model, three statements were designed to capture the essence of what the model of use entails. Original text in Swedish in brackets.
Model of Use / Characteristics
1. Instrumental
Show how the evaluated program should be changed
(Visa på hur det utvärderade programmet bör förändras)
Lead to immediate changes in the program that is being evaluated
(Leda till direkta korrigeringar i det utvärderade programmet)
Contribute to knowledge development that creates immediate measures for improvement in the evaluated program
(Bidra till kunskapsuppbyggnad som skapar omedelbara förbättringsåtgärder i det utvärderade programmet)
2. Enlightenment/conceptual
Increase the understanding of how a program should be implemented, rather than lead to concrete measures in the program
(Öka förståelsen för hur ett program bör genomföras snarare än att leda till konkreta handlingar)
Provide general knowledge about how a certain type of program works
(Ge övergripande kunskap om hur en viss sorts program fungerar)
Provide an overview of which aspects (e.g., administration, behaviour, economy, market) are affected by a program
(Ge en överblick över vilka olika aspekter (t.ex. administration, beteende, ekonomi, marknad) som påverkas av ett program)
3. Legitimizing/Reinforcing use
Contribute to showing that we are doing things correctly
(Bidra till att visa att vi gör saker rätt)
Give trust and support for decisions that concern a program
(Ge förtroende och stöd för beslut som rör ett program)
Confirm what we know about a research program or policy instrument
(Bekräfta det vi vet om en forskningsinsats eller styrmedel)
4. Interactive
Be broad and robust enough to be the only basis for decisions regarding the evaluated program (reverse-coded)
(Vara bred och robust nog att utgöra det enda beslutsunderlaget för beslut om programmet.)
Only be one part of the knowledge base in a decision process about a program
(Endast vara en del av kunskapsunderlaget i en beslutsprocess om ett program)
Be used in combination with other material in decision-making
(Användas i kombination med annat material vid beslutsfattande)
5. Ritual/Symbolic use/Mechanical use
Be conducted to show that the program at hand has been followed up on
(Utföras för att visa att programmet i fråga har följts upp)
Be performed because it is expected that an evaluation is performed
(Utföras för att det förväntas att en utvärdering utförs)
Be conducted as per usual, so that actors affected by the evaluation will know what to expect
(Utföras som vanligt, så att aktörer som berörs av utvärderingen vet vad som väntas)
6. Mobilizing use/Persuasive use
Be used to encourage actors to support a program or a viewpoint
(Användas för att uppmuntra aktörer att stödja ett program eller en ståndpunkt)
Be used to convince opponents
(Användas för att övervinna meningsmotståndare)
Be used to create support among others for the evaluated program
(Användas för att skapa stöd hos andra för programmet)
7. Overuse
Focus on pre-determined criteria and indicators, regardless of whether the program or the situation has changed
(Fokusera på förutbestämda kriterier och indikatorer, oavsett om programmet eller situationen har förändrats)
Be used for decisions about the program that are entirely based on what the evaluation shows
(Användas för programbeslut som helt baseras på vad utvärderingen visar)
Not put emphasis on describing why the results show what they show
(Inte lägga stor vikt vid beskrivning av varför resultaten visar det som de visar)
8. Process use
Spur change and improvement already during the evaluation process, through dialogues with different actors
(Sporra till förändring och förbättring under utvärderingsprocessen genom dialoger med olika aktörer)
Primarily contribute with learning and knowledge during the evaluation process, through interactions between different actors
(Främst bidra med lärande och kunskap under utvärderingsprocessens gång genom interaktionen mellan olika aktörer)
Lead to important insights during the evaluation process
(Leda till viktiga insikter i utvärderingsprocessen)
9. Constitutive/Anticipatory use
Realize effects as early as possible, before the evaluation itself is done
(Få effekter så tidigt som möjligt, innan själva utvärderingen är klar.)
Influence actors to consider what needs to be done to meet the expectations of the planned evaluation
(Påverka aktörer att fundera över vad som behöver göras för att leva upp till förväntningarna av den planerade utvärderingen)
Spur improvements in a program by communicating that an evaluation is to be done to those whom the evaluation concerns
(Sporra till förbättring genom att den kommande utvärderingen kommuniceras till berörda aktörer)
10. Tactical use
Buy additional time in a decision-making process
(Köpa ytterligare tid i en beslutsprocess)
Show that “something is being done”
(Visa på att ’något görs’)
Prevent hasty decisions, by allowing the evaluation process to take time
(Förhindra förhastade beslut, genom att utvärderingsprocessen tillåts ta tid)
11. Unintended use
Spur further discussions about other programs
(Sporra vidare diskussioner om andra program)
Be used as a knowledge base for issues that are outside of the evaluated program
(Användas som kunskapsunderlag för frågor som står utanför det utvärderade programmet)
Indicate areas outside of the evaluation boundaries that need further investigation
(Påvisa områden utanför utvärderingens gränser som kräver ytterligare utredning)
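To make the aggregation reported in Figure 2 concrete, the following minimal Python sketch illustrates the procedure described above under stated assumptions: each scrambled statement is mapped back to its model of use, and the 1–5 ratings from all informants are pooled per model (up to 10 informants × 3 statements = 30 ratings per model, fewer when answers are missing). The abbreviated statement-to-model mapping and the informant data below are hypothetical placeholders, not the study's data; this is an illustrative sketch, not the analysis code used in the study.

```python
from collections import defaultdict

# Map each questionnaire statement back to its model of use (cf. Table A1).
# Only a few statements are shown here as placeholders.
STATEMENT_TO_MODEL = {
    "Show how the evaluated program should be changed": "Instrumental",
    "Lead to immediate changes in the program that is being evaluated": "Instrumental",
    "Provide general knowledge about how a certain type of program works": "Enlightenment/conceptual",
}

def aggregate_ratings(responses):
    """Pool 1-5 ratings per model of use across informants and statements.

    `responses` is a list of dicts, one per informant, mapping statement
    text to a rating (1-5) or None for a missing answer.
    """
    pooled = defaultdict(list)
    for informant in responses:
        for statement, rating in informant.items():
            if rating is None:
                continue  # missing answers reduce the count below the maximum
            model = STATEMENT_TO_MODEL.get(statement)
            if model is None:
                continue  # ignore statements outside the mapping
            pooled[model].append(rating)
    return pooled

# Hypothetical example with two informants.
example = [
    {"Show how the evaluated program should be changed": 4,
     "Lead to immediate changes in the program that is being evaluated": 3,
     "Provide general knowledge about how a certain type of program works": 5},
    {"Show how the evaluated program should be changed": 5,
     "Lead to immediate changes in the program that is being evaluated": None,
     "Provide general knowledge about how a certain type of program works": 4},
]

for model, ratings in aggregate_ratings(example).items():
    print(model, "n =", len(ratings), "mean =", sum(ratings) / len(ratings))
```

Running the example prints the pooled count and mean rating per model, mirroring how Figure 2 reports between 27 and 30 responses per model of use.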

Appendix C

This appendix presents two figures that display how informants at five Swedish state agencies rated positively and negatively framed statements regarding the benefits and challenges of adopting a transformative evaluation approach. The statements were guided by an initial scoping of perceptions during focus group sessions with four state agencies.
Figure A1. Positively framed statements related to benefits and challenges of adopting a transformative evaluation approach, which informants were asked to rate in a self-administered questionnaire on a scale from 1 to 5, where 1 meant “do not agree” and 5 meant “absolutely agree”. Ten informants out of 11 answered these questions. Answers are shown sample-wide, including all agencies represented in the questionnaire. The number of responses, including “do not know”, is ten for each statement, except for the last statement, which received nine answers.
Figure A2. Negatively framed statements related to benefits and challenges of adopting a transformative evaluation approach, which informants were asked to rate in a self-administered questionnaire on a scale from 1 to 5, where 1 meant “do not agree” and 5 meant “absolutely agree”. Ten informants out of 11 answered these questions. Answers are shown sample-wide, including all agencies represented in the questionnaire. The number of responses, including “do not know”, is ten per statement, except for statement number three, which received nine answers.

References

  1. European Environment Agency. Trends and Projections in Europe 2018: Tracking Progress Towards Europe’s Climate and Energy Targets; Publications Office of the European Union: Luxembourg, 2018; ISBN 978-92-480-007-7. [Google Scholar]
  2. International Energy Agency; Organization for Economic Co-operation and Development. World Energy Outlook 2018; OECD: Paris, France; IEA: Paris, France, 2018. [Google Scholar]
  3. Masson-Delmotte, V.; Zhai, P.; Pörtner, H.-O.; Roberts, D.; Skea, J.; Shukla, P.R.; Pirani, A.; Moufouma-Okia, W.; Péan, C.; Pidcock, R.; et al. (Eds.) IPCC Global Warming of 1.5°C. In An IPCC Special Report on the Impacts of Global Warming of 1.5°C Above Pre-Industrial Levels and Related Global Greenhouse Gas Emission Pathways, in the Context of Strengthening the Global Response to the Threat of Climate Change, Sustainable Development, and Efforts to Eradicate Poverty; 2018; in press. [Google Scholar]
  4. Geels, F.W. Technological transitions as evolutionary reconfiguration processes: A multi-level perspective and a case-study. Res. Policy 2002, 31, 1257–1274. [Google Scholar] [CrossRef] [Green Version]
  5. Farla, J.; Markard, J.; Raven, R.; Coenen, L. Sustainability transitions in the making: A closer look at actors, strategies and resources. Technol. Forecast. Soc. Chang. 2012, 79, 991–998. [Google Scholar] [CrossRef] [Green Version]
  6. Markard, J.; Raven, R.; Truffer, B. Sustainability transitions: An emerging field of research and its prospects. Res. Policy 2012, 41, 955–967. [Google Scholar] [CrossRef]
  7. Weinstein, M.P.; Turner, R.E.; Ibáñez, C. The global sustainability transition: It is more than changing light bulbs. Sustain. Sci. Pract. Policy 2013, 9, 4–15. [Google Scholar] [CrossRef]
  8. Somanathan, E.; Sterner, T.; Sugiyama, T.; Chimanikire, D.; Dubash, N.K.; Essandoh-Yeddu, J.; Fifita, S.; Goulder, L.; Jaffe, A.; Labandeira, X.; et al. National and sub-national policies and institutions. In Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change; Edenhofer, O., Pichs-Madruga, R., Sokona, Y., Farahani, E., Kadner, S., Seyboth, K., Adler, A., Baum, I., Brunner, S., Eickemeier, P., et al., Eds.; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2014. [Google Scholar]
  9. European Commission. Better Regulation Guidelines; [European Commission Staff Working Document]; European Commission, 2017; Available online: https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines.pdf (accessed on 7 October 2020).
  10. Edler, J.; Berger, M.; Dinges, M.; Gok, A. The practice of evaluation in innovation policy in Europe. Res. Eval. 2012, 21, 167–182. [Google Scholar] [CrossRef] [Green Version]
  11. Edler, J.; Cunningham, P.; Gök, A.; Shapira, P. (Eds.) Handbook of Innovation Policy Impact; EU-SPRI Forum on Science, Technology and Innovation Policy; Edward Elgar: Cheltenham, UK; Northampton, MA, USA, 2016; ISBN 978-1-78471-184-9. [Google Scholar]
  12. Huitema, D.; Jordan, A.; Massey, E.; Rayner, T.; van Asselt, H.; Haug, C.; Hildingsson, R.; Monni, S.; Stripple, J. The evaluation of climate policy: Theory and emerging practice in Europe. Policy Sci. 2011, 44, 179–198. [Google Scholar] [CrossRef] [Green Version]
  13. Mela, H.; Hildén, M. Evaluation of Climate Policies and Measures in EU Member States: Examples and Experiences from Four Sectors; The Finnish Environment, 19th ed.; Finnish Environment Institute: Helsinki, Finland, 2012. [Google Scholar]
  14. Schoenefeld, J.J.; Jordan, A.J. Environmental policy evaluation in the EU: Between learning, accountability, and political opportunities? Environ. Politics 2019, 28, 365–384. [Google Scholar] [CrossRef]
  15. Højlund, S. Evaluation use in the organizational context–changing focus to improve theory. Evaluation 2014, 20, 26–43. [Google Scholar] [CrossRef]
  16. King, J.A.; Alkin, M.C. The centrality of use: Theories of evaluation use and influence and thoughts on the first 50 years of use research. Am. J. Eval. 2019, 40, 431–458. [Google Scholar] [CrossRef]
  17. Milzow, K.; Reinhardt, A.; Söderberg, S.; Zinöcker, K. Understanding the use and usability of research evaluation studies. Res. Eval. 2019, 28, 94–107. [Google Scholar] [CrossRef] [Green Version]
  18. Patton, M.Q. Utilization-Focused Evaluation, 4th ed.; Sage Publications: Thousand Oaks, CA, USA, 2008; ISBN 978-1-4129-5861-5. [Google Scholar]
  19. Shulha, L.M.; Cousins, J.B. Evaluation use: Theory, research, and practice since 1986. Eval. Pract. 1997, 18, 195–208. [Google Scholar] [CrossRef]
  20. Weiss, C.H. The many meanings of research utilization. Public Adm. Rev. 1979, 39, 426. [Google Scholar] [CrossRef]
  21. Weiss, C.H. Have we learned anything new about the use of evaluation. Am. J. Eval. 1998, 21–34. [Google Scholar] [CrossRef]
  22. Chelimsky, E. A strategy for improving the use of evaluation findings in policy. In Evaluation Use and Decision-Making in Society: A Tribute to Marvin C. Alkin; Christie, C.A., Vo, A.T., Alkin, M.C., Eds.; Information Age Pub. Inc: Charlotte, NC, USA, 2015; pp. 73–90. [Google Scholar]
  23. Newcomer, K.E.; Hatry, H.P.; Wholey, J.S. Planning and designing useful evaluations. In Handbook of Practical Program Evaluation; Wholey, J.S., Hatry, H.P., Newcomer, K.E., Eds.; Jossey-Bass, Wiley: San Francisco, CA, USA, 2010. [Google Scholar]
  24. Shadish, W.R.; Cook, T.D.; Leviton, L.C. Foundations of Program Evaluation: Theories of Practice; Reprinted; Sage Publications: Newbury Park, CA, USA, 1991; ISBN 978-0-8039-3551-8. [Google Scholar]
  25. Alkin, M.C.; King, J.A. Definitions of evaluation use and misuse, evaluation influence and factors affecting use. Am. J. Eval. 2017, 38, 434–450. [Google Scholar] [CrossRef]
  26. Cousins, J.B. Commentary: Minimizing evaluation misuse as principled practice. Am. J. Eval. 2004, 7. [Google Scholar]
  27. Weiss, C.H. Evaluation: Methods for Studying Programs and Policies, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1998; ISBN 978-0-13-309725-2. [Google Scholar]
  28. Rossi, P.H.; Lipsey, M.W.; Freeman, H.E. Evaluation: A Systematic Approach, 7th ed.; Sage: Thousand Oaks, CA, USA, 2006; ISBN 978-0-7619-0894-4. [Google Scholar]
  29. Vedung, E. Public Policy and Program Evaluation; Transaction Publishers: New Brunswick, NJ, USA, 1997; ISBN 978-0-7658-0687-1. [Google Scholar]
  30. Vedung, E. Six Uses of Evaluation. In Nachhaltige Evaluation?: Auftragsforschung Zwischen Praxis und Wissenschaft. Festschrift zum 60. Geburtstag von Reinhard Stockmann; Hennefeld, V., Meyer, W., Silvestrini, S., Stockmann, R., Eds.; Waxmann Verlag: Münster, Germany; New York, NY, USA, 2015; pp. 187–210. [Google Scholar]
  31. van Voorst, S.; Zwaan, P. The (non-)use of ex post legislative evaluations by the European Commission. J. Eur. Public Policy 2019, 26, 366–385. [Google Scholar] [CrossRef] [Green Version]
  32. Fleischer, D.N.; Christie, C.A. Evaluation use: Results from a survey of U.S. American Evaluation Association members. Am. J. Eval. 2009, 30, 158–175. [Google Scholar] [CrossRef]
  33. Ledermann, S. Exploring the necessary conditions for evaluation use in program change. Am. J. Eval. 2012, 33, 159–178. [Google Scholar] [CrossRef]
  34. Cunningham, P.; Edler, J.; Flanagan, K.; Larédo, P. The innovation policy mix. In Handbook of Innovation Policy Impact; Edler, J., Cunningham, P., Gök, A., Shapira, P., Eds.; Edward Elgar: Cheltenham, UK; Northampton, MA, USA, 2016; pp. 505–542. [Google Scholar]
  35. Edler, J.; Fagerberg, J. Innovation policy: What, why, and how. Oxf. Rev. Econ. Policy 2017, 33, 2–23. [Google Scholar] [CrossRef] [Green Version]
  36. Martin, B.R. R&D policy instruments-a critical review of what we do and don’t know. Ind. Innov. 2016, 23, 157–176. [Google Scholar] [CrossRef]
  37. Neij, L.; Sandin, S.; Benner, M.; Johansson, M.; Mickwitz, P. Bolstering a transition for a more sustainable energy system: A transformative approach to evaluations of energy efficiency in buildings. Energy Res. Soc. Sci. 2020, in press. [Google Scholar]
  38. Sandin, S.; Neij, L.; Mickwitz, P. Transition governance for energy efficiency-insights from a systematic review of Swedish policy evaluation practices. Energy Sustain. Soc. 2019, 9. [Google Scholar] [CrossRef] [Green Version]
  39. Cook, T.D. Lessons learned in evaluation over the past 25 years. In Evaluation for the 21st Century: A Handbook; Chelimsky, E., Shadish, W.R., Eds.; Sage Publications: Thousand Oaks, CA, USA, 1997; ISBN 978-1-4833-4889-6. [Google Scholar]
  40. Zraket, C.A.; Clark, W. Environmental changes and their measurement: What data should we collect and what collaborative systems do we need for linking knowledge to action? In Evaluation for the 21st Century: A Handbook; Chelimsky, E., Shadish, W.R., Eds.; Sage Publications: Thousand Oaks, CA, USA, 1997; pp. 329–336. [Google Scholar]
  41. Alkin, M.C.; Taut, S.M. Unbundling evaluation use. Stud. Educ. Eval. 2002, 29, 1–12. [Google Scholar] [CrossRef] [Green Version]
  42. Wholey, J.S. Use of evaluation in government-the politics of evaluation. In Handbook of Practical Program Evaluation; Wholey, J.S., Hatry, H.P., Newcomer, K.E., Eds.; Jossey-Bass: San Francisco, CA, USA, 2010; pp. 651–667. [Google Scholar]
  43. Leviton, L.C. Evaluation use: Advances, challenges and applications. Am. J. Eval. 2003, 24, 525–535. [Google Scholar] [CrossRef]
  44. Bryman, A. Social Research Methods, 4th ed.; Oxford University Press: Oxford, UK; New York, NY, USA, 2012; ISBN 978-0-19-958805-3. [Google Scholar]
  45. Wilkinson, S. Focus group research. In Qualitative Research Methods-Theory. Method and Practice; Silverman, D., Ed.; Sage: Thousand Oaks, CA, USA, 2004; pp. 177–199. [Google Scholar]
  46. Denzin, N.K. The Research Act: A Theoretical Introduction to Sociological Methods; AldineTransaction: New Brunswick, NJ, USA, 2009; ISBN 978-0-202-36248-9. [Google Scholar]
  47. Evans, J.R.; Mathur, A. The value of online surveys: A look back and a look ahead. Internet Res. 2018, 28, 854–887. [Google Scholar] [CrossRef] [Green Version]
  48. Weiss, C.H. The interface between evaluation and public policy. Evaluation 1999, 5, 468–486. [Google Scholar] [CrossRef]
  49. Højlund, S. Evaluation use in evaluation systems–the case of the European Commission. Evaluation 2014, 20, 428–446. [Google Scholar] [CrossRef]
  50. Carnahan, D.; Hao, Q.; Yan, X. Framing methodology: A critical review. Oxf. Res. Encycl. Politics 2019. [Google Scholar] [CrossRef]
  51. Chong, D.; Druckman, J.N. Framing theory. Annu. Rev. Polit. Sci. 2007, 10, 103–126. [Google Scholar] [CrossRef]
  52. Entman, R.M. Framing: Toward clarification of a fractured paradigm. J. Commun. 1993, 43, 51–58. [Google Scholar] [CrossRef]
Figure 1. Responses from representatives at five different Swedish state agencies concerning actor involvement in the design and conduct of evaluations. (a) Answers from the questionnaire to the question “Who takes part in the design of the evaluations that you either commission or conduct at your agency?” (b) Answers from the questionnaire to the question “Which actors do you hire when you commission an evaluation?” Results show the mean value reported by informants from each agency (A–E), where 1 is “never” and 4 is “always”.
Figure 2. The aggregate ratings of the models of use of evaluations. Each model was represented by three statements, which informants were asked to rate on a scale from 1 to 5, where 1 meant “never” and 5 meant “always” (for the full list of statements, see Appendix B). Ten informants out of 11 answered these questions; thus, each model of use could receive a maximum of 30 ratings (10 informants × 3 statements). The results are presented as aggregate responses for all three statements per model of use. Answers are shown sample-wide, including all agencies represented in the questionnaire. The number of responses per model of use ranges from 27 to 30.
Table 1. A list of 11 models of use for evaluations. The names of the models are adopted from evaluation scholars, as indicated in the table; the characteristics briefly describe the essence of each model and build on the work of evaluation scholars.
Model of Use / Characteristics
1. Instrumental [27,28,29,30]: Evaluation findings are applied immediately for specific actions concerning the evaluand.
2. Enlightenment/conceptual [27,28,29,30]: Evaluation findings are not used for the evaluand per se, but rather to provide insights about issues or implementations in general.
3. Legitimizing/reinforcing use [27,29,30]: Evaluation findings are used to legitimize and justify decisions already made, e.g., to confirm current knowledge and beliefs, or to support confidence in a standpoint.
4. Interactive [20,29]: Evaluation findings form part of a larger decision-making process, where other sources of information and actors influence the decisions.
5. Ritual/symbolic use/mechanical use [18,30]: Evaluations are conducted symbolically, because this is expected by current customs and practice, or are performed mechanically; evaluands seek only to fulfil requirements and get a good “score”.
6. Mobilizing use/persuasive use [27,28]: Evaluation findings are used to create support for a particular standpoint and to mobilize actors.
7. Overuse [18]: Evaluation findings are put to use as definitive facts, without considering contextual factors or exploring why certain outcomes have emerged.
8. Process use [18,30]: The evaluation process in itself contributes insights and learning and spurs action.
9. Constitutive/anticipatory use [30]: An evaluation process spurs action and has impacts already in its initial phases by making evaluands and stakeholders aware of its coming into force.
10. Tactical use [29,30]: Evaluations are conducted in order to postpone decisions, by showcasing that there is an ongoing evaluation. It is the process, rather than the findings, that is the focal point of use.
11. Unintended use [18]: Evaluation findings are used outside of an evaluated program, e.g., by spawning other investigations concerning issues separate from the program.
Table 2. Benefits and challenges of transformative evaluation, as articulated during focus group sessions with four Swedish agencies: the Swedish Energy Agency, the Swedish Environment Protection Agency, Vinnova and Growth Analysis. The first four categories are seen as both beneficial and challenging; the remaining categories are bound to either benefits or challenges.
Categories seen as both beneficial and challenging:
A new way of approaching evaluation
Benefit: Balances the requirements of smaller vs. more encompassing evaluations. Reduces stakes when evaluations become part of a bigger picture and gives room for experimentation.
Challenge: Adoption requires change, which requires courage and acceptance from stakeholders. It does not harmonize with the current system for funding and the design of research and policy. Requires addressing current path dependencies.
Competency requirements
Benefit: The need for different evaluation competencies will be distributed more evenly. Less pressure on one stakeholder to host wide competencies.
Challenge: A transformative approach requires the development of competencies by both commissioners and evaluators.
Evaluation planning and logistics
Benefit: The approach supports a pragmatic and deliberate planning of evaluation needs and calls for a connection between small-scale and large-scale evaluations.
Challenge: Combining evaluations of various scales into a more holistic picture means a continuous balancing of the need for detail, the available data and the wider context. It is logistically challenging to involve many actors in an evaluation.
Knowledge transfer and documentation
Benefit: The evaluation approach supports a systematic documentation and logging of findings and insights, as well as a continuous knowledge transfer for learning between various evaluations and stakeholders.
Challenge: The documentation and knowledge transfer required to execute the evaluation approach is challenging to realize.
Benefit-only categories:
Guiding
A broader evaluation approach supports evaluation practices by being an inspiration, by maintaining ideas of what evaluation should (not) be, and by providing both evaluators and commissioners with a joint language for articulating expectations and designs.
Potential
The approach highlights the transformational contributions of the evaluand and provides a possibility to identify drivers for change and ways to support a sustainability transition.
Challenge-only categories:
Ownership, agency and interests
A sustainability transition has stakeholders with vested interests, which calls for transparency and a continuous motivation for change. Issues to be resolved include the ownership of the evaluation approach and the responsibilities and agency in managing such an approach. Responsibilities between levels such as units, agencies and state departments need to be determined.
Current procurement and commissioning routines
There is a limited budget for evaluations, and commissions for evaluations are commonly limited to narrow requirements or by what is allowed under a direct award. Master agreements may limit who can be commissioned to perform evaluations. There is a limited number of actors performing evaluations, which affects the possibilities for variation.
Goals and purposes for evaluation
A sustainability transition may have potentially conflicting goals on an aggregate level. Evaluations are static, but the purposes and goals (of a transition) may change, and previously set goals may become irrelevant. There may be a mismatch between what is stated in (old) policy documents and what should be evaluated according to a broader transformative approach. Currently, the purpose of evaluating is commonly to determine whether to legitimize a new program period.
Current use
The use of evaluations today is limited by various factors.
