1. Introduction
There is a growing consensus that results from scientific investigations of environmental and sustainability problems such as water resources must be made more relevant, salient, and timely for decision-makers [1,2,3]. Effective long-term development of local strategies and policies requires using the most reliable and comprehensive data from diverse sources, including field campaigns, remotely sensed drone and satellite data, and outputs generated by simulation models. Despite the growing volume of relevant scientific data, locating and assessing data quality remains challenging [4,5,6]. The integration and analysis of such data can be laborious and time-consuming, and the development of user-friendly and practical interfaces for stakeholders, who may or may not be technologically savvy, is crucial [7,8,9].
Scenario analysis is defined as the process of evaluating projections of future events through the consideration of alternative plausible, though not equally likely, states of the world (scenarios) [10]. Scenario analysis has been identified as a strategy for addressing wicked social–environmental system (SES) problems, where it is often difficult to fully comprehend the plausible futures under varying conditions of uncertainty [11,12,13,14]. This difficulty can stem from a novice understanding of models or from unfamiliarity with a model's inputs and outputs. Scenario analysis provides a systematic way of thinking about plausible but uncertain futures, and it combines qualitative and quantitative modeling: plausible scenarios are developed qualitatively and then implemented in quantitative models so that outcomes can be analyzed and compared across scenarios. This approach generates a large number of output datasets that are difficult to analyze and understand, even for modeling experts [15]. Analyzing a robust set of scenarios to complete scenario analysis may therefore be taxing for the diverse group of stakeholders involved in social–environmental system issues. New tools also carry the risk that participants become confused or frustrated while navigating them; if this is not overcome, participants can quickly lose interest in learning how to use the model as a tool.
On a global scale, humanity is compelled to address complex or ‘wicked’ resource-related issues in the face of accelerating environmental change. In our work, we use the term wicked problem to refer to an issue with multiple potential solutions that involves various stakeholders. Environmental sustainability issues may involve perspectives from multiple stakeholders (i.e., scientists, policymakers, community members, and industry), often leading to conflicting interests and/or cultural misalignments that call for a more integrated approach. Participatory modeling (PM) has been identified as an emerging strategy to address these problems [16,17]. PM is a co-creative process in which stakeholders and researchers collaboratively develop conceptual or empirical models of a system. This process encourages dialogue, facilitates knowledge exchange, and enables stakeholders to align their perspectives toward shared objectives. The resulting models act as interactive platforms for stakeholders to explore and refine ideas, ultimately fostering a sense of ownership and deeper understanding [18]. PM aims to generate a shared understanding of the challenges confronting a given resource system through social learning and collaborative thought experiments that explore potential societal responses. Our work supports PM with the use of computational tools [19,20,21,22]. Although there are many examples of this approach, our understanding of how social learning, knowledge shifts, and decision-making occur in this context remains limited. In this study, we focus on how people understand and collaborate through PM using freshwater supply models of the Middle Rio Grande River Basin (MRGRB).
Studies like this have the potential to bridge the gap between scientific knowledge and actionable policy [11]. By integrating PM and scenario analysis (SA), researchers can create more accessible and practical tools for stakeholders, ultimately enhancing the relevance and usability of scientific data. These approaches facilitate stakeholder engagement and empower diverse groups to explore and co-develop solutions to complex environmental challenges [23]. Insights generated through such methods can directly inform policymaking by providing decision-makers with nuanced, data-driven scenarios and fostering collaboration across sectors. The outcomes of these studies can guide the development of stakeholder practices that are better aligned with long-term sustainability goals, ensuring that policy responses are not only evidence-based but also inclusive and context-specific [7,20,24].
In this work, we leverage the Sustainable Water through Integrating Modeling (SWIM) platform, an online modeling tool that hosts two regional water models of the MRGRB, spanning from the San Marcial gauges north of Elephant Butte Reservoir in New Mexico to Fort Quitman in Texas. SWIM seamlessly integrates with heterogeneous modeling software and programming languages. Stakeholders can visually interact with integrated models through a three-step workflow: scenario selection, model execution, and result visualization [25].
The aim of this article is to address the following research questions: (RQ1) How do stakeholders reason, learn, and make decisions about water when facilitated by PM and scenario analysis methods? (RQ2) To what extent, if at all, does engagement with models increase through the implementation of participatory modeling and scenario analysis? (RQ3) What insights (e.g., new questions, new knowledge, increased understanding) were gained from scenario analysis activities with diverse stakeholders?
2. Background
Qualitative approaches have primarily been used as inputs to participatory modeling (PM) and scenario analysis (SA) [26]. Less commonly, qualitative approaches have been applied to PM and scenario analysis outputs. This investigation combines qualitative inputs to PM and scenario analysis with qualitative interpretation of results: stakeholders develop narratives that help them interpret and understand the information, and we examine the impact of social learning on those capabilities [27]. Combining qualitative narratives with quantitative model outputs has been identified as a tractable approach, although prior work has not focused on understanding social learning in these contexts [28,29,30].
PM and SA can have several benefits when combined in a social learning context. These methods have been shown to increase stakeholder engagement in the modeling process and the completion of scenario analysis, which leads to a sense of ownership and investment in outcomes [11,31]. The PM process also deepens understanding by allowing knowledge exchange among a diverse group of stakeholders, leading to the co-creation of new ideas [32,33,34]. Scenario-based approaches combined with interactive simulation offer an organized basis for non-scientist stakeholders to explore the ramifications of alternative decisions [35,36]. Many researchers have found that stakeholder engagement through participatory scenario planning contributes to better framing of research questions and to outputs that serve as better solutions [37]. When layered with scenario analysis, stakeholders can explore these new ideas and compare future scenarios, allowing them to make more informed decisions and potentially develop more equitable and just solutions [24,38,39,40,41,42]. One of the key challenges in achieving these benefits is ensuring that stakeholders can easily access and understand complex scientific models and data for SA. Tools like the SWIM platform can be introduced to address this challenge.
The SWIM platform was developed with the purpose of making complex scientific models and data more accessible to a wider audience. The platform features two versions of a Water Balance Model and one Hydroeconomic Optimization Model of the MRGRB and is publicly available at https://swim.cybershare.utep.edu (accessed on 20 September 2020). “SWIM provides a human-technology framework for future water projections that integrates semantic-based computational approaches, information technology, and participatory modeling with strong community engagement” [25].
The Water Balance Model is a high-level simulation model intended to aid stakeholders in understanding how fluctuations in surface water inflows and climate impact regional water availability across three competing subregions: Las Cruces, NM; El Paso, TX; and Ciudad Juárez, MX. The model evaluates water distribution across agricultural and urban sectors, as well as the impact of surface water availability on groundwater extraction [43].
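The link between surface water availability and groundwater extraction can be sketched in a few lines. The following is a minimal, illustrative sketch and not the actual SWIM Water Balance Model: the subregion names come from the text, but the `water_balance` function, the inflow, demand, allocation, and storage figures, and the assumption that groundwater pumping simply covers any surface-water shortfall are hypothetical simplifications.

```python
# Minimal water-balance sketch (ours, not the SWIM model). Assumption:
# each subregion meets total demand first from its surface-water share;
# any shortfall is pumped from groundwater, drawing down aquifer storage.

def water_balance(surface_inflow, allocations, demands, aquifer_storage):
    """One annual time step for several subregions.

    surface_inflow  -- total river supply available this year
    allocations     -- fraction of inflow allotted to each subregion
    demands         -- total (agricultural + urban) demand per subregion
    aquifer_storage -- current groundwater storage per subregion
    Returns per-subregion groundwater pumping and updated storage.
    """
    pumping, new_storage = {}, {}
    for region, share in allocations.items():
        surface = surface_inflow * share
        shortfall = max(0.0, demands[region] - surface)
        # Groundwater makes up whatever surface water cannot cover.
        pumped = min(shortfall, aquifer_storage[region])
        pumping[region] = pumped
        new_storage[region] = aquifer_storage[region] - pumped
    return pumping, new_storage

# Example: a dry year forces pumping in all three subregions
# (all figures are invented for illustration).
pumping, storage = water_balance(
    surface_inflow=500.0,
    allocations={"Las Cruces": 0.25, "El Paso": 0.5, "Ciudad Juarez": 0.25},
    demands={"Las Cruces": 200.0, "El Paso": 320.0, "Ciudad Juarez": 150.0},
    aquifer_storage={"Las Cruces": 1000.0, "El Paso": 1000.0,
                     "Ciudad Juarez": 1000.0},
)
```

Chaining such steps over a sequence of inflow years would mimic, in spirit, how declining surface supply accelerates aquifer drawdown.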
The Hydroeconomic Optimization Model’s objective function maximizes the region’s economic benefits from water allocation to competing sectors. It extends the capabilities of the Water Balance Model, considering factors such as agricultural crop mixes, urban water pricing, pumping costs, and scenarios involving cost-effective technologies (e.g., reduced costs for in-land desalination) and water storage recovery strategies (e.g., protection of aquifer storage) [44].
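To make the idea of an objective function that maximizes regional economic benefit concrete, here is a toy allocation sketch, not the actual Hydroeconomic Optimization Model: when each sector's marginal benefit per unit of water decreases, repeatedly giving the next unit to the sector with the highest remaining marginal value maximizes total benefit. The sector names and benefit figures are invented for illustration.

```python
# Toy greedy allocation under decreasing marginal benefits (ours, not SWIM).
import heapq

def allocate(supply, marginal_benefits):
    """Greedily allocate `supply` discrete units of water across sectors.

    marginal_benefits -- {sector: [value of 1st unit, 2nd unit, ...]}
    Returns ({sector: units allocated}, total economic benefit).
    """
    # Max-heap via negated values: (negated marginal value, sector, unit index).
    heap = [(-vals[0], s, 0) for s, vals in marginal_benefits.items() if vals]
    heapq.heapify(heap)
    alloc = {s: 0 for s in marginal_benefits}
    total = 0.0
    for _ in range(supply):
        if not heap:
            break  # every sector's schedule is exhausted
        neg_value, sector, i = heapq.heappop(heap)
        total -= neg_value  # undo negation: add the marginal value
        alloc[sector] += 1
        if i + 1 < len(marginal_benefits[sector]):
            heapq.heappush(heap, (-marginal_benefits[sector][i + 1], sector, i + 1))
    return alloc, total

# Example: four units of water split between two competing sectors.
alloc, benefit = allocate(4, {"agriculture": [9, 6, 2], "urban": [8, 7, 3]})
```

This greedy rule is optimal only because the schedules are decreasing (concave benefits); the real model also handles crop mixes, pricing, and pumping costs, which this sketch omits.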
To facilitate user adoption, the platform offers “Canned Scenarios”. These consist of pre-assembled model runs that showcase a limited set of model outputs for specific scenarios. For instance, the “Climate effects down the line in a conjunctive water use setting surface and groundwater” scenario simulates an extreme climate event on the Middle Rio Grande System, providing projections of surface and groundwater flows and storage levels.
3. Materials and Methods
This study, approved by the UTEP Institutional Review Board (IRB), was conducted through four virtual participatory workshops held in 2020 and early 2021 during the COVID-19 pandemic, each lasting three hours. The workshops aimed to engage stakeholders from a variety of sectors in the MRGRB, including both non-experts and experts (such as city residents, educators, environmentalists, farmers, regulators, researchers, rural residents, and water managers), in discussions around water issues. The workshops were not initially developed for an online format, so we leveraged interactive collaborative tools like Miro to allow for more engagement. The virtual format created challenges but allowed for broad participation, including individuals from beyond the MRGRB study region. The workshops were designed to build upon one another and incorporate participant feedback, though attendance at all workshops was not required. To ensure all participants were informed, an ArcGIS Storymap and supplementary reading materials were provided via email, and participants were given access to an online shared workspace on Miro.com for reviewing workshop materials.
3.1. Workshop and Activity Design
The workshops were intentionally designed to foster collaboration and encourage active participation through various processes, including knowledge co-creation, team activities, and social learning exercises, all conducted within a virtual environment (Figure 1). The initial workshop, comprising two iterations referred to as W1A and W1B, introduced participants to water-related challenges specific to the MRGRB region while also assessing stakeholder concerns. To facilitate this, participants were familiarized with the SWIM system for the MRGRB region, with coverage from Elephant Butte Reservoir to Fort Quitman [45] (Figure 2). The introduction to SWIM 2.0 was facilitated through an interactive demo run by the workshop facilitator, during which participants were introduced to the overall goal of SWIM, model inputs, changeable inputs, and outputs.
The workshop activities in this study were based on the Water Balance Simulation Model, chosen for its simplicity over the Hydroeconomic Optimization Model, although the latter remained available to participants. The initial presentation of SWIM in W1A and W1B used pre-defined scenarios, referred to as “canned scenarios”, to streamline model execution and demonstrate its results. Canned scenarios were developed by the research team using only the Water Balance Simulation Model for simplicity, based on stakeholder concerns for the MRGRB gathered from previous workshop data collected by [46]. The canned scenario questions created for these workshops focused on the impact of specific climate scenarios (i.e., zero flow, extended drought, extreme stress, mean climate, and wet climate) on river supply and storage, resulting in a total of five canned scenarios for participants to choose from. Key inputs for running the questions using the water balance model were identified, followed by selecting key outputs. The deliberate choice to focus on key outputs for these canned scenarios aimed to prevent overwhelming participants with excessive output variables. Consequently, participants interpreted the results in relation to the specific question posed in the canned scenario.
Canned scenario question used in W1A and W1B: How will an extended drought scenario affect river supply and storage?
In the second workshop (W2), the objective was to investigate the impact of scenario analysis on social learning and decision-making. Initially, participants were tasked with individually defining the problem space from their perspectives using concept maps, which were then shared on the Miro platform https://miro.com/ (accessed on 1 September 2020). This activity aimed to enable a comprehensive understanding of the diverse perspectives and existing knowledge concerning water issues in the MRGRB. A demonstration of the SWIM 2.0 interface was provided, and participants engaged in canned scenario analysis to enhance their comprehension of potential water futures in the region. Afterward, participants mapped their individual concept maps to available SWIM 2.0 models, generating their own scenario questions, running scenarios, and producing narratives. The narratives were prompted by requests to describe temporal trends, identify patterns, and summarize them in concise sentences concerning their specific question of interest.
The third workshop (W3) focused on scenario analysis, but with a shift from individual to group-level activities. Participants utilized their individual concept maps to create a shared group concept map collaboratively. Participants then generated scenario questions from the group concept map. Once everyone had developed questions as a group, they identified which question to run on SWIM 2.0. Once they executed the scenarios, they interpreted the results individually, followed by a group discussion. This allowed for an investigation into whether the process of co-creation and group-level activities led to an increase in tacit knowledge, engagement, and learning compared to individual scenario analysis.
Workshops were designed in this way to promote integration of knowledge across disciplines and perspectives. A model for individual learning and group processes was adapted from [21,47].
3.2. Participant Recruitment and Makeup
Participants were recruited via email using a database created by compiling email listservs from various water-related events held in the Middle Rio Grande region. A total of 215 invitations were sent for all of the workshops.
Figure 3 shows the makeup of our workshops. Makeup categories were based on participant introductions using the SWIM 2.0 user-type categories.
3.3. Data Collection, Coding, and Analysis
In each workshop, participants were administered pre- and post-surveys using QuestionPro https://www.questionpro.com/ (accessed on 1 September 2020) to gather valuable insights into their motivations, overall workshop experience, and self-reported metrics. To ensure comprehensive documentation, all workshops were recorded on the Zoom platform https://zoom.us/ (accessed on 1 September 2020). Two researchers transcribed the audio recordings, and the facilitator carefully reviewed the transcripts for accuracy.
Participants also engaged in scenario analysis activities and generated concept maps, scenario questions, and narratives through the Zoom chat function or in their designated online collaborative workspace on Miro. To protect individuals’ confidentiality, each participant was assigned a unique identifier. This article specifically focuses on the analysis of data obtained from scenario analysis activities and from survey questions related to self-reported metrics [48]. Scenario analysis activity data were collected from participant-developed questions and participant narratives, developed both individually and as a group.
3.4. Coding Schemas
In this study, deductive coding schemas were employed to investigate the process of reasoning with data and models and to assess the frequency and impact of scenario analysis on social learning and decision-making. The NVivo software https://lumivero.com/ (accessed on 1 February 2021) was utilized to analyze the transcripts and participant behavior observed during workshop activities. The coding process was guided by theoretical frameworks pertaining to group-level learning [49] and engagement [50] (Table 1). By utilizing these coding schemas, we aimed to gain insights into the dynamics of reasoning with data and models and the influence of scenario analysis on social learning and decision-making. Group-level learning coding was applied to all workshops for all activities at the same level. This coding schema is targeted at understanding an individual’s learning through group activities, not necessarily learning gained individually versus in a group. Engagement was coded across both group and individual engagement and analyzed separately.
A granular analysis was conducted by coding the Zoom transcripts of each speaker’s full turn to speak and their individual activity deliverables. This approach ensured that each participant’s complete contribution to the discussion was captured and analyzed as a distinct unit, allowing for a comprehensive understanding of their perspectives and thematic patterns.
A comprehensive set of key terms and corresponding definitions was established to effectively implement scenario analysis within the study, as presented in Table 2. Inductive coding was used to develop the narrative coding schema, ensuring that it emerged directly from the data rather than being predetermined. This approach allowed for a rigorous and data-driven framework to analyze and interpret the narratives and their essential elements. When applying the narrative coding schema for scenario analysis, we ensured that each participant’s full interpretation was captured as a complete unit. Descriptions were coded individually; although the questions run on SWIM for W3 were chosen together, interpretations remained at the individual level but could have been influenced by group discussions.
The coding analysis was explicitly applied to the data collected during the canned scenario activities (W1A and W1B) and the scenario analysis activities (W2 and W3). To understand how participants learned and engaged with the modeling tool to make decisions, simple counts of coded text in transcripts for each coding schema were conducted and visualized through histograms. These histograms provided an overview of the participants’ levels of group-level learning, engagement, and scenario analysis. Percentages of overall counts for each coding schema across workshops were also calculated to better understand how dialogue changed over the workshops.
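The count-and-percentage step can be illustrated with a short sketch; the actual coding was performed in NVivo, and the helper function below is ours. As sample data we reuse the W1A group-engagement counts reported in the Results (21 involve, 12 expose, 2 synthesize, 1 analyze).

```python
# Illustrative re-creation of the count-and-percentage step (not NVivo).
from collections import Counter

def code_percentages(coded_segments):
    """Count coded transcript segments per level; return counts and rounded
    percentages of the overall total."""
    counts = Counter(coded_segments)
    total = sum(counts.values())
    return counts, {level: round(100 * n / total) for level, n in counts.items()}

# Hypothetical coded segments matching the W1A group-engagement counts.
segments = ["involve"] * 21 + ["expose"] * 12 + ["synthesize"] * 2 + ["analyze"]
counts, pcts = code_percentages(segments)
```

With these counts the helper reproduces the percentages reported for W1A (58% involve, 33% expose, 6% synthesize, 3% analyze).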
3.5. Self-Reported Metrics
A 5-point Likert-scale survey was administered to assess the self-reported levels of trust, understanding, and engagement (Table 3). Participants were asked to rate their perceptions of these metrics, and the responses were quantified accordingly. The data are presented through stacked bar charts, illustrating the percentage distribution of participants’ responses in the post-survey for each workshop. These visualizations offer a comprehensive overview of the participants’ self-reported trust, understanding, and engagement levels throughout the workshops.
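As an illustration of how such stacked-bar percentages can be derived from raw responses, consider the following sketch. The response data and helper function are hypothetical; half-up rounding is our assumption, chosen because it yields 13% for 1 of 8 respondents, as in the reported W1A figures.

```python
# Hypothetical example of turning raw Likert responses into the percentage
# distribution for one survey item (illustrative data, not the survey export).
from collections import Counter

SCALE = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

def likert_distribution(responses):
    """Percentage of respondents per scale point for one survey item."""
    counts = Counter(responses)
    n = len(responses)
    # int(x + 0.5) rounds halves up, unlike Python's banker's-rounding round().
    return {point: int(100 * counts[point] / n + 0.5) for point in SCALE}

# Eight illustrative responses to one engagement item.
dist = likert_distribution(["neutral"] + ["agree"] * 5 + ["strongly agree"] * 2)
```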
It is important to note that due to the virtual nature of the workshops, the counts were subjective and dependent on participants’ level of engagement during Zoom sessions. To mitigate this, patterns were further analyzed by comparing the counts with the discussion transcripts. It is also important to emphasize that the coding schema and analysis approach employed in this study reflect the interpretation of the researchers. This interpretative approach allowed for the exploration of patterns that are often overlooked by purely quantitative analyses, providing valuable insights into the data beyond numerical measurements. To address potential bias when comparing workshop data, the contributions of the two repeating participants are reported as percentages, ensuring their input is appropriately contextualized given the original intention for the workshops to build upon one another. Factor analysis was initially explored to examine potential relationships between user types, engagement levels, scenario analysis outcomes, and group-level learning; however, the small sample size limited statistical power, and no significant relationships were identified.
4. Results
The results demonstrate how the integration of participatory modeling (PM) and scenario analysis (SA) within online workshops supported stakeholder engagement, social learning, and decision-making processes. Participants engaged with the SWIM 2.0 Water Balance Model to explore and analyze scenarios related to water management challenges in the Middle Rio Grande River Basin. Analysis of workshop transcripts identified themes such as increased trust in modeling tools, enhanced knowledge exchange among diverse stakeholders, and the development of shared perspectives and goals. Quantitative data indicated an improvement in participant engagement and completion rates for scenario analysis activities, while qualitative findings highlighted the role of structured dialogue and collaborative exploration in fostering actionable insights. Together, these results point to the potential of PM and SA to facilitate stakeholder empowerment and support the practical application of scientific data for water resource management and policy development.
4.1. Characteristics of Workshop Participants
Our workshop participants are described below to provide better context for the findings. A total of 45 attendances were recorded across the workshops, representing 37 unique individuals. Notably, there was substantial dropout: only four individuals participated in at least two workshops, and only two attended all three workshops.
Participants in the workshops were assigned to different water user types based on the SWIM 2.0 user categories, including city residents, educators, environmentalists, farmers, regulators, researchers, rural residents, students, and water managers. These categories were created based on previous work by [46]. In W1A (n = 8), the composition included three students, two environmentalists, one educator, two water managers, and one regulator. W1B (n = 14) consisted of four students, one environmentalist, two educators, one water manager, three city residents, and two farmers. W2 (n = 13) comprised four students, two environmentalists, three educators, two water managers, one regulator, and one researcher. Lastly, W3 (n = 10) included seven students, one environmentalist, one regulator, and one city resident.
While the initial categorization of participants into water user types was based on their stated interests and goals derived from pre-surveys, it is important to acknowledge that many participants may identify with more than one water user type. Furthermore, the definitions of water user types were refined based on participants’ discussions, utilizing the definitions provided by [46]. This acknowledgment is crucial, as individuals’ water usage patterns vary depending on their geographical location, lifestyle, and occupation. Additionally, the number of participants in each workshop had some influence on the level of engagement during Zoom sessions, and the composition of participant types affected the extent of conversations between participants.
4.2. Overall Experience with SWIM 2.0 and Miro
The participant experience with SWIM 2.0 and the collaborative Miro platform varied across the workshops, highlighting both challenges and opportunities for improvement. In W1, participants appreciated brainstorming solutions to water challenges, such as salinity issues, but found the technology component demanding. One participant noted, “The technology part was a bit challenging, but I enjoyed it”, while another emphasized, “I got distracted by needing to learn quickly to interface with Miro … and having to hear other people also learning to use it at the same time as trying to process data on SWIM”. This dual learning curve for Miro and SWIM 2.0 required significant cognitive effort, which affected engagement with the data. By W2, participants began adapting to the tools, with one stating, “I liked Miro once I figured it out … it helped me see the Big Picture”. However, the need for more structured guidance, such as user manuals and step-by-step instructions, persisted. In W3, participants noted the value of facilitated discussions around SWIM 2.0 outputs, with one reflecting, “Having someone walk us through the graphs was most helpful”. The collaborative potential of Miro was also recognized, with suggestions to expand its reach to growers via mobile apps or cheat sheets for model operations. These insights underscore the importance of structured facilitation, accessible resources, and iterative tool improvements to enhance participant engagement and learning in such collaborative platforms.
4.3. Self-Reported Metrics
The post-survey included questions related to engagement, trust, and understanding. These self-reported outcomes are adapted from [48] and serve as a valuable tool for gaining insights into participants’ perceptions of their enjoyment, trust, and comprehension of the overall workshops. Post-survey completion rates varied across the workshops: W1A had eight, W1B nine, W2 eight, and W3 seven completed surveys. The resulting percentages were computed against each workshop’s respondent pool to account for these varying group sizes.
Self-reported results across the workshops and survey questions are shown in Figure 4. For W1A, the self-reported engagement levels for E1 showed 13% neutral, 63% agree, and 25% strongly agree. Similarly, for E2, the responses were 13% neutral, 50% agree, and 38% strongly agree (Figure 4A). Concerning trust (T1), the results indicated 13% strongly disagree, 50% agree, and 38% strongly agree. As for understanding (U1), the responses were 13% neutral, 50% agree, and 38% strongly agree, whereas U2 showed 75% neutral and 25% agree. In the case of U3, the responses were 13% disagree, 13% neutral, 63% agree, and 13% strongly agree. For U4, 13% were neutral, 25% agree, and 63% strongly agree.
In W1B, the self-reported engagement levels for E1 showed 11% neutral, 56% agree, and 33% strongly agree, while for E2, the responses were 78% agree and 22% strongly agree (Figure 4B). For trust (T1), the results showed 11% strongly disagree, 11% neutral, 56% agree, and 22% strongly agree. For understanding (U1), the responses were 11% neutral, 44% agree, and 44% strongly agree. U2 indicated 56% neutral, 22% agree, and 22% strongly agree. In the case of U3, the responses were 11% disagree, 22% neutral, 56% agree, and 11% strongly agree. For U4, 11% were neutral, 78% agree, and 11% strongly agree.
For W2, the self-reported engagement levels for E1 showed 50% agree and 50% strongly agree, whereas for E2, the responses were 63% agree and 38% strongly agree (Figure 4C). For trust, T1 showed 13% neutral, 75% agree, and 13% strongly agree. For U1, the responses were 25% neutral and 75% agree. U2 showed 50% neutral, 38% agree, and 13% strongly agree. For U3, the responses were 25% neutral, 50% agree, and 25% strongly agree. For U4, the responses were 38% neutral, 25% agree, and 38% strongly agree.
In W3, the self-reported engagement levels for E1 were 14% neutral, 57% agree, and 29% strongly agree. E2 was 100% agree (Figure 4D). T1 results showed 14% neutral, 57% agree, and 29% strongly agree. U1 showed 86% agree and 14% strongly agree. U2 showed 43% neutral and 57% agree. For U3, the responses were 29% neutral, 57% agree, and 14% strongly agree. For U4, 29% were neutral, 43% agree, and 29% strongly agree.
4.4. Engagement Levels
In W1A, the highest counts for group engagement coding were observed in the “involve” level (21, 58%) and “expose” level (12, 33%), with low counts in the “synthesize” level (2, 6%) and “analyze” level (1, 3%), and zero counts in the “decide” level (Figure 5). In W1B, the highest count was observed in the “involve” level (14, 61%), with moderate counts in the “expose” level (7, 30%) and “analyze” level (2, 9%), and no counts in the “decide” and “synthesize” levels. W2 exhibited the highest counts in the “involve” level (60, 78%) and “decide” level (8, 10%), moderate counts in the “expose” level (2, 3%) and “synthesize” level (6, 8%), and a lower count in the “analyze” level (1, 1%). W3 had the highest count in the “involve” level (31, 80%), of which 13 came from mapping to SWIM as a group, moderate counts in the “analyze” level (4, 10%), “synthesize” level (2, 5%), and “decide” level (2, 5%), and no counts in the “expose” level. The “decide” category (deriving decisions) remained low across all workshops. “Involve” (interacting) made up the highest percentage in all workshops, but we see a 20 to 30% higher coding percentage in W2 compared to W1A, W1B, and W3. Individual engagement coding counts were low except in the “synthesize” level in W3 (Figure 6). In W1A, the “analyze” level had 5 counts, the “synthesize” level had 3, and the “decide” level had 2. In W1B, the “analyze” level had 5 counts and the “decide” level had 2. For W2, there were 2 counts at the “synthesize” level and 1 at the “decide” level. The returning participants in W2 and W3 contributed 41% and 25% of the overall engagement coding counts, respectively.
4.5. Group-Level Learning
In W1A, the highest counts coded (refer to Table 3) from the transcripts occurred in the activity level (27, 34%) and model level (22, 24%), followed by the interface level (14, 18%), with moderate counts in the tool-use level (7, 9%) and policy-world level (8, 10%) (Figure 7). W1B exhibited the lowest counts in policy-world talk (3, 7%) and the tool-use level (3, 7%), with moderate counts in the activity level (15, 36%), interface level (5, 12%), and model level (16, 38%). W2 displayed the highest counts in the tool-use level (18, 24%) and policy-world talk (24, 32%), along with moderate counts in the activity level (6, 8%) and model level (22, 29%), and the same count as W1B for the interface level (5, 7%). W3 demonstrated the highest count of model-level talk, totaling 32 (56%) instances, while having the lowest counts in the activity level (1, 2%), interface level (4, 7%), and tool-use level (1, 2%), and a moderate count in policy-world talk (19, 33%). Looking at the counts, W1A and W1B were primarily made up of activity-level talk, in contrast to W2 and W3, which had higher counts of policy-world and model-level codes, respectively. Looking at the percentages, the activity and model levels make up more than 50% of the conversations. W2 had the highest percentage of tool-use-level talk across all workshops and an increased percentage of policy-world talk, but was on par with W1A in model-level percentage and lower than W1B. More than fifty percent of all conversation in W3 was model-level talk, with a 1% increase in policy-world talk compared to W2. Returning participants contributed 36% and 22% of overall group-level learning counts in W2 and W3, respectively.
4.6. Scenario Analysis
The results obtained from the scenario analysis levels across all workshops indicate a substantial increase in the completion of the scenario analysis process (
Figure 8). Note that scenario analysis levels in W1A and W1B were developed using only one canned scenario: How will an extended drought scenario affect river supply and storage? In W1A, participants generated five Model Output Interpretations (MOIs), two narratives, and three storylines. W1B participants generated three MOIs, two narratives, and one instance of individual-level scenario analysis. W2 participants generated four MOIs, one narrative, and four storylines. In W3, participants generated four MOIs, five narratives, two storylines, and twelve instances of group-level scenario analysis.
A notable comparison can be made between W2 (individual scenario analysis activities) and W3 (group scenario analysis activities). W3, conducted as a group, produced more than double the number of participant-generated outputs (23 total) compared to W2 (9 total). In W2, where activities were performed individually, 44% of the generated outputs were MOIs, 11% were narratives, 44% were storylines, and there were no instances of scenario analysis. In contrast, in W3, characterized by group activities, scenario analysis accounted for 52% of all generated scenario analysis levels. Returning participants accounted for two of the four MOIs (50%) in W2 and 100% of the storylines. In W3, they contributed one of the four MOIs (25%) and four of the twelve scenario analysis levels (33%). Overall, they contributed 89% and 22% of the scenario analysis coding counts in W2 and W3, respectively.
5. Discussion
This study aimed to address three primary research questions: (RQ1) How do stakeholders reason, learn, and make decisions about water facilitated by participatory modeling (PM) and scenario analysis (SA) methods? (RQ2) Is there an increase in engagement with models through the implementation of PM and SA? (RQ3) What insights were gained from conducting SA activities with a variety of stakeholders? Results indicate that PM and SA successfully fostered social learning, trust, engagement, and collaborative reasoning among diverse stakeholders. Notably, while individual interaction with the SWIM 2.0 modeling tool was challenging for some participants, group-level collaborative learning processes enabled higher levels of SA and rich policy discussions. This suggests that collective efforts rather than individual training were pivotal to the successful outcomes observed.
To begin addressing RQ1, we start with the self-reported metrics, which show that most participants found the workshops engaging and that information was more easily understood by participants in W2 and W3. Although decision-making discussions were limited, participants valued the workshops for highlighting the importance of considering diverse perspectives and data in regional water management. Most participants reported that their thinking about water issues was influenced by the activities, with one participant noting that the process “solidified the commitment toward binational and multisectoral collaboration”. However, some participants did not identify as decision-makers, perceiving decision-making as confined to higher levels of policy-making, which may have limited their engagement with the decision-making aspects of SA. This dynamic reflects the broader challenges of addressing power hierarchies in water governance, as discussed by [
17,
51].
When addressing RQ2 and engagement through PM and SA, it is important to note that most of the participants in W3 had not taken part in the earlier workshops, meaning that training provided in previous workshops cannot explain success in W3. We also recognize the contribution returning participants made to all coding counts, yet we still saw an increase in scenario analysis and in engagement with synthesizing scenario runs. Even so, in W3, returning participants did not account for more than 50% of the counts for any activity except the engagement code. This observation suggests a salient conclusion: the collective learning process, rather than tool-based training and individual interactions with the tool, may be of greater importance to the successful outcomes of such workshops. Group-level activities also enabled rich discussion even when users were not able to overcome the barrier of model complexity. This suggests that the use of participatory processes, collaboration, and knowledge-sharing among stakeholders can lead to increased learning with models and connection to the policy-world level [
47]. On the other hand, activities in the first two workshops, which were initially designed to scaffold learning about the tools so that they could complete higher levels of scenario analysis, may have, in fact, distracted participants from such higher-order reasoning. This result would be consistent with [
52], who aligned phases of experiential learning with theoretical models of technology adoption, with perceived usefulness preceding tool usability in the adoption process. Yet, even when usefulness was perceived, tool usability remained a significant barrier in [
52]. W3 participants collectively overcame the complexities of tool usage because their shared interest in learning the results from scenario analysis motivated them to work together towards surmounting the difficulties of learning the new technology.
The long-term impacts of studies incorporating PM and SA have significant potential to contribute to sustainability initiatives, particularly in regions facing complex water management challenges [
3]. By fostering collaborative learning, trust-building, and multi-stakeholder engagement, these approaches empower participants to integrate diverse perspectives into decision-making processes, which was our primary focus for RQ3. The iterative nature of PM and SA enables stakeholders to explore future scenarios, assess trade-offs, and identify shared priorities, laying a foundation for adaptive and resilient water governance. Moreover, the knowledge generated through such processes can inform the development of policies that are both context-specific and forward-looking, addressing immediate challenges while considering long-term sustainability. Importantly, these methodologies highlight the value of inclusive participation, which ensures that marginalized voices are heard and that solutions are equitable. As demonstrated in this study, the emphasis on group learning and engagement transcends individual tool limitations, fostering a deeper understanding of interconnected water issues and reinforcing the commitment to collective action. By embedding these practices in broader sustainability frameworks, the outcomes can extend beyond individual workshops to inspire systemic change and long-term stewardship of critical resources.
5.1. Strengths and Limitations
One of the key strengths of this study was its ability to foster trust in the SWIM 2.0 model, even among participants with no prior experience using modeling tools. This trust was demonstrated through the increased frequency of discussions about the model’s utility, its implications for policy, and participant-generated recommendations for improvement. The collaborative design of the workshops encouraged participants to engage with the model, overcoming technical challenges and promoting shared learning. Additionally, participants were able to explore regional water scenarios collectively, providing meaningful feedback and showing interest in applying the model to broader contexts. This aligns with previous findings emphasizing the value of participatory approaches in improving stakeholder engagement and decision-making [
53].
However, self-reported data pose inherent limitations, including response bias, where participants may misrepresent their behaviors due to misunderstandings or social desirability bias. This distortion can be particularly problematic in program evaluations, as recalibration of bias after an intervention can obscure accurate outcome assessments [
54,
55,
56]. The reliance on self-reported feedback in this study necessitates cautious interpretation of findings.
Recruitment and retention also posed significant barriers. Of the 215 invitations sent, each workshop retained only 8–20 participants, with further dropout across sessions. These figures highlight the challenges of involving stakeholders in participatory initiatives, especially in virtual settings. This limitation underscores the need to scale similar efforts to larger and more consistent participant pools to ensure generalizability and robust outcomes.
The complexity of the SWIM 2.0 tool was another limitation. Participants often required substantial support to navigate the tool, which, while mitigated through group-level collaboration, still presented challenges. Despite these challenges, the collaborative approach fostered trust in the model, as evidenced by the high levels of engagement in scenario analysis and recommendations for improvement.
Finally, while this study initiated conversations about bridging SA with decision-making, it did not fully address this gap. Bridging this divide will require greater involvement of policymakers and more structured frameworks for multi-agent decision-making [
57]. Enhanced engagement frameworks and the development of concise, user-friendly tools, such as mobile apps or interactive web-based platforms, could help overcome these limitations and foster broader adoption among diverse stakeholder groups.
While the study demonstrated important strengths, including fostering trust in the model and promoting collaborative learning, it also highlighted significant limitations in recruitment, retention, virtual workshop effectiveness, and tool complexity. These findings provide a foundation for refining PM and SA processes to better support sustainability efforts in diverse contexts.
5.2. Practical Implications
The findings underscore the importance of designing PM and SA workshops that prioritize group-level collaboration and scaffolded learning processes over extensive individual training. While individual engagement with modeling tools remains valuable, collective learning appears more critical for achieving higher levels of SA and meaningful policy discussions. Facilitators should emphasize structured, collaborative activities that enable participants to contribute regardless of their technical expertise. Additionally, clear definitions of roles and decision-making authority should be established to address the uncertainties expressed by participants about their involvement in policy-making.
The virtual workshop format, while offering accessibility for geographically dispersed stakeholders, highlighted the need for face-to-face or hybrid approaches that combine the benefits of in-person interaction with the convenience of virtual participation. Hybrid models could help mitigate recruitment and retention challenges, ensure consistent participation across sessions, and foster deeper engagement and trust-building. Scaling up future workshops should also involve comprehensive recruitment strategies to ensure diverse representation, alongside iterative training to build upon prior learning.
Despite these limitations, this study demonstrated the potential of PM and SA methods to foster trust in models, encourage collaborative learning, and influence participants’ understanding of regional water issues. As emphasized by [
1,
58], the processes rendered the model salient, credible, and legitimate, enabling participants to transition from observing the model to engaging in discussions on policy-making. These findings highlight the importance of participatory approaches in addressing complex environmental challenges and provide a foundation for expanding the application of such methods in future decision-making contexts.
6. Conclusions
This study highlights the potential of participatory modeling (PM) and scenario analysis (SA) to support stakeholder engagement and collaborative exploration of water management challenges. By integrating these methods, participants engaged with SWIM 2.0 to examine complex issues, fostering trust in the model and encouraging co-production of knowledge. While technical challenges posed barriers for some participants, the process facilitated collective learning and informed discussions, with outputs offering valuable insights for policy considerations. The findings emphasize the importance of creating accessible and collaborative spaces where diverse perspectives can inform sustainable resource management strategies. Participants’ contributions also guided improvements to SWIM 2.0, enhancing its applicability for broader community engagement.
However, challenges to achieving higher levels of engagement, particularly decision-making, persist. Technical complexity, limited participant familiarity with tools like SWIM 2.0, and perceived constraints on decision-making authority were key barriers. Addressing these limitations will require further refinement of participatory processes, including simplified tool interfaces, clear guidance, and frameworks that empower participants to translate scenario analysis into actionable decisions. Efforts to engage stakeholders more effectively could also focus on building confidence and reducing perceived power imbalances within group settings.
Future research should explore strategies to bridge the gap between scenario analysis and decision-making by leveraging stakeholder-driven insights to shape policy recommendations. Continued iterations of SWIM 2.0 and similar tools will be instrumental in fostering deeper engagement, particularly when paired with efforts to scale participation and sustain collaboration through ongoing workshops or community networks, such as the established listserv from this study.
In conclusion, this study underscores the potential of participatory modeling approaches to promote collaboration, build trust, and generate actionable insights in complex decision-making contexts. By addressing existing barriers and expanding opportunities for inclusive engagement, such approaches can play a critical role in advancing sustainable water resource management.
Author Contributions
Conceptualization, K.S. and D.P.; data curation, K.S.; formal analysis, K.S.; investigation, K.S.; methodology, K.S. and D.P.; project administration, K.S.; resources, K.S. and D.P.; software, K.S.; supervision, D.P.; validation, K.S.; visualization, K.S.; writing—original draft, K.S.; writing—review and editing, K.S. and D.P. All authors have read and agreed to the published version of the manuscript.
Funding
This material is based upon work supported by the National Science Foundation under Grant No. 1835897. This work was supported by The Agriculture and Food Research Initiative (AFRI) grant no. 2015-68007-23130 from the USDA National Institute of Food and Agriculture. This work used resources from CyberShARE Center of Excellence supported by NSF Grant HDR-1242122. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This publication was supported by the National Science Foundation under Grant No. 2228180 and the University of Texas at El Paso Department of Earth, Environmental, and Resource Sciences.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of The University of Texas at El Paso (1627737-6, July 2022) for studies involving humans.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
Acknowledgments
We express our gratitude to Luis Garnica Chavira for providing support and his assistance with edits and knowledge about SWIM 2.0.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
SES: Social–environmental system
PM: Participatory modeling
MRGRB: Middle Rio Grande River Basin
MOI: Model Output Interpretation
SA: Scenario analysis
References
- Cash, D.; Clark, W.C.; Alcock, F.; Dickson, N.; Eckley, N.; Jäger, J. Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making. SSRN Electron. J. 2002. [Google Scholar] [CrossRef]
- Rounsevell, M.D.; Arneth, A.; Brown, C.; Cheung, W.W.; Gimenez, O.; Holman, I.; Leadley, P.; Luján, C.; Mahevas, S.; Maréchaux, I.; et al. Identifying uncertainties in scenarios and models of socio-ecological systems in support of decision-making. One Earth 2021, 4, 967–985. [Google Scholar] [CrossRef]
- Rounsevell, M.D.A.; Metzger, M.J. Developing qualitative scenario storylines for environmental change assessment: Developing qualitative scenario storylines. Wiley Interdiscip. Rev. Clim. Chang. 2010, 1, 606–619. [Google Scholar] [CrossRef]
- Cai, L.; Zhu, Y. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era. Data Sci. J. 2015, 14, 2. [Google Scholar] [CrossRef]
- Lee, I. Big data: Dimensions, evolution, impacts, and challenges. Bus. Horizons 2017, 60, 293–303. [Google Scholar] [CrossRef]
- Shanmugam, D.; Dhilipan, D.; Vignesh, A.; Prabhu, T. Challenges in Data Quality and Complexity of Managing Data Quality Assessment in Big Data. Int. J. Recent Technol. Eng. (IJRTE) 2020, 9, 589–593. [Google Scholar] [CrossRef]
- Dimara, E.; Zhang, H.; Tory, M.; Franconeri, S. The Unmet Data Visualization Needs of Decision Makers Within Organizations. IEEE Trans. Vis. Comput. Graph. 2022, 28, 4101–4112. [Google Scholar] [CrossRef]
- Jagadish, H.V.; Gehrke, J.; Labrinidis, A.; Papakonstantinou, Y.; Patel, J.M.; Ramakrishnan, R.; Shahabi, C. Big data and its technical challenges. Commun. ACM 2014, 57, 86–94. [Google Scholar] [CrossRef]
- Philip Chen, C.; Zhang, C.Y. Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Inf. Sci. 2014, 275, 314–347. [Google Scholar] [CrossRef]
- Mahmoud, M.; Liu, Y.; Hartmann, H.; Stewart, S.; Wagener, T.; Semmens, D.; Stewart, R.; Gupta, H.; Dominguez, D.; Dominguez, F.; et al. A formal framework for scenario development in support of environmental decision-making. Environ. Model. Softw. 2009, 24, 798–808. [Google Scholar] [CrossRef]
- Cairns, G.; Wright, G.; Fairbrother, P. Promoting articulated action from diverse stakeholders in response to public policy scenarios: A case analysis of the use of ‘scenario improvisation’ method. Technol. Forecast. Soc. Chang. 2016, 103, 97–108. [Google Scholar] [CrossRef]
- Cox, L.A. Addressing Wicked Problems and Deep Uncertainties in Risk Analysis. In AI-ML for Decision and Risk Analysis; International Series in Operations Research & Management Science; Springer International Publishing: Cham, Switzerland, 2023; Volume 345, pp. 215–249. [Google Scholar] [CrossRef]
- Marttunen, M.; Lienert, J.; Belton, V. Structuring problems for Multi-Criteria Decision Analysis in practice: A literature review of method combinations. Eur. J. Oper. Res. 2017, 263, 1–17. [Google Scholar] [CrossRef]
- Swart, R.; Raskin, P.; Robinson, J. The problem of the future: Sustainability science and scenario analysis. Glob. Environ. Chang. 2004, 14, 137–146. [Google Scholar] [CrossRef]
- Moallemi, E.A.; Kwakkel, J.; De Haan, F.J.; Bryan, B.A. Exploratory modeling for analyzing coupled human-natural systems under uncertainty. Glob. Environ. Chang. 2020, 65, 102186. [Google Scholar] [CrossRef]
- Jordan, R.; Gray, S.; Zellner, M.; Glynn, P.D.; Voinov, A.; Hedelin, B.; Sterling, E.J.; Leong, K.; Olabisi, L.S.; Hubacek, K.; et al. Twelve Questions for the Participatory Modeling Community. Earth’s Future 2018, 6, 1046–1057. [Google Scholar] [CrossRef]
- Zellner, M.; Milz, D.; Lyons, L.; Hoch, C.; Radinsky, J. Finding the Balance Between Simplicity and Realism in Participatory Modeling for Environmental Planning. Environ. Model. Softw. 2022, 157, 105481. [Google Scholar] [CrossRef]
- Pahl-Wostl, C.; Hare, M. Processes of social learning in integrated resources management. J. Community Appl. Soc. Psychol. 2004, 14, 193–206. [Google Scholar] [CrossRef]
- Carson, A.; Windsor, M.; Hill, H.; Haigh, T.; Wall, N.; Smith, J.; Olsen, R.; Bathke, D.; Demir, I.; Muste, M. Serious gaming for participatory planning of multi-hazard mitigation. Int. J. River Basin Manag. 2018, 16, 379–391. [Google Scholar] [CrossRef]
- Cuppen, E.; Nikolic, I.; Kwakkel, J.; Quist, J. Participatory multi-modelling as the creation of a boundary object ecology: The case of future energy infrastructures in the Rotterdam Port Industrial Cluster. Sustain. Sci. 2021, 16, 901–918. [Google Scholar] [CrossRef]
- Pennington, D.; Bondank, E.; Clifton, J.; Killion, A.; Salas, K.; Shew, A.; Sterle, K.; Wilson, B. EMBeRS: An Approach for Igniting Participatory Learning and Synthesis. Int. Congr. Environ. Model. Softw. 2018, 68, 1–10. [Google Scholar]
- Smetschka, B.; Gaube, V. Co-creating formalized models: Participatory modelling as method and process in transdisciplinary research and its impact potentials. Environ. Sci. Policy 2020, 103, 41–49. [Google Scholar] [CrossRef]
- Biggs, R.; Preiser, R.; de Vos, A.; Schlüter, M.; Maciejewski, K.; Clements, H. The Routledge Handbook of Research Methods for Social-Ecological Systems, 1st ed.; Routledge: London, UK, 2021. [Google Scholar] [CrossRef]
- Walz, A.; Lardelli, C.; Behrendt, H.; Grêt-Regamey, A.; Lundström, C.; Kytzia, S.; Bebi, P. Participatory scenario analysis for integrated regional modelling. Landsc. Urban Plan. 2007, 81, 114–131. [Google Scholar] [CrossRef]
- Chavira, L.G.; Villanueva-Rosales, N.; Heyman, J.; Pennington, D.D.; Salas, K. Supporting Regional Water Sustainability Decision-Making through Integrated Modeling. In Proceedings of the 2022 IEEE International Smart Cities Conference (ISC2), Pafos, Cyprus, 26–29 September 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar] [CrossRef]
- Xexakis, G.; Trutnevyte, E. Model-based scenarios of EU27 electricity supply are not aligned with the perspectives of French, German, and Polish citizens. Renew. Sustain. Energy Transit. 2022, 2, 100031. [Google Scholar] [CrossRef]
- Leong, C. Narratives and water: A bibliometric review. Glob. Environ. Chang. 2021, 68, 102267. [Google Scholar] [CrossRef]
- Heer, J.; van Ham, F.; Carpendale, S.; Weaver, C.; Isenberg, P. Creation and Collaboration: Engaging New Audiences for Information Visualization. In Information Visualization; Lecture Notes in Computer Science; Kerren, A., Stasko, J.T., Fekete, J.D., North, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 4950, pp. 92–133. [Google Scholar] [CrossRef]
- Segel, E.; Heer, J. Narrative Visualization: Telling Stories with Data. IEEE Trans. Vis. Comput. Graph. 2010, 16, 1139–1148. [Google Scholar] [CrossRef]
- Tong, C.; Roberts, R.; Borgo, R.; Walton, S.; Laramee, R.; Wegba, K.; Lu, A.; Wang, Y.; Qu, H.; Luo, Q.; et al. Storytelling and Visualization: An Extended Survey. Information 2018, 9, 65. [Google Scholar] [CrossRef]
- Cho, S.J.; Klemz, C.; Barreto, S.; Raepple, J.; Bracale, H.; Acosta, E.A.; Rogéliz-Prada, C.A.; Ciasca, B.S. Collaborative Watershed Modeling as Stakeholder Engagement Tool for Science-Based Water Policy Assessment in São Paulo, Brazil. Water 2023, 15, 401. [Google Scholar] [CrossRef]
- Galan, J.; Galiana, F.; Kotze, D.J.; Lynch, K.; Torreggiani, D.; Pedroli, B. Landscape adaptation to climate change: Local networks, social learning and co-creation processes for adaptive planning. Glob. Environ. Chang. 2023, 78, 102627. [Google Scholar] [CrossRef]
- Voinov, A.; Kolagani, N.; McCall, M.K.; Glynn, P.D.; Kragt, M.E.; Ostermann, F.O.; Pierce, S.A.; Ramu, P. Modelling with stakeholders—Next generation. Environ. Model. Softw. 2016, 77, 196–220. [Google Scholar] [CrossRef]
- Voinov, A.; Çöltekin, A.; Chen, M.; Beydoun, G. Virtual geographic environments in socio-environmental modeling: A fancy distraction or a key to communication? Int. J. Digit. Earth 2018, 11, 408–419. [Google Scholar] [CrossRef]
- Andrienko, G.; Andrienko, N.; Jankowski, P.; Keim, D.; Kraak, M.; MacEachren, A.; Wrobel, S. Geovisual analytics for spatial decision support: Setting the research agenda. Int. J. Geogr. Inf. Sci. 2007, 21, 839–857. [Google Scholar] [CrossRef]
- Sukanya, S.; Joseph, S. Chapter 4—Climate change impacts on water resources: An overview. In Visualization Techniques for Climate Change with Machine Learning and Artificial Intelligence; Elsevier: Amsterdam, The Netherlands, 2023; pp. 55–76. [Google Scholar] [CrossRef]
- Allington, G.R.H.; Fernandez-Gimenez, M.E.; Chen, J.; Brown, D.G. Combining participatory scenario planning and systems modeling to identify drivers of future sustainability on the Mongolian Plateau. Ecol. Soc. 2018, 23, 9. [Google Scholar] [CrossRef]
- Kepner, W.G.; Semmens, D.J.; Bassett, S.D.; Mouat, D.A.; Goodrich, D.C. Scenario Analysis for the San Pedro River, Analyzing Hydrological Consequences of a Future Environment. Environ. Monit. Assess. 2004, 94, 115–127. [Google Scholar] [CrossRef] [PubMed]
- Keseru, I.; Coosemans, T.; Macharis, C. Stakeholders’ preferences for the future of transport in Europe: Participatory evaluation of scenarios combining scenario planning and the multi-actor multi-criteria analysis. Futures 2021, 127, 102690. [Google Scholar] [CrossRef]
- Peterson, G.D.; Cumming, G.S.; Carpenter, S.R. Scenario Planning: A Tool for Conservation in an Uncertain World. Conserv. Biol. 2003, 17, 358–366. [Google Scholar] [CrossRef]
- Reilly, M.; Willenbockel, D. Managing uncertainty: A review of food system scenario analysis and modelling. Philos. Trans. R. Soc. Biol. Sci. 2010, 365, 3049–3063. [Google Scholar] [CrossRef]
- Rinaldi, P.N. Dealing with complex and uncertain futures: Glimpses from transdisciplinary water research. Futures 2023, 147, 103113. [Google Scholar] [CrossRef]
- Holmes, R.N.; Mayer, A.; Gutzler, D.S.; Chavira, L.G. Assessing the Effects of Climate Change on Middle Rio Grande Surface Water Supplies Using a Simple Water Balance Reservoir Model. Earth Interact. 2022, 26, 168–179. [Google Scholar] [CrossRef]
- Ward, F.A.; Mayer, A.S.; Garnica, L.A.; Townsend, N.T.; Gutzler, D.S. The economics of aquifer protection plans under climate water stress: New insights from hydroeconomic modeling. J. Hydrol. 2019, 576, 667–684. [Google Scholar] [CrossRef]
- Hargrove, W.; Heyman, J.; Mayer, A.; Mirchi, A.; Granados-Olivas, A.; Ganjegunte, G.; Gutzler, D.; Pennington, D.; Ward, F.; Chavira, L.G.; et al. The future of water in a desert river basin facing climate change and competing demands: A holistic approach to water sustainability in arid and semi-arid regions. J. Hydrol. Reg. Stud. 2023, 46, 101336. [Google Scholar] [CrossRef]
- Hargrove, W.L.; Heyman, J.M. A Comprehensive Process for Stakeholder Identification and Engagement in Addressing Wicked Water Resources Problems. Land 2020, 9, 119. [Google Scholar] [CrossRef]
- Pennington, D. A conceptual model for knowledge integration in interdisciplinary teams: Orchestrating individual learning and group processes. J. Environ. Stud. Sci. 2016, 6, 300–312. [Google Scholar] [CrossRef]
- Xexakis, G.; Trutnevyte, E. Are interactive web-tools for environmental scenario visualization worth the effort? An experimental study on the Swiss electricity supply scenarios 2035. Environ. Model. Softw. 2019, 119, 124–134. [Google Scholar] [CrossRef]
- Radinsky, J.; Milz, D.; Zellner, M.; Pudlock, K.; Witek, C.; Hoch, C.; Lyons, L. How planners and stakeholders learn with visualization tools: Using learning sciences methods to examine planning processes. J. Environ. Plan. Manag. 2017, 60, 1296–1323. [Google Scholar] [CrossRef]
- Mahyar, N.; Kim, S.H.; Kwon, B.C. Towards a Taxonomy for Evaluating User Engagement in Information Visualization. Workshop Pers. Vis. Explor. Everyday Life 2015, 3, 4. [Google Scholar]
- Fallon, A.L.; Lankford, B.A.; Weston, D. Navigating wicked water governance in the “solutionscape” of science, policy, practice, and participation. Ecol. Soc. 2021, 26, art37. [Google Scholar] [CrossRef]
- Pennington, D.D. Collaborative, cross-disciplinary learning and co-emergent innovation in eScience teams. Earth Sci. Inform. 2011, 4, 55–68. [Google Scholar] [CrossRef]
- Elsawah, S.; Bakhanova, E.; Hämäläinen, R.P.; Voinov, A. A Competency Framework for Participatory Modeling. Group Decis. Negot. 2023, 32, 569–601. [Google Scholar] [CrossRef]
- Rosenman, R.; Tennekoon, V.; Hill, L.G. Measuring bias in self-reported data. Int. J. Behav. Healthc. Res. 2011, 2, 320. [Google Scholar] [CrossRef]
- Garau, C. Focus on Citizens: Public Engagement with Online and Face-to-Face Participation—A Case Study. Future Internet 2012, 4, 592–606. [Google Scholar] [CrossRef]
- Warkentin, M.E.; Sayeed, L.; Hightower, R. Virtual Teams versus Face-to-Face Teams: An Exploratory Study of a Web-based Conference System. Decis. Sci. 1997, 28, 975–996. [Google Scholar] [CrossRef]
- Motlaghzadeh, K.; Eyni, A.; Behboudian, M.; Pourmoghim, P.; Ashrafi, S.; Kerachian, R.; Hipel, K.W. A multi-agent decision-making framework for evaluating water and environmental resources management scenarios under climate change. Sci. Total Environ. 2023, 864, 161060. [Google Scholar] [CrossRef] [PubMed]
- Elsawah, S.; Filatova, T.; Jakeman, A.J.; Kettner, A.J.; Zellner, M.L.; Athanasiadis, I.N.; Hamilton, S.H.; Axtell, R.L.; Brown, D.G.; Gilligan, J.M.; et al. Eight grand challenges in socio-environmental systems modeling. Socio-Environ. Syst. Model. 2020, 2, 16226. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).