Entry

Classifying Cyber Ranges: A Case-Based Analysis Using the UWF Cyber Range

1 Department of Cybersecurity and Information Technology, The University of West Florida, Pensacola, FL 32514, USA
2 Department of Computer Science, The University of West Florida, Pensacola, FL 32514, USA
3 Department of Mathematics and Statistics, The University of West Florida, Pensacola, FL 32514, USA
* Author to whom correspondence should be addressed.
Encyclopedia 2025, 5(4), 162; https://doi.org/10.3390/encyclopedia5040162
Submission received: 18 August 2025 / Revised: 22 September 2025 / Accepted: 8 October 2025 / Published: 10 October 2025
(This article belongs to the Section Mathematics & Computer Science)

Definition

To address gaps in cyber range survey research, this entry develops and applies a structured classification taxonomy to support the comparison, evaluation, and design of cyber ranges. The entry addresses the following question: What are the objectives and key features of current cyber ranges, and how can they be classified into a comprehensive taxonomy? The entry synthesizes existing frameworks and analyzes and classifies a variety of documented cyber ranges to identify similarities and gaps in current classification methods. The findings indicate recurring design elements across ranges and persistent gaps in standardization, and demonstrate how the University of West Florida (UWF) Cyber Range exemplifies the application of the taxonomy in practice. The goal is to facilitate informed decision-making by cybersecurity professionals when choosing platforms and to support academic research in cybersecurity education. By drawing on studies of other cyber ranges for comparison with the UWF Cyber Range, this taxonomy aims to contribute to the documentation of cyber ranges by providing a clear understanding of the current cyber range landscape.

1. Introduction

As modern technology has evolved, so have cybersecurity threats. Cyberattacks from the Morris Worm [1] to the Colonial Pipeline ransomware attack [2] have changed the way we approach defending our systems and infrastructure. More recently, the development of artificial intelligence (AI) has revolutionized the way we detect cyberattacks. Researching threats, testing different solutions, and training organizations on how to protect themselves have become necessary. Cyber ranges were developed to fill that need: to provide a platform for research, testing, and training.
A cyber range is a virtual environment used to simulate real-world attacks for cybersecurity training, testing, and research without harming working systems [3]. Cyber ranges offer a secure and adaptable environment for developing and honing abilities in a variety of fields, including incident response and penetration testing. Ranges provide focused instruction, realistic scenarios, and hands-on practice in a safe testing setting [4].
Persistent workforce and skills gaps continue to challenge the field of cybersecurity. According to the 2023 ISC2 Cybersecurity Workforce Study [5], 92% of organizations report skills gaps, especially in areas such as cloud security, artificial intelligence and machine learning (AI/ML), and Zero Trust implementation, and 67% report a shortage of cybersecurity personnel. These results underline the urgent need for practical, scalable training programs, such as cyber ranges, that can close these gaps across a variety of technical domains.
Initially, cyber ranges were confined to secret programs within the United States (U.S.) Department of Defense (DoD) and its contractors, only to be used by cleared employees on internal networks. They were considered the cybersecurity equivalent of the military’s “proving grounds” for tactical exercises [6]. The first “public” cyber range in the U.S. came from the University of Michigan, which intended to provide an unclassified environment for safe and world-class training [7]. From that development onwards, cyber ranges have spread globally as more organizations began to recognize the benefits of using them.
Despite this rise in creation and usage, there is a lack of standardization in how cyber ranges are structured. Most organizations create cyber ranges specific to their needs, which has led to fragmented implementations across sectors. As noted by Priyadarshini (2018) [8] and Ukwandu et al. (2020) [9], the scalability and reusability of cyber ranges across broader contexts have been limited by issues like high infrastructure costs, inconsistent architectural models, and narrow design targeting specific domains. Urias et al. (2018) [10] also stress that cross-institutional cooperation and coordinated development are hindered by platform incompatibilities. These limitations not only prevent wider adoption but also hinder the development of federated, scalable platforms that can accommodate a range of training requirements in academia, industry, and government.
Since no definitive cyber range framework is widely used, many researchers are working towards a taxonomy of the current cyber range landscape. To address these gaps in current cyber range survey research, this entry develops and applies a structured classification taxonomy to support the comparison, evaluation, and design of cyber ranges. The entry addresses the following question: What are the objectives and key features of current cyber ranges, and how can they be classified into a comprehensive taxonomy?
Specifically, this entry contributes:
  • A synthesis of existing frameworks into a unified classification taxonomy;
  • A structured classification of ten cyber ranges across sectors, highlighting similarities and gaps in the current methods;
  • A live simulation case study of the UWF Cyber Range to validate the taxonomy in a real-world setting;
  • A discussion of persistent challenges in the field and identification of future research directions to support the development of cyber ranges that are operationally reliable, scalable, and pedagogically sound.
The goal is to facilitate informed decision-making by cybersecurity professionals choosing platforms and to support academic research in cybersecurity education. By drawing on studies of other cyber ranges for comparison with the University of West Florida (UWF) Cyber Range, this taxonomy aims to contribute to the documentation of cyber ranges by providing a clear understanding of the current cyber range landscape.
The rest of the entry is organized as follows. Section 2 presents the related works, including existing frameworks and taxonomy reviews; Section 3 presents the methodology using a case study approach and also explains UWF’s Cyber Range; Section 4 presents the results in terms of classifying the cyber ranges and the criteria used for the classification; Section 5 presents the gaps in current cyber range research; Section 6 discusses future directions for cyber range research; and finally Section 7 presents the conclusions.

2. Related Works

The National Institute of Standards and Technology (NIST) defines a cyber range as a simulated, interactive platform that replicates organizational networks, systems, tools, and applications [3]. These environments are designed to provide a safe and adaptable way to deliver hands-on cybersecurity training and education, catering to various use cases. Cyber ranges may include actual hardware, be entirely virtual, or a combination of both, making them fully adaptable to an organization’s needs [4]. Being in a controlled environment, individuals can practice and develop skills without harming the host system or network.
Back in the early 2000s, the U.S. DoD and other U.S. federal government agencies began developing their own versions of simulated environments for cybersecurity training and testing [7]. As time has progressed, so have the features and reach of cyber ranges. Today, cyber ranges are used across academia, government, and the private sector, all providing cybersecurity training and education [4].
Before cyber ranges, existing testing facilities limited the development, testing, and training of cyber techniques, making it difficult to match real-world threats [11]. Prior to their development, many organizations found it too difficult to test the capabilities they had built in a secure and realistic environment [11]. Cyber ranges allow skills to be developed in a controlled setting, reducing the risk of errors that could be harmful in the real world.
Cyber ranges have become critical to cybersecurity training and education, and as the field evolves, their importance will only grow.

2.1. History of Cyber Ranges

When information security (INFOSEC) [8] began to gain attention at the U.S. federal government level in the early 1990s, the concept of interactive, simulated environments emerged to replicate real-world networks and systems for cybersecurity training. These early initiatives prepared the way for the development of advanced training techniques, which ultimately led to the cyber range environments of today. By examining the historical development of INFOSEC training, it is possible to trace how early limitations in infrastructure, education, and readiness directly contributed to the demand for more realistic, scenario-based learning platforms.
From early U.S. federal training standards and red team exercises to the creation of expansive, mission-realistic environments like the National Cyber Range, this section offers an overview of the development of cyber ranges. The growth of cyber ranges into commercial training, education, and international cooperation is then traced.
Understanding this evolution is essential to comprehending how various cyber range designs emerged and why a structured taxonomy is now necessary to categorize their diverse architectures, instructional models, and operational objectives.

2.1.1. Early Beginnings of Information Security Training

In the early 1990s, as reliance on computer systems increased in government and private industry, significant vulnerabilities began to surface [12]. The need for skilled workers grew as a result of the rapid advancements in technology and the growing interconnection of infrastructure. With INFOSEC being a relatively new concept, most organizations lacked skilled INFOSEC personnel and basic cyber hygiene practices [13].
At the time, U.S. government agencies were not adequately performing many security tasks because of insufficient personnel, training, and tools [13]. Security was often treated as an afterthought, with critical security responsibilities assigned as secondary tasks and a “figure it out yourself” training approach [13], resulting in ineffective execution. One high-profile example was the Morris worm [1] of 1988, which, within hours of its release, was said to have infected up to 6000 computers, clogging systems and disrupting the majority of major research facilities across the U.S. [12]. It affected organizations ranging from national labs, including the National Aeronautics and Space Administration’s (NASA) Ames Research Center and the U.S. Department of Energy’s (DoE) Lawrence Livermore National Laboratory, to universities such as the Massachusetts Institute of Technology (MIT), Purdue, and Cornell [12]. The worm spread primarily because of human error, exploiting two coding errors and the host systems’ lack of security, which included easy-to-guess passwords [12].
This incident highlighted the lack of security training, pushing the creation of training programs and curriculum standards for Information Assurance (IA) at the federal government level. It helped to speed up progress towards developing guidance, such as the NSTISSC standards [14] and Presidential Directives [15].
In response, the U.S. federal government created fundamental training and policy standards to address the identified gaps and the growing workforce. This initiative began with the creation of the Committee on National Security Systems and a series of Electronic DACUMs (Developing a Curriculum) workshops from 1992 to 2002 [13]. These workshops helped identify the roles, tasks, and knowledge, skills, and abilities (KSAs) required for IA work [13].
Beginning in 1992 with DACUM 1, the Awareness, Training, and Education (AT&E) levels were established and then later used in DACUM 2, developed in 1993, which created the National AT&E Matrix to guide U.S. federal training. DACUM 3 defined and categorized KSAs relevant to the work roles. In 1994, DACUM 4 took theory and turned it into practice by outlining training requirements, making way for DACUM 5, which developed drafts for the first three national training standards, later becoming the NSTISSI 40XX series [13].
Later DACUMs broadened the definitions of roles: DACUM VI (1995) formalized training for system certifiers, DACUM VII (1998) incorporated accreditation standards, such as DITSCAP, and DACUM VIII (1999) created the new role of risk analyst. The terms were changed to Information Assurance by DACUM IX in 2000, and the change was completed by DACUM X in 2002, integrating IA principles into all standards while working with industry and academia [13].
Building on the foundation set by the DACUM initiative, the National Security Telecommunications and Information Systems Security Committee (NSTISSC) developed the NSTISS-4011 standard [14], the National Standards for INFOSEC Professionals, in 1994. This report establishes the minimum training standard for INFOSEC professionals in telecommunications and automated information systems (AIS) security [14]. This was the first national-level standard for INFOSEC in the U.S., helping to define core knowledge and performance requirements for various security roles.
In 1998, U.S. President Bill Clinton’s administration elevated cybersecurity to a national security priority by issuing the Presidential Decision Directive 63 [15]. Initially addressing the need for INFOSEC to keep pace with evolving technology, the directive highlighted the increased risk of vulnerabilities that could spread through the interconnected infrastructure and cause significant damage. Addressing these vulnerabilities requires flexible, evolutionary approaches that encompass both public and private sectors, ensuring the protection of domestic and international security [15]. The directive then called for federal-private collaboration in critical infrastructure and IA education [15]. Attacks on critical infrastructure are expected to impact essential societal systems and services, making it necessary to mitigate vulnerabilities through coordinated efforts between the government and the private sector [15].
This directive led to the creation of the National INFOSEC Education and Training Program (NIETP) [13] in 1998 as a joint effort between the National Security Agency (NSA) and the Department of Homeland Security (DHS). The goal of NIETP was to be a national leader in improving INFOSEC training and education in the U.S. [13]. NIETP helped to develop national training standards, courseware evaluation for certifications, support to the President’s Critical Infrastructure Protection Board, and various academic outreach programs [13].
One of these academic programs was the NSA Centers of Academic Excellence (CAE) in IA Education program [16], launched in 1999 and still growing today. The establishment of this program by the NSA was driven by the increasing need for professionals with expertise in information assurance across multiple disciplines [16]. The NSA CAE program [17] was a way to promote institutional IA leadership and curriculum excellence. Designated schools became focal points for developing the next generation of INFOSEC professionals through their curricula.
Although these programs touched on educational infrastructure and workforce development, another significant turning point came from operational experimentation. The Defense Advanced Research Projects Agency (DARPA) launched the Information Assurance (IA) Program [18] to explore cyber defense practice under operational conditions.
Unlike most traditional red team exercises, where the goal was simply to breach a system, the DARPA IA program emphasized red and blue team collaboration. In this approach, both teams worked together toward the experimental goals [18], with the exercises intended to assess how U.S. critical infrastructure would handle a coordinated cyberattack. They showed that static, perimeter-based defenses could not keep up with well-organized cyber operations. Cyber defenses would need to become as dynamic and flexible as the attacks themselves, an idea that would become central to modern cyber range design [18]. This program marked the shift from static learning to a more dynamic defense model, utilizing real-time monitoring and adaptive configurations.
Despite early policy efforts and the rise in cybersecurity threats, the initial decade of INFOSEC training during the 1990s was largely inconsistent and poorly integrated across agencies, marked by a disconnect between policy theory and operational defense reality. Without cyber ranges or simulation-heavy environments, early INFOSEC training lacked the depth needed to train skills in dynamic threat response, adversarial thinking, or system configuration under attack [11]. The inability of early training models to simulate realistic threats underscored the need for secure, controlled spaces in which personnel could conduct full-scale cyber exercises without compromising live systems.
This shortfall became a driving force behind the development of cyber ranges. The curriculum development, national policy, and operational experimentation of the 1990s formed the precursor to modern cyber ranges. These early efforts not only shaped the form of cybersecurity training but also laid the conceptual and technical basis for the mission-oriented, scenario-based training environments of today.

2.1.2. Developmental Stages of Cyber Ranges

The 1990s laid the groundwork for national IA training standards in the U.S., and the 2000s and 2010s saw a surge in dynamic, mission-realistic environments that traditional labs could not replicate. As cybersecurity threats grew more complex and extensive, early training models, founded on policy, fixed curricula, and limited red-team experimentation, were not sufficient to prepare defenders against dynamic, adversarial threats [19]. The introduction of cyber ranges during this time frame reflects the development of these ideas into functional, technically sound platforms that can support both training and system testing at scale.
By the mid-2000s, cyberattacks had evolved from isolated incidents to extremely complex, state-sponsored campaigns that targeted critical infrastructure and defense systems. These operations exposed the weaknesses of existing test and validation environments, which struggled to keep pace with rapidly evolving cyber threats. The U.S. DoD was dealing with an increasing backlog of security flaws in its systems, which made matters worse. Reports from the Director of Operational Test and Evaluation (DOT&E) in 2012 and 2013 revealed that of over 400 cybersecurity vulnerabilities, about 90% could have been found and corrected earlier in the systems’ development [19].
These findings identified the limitations of traditional IT labs, which were unable to replicate realistic cyber environments. Legacy systems lacked the capabilities needed to test for malware behavior, complex kill chains, or zero-day exploits. To address the growing need for real-world training, the U.S. federal government began developing the National Cyber Range (NCR) [20], one of the first large-scale, fully integrated cyber ranges designed to facilitate mission-realistic, repeatable, and classified testing [21]. The NCR provides the capability to emulate military and adversary networks at a level of sophistication relevant to executing realistic cyber tests and cyber mission practices [22].
The idea behind the NCR was to be the cyber equivalent of a missile range: a secure environment capable of hosting complex cyber operations at various classification levels and degrees of technical sophistication. The NCR increases confidence that systems can withstand sophisticated, persistent cyber threats while achieving their intended mission objectives through the use of realistic training environments [19]. It can scale to meet high-capacity demands and support 24-hour operations across multiple classification levels [19], enabling simulations of defense networks and cyber threats.
Technically, the NCR featured layered virtualization, malware simulation, and classified-level isolation. These capabilities were described in official documentation:
“The isolated test beds interface through a firewall to the range management enclave. There, encapsulation tools and an automation tool kit provision resources from a common pool. Automation ensures accurate mapping between test network design and provisioned capabilities. The automation process electronically verifies the configuration of all the computers in the test environment, then generates a report for the customer to review and validate prior to the start of the test”.
[19]
These features work to keep the testing internal, ensuring no harm to the host systems and networks.
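To make the provisioning-and-verification workflow in the quoted passage more concrete, the minimal sketch below shows one way such an automated check might look: it compares provisioned hosts against the intended test-network design and produces a discrepancy report for review. The inventory format, field names, and hosts are hypothetical; the NCR's actual tooling is not described at this level of detail in the cited sources.

```python
# Hypothetical sketch: verify that provisioned hosts match the intended
# test-network design, then produce a report for review. The inventory
# format and field names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class HostSpec:
    name: str
    os: str
    services: frozenset  # e.g., frozenset({"ssh", "http"})

def verify_testbed(design: list, provisioned: list) -> list:
    """Return a list of human-readable discrepancies (empty list = verified)."""
    report = []
    provisioned_by_name = {h.name: h for h in provisioned}
    for spec in design:
        actual = provisioned_by_name.get(spec.name)
        if actual is None:
            report.append(f"MISSING host: {spec.name}")
        elif actual.os != spec.os:
            report.append(f"{spec.name}: expected OS {spec.os}, found {actual.os}")
        elif actual.services != spec.services:
            report.append(f"{spec.name}: service mismatch {sorted(spec.services ^ actual.services)}")
    return report

# Hypothetical design vs. provisioned inventory for a single host.
design = [HostSpec("dc01", "windows-server-2019", frozenset({"ldap", "dns"}))]
provisioned = [HostSpec("dc01", "windows-server-2019", frozenset({"ldap"}))]
print(verify_testbed(design, provisioned) or "Configuration verified")
```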
As U.S. federal cyber range capabilities evolved, state and commercial efforts also started to meet corresponding needs for training and workforce development. Between 2009 and 2012, the SANS Institute developed NetWars and CyberCity [23], which offered gamified, immersive cybersecurity challenges designed to build real-world skills [24]. Concurrently, Merit Network, the Michigan National Guard, and Eastern Michigan University launched the Michigan Cyber Range [7] in 2012, focusing on Supervisory Control and Data Acquisition (SCADA) simulation [25] and professional certification. It demonstrated the potential for state-level cyber ranges to offer controlled environments for operational training and academic outreach [6]. These initiatives were part of a larger trend where public–private partnerships were used to support regional incident response capabilities, local educational initiatives, and cybersecurity talent pipelines.
Despite their promise, early cyber range efforts faced numerous issues. DARPA’s NCR program was hindered by a lack of funding and communication gaps, resulting in delayed deployment and limited access across the defense community [26]. Critics observed that “the Pentagon’s fancy virtual firing range for cyberwarfare… still isn’t fully operational” [26]. In the absence of a centralized capability, individual military branches began building their own standalone ranges [26], leading to inconsistency and redundancy.

2.1.3. Evolution and Expansion Towards the Present

From prototype testbeds to established facilities integrated into U.S. DoD operational procedures, cyber ranges expanded starting in the mid-2010s, with policy changes further establishing their role in acquisition and validation processes. In 2015, revisions to DoDI 5000.02 [27] mandated that cybersecurity be included in acquisition programs, with testing environments that simulate mission-realistic cyber threats. Specifically, it mandated that environments conduct red team and penetration testing tailored to relevant threats, simulating hostile attempts to compromise program information systems within the operational environment [27]. This directive further established cyber ranges as a vital resource for mission assurance by formalizing cyber testing as part of the program acquisition and validation process.
As a result, the DoD launched enterprise-level initiatives like the Persistent Cyber Training Environment (PCTE) [28] and the National Cyber Range Complex (NCRC) [29]. PCTE, developed under the U.S. Cyber Command and managed by the U.S. Army, is an on-demand, classified training environment where Cyber Protection Teams (CPTs) practice mission scenarios and carry out exercises like Cyber Flag and Cyber Guard [30]. The NCRC, a division of the Test Resource Management Center, builds on previous DARPA models to support testing and training across secure, interconnected environments via the Joint Mission Environment Test Capability (JMETC) [29]. The NCRC’s institutional significance is reflected in its mission: it is responsible for planning, coordinating, and executing comprehensive workforce development, operational training, and cyber test and evaluation activities. To enhance system resilience and the effectiveness of offensive cyber capabilities, these programs support the DoD’s test, training, experimentation, and acquisition communities [29].
With these changes, cyber ranges came to be seen as just as important to operational readiness as conventional training ranges. During the late 2010s, ranges became institutionalized as 24/7-available assets with dedicated appropriations, directly influencing force development and acquisition decisions. They offered realistic, secure, and scalable spaces for testing offensive and defensive cyber tactics, tools, and techniques.

2.2. Existing Frameworks and Taxonomy Reviews

Cyber ranges have become key platforms for the government, private, and academic sectors to develop, test, and assess cybersecurity skills [10]. As cyber ranges have become more sophisticated and diverse, the need for structured evaluation frameworks has become clear. Many studies examine individual elements of cyber range design, such as technical architecture or a particular scenario execution; however, few provide unified taxonomies that incorporate both functional and educational concerns. As a result, the body of research is fragmented, with studies frequently limited to particular platforms, making thorough comparisons between them challenging.
The purpose of this section is to analyze six existing frameworks that guide the design, development, and evaluation of cyber ranges. These include studies by Ukwandu et al. (2020) [9], Priyadarshini (2018) [8], Chouliaras et al. (2021) [31], Beauchamp et al. (2022) [32], Chindruș and Caruntu (2023) [33], and Soceanu et al. (2017) [34].
Table 1 provides an overview of each framework’s focus and scope, including the areas of application they prioritize, their pedagogical and architectural approaches, and any gaps that have been discussed.
This table supports the more in-depth discussion that follows in Section 2.2.1 through Section 2.2.5. These sections draw from each framework to highlight common classification criteria, differences of opinion, and existing gaps. The purpose, structure, tooling, and pedagogical design elements that support the methodology and analysis described later in this entry are derived from these models. Although the goals of these models differ, they support a larger effort to develop a deeper understanding of cyber ranges, their usage, and how each use case can be evaluated.

2.2.1. Classification Elements Across the Frameworks

The purpose and use case of a cyber range are arguably the most fundamental components. Ukwandu et al. (2020) [9] and Priyadarshini (2018) [8] classify cyber ranges by their intended purpose, whether training, research, or evaluation, and distinguish between their application within educational, military, and industrial settings. Chouliaras et al. (2021) [31] advance this one step further with the classification of ranges based on real-world implementation goals, such as red teaming, cyber defense simulation, and training exercises. In contrast, Beauchamp et al. (2022) [32] and Soceanu et al. (2017) [34] concentrate solely on educational use, viewing cyber ranges as classrooms within a formal curriculum. Chindruș and Caruntu (2023) [33] focus more narrowly on competition-based training environments, where the range is a platform for finite, adversarial training exercises for team-based decision-making and incident response skills.
Another shared component across the frameworks is the architectural model used to implement cyber ranges. All six categorize ranges into simulation, emulation, hybrid, or overlay deployments. A six-pillar architecture is presented by Ukwandu et al. (2020) [9], where each pillar represents a layer of range functionality, encompassing scenario design, environment modeling, monitoring, management, teaming, and learning. By surveying 44 cyber ranges, Priyadarshini (2018) [8] expanded on this and provided a comprehensive list of deployment types and system features. Chouliaras et al. (2021) [31], however, viewed architecture from a tooling perspective, reporting on platforms and orchestration tools such as OpenStack, Docker, and VMware, which are used in real-world deployments. Soceanu et al. (2017) [34] offer a complete-stack learning model by bringing together Moodle and OpenStack to facilitate large-scale virtual lab deployment in organizations. Chindruș and Caruntu (2023) [33] and Beauchamp et al. (2022) [32] cover architecture in terms of training execution, providing little information about the underlying systems.
Another aspect that is frequently addressed is how user roles are established and teams are arranged within scenarios. The most extensive role taxonomy is provided by Ukwandu et al. (2020) [9], covering red, blue, white, purple, and orange team roles. To organize assessment and monitor learning progress, Beauchamp et al. (2022) [32] integrate team roles into instructional design. Chindruș and Caruntu (2023) [33] integrate team dynamics as an integral training element, framing scenarios in terms of red/blue engagements with clear performance metrics. Soceanu et al. (2017) [34], on the other hand, is designed for individual use by users working through self-contained laboratories and does not include a team-centric capability. Although team roles are mentioned by Priyadarshini (2018) [8] and Chouliaras et al. (2021) [31], they are not specifically linked to learning objectives or teaching strategies.
Differences in scenario setup and training style stand out between the frameworks. Ukwandu et al. (2020) [9] distinguish between static and dynamic scenarios with emphasis on flexibility and realism in the training environment. Beauchamp et al. (2022) [32] are focused on multi-phase hybrid exercises that evolve and require team coordination for several roles. On the other hand, Chindruș and Caruntu (2023) [33] create competitive red-versus-blue team scenarios to emphasize collaboration under pressure and quick response to incidents. Soceanu et al. (2017) [34] take a different view, emphasizing preconfigured, task-specific, reusable, and deployable labs at scale, which are ideal for large-volume academic deployment. Priyadarshini (2018) [8] presents multiple kinds of scenarios but does not connect them to learning trajectories or to instructional models. Rather than treating scenario design as a fundamental pedagogical choice, Chouliaras et al. (2021) [31] view it as a developing feature influenced by deployment goals.
Finally, the models vary significantly in their approach to infrastructure and tooling, specifically in terms of detail and emphasis on implementation. Chouliaras et al. (2021) [31] provide the most comprehensive documentation of infrastructure, referencing the wide use of OpenStack, Proxmox, Kernel-based Virtual Machine (KVM), and VMware as platforms deployed. Soceanu et al. (2017) [34] provide a useful model for academic use by integrating Moodle with OpenStack and scripting software to deliver reusable, modular labs. However, Ukwandu et al. (2020) [9] and Priyadarshini (2018) [8] broadly categorized infrastructure into virtual machines (VMs), containers, and cloud platforms without going into detail about particular deployment tools. Beauchamp et al. (2022) [32] and Chindruș and Caruntu (2023) [33] give more weight to learner experience and training formats than infrastructure.
Across the six models, a pattern is evident: there is consensus that purpose, architecture, and training style are crucial factors to consider in cyber range design. While each framework places a differing priority and level of detail on the components, their general comparison indicates that creating a cyber range is a complex task that is subject to both pedagogical and technical considerations. These classification components provide a framework for assessing the construction and application of individual ranges, especially when considering their use in educational settings.

2.2.2. Architectural Patterns and Educational Models

Most frameworks focus on categorizing specific aspects of cyber ranges, such as deployment type, scenario planning, and team structuring. A few go further by producing a more general pedagogical or architectural model. These frameworks suggest how cyber ranges should be organized or incorporated into institutional processes, particularly academic ones, going beyond simply listing features.
The six-pillar model developed by Ukwandu et al. (2020) [9] is one of the more structured architectural proposals. It divides a cyber range into six functional layers: scenario, environment, monitoring, management, teaming, and learning. This layer-based formulation provides a modular framework for researching or designing range systems, highlighting the interdependence of cyber range components. Despite the model’s universal applicability in industry, it falls short in assessing the extent to which these pillars are represented in successful implementation, especially in educational institutions.
Soceanu et al. (2017) [34], on the other hand, is based on usability, scalability, and real-world academic deployment in higher education. By combining OpenStack for virtual machine orchestration with Moodle, a learning management system (LMS), Soceanu et al. (2017) [34] integrate cyber range capabilities straight into the classroom. Its design supports automated provisioning, template-based, reusable labs, and integration with the LMS, key enablers for delivering consistent, repeatable exercises at scale. Soceanu et al. (2017) [34] provide an implementable model for institutions wishing to incorporate cybersecurity labs into formal curricula by bringing technical architecture and instructional delivery into alignment, in contrast to more conceptual frameworks.
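As a rough illustration of the template-based, automated provisioning described above, the sketch below uses the openstacksdk client to clone a lab virtual machine from a prebuilt image for each student on a roster. The cloud name, image, flavor, network, and naming conventions are placeholders rather than details taken from Soceanu et al., and the Moodle side of the integration is omitted; this is a sketch of the general pattern, not the authors' implementation.

```python
# Illustrative sketch: template-based lab provisioning in the spirit of a
# Moodle + OpenStack model. Cloud, image, flavor, and network values are
# placeholders for an institution's own settings (configured in clouds.yaml).
import openstack

def provision_student_lab(student_id: str,
                          image_name: str = "lab-template-ubuntu",
                          flavor_name: str = "m1.small",
                          network_name: str = "lab-net") -> str:
    """Clone a lab VM from a prebuilt template image and return its name."""
    conn = openstack.connect(cloud="university-cloud")
    image = conn.compute.find_image(image_name)
    flavor = conn.compute.find_flavor(flavor_name)
    network = conn.network.find_network(network_name)
    server = conn.compute.create_server(
        name=f"lab-{student_id}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)  # block until the VM is ACTIVE
    return server.name

# Example: provision one lab per enrolled student pulled from an LMS roster.
for student in ["s001", "s002", "s003"]:
    print(provision_student_lab(student))
```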
Beauchamp et al. (2022) [32] take these ideas further by introducing pedagogical structure to the exercise model itself. While other models have focused on system infrastructure, Beauchamp et al. (2022) [32] use competency-based models to drive learning outcomes and hybrid, team-based exercises to track student development. This method demonstrates how architectural considerations can support formative assessment, learner feedback, and the gradual development of expertise by coordinating platform functionality with learning objectives.
Together, these models demonstrate that architectural attention in cyber range development is not only technical but also pedagogical. While not all of the models discussed here are full-fledged architectural designs, those most developed to that point, particularly Soceanu et al. (2017) [34] and Beauchamp et al. (2022) [32], shed light on the potential for systems that satisfy both instructional and operational effectiveness criteria. However, there is a clear flaw in this: the majority of frameworks only define architecture at a technical or abstract level, failing to take into account whether the systems can be scaled for classroom use or whether instructors can use them, two factors that are crucial for educational adoption. This gap must be addressed for cyber ranges to be not only functionally strong but also practical in formal educational settings.

2.2.3. Domains of Application and Contextual Focus

Besides classifying internal components and architectures, a few frameworks distinguish cyber ranges according to their intended domain of operation. This domain focus affects not only the design and functioning of a range but also assumptions about the deployment model, resource access, and user experience.
Ukwandu et al. (2020) [9], Priyadarshini (2018) [8], and Chouliaras et al. (2021) [31] offer broad taxonomies that categorize cyber ranges by industry, e.g., military, academic, industrial, SCADA/Industrial Control System (ICS), and hybrid use cases. These categorizations acknowledge that the requirements of a defense-oriented range may be very different from a university lab or an enterprise test lab. Models capture variations in security goals, infrastructural needs, and user roles in these domains, providing a top-level sense of operational heterogeneity.
Soceanu et al. (2017) [34] and Beauchamp et al. (2022) [32] instead confine their scope to education-specific environments. Both of them mention aspects relevant to academic deployment, such as exercises, inter-institutional sharing of resources, and conformance with formal curricula. These models emphasize aspects such as ease of student onboarding, instructor management, and integration into LMSs, which are more suited to formal classroom environments.
While both generic and domain-specific views have merit, a critical limitation remains. Domain-based taxonomies, while useful for summarizing functional diversity, tend to overlook the particular constraints that govern adoption in academia, such as limited technical support, resource-constrained budgets, inadequate instructor training, and the requirement of smooth integration with LMS platforms. As a result, the majority of frameworks are unable to address the operational realities of scaling the deployment of cyber ranges in educational institutions. Addressing this gap requires not only technical classification but also a deeper understanding of how institutional context shapes implementation and long-term sustainability.

2.2.4. Simulation Fidelity and Threat Realism

While most cyber range frameworks provide taxonomies of architecture, deployment models, or user types, few tackle the realism and quality of the simulated environments themselves. Simulation fidelity, that is, how well a cyber range replicates the complexity, dynamics, and unpredictability of actual cyber threats, is a key determinant of training effectiveness for incident response and decision-making under pressure. Despite its importance, simulation fidelity is handled differently across frameworks and is usually implied by deployment type or tooling rather than being specifically addressed as a separate problem.
Of the six frameworks reviewed, Ukwandu et al. (2020) [9] and Beauchamp et al. (2022) [32] provide the most explicit consideration of simulation complexity. Despite having a wide scope, Ukwandu et al.’s framework [9] introduces six architectural layers, such as “Scenario” and “Environment,” which indirectly address fidelity. Their taxonomy acknowledges the need for diverse traffic, realistic network services, and coordinated team roles, particularly in emulation-based ranges. They do not, however, provide precise standards for gauging how dynamic, unpredictable, or adversarial emulated threats are. Beyond structural attributes, little attention is paid to adversary modeling, operational dynamics, and simulated user activity.
Beauchamp et al. (2022) [32] come closer to operational fidelity in the characterization of complex, multi-stage cybersecurity exercises. Their model treats scenarios not as fixed labs but as dramas that unfold, pushing students to react to emerging incidents. These hybrid exercises add timed flow, variety of events, and communication among roles, making them similar to actual incident management. Beauchamp et al. (2022) [32] also direct attention to learner perception, how scenario realism supports competency, achievement, and behavior assessment. Although Beauchamp et al.’s (2022) [32] work is excellent for teaching, it does not outline the technical steps involved in creating realistic scenarios or offer any particular resources or guidelines to adhere to.
In their red vs. blue team exercises, Chindruș and Caruntu (2023) [33] present a competition-based method of fidelity. Their design incorporates adversarial pressure, time constraints, and semi-scripted attacker activity, through which defenders can experience realistic stress and coordination problems. The training model uses role-based activities and quick decision-making to simulate a real network under attack. They prioritize competitive play over technical realism, omitting elements such as attacker logic, system diversity, and environmental noise that would normally make defense more difficult in real-world settings.
Threat realism receives less attention from other frameworks, such as Soceanu et al. (2017) [34], Priyadarshini (2018) [8], and Chouliaras et al. (2021) [31]. Although Soceanu et al. (2017) [34] offer repeatable, containerized lab environments, their framework places more emphasis on modularity and standardization than on dynamic threat behavior. Its exercises are designed for large-scale educational delivery and may trade complexity for accessibility. The frameworks developed by Priyadarshini (2018) [8] and Chouliaras et al. (2021) [31] list current infrastructure and ranges, but they do not evaluate how interactive or representative those ranges are of today’s threat landscapes. Neither addresses how well ranges mimic lateral movement, exfiltration methods, or detection evasion strategies, nor do they address attacker automation or variability.
Some aspects of simulation fidelity are completely ignored in all six frameworks. None of them adequately address:
  • Adversary emulation frameworks (e.g., MITRE ATT&CK-driven agents) [35];
  • Replay of real-world telemetry or attack traces [35];
  • Background system/user simulation to generate environmental noise (see the sketch at the end of this subsection);
  • Cyber-physical fidelity, e.g., Internet of Things (IoT), SCADA, or ICS integration [35].
For scenarios that attempt to replicate modern threat vectors or incorporate industrial and critical infrastructure environments, these exclusions are especially restrictive. The lack of simulation fidelity in the majority of frameworks is a major conceptual and technical shortcoming, as it has a direct impact on the perceived relevance, cognitive load, and skill transfer of cyber range exercises.
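As a concrete illustration of one of the neglected elements listed above, background system/user simulation, the sketch below generates low-rate, benign “user” browsing traffic inside a lab network so that defender telemetry is not limited to attacker actions. Host names, paths, and timing are placeholders invented for this example; purpose-built traffic generators and ATT&CK-driven adversary emulation agents provide far richer behavior than this minimal loop.

```python
# Hypothetical sketch: generate low-rate background user activity so that
# defender telemetry contains benign noise, not only attacker actions.
# Hosts, paths, and timing below are placeholders for a lab environment.
import random
import time
import urllib.request

BENIGN_HOSTS = ["http://intranet.lab.local", "http://wiki.lab.local"]
BENIGN_PATHS = ["/", "/news", "/search?q=quarterly+report"]

def browse_once() -> None:
    """Issue a single benign HTTP request to a random internal service."""
    url = random.choice(BENIGN_HOSTS) + random.choice(BENIGN_PATHS)
    try:
        urllib.request.urlopen(url, timeout=5).read(1024)
    except OSError:
        pass  # unreachable hosts are acceptable noise in a lab

def run_noise_loop(duration_s: int = 3600, mean_gap_s: float = 30.0) -> None:
    """Simulate one 'user' browsing intermittently for duration_s seconds."""
    end = time.time() + duration_s
    while time.time() < end:
        browse_once()
        time.sleep(random.expovariate(1.0 / mean_gap_s))  # Poisson-like gaps

if __name__ == "__main__":
    run_noise_loop(duration_s=300)
```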

2.2.5. Identified Gaps in Current Research

Despite recent advances in cyber range design and classification, current frameworks have significant flaws that hinder their long-term growth and practical use. One of the biggest gaps is the lack of emphasis on simulation fidelity. While some frameworks, such as Ukwandu et al.’s (2020) [9] architectural layering or Beauchamp et al.’s (2022) [32] scenario flow, address simulation fidelity, none provide practical guidance on creating realistic environments. Critical components like adversary emulation, environmental noise, real-world telemetry replay, and cyber-physical systems integration are either mentioned indirectly or not at all [35]. Even well-planned training exercises become less effective when fidelity modeling is skipped, undermining engagement, mental challenge, and skill development. The field lacks a shared definition of fidelity and any systematic way to design or evaluate it, making it difficult to develop high-stakes or industry-wide simulations.
Another ongoing limitation is the lack of integration with educational objectives and learner feedback. Only Beauchamp et al. (2022) [32] and Soceanu et al. (2017) [34] provide frameworks connected to clear learning goals and feedback systems. Most platforms do not support student tracking [35], LMS integration, or formative assessment. Achievement in training is often measured by flags or time to complete, rather than the depth of learning. There are insufficient or inconsistent formats for metadata describing learning objectives, user roles, difficulty, and scoring, with no publicly available sets of verified, shareable scenarios to promote federated use or sharing. The absence of telemetry standards prevents cross-platform analytics and performance benchmarking [35], and integration with standard academic tools (e.g., LMS, gradebooks, course templates) remains inconsistent.
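To illustrate what a shareable scenario-metadata format of the kind described above might contain, the sketch below defines a small, hypothetical schema covering learning objectives, roles, difficulty, and scoring. The field names and example values are illustrative only; no such standard is proposed by the cited works.

```python
# Hypothetical sketch of scenario metadata that could support sharing and
# cross-platform analytics; field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ScenarioMetadata:
    title: str
    learning_objectives: list
    roles: list                      # e.g., ["blue", "white"]
    difficulty: str                  # "beginner" | "intermediate" | "advanced"
    scoring: dict = field(default_factory=dict)  # milestone -> points

# Example instance for an invented scenario.
phishing_triage = ScenarioMetadata(
    title="Phishing triage and containment",
    learning_objectives=["Identify indicators of compromise",
                         "Contain an affected workstation"],
    roles=["blue", "white"],
    difficulty="intermediate",
    scoring={"ioc_identified": 10, "host_isolated": 20},
)
```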
Lastly, few studies evaluate performance under realistic training loads. Metrics such as latency, resource contention, session isolation, and virtual machine orchestration overhead are rarely benchmarked, and performance tuning practices are often poorly documented, even when advanced deployment technologies such as Infrastructure-as-Code or Kubernetes are used. These limitations reduce the flexibility of cyber ranges to accommodate different class sizes or operational exercises and hinder repeatability.
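A minimal benchmarking harness of the sort this paragraph calls for might look like the sketch below, which measures provisioning latency under a classroom-sized concurrent load. The provisioning call is stubbed with a sleep so the example is self-contained; in practice it would invoke the range's actual orchestration API, and the class size and timing are placeholders.

```python
# Hypothetical sketch: measure lab provisioning latency under a
# classroom-sized concurrent load. The provisioning call is stubbed; in
# practice it would call the range's real orchestration API.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def provision_student_lab(student_id: str) -> None:
    time.sleep(0.5)  # stand-in for a real provisioning call

def timed_provision(student_id: str) -> float:
    start = time.perf_counter()
    provision_student_lab(student_id)
    return time.perf_counter() - start

def benchmark(class_size: int = 30) -> None:
    students = [f"s{i:03d}" for i in range(class_size)]
    with ThreadPoolExecutor(max_workers=class_size) as pool:
        latencies = list(pool.map(timed_provision, students))
    print(f"n={class_size}  median={statistics.median(latencies):.2f}s  "
          f"max={max(latencies):.2f}s")

if __name__ == "__main__":
    benchmark()
```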
These gaps collectively highlight the necessity for future frameworks that move beyond conceptual taxonomies and instead focus on practical, testable models that combine operational performance, pedagogical structure, and simulation realism. To create scalable, interoperable, and educationally effective cyber range platforms, these shortcomings must be addressed.

3. Framework for Cyber Range Taxonomy

The rapid growth of cyber range use across sectors has highlighted the need for structured methods to evaluate, compare, and design them. Cyber ranges are increasingly being used to support cybersecurity education, testing, and workforce development [10], even as they differ in architecture, accessibility, instructional design, and operational purpose. While the concept of a cyber range is still evolving, practical and cohesive frameworks for classifying and evaluating these platforms remain lacking.
To bridge this gap, this entry proposes a methodical taxonomy of classification based on the operational, pedagogical, and technical features of cyber ranges. This approach is designed to capture the diverse characteristics of modern cyber ranges and support structured comparisons across different deployment contexts.
The taxonomy is developed by synthesizing existing cyber range frameworks to capture technical, pedagogical, and operational features pertinent to modern deployments. The entry employs a dual-method approach to evaluate the effectiveness of the taxonomy. First, ten cyber ranges from the government, industry, and academic sectors are examined in a comparative case study. Second, the taxonomy is tested in a live simulation in the UWF Cyber Range, providing validation in a real academic setting. The resulting taxonomy provides a framework for understanding and comparing cyber range implementations, guiding researchers, educators, and decision-makers in navigating the constantly changing field of cybersecurity training environments. By pointing out both common design patterns and areas of variation, these findings address the research goal of categorizing important features into a thorough taxonomy, which helps to clarify the current cyber range landscape.

3.1. Case Study Approach

This entry combines a comparative case study analysis with a live simulation to provide a comprehensive foundation for theoretical development and practical application.
Cyber ranges across various domains are analyzed and evaluated in this entry using a comparative case study design. Case study research is well-suited to examining complex systems in their actual environment [36], especially when it is difficult to draw clear distinctions between the phenomenon and its surroundings. This design accommodates the operational, technical, and pedagogical diversity of cyber ranges, allowing in-depth examination of each implementation while preserving comparability.
The rationale for incorporating the UWF Cyber Range as a live case is consistent with embedded case study methodologies, which involve analyzing a single unit (the range) within a broader classification framework [37]. For structured comparison across bounded systems, established qualitative protocols are followed in the use of documentation, observation, and scenario review.
While surveys provide broad information about cyber ranges, they often fail to capture the architectural and usability complexity involved in implementation. Combining document-based analysis with hands-on simulation pairs the depth of a literature-based analysis with direct observation of the taxonomy in action. Together, the two methods ensure that the taxonomy is not only conceptually accurate but also applicable in a real-world setting.

3.1.1. Case Study Selection Criteria

To support the literature-based component of this entry, a total of ten cyber ranges were selected through a systematic review process. These ranges include: Austrian Institute of Technology (AIT) Range [38,39], CyTrONE Training Framework [40], Cyber Range for Advanced Networking (CYRAN) by De Montfort University [41], CYRIN Cyber Range by Architecture Technology Corporation [42,43], Michigan Cyber Range (MCR) [6,7,8], Norwegian Range [31,44,45,46], Realistic Global Cyber Environment (RGCE) Cyber Range by JYVSECTEC [47,48], SANS Institute Range [23,49,50], Virginia Cyber Range [31,51], and UWF Cyber Range [52]. The selection criteria were designed to ensure a representative sample of cyber ranges that span various sectors, architectural models, and use cases.
The first criterion involved the availability of peer-reviewed analysis or publicly available technical documentation. Only cyber ranges with peer-reviewed research or those with easily accessible, thorough documentation were taken into consideration. This ensured the analysis was founded on reliable and verifiable data sources.
The second criterion emphasized representation across key sectors. Cyber ranges were chosen from a wide range of industries, including government, private, and academic institutions. This made it possible to include a broad variety of cyber ranges that serve multiple functions, ranging from educational tools to operational and research applications in government and industry.
The third criterion focused on the coverage of various architectural models. The goal of this was to document the variety of cyber range designs and configurations, including overlay configurations, emulation environments, simulation-based systems, and hybrid models. This variety allowed for a comprehensive understanding of the various ways cyber ranges can be configured to satisfy different needs.
Finally, both training-focused and non-training implementations were included to ensure that the study covered cyber ranges used for research, testing, and experimentation, in addition to those used for certification, threat response exercises, and skill development, providing a more detailed overview of the potential uses of cyber ranges.
Together, these standards ensured that the selection of cyber ranges allowed for a detailed examination of the various features, architectures, and applications across different sectors.

3.1.2. Analysis and Collection

Data was collected from four main sources to support the comparative case study portion of this entry: technical white papers, system architecture diagrams, peer-reviewed journal articles, and publicly available documentation. These resources were selected because they provide a range of complementary insights into the design and operation of cyber ranges across various sectors.
All of the selected cyber ranges were analyzed using the classifications and standards outlined in the taxonomy. This facilitated the identification of common architectural, operating, and pedagogical features, in addition to finding outliers and configurations that did not neatly fit into preexisting classifications.
The taxonomy itself acted as a flexible way to interpret the range data, guiding the analysis rather than relying on a fixed rubric. This method allowed for the inductive discovery of new themes that surfaced during the review process as well as deductive classification based on predefined taxonomy categories.
The ten cyber ranges were categorized according to common themes in their architecture, accessibility, and intended uses to aid in comparison and interpretation. In addition to exposing edge cases and common design strategies, these groupings helped identify underrepresented areas in the taxonomy.
This taxonomy effort not only enabled better structuring of the cyber range environment but also helped confirm the taxonomy’s adaptability and scope. It created a practical foundation for assessing the taxonomy before it was used in the UWF Cyber Range live simulation, guaranteeing that the framework is both theoretically sound and based on actual practice.

3.2. Classification Criteria

The classification criteria used in this entry were selected based on a synthesis of six established frameworks and the identification of common themes between the articles and documentation. These criteria, which represent important aspects of cyber range design, are arranged for explanation purposes into four broad categories: purpose, accessibility, architecture, and content. A comprehensive summary of these domains and the related subcategories is given in Table 2, which serves as a point of reference for the taxonomy used later in the research. Table 2 displays the complete structure of the taxonomy itself, regardless of any particular implementation, while the following subsections provide a detailed explanation of each domain, and the classification results for the particular cyber ranges are mentioned in Section 4.1.

3.2.1. Purpose

This section categorizes cyber ranges based on their target demographic and use case. While a cyber range can be used outside its intended purpose, this section considers the purpose for which the cyber range was designed. A cyber range’s purpose generally says the most about what the range does. An organization looking to create a new cyber range would first want to limit its search to the target demographic and use cases that best fit its needs.
The target demographic refers to the intended audience for the cyber range. The three main demographics are education, government, and private industry [3]. Education encompasses all levels, including K–12 schools as well as public and private universities. Government includes all organizations related to the government of a country. Private industry includes corporations and organizations that are neither government nor educational and also encompasses use by the general public. A cyber range can also be targeted towards all populations [40].
Use case focuses on what the cyber range is actually being used for. The three overall use cases are training, research, and testing [3]. A cyber range falls into the training category if it provides infrastructure for developing cybersecurity skills for personnel. Cyber ranges in the research category explore new ideas in cybersecurity. These include research on new vulnerabilities, threat models, defense mechanisms, and new systems. The testing category includes testing of current systems and frameworks in a controlled environment.

3.2.2. Accessibility

Accessibility determines who can use a cyber range and under what conditions. While related to the target demographic, accessibility addresses operational availability rather than the intended audience itself.
Cyber ranges can be public, private, commercial, academic, government, or unavailable. Public ranges can be accessed by individuals or organizations with minimal restrictions and typically at low cost. Private ranges are restricted to use inside a specific organization. Commercial ranges operate on a pay model with scalable access for customers. Academic ranges are available to members of the academic community, e.g., students or staff. Government ranges are limited to personnel of government institutions. Finally, the “not accessible” category encompasses ranges under development, obsolete projects, or in-house-only systems that are not available for outside use.
This distinction is crucial to understanding both how a range is maintained and whose needs its existence serves. Accessibility most often aligns with demographics and purpose, although hybrids and exceptions exist.

3.2.3. Architecture

Architecture refers to the infrastructural and technical organization of a cyber range. This classification has two dimensions: range type and deployment model.
Range type follows the NIST taxonomy [3] and consists of four types: simulation, overlay, emulation, and hybrid. Simulation ranges replicate systems virtually, often using containerized or virtualized environments to reproduce real-world functions. Overlay ranges are constructed on top of existing production infrastructure, layering cyber environments onto existing networks and hardware. Emulation ranges run on separate, dedicated infrastructure that mirrors real operating systems and networks, most commonly for high-fidelity training or assessment. Hybrid ranges combine two or more of these types, offering flexibility.
The deployment model dictates where the cyber range infrastructure resides. Ranges can be deployed on-site, in the cloud, or as a hybrid [3]. On-site deployment refers to infrastructure hosted inside the physical structure of the organization. Cloud deployment calls for remote infrastructure, most often accessed through cyber-range-as-a-service platforms. Hybrid models combine these two approaches, hosting some tasks locally and others remotely, depending on the specific scenario.

3.2.4. Content

The content category describes the training and functional content provided by the cyber range, particularly in training environments. It consists of two primary elements: training focus and training format.
The training focus is the subject-matter emphasis of a cyber range. Focus areas include basic cybersecurity, offensive operations, defensive operations, incident response, malware analysis, and industry-specific content. Training focus also encompasses skill-level orientation, structured as beginner, intermediate, or advanced to align content with learner progression. Basic cybersecurity refers to fundamental skills, including the use of command-line interfaces, writing simple scripts, and maintaining security hygiene. Offensive operations include activities such as red teaming and penetration testing, while defensive operations concentrate on intrusion detection and system hardening. Incident response content places students in simulated active-threat scenarios [53] in which they must analyze, respond, and recover. Malware analysis is the detection and reverse engineering of malicious code within contained environments. Industries like healthcare, finance, and critical infrastructure are the focus of industry-specific content.
The training format determines content delivery. Labs, scenarios, capture the flag (CTF) challenges, Red versus Blue team exercises, courses, customized modules, workshops, and in-person discussions are examples of standard formats. Labs are typically interactive, self-paced environments. Students are given time-limited or narrative-driven challenges during scenario training. Gamified problem-solving tasks are used in CTF activities to assess abilities. Attackers and defenders participate in realistic adversary simulations as part of Red versus Blue team training [33]. While customized modules are created to meet the specific needs of learners or organizations, courses provide sequential, structured instruction [40]. Workshops emphasize interactive learning, typically led by an instructor, and in-person conversations facilitate planning, reflection, and debriefing [54].
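To make the four domains concrete, a single classification record can be expressed as a simple data structure. The following Python sketch is purely illustrative and is not an artifact of the taxonomy itself; the field names paraphrase the categories above, and the example values describe a hypothetical academic range:
from dataclasses import dataclass
from typing import List

@dataclass
class CyberRangeClassification:
    # One record of the four-domain taxonomy (purpose, accessibility,
    # architecture, content) described in Section 3.2; illustrative only.
    name: str
    target_demographic: List[str]   # purpose: e.g., "education", "government", "private industry"
    use_cases: List[str]            # purpose: "training", "research", "testing"
    accessibility: str              # "public", "private", "commercial", "academic", "government", "not accessible"
    range_type: str                 # architecture: "simulation", "overlay", "emulation", "hybrid"
    deployment_model: str           # architecture: "on-site", "cloud", "hybrid"
    training_focus: List[str]       # content: e.g., "basic cybersecurity", "incident response"
    training_formats: List[str]     # content: e.g., "labs", "CTF", "red vs. blue"

example = CyberRangeClassification(
    name="Hypothetical Academic Range",
    target_demographic=["education"],
    use_cases=["training", "research"],
    accessibility="academic",
    range_type="simulation",
    deployment_model="on-site",
    training_focus=["basic cybersecurity", "defensive operations", "incident response"],
    training_formats=["labs", "CTF", "red vs. blue"],
)
print(example)
Representing records this way makes cross-range comparison (Section 4.1) a matter of comparing like-named fields rather than re-reading prose descriptions.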

3.3. Instructional Design and Learning Alignment

While the taxonomy offers a structured way to classify and compare cyber ranges according to functional, architectural, and demographic dimensions, an equally important dimension, particularly in academic and certification-focused settings, is the degree to which cyber range scenarios are aligned with instructional design principles and targeted learning outcomes.
Effective cybersecurity training is based not just on exposure to simulated threats but on the deliberate mapping of those simulations to desired competencies. Frameworks such as the NICE Workforce Framework for Cybersecurity (NIST SP 800-181) [55] or European Cybersecurity Skills Framework (ECSF) [56] provide models for developing skill-appropriate scenarios. They can be used to map tasks across a variety of learning levels, including knowledge, comprehension, application, analysis, and synthesis.
The integration of learning management systems (LMSs) can allow for progress tracking, feedback mechanisms, and learner analytics. These systems enable instructors to assess learner performance in real time. Although not yet industry standard, some advanced cyber ranges feature integrated tests, scenario grading rubrics, or automated scoring systems.
Few of the frameworks or platforms that have been surveyed thus far (most notably Beauchamp et al. (2022) [32] and Soceanu et al. (2017) [34]) systematically address this relationship between scenario design and education. Future cyber range developments can be enhanced by formally incorporating instructional design on top of functional classification, using a taxonomy as the foundation framework, and extending it to include learning objectives, pre- and post-testing, and feedback mechanisms throughout.
Although this entry does not introduce pedagogical alignment as a separate component within the taxonomy, it acknowledges the critical importance of educational coherence, especially for ranges used in higher education, professional certification, or workforce development contexts.

3.4. University of West Florida Cyber Range

The Cyber Range at UWF is a virtualized cybersecurity training and experimentation platform that supports research, workforce development, and academic instruction. Its primary audience consists of faculty members developing experiential labs and undergraduate and graduate students enrolled in cybersecurity-related courses. Use cases that are supported by the cyber range include cyber war gaming exercises, malware analysis projects, and ethical hacking labs [52]. The range is well-suited for projects and real-time monitoring exercises, where students can engage with realistic attack-and-defend scenarios.
The platform is hosted on a vSphere cluster consisting of dedicated ESXi hosts with multi-core central processing units (CPUs) and large memory pools, backed by centralized Network File System (NFS) datastores for storage [52]. These datastores allow for fast provisioning of identical virtual machines (VMs) while minimizing duplicated storage across courses [52]. Network segmentation is used to isolate student networks by course code, course reference number (CRN), and semester. This group-based access model enforces strict boundaries between courses, ensuring secure and reproducible training environments. The deployment model is hybrid, with workloads executing locally on UWF-managed infrastructure while management is delivered via the web-based vSphere interface. The web client streamlines access by eliminating the need for local clients on student machines, lowering onboarding complexity. Virtual machine creation is performed by automated scripts driven by PowerShell and PowerCLI [52], using a library of reusable templates, including Kali Linux [57], pfSense [58], Metasploitable [59], Security Onion [60], Ubuntu Linux [61], and Windows 10 [62]; this enables developers to deploy entire classes with minimal configuration input, greatly improving scalability and reusability.
Training in the UWF Cyber Range emphasizes critical cybersecurity skills such as intrusion detection, threat hunting, malware analysis, and vulnerability exploitation. Industry-standard tools, such as Kibana [63] and Hunt [64] for detailed log analysis and event correlation, and integrated tools like Suricata [65] and Zeek [66] for real-time network traffic monitoring, support end-to-end visibility [52]. This modular format accommodates both instructor-led demonstrations and student-driven research, with scenarios that can be customized to match specific course learning outcomes.
Beyond instruction, the UWF Cyber Range also produces research-grade, MITRE ATT&CK-labeled network datasets from live class exercises. Specifically, UWF-ZeekData22 introduced a modern Zeek/PCAP dataset labeled with ATT&CK mission logs and moved daily from Security Onion into Hadoop/Spark for analysis, enabling study of both attacks and adversary behaviors leading to attacks [67]. Building on that pipeline, UWF-ZeekDataFall22 focused on classifying adversary tactics, reconnaissance (TA0043), discovery (TA0007), and resource development (TA0042), using Spark ML over Zeek conn logs [68]. Most recently, UWF-ZeekData24 extended this work into a fully controlled, enterprise-scale experiment, capturing attacks such as credential access, reconnaissance, initial access, persistence, and exfiltration, and providing precisely correlated Zeek logs and mission logs for AI/ML training [52]. This dual role of training and data generation distinguishes the UWF range among academic platforms.
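To illustrate the kind of analysis this pipeline enables, the sketch below trains a simple tactic classifier over labeled Zeek conn records with Spark ML. It is a minimal, hedged example rather than the published UWF-ZeekData code: the file path, column names, and label field are assumptions, and the feature set is deliberately small.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("zeek-tactic-classifier").getOrCreate()

# Hypothetical export of ATT&CK-labeled Zeek conn records (path and schema assumed).
df = spark.read.parquet("hdfs:///datasets/uwf-zeekdata/conn_labeled.parquet").na.fill(0)

indexer = StringIndexer(inputCol="tactic", outputCol="label")   # e.g., TA0043, TA0007, TA0042
assembler = VectorAssembler(
    inputCols=["duration", "orig_bytes", "resp_bytes", "orig_pkts", "resp_pkts"],
    outputCol="features",
)
forest = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=100)

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[indexer, assembler, forest]).fit(train)
correct = model.transform(test).where("prediction = label").count()
print(f"holdout accuracy: {correct / test.count():.3f}")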
The range design is also aligned with the NIST NICE Cybersecurity Workforce Framework [55], with tasks corresponding to key NICE work roles, including Vulnerability Analyst, Incident Responder, Systems Security Management, and Threat Analyst. The simulation environment also accommodates various types of situations, including assessment, training, and exercises, allowing the platform to be used for both formative learning and summative assessment.

3.5. Simulation Scenario

Instead of functioning as a stand-alone case study, this simulation acted as a testing ground for the suggested taxonomy, ensuring that it is both theoretically sound and practically applicable to real-world cyber range environments. Two network-based attack simulations were created to evaluate the accuracy and completeness of threat detection in the UWF Cyber Range environment: one simulating east–west traffic (an internal threat) and the other simulating north–south traffic (an external threat). The purpose of these simulations was to verify that the Security Onion [60] monitoring system, which uses Zeek [66], Suricata [65], Kibana [63], and the Hunt [64] interface, can identify both lateral movement and incoming network traffic. In addition to assessing the tools themselves, the goal was to confirm that the environment is appropriate for facilitating cybersecurity education through observable and realistic scenarios. The same infrastructure also supports the creation of the UWF-ZeekData22 [67], UWF-ZeekDataFall22 [68], and UWF-ZeekData24 [52] datasets, in which Security Onion telemetry and mission logs are exported into the Hadoop/Spark platform to produce MITRE ATT&CK-labeled datasets used for adversary behavior analysis and AI/ML research.
To represent the infrastructure used during the simulation, Figure 1 provides a detailed network topology diagram of the UWF Cyber Range. This view focuses on the experimental environment used during the Cyber War Gaming course, highlighting two isolated student groups (Groups 1 and 2), the instructor group, and the core routing group for router-to-router connections. Each student group is connected to dedicated virtual switches and configured with two Kali Linux and two Metasploitable (one Windows and one Ubuntu) hosts, while pfSense firewalls enforce network isolation. The instructor group manages centralized monitoring and analysis tools, including Security Onion. Also shown is the big data platform, consisting of the Hadoop Distributed File System (HDFS) and Spark nodes, which supports additional research and analytic workloads. This layout illustrates the scale, modularity, and concurrent virtualization capabilities of the range and reinforces the relevance of architectural classification when applying the proposed taxonomy to real-world, operational cyber ranges.
In the simulation, the Cyber War Gaming course was used, which consisted of 15 student groups and one instructor group. Each group had isolated access to a specific set of virtual machines, including Kali Linux [57], Metasploitable (available for both Windows and Ubuntu) [59], pfSense [58], and Security Onion [60]. All machines were provisioned using automated PowerShell scripts on the UWF VMware vSphere infrastructure, and separation was enforced through group-based access control linked to course-specific metadata. To support taxonomy validation, this configuration provided an organized academic setting for creating and examining network traffic.
During the simulation, the UWF Cyber Range configuration was categorized using each of the four dimensions of the suggested taxonomy: purpose, accessibility, architecture, and content. For example, the simulation was used to verify that the platform’s technical deployment using VMware and cloud management matched the architectural classification. This procedure showed how the taxonomy could represent actual configurations and pinpoint areas that might require more improvement, especially in the area of instructional integration.

3.5.1. North/South Traffic

The north–south simulation was configured to replicate an attack from outside the network on a susceptible internal system. This configuration used a Windows Server 2008 Metasploitable machine as the target and a Kali Linux virtual machine as the attacker. To simulate the flow of traffic from an external threat vector into a secure internal environment, these two computers were set up on different subnets.
The attacker began with an nmap scan, using the nmap -O command to identify live hosts, operating systems, and service availability. The scan confirmed an open Transmission Control Protocol (TCP) port 445 on the Windows target, indicating a likely vulnerability to Server Message Block v1 (SMBv1)-based exploits such as the well-documented EternalBlue exploit (MS17-010). Newer versions of Windows disable SMBv1 by default, but older releases, such as the Windows Server 2008 system in this classroom environment, still have it enabled.
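Before committing to an exploit, an operator might also confirm single-port reachability with a few lines of scripting. The following Python sketch is illustrative only and was not part of the documented exercise; it reuses the classroom target address shown below and relies solely on the standard library:
import socket

# Quick reachability check for TCP/445 on the target identified by the nmap scan.
target = ("143.88.2.17", 445)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    reachable = sock.connect_ex(target) == 0
print(f"TCP/445 reachable: {reachable}")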
Using the Metasploit Framework [69], the EternalBlue module was selected and configured with the appropriate RHOST and LHOST settings, then run using:
use exploit/windows/smb/ms17_010_eternalblue
set RHOST 143.88.2.17
set LHOST 143.88.1.10
run
The exploit executed successfully, as shown by buffer dumps, allocation targeting, and exploitation logs, and by the opening of a Command Prompt shell providing access to the Windows VM.
After gaining access, the first step was to gather system context to determine what level of access was available. The whoami, net user, and net localgroup administrators commands were run, confirming that SYSTEM privilege (NT AUTHORITY\SYSTEM), the highest privilege level in Windows, had been obtained and listing all user and administrator accounts on the machine.
With SYSTEM access in hand, the next step was to create and elevate an additional user account to serve as backup access. The net user and net localgroup commands were run to create a new user:
net user newUser password123! /add
net localgroup administrators newUser /add
After setting up another administrator user, the next step was manual reconnaissance and password harvesting. This involved using the findstr command to search for credentials:
findstr /si password *.txt *.xml *.ini
The output revealed dozens of XML, .config, and schema files containing password fields, for example:
<attribute name="password" type="string" …/>
After the successful execution of the EternalBlue exploit on the Windows Metasploitable machine from the Kali machine across different subnets, the resulting Zeek logs, Suricata alerts, and the Hunt interface were analyzed to evaluate the effectiveness of the network tracking.
The Zeek conn.log entries showed TCP port 445 connections, and the Suricata alerts included:
  • ET INFO Possible Kali Linux hostname in DHCP Request Packet;
  • ET EXPLOIT ETERNALBLUE Exploit M2 MS17-010;
  • ET ATTACK_RESPONSE Net User Command Response.
All three alerts were flagged as high severity and corresponded to the expected source and destination IPs and exploit sequence, confirming the full attack chain. The Hunt interface was also used to visualize the traffic and confirm that the exploit was detected in real time, with a spike in activity observed at the moment of exploitation. The results of the analysis confirm that Security Onion is capable of monitoring north–south traffic, including detecting and alerting on EternalBlue exploits.
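This kind of log review can also be scripted directly against the raw Zeek output. The following Python sketch is a minimal illustration, not part of the original exercise; it assumes Zeek’s default tab-separated conn.log layout, a local file path, and the classroom addresses used above:
# Filter a Zeek conn.log for the SMB (TCP/445) flows expected from the EternalBlue run.
ATTACKER, TARGET = "143.88.1.10", "143.88.2.17"

with open("conn.log", encoding="utf-8") as log:
    fields = []
    for line in log:
        if line.startswith("#fields"):
            fields = line.rstrip("\n").split("\t")[1:]   # column names follow the "#fields" tag
            continue
        if line.startswith("#") or not fields:
            continue
        record = dict(zip(fields, line.rstrip("\n").split("\t")))
        if (record["id.orig_h"] == ATTACKER and record["id.resp_h"] == TARGET
                and record["id.resp_p"] == "445"):
            print(record["ts"], record["proto"], record["conn_state"], record["orig_bytes"])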
The goal of collecting north–south traffic is to give insight into movement from outside the network. This information can indicate potential security threats, such as attempts to gain unauthorized access to the internal system. Collecting north–south traffic aims to evaluate the effectiveness of Security Onion in detecting and analyzing this traffic to identify potential security threats and vulnerabilities.

3.5.2. East/West Traffic

North–south collection is what most consider when it comes to traffic analysis, but east–west is just as important. East–west traffic tracks data from inside the same network, indicating internal threats or lateral movement. By analyzing east–west data, organizations can gain a better understanding of what is happening within their network, enabling them to respond to incidents more efficiently.
To demonstrate the importance of tracking east–west network traffic, the experiment simulated internal traffic flow from the Kali machine to a Metasploitable machine using the ping and curl commands. The Kali machine was on the same subnet as the Windows Metasploitable to simulate an internal threat.
As with the north–south collection, the first step was an nmap scan to determine which hosts were up and which ports were open. The nmap -O command was run against the target subnet to fingerprint its operating systems. The scan showed an open TCP port 80 (Hypertext Transfer Protocol, HTTP) on a Windows Server 2008 host, suggesting that a web service was running on that host.
To test connectivity, the attacker started a basic reconnaissance phase by running:
  • ping -c 20 143.88.1.15 (tests connectivity);
  • curl http://143.88.1.15 (interacts with the target’s web server).
The returned HTML page confirmed that the Metasploitable host was configured correctly to simulate a vulnerable web service, and the use of curl verified that an HTTP service was operational on port 80. Instead of creating a complete exploit, these commands were purposefully designed to produce subtle traffic that would mimic early-stage reconnaissance.
The resulting east–west network activity was logged by Zeek [66], and Kibana [63] was used to query and visualize the traffic. A search with the following parameters in the Discover tab:
event.module: zeek AND source.ip: “143.88.1.13” AND destination.ip: “143.88.1.15”
showed logs containing metadata such as timestamps, port numbers, and byte counts that indicated HTTP (TCP/80) connections. These data gave a comprehensive picture of the east–west traffic and confirmed that east–west logging was operational in real time.
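The same search can be issued programmatically against the Elasticsearch backend that Kibana queries. The following Python sketch is an assumption-laden illustration rather than a documented part of the UWF deployment: the host, credentials, and index pattern are placeholders, and it presumes the 8.x elasticsearch client:
from elasticsearch import Elasticsearch

# Placeholder connection details; substitute the Security Onion manager's address and credentials.
es = Elasticsearch("https://localhost:9200", basic_auth=("analyst", "changeme"), verify_certs=False)

# Equivalent of the Kibana Discover query shown above, expressed as a bool filter.
response = es.search(
    index="*zeek*",          # placeholder index pattern
    size=50,
    query={"bool": {"filter": [
        {"term": {"event.module": "zeek"}},
        {"term": {"source.ip": "143.88.1.13"}},
        {"term": {"destination.ip": "143.88.1.15"}},
    ]}},
)
for hit in response["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("destination", {}).get("port"))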
The goal of collecting east–west traffic is to give insight into movement within the network. This information can indicate potential security threats such as lateral movement and unauthorized access to sensitive data. Collecting east–west traffic aims to evaluate the effectiveness of Security Onion in detecting and analyzing this traffic to identify potential security threats and vulnerabilities.

3.6. Rationale of the Simulations

The simulation component was incorporated in the entry to determine whether the proposed classification taxonomy could be implemented and observed in a live academic environment. While literature-based methods and theory may provide a basic understanding of cyber range typology, they often overlook important operational aspects, such as instructor interaction, provisioning complexity, access control enforcement, and the extent to which student behavior can be tracked and verified in real time. In response to these limitations, the simulation was purposefully created to assess taxonomy categories in a live teaching environment with real users.
Because of its strong infrastructure, institutional support, and active use in academic cybersecurity instruction, the UWF Cyber Range was chosen as the simulation platform. In contrast to isolated testing frameworks or static lab environments, the UWF Cyber Range facilitates continuous coursework throughout semesters, allowing instructors and students to engage with the platform as part of regular class activities.
One of the key advantages of the UWF Cyber Range is the supporting infrastructure. The automation features of the range, including reusable virtual machine templates, group-based access control linked to course metadata, and PowerShell-based provisioning scripts, allow for scalable and uniform deployment across several student groups. Without being limited by manual configuration or system instability, these features allow for focus on crucial taxonomy dimensions such as deployment model, accessibility, and training format.
From a technical perspective, the simulation demonstrated how to detect, document, and visualize threats from both internal (east–west) and external (north–south) sources using Security Onion’s [60] integrated monitoring tools, Suricata [65], Kibana [63], and Hunt [64]. Together, these tools provided complete visibility across east–west and north–south traffic, validating the taxonomy’s applicability to threat detection. Most significantly, the use of realistic scenario-based simulations reinforced the cyber range as not merely a monitoring system but also an instructional tool that allows instructors and students to observe actual cyberattacks in a secure, controlled environment.
By combining technical capability with actual classroom practice, the simulation bridged the gap between conceptual taxonomy creation and real-world implementation, showing the taxonomy to be not merely theoretical but functional and observable in a working academic cyber range. These results support the value of live, scenario-driven exercises in cybersecurity curriculum development and range design, as well as the significance of simulation-based validation.

3.7. Data Analysis

The entry’s data analysis was conducted in two phases: a comparative case study analysis followed by a live simulation. Together, the insights from each phase allowed for a comprehensive evaluation of the proposed classification taxonomy. The first phase created a structural foundation by mapping taxonomy categories across current cyber ranges; the second phase then evaluated the categories’ suitability in an actual classroom setting.
In the first stage, each of the ten selected cyber ranges was categorized with the full taxonomy classification. This analysis focused on determining which classification categories were consistent, identifying gaps, and examining recurring patterns in architecture, accessibility, deployment models, and instructional focus. The resulting categorized dataset offered a comparative baseline, highlighting trends and outliers across documented implementations. Additionally, it helped highlight taxonomy components that required further development or clarification before validation.
The second stage involved analyzing the data produced by the UWF Cyber Range simulation. As described in Section 3.5, the simulation consisted of two network traffic scenarios, north–south (external threat) and east–west (internal threat), executed within an isolated classroom environment. Data collection for this stage included qualitative observation and quantitative system logging.
In the north–south scenario, analysis focused on whether Security Onion, through Suricata alerts and Zeek logs, could detect the EternalBlue attack launched across subnets, with particular attention to the timeliness, accuracy, and detail regarding known exploit behavior. In the east–west scenario, Zeek metadata was used to monitor traffic within the same subnet and visualized through Kibana, with a focus on how internal reconnaissance traffic, such as pings and HTTP requests, appeared in logs and dashboards. In both scenarios, specific attention was given to host-to-host interaction observability and protocol-level depth. These results provided insight into internal threat visibility, operational observability, and the logging detail required to support effective analysis.
Following these two steps, a cross-comparison was conducted to establish the consistency and feasibility of the taxonomy across both the case study and the simulated data. This comparative methodology allowed us to identify taxonomy categories that consistently occurred in simulated and real environments, as well as categories that displayed mismatches or edge cases. Taxonomy definitions were refined for clarity and applicability when inconsistencies were found, for example, when specific features were present in the simulation but absent from previously available documentation. This feedback loop ensured that the taxonomy was grounded in both practical application and conceptual soundness.
The combined analysis provided a solid foundation for assessing the completeness, flexibility, and applicability of the classification framework. By applying the taxonomy in two distinct contexts and cross-comparing the findings, the entry confirmed that each classification dimension was not only conceptually sound but also observable and quantifiable. The analysis also highlights how various cyber range configurations support or hinder operational performance and educational efficiency. These results support the overall goal of guiding future cyber range development and enhancing instructional design techniques.

4. Results of Applying the Taxonomy Across Documented Cyber Ranges

This section presents the results of the classification taxonomy applied across ten documented cyber ranges, along with an assessment of how well the proposed categories capture operational and architectural features in real-world scenarios. The findings are organized to initially provide an overview of how the selected ranges were categorized based on key features, including target demographic, use case, range type, and training format. Each range is then examined for how it aligns with the taxonomy, focusing on unique implementations, broad trends, and outlier usage.
Following comparative classification, the results of a live simulation run inside the UWF Cyber Range are presented. The purpose of this simulation was to evaluate the taxonomy’s dimensions in an active learning environment. This provided direct validation for components such as access control, threat monitoring, training formats, and deployment automation.
The findings also include a cross-comparison between the simulation and case study data, identifying areas where taxonomy categories were strengthened, expanded, or in need of refinement. To evaluate the platform’s relative strengths in areas like observability, scalability, and instructional oversight, a comparison between the UWF Cyber Range and other ranges in the entry is included.
Through the combination of comparative data and actual operating experience, this section attempts to demonstrate that the taxonomy is not only a conceptual tool but also a useful framework for assessing and creating cyber range environments.

4.1. Classification of Cyber Ranges

Based on academic literature, publicly accessible documentation, and platform descriptions, ten cyber ranges were selected and examined to evaluate the application of the proposed taxonomy. The ranges were aligned with the taxonomy dimensions introduced in the entry above, providing a structured basis for comparison.
The taxonomy includes seven principal dimensions: target demographic, use case, accessibility, range type, deployment model, training focus, and training format. Each dimension captures a key aspect of cyber range design and delivery and supports both high-level pattern recognition and fine-grained distinction between platforms. This approach also allows for the identification of common architectural choices and pedagogical techniques used in academic, government, and commercial environments.
Table 3 displays the findings from the analysis of the cyber ranges separated into the classifications of a cyber range outlined in The Cyber Range: A Guide by NIST [3] and previous taxonomies on cyber ranges.
The classification process revealed several patterns. The hybrid (H) cyber range type was the most common architecture, used in seven out of ten ranges. This suggests a strong preference for adaptable infrastructure that can incorporate simulation, emulation, and overlay elements. Although some platforms also included cloud-based (CL) or hybrid (H) delivery methods, the on-site (OS) deployment model was present in six of the ten ranges, suggesting a continued reliance on local hosting environments. These patterns suggest that control, performance, and scalability must be balanced to achieve optimal results.
Nine out of ten ranges specifically supported education-focused use, with educational audiences (E) serving as the main target demographic in almost all cases. However, many platforms also addressed the needs of the private sector (PS) and government (G), highlighting the multipurpose nature of modern cyber ranges. Most ranges supported multiple use cases, often combining testing/evaluation (TE), research (R), and training (TR). This reflects an overall trend toward unified environments capable of performing both skill development and experimental validation.
Training focus areas were also diverse. Most of the ranges covered incident response (IR), defensive operations (DO), and basic cybersecurity (BS); some also included industry-specific (IS) training, malware analysis (MA), and offensive operations (OO). Labs (L), workshops (W), capture the flag (CTF) events, red versus blue (RVB) scenarios, structured courses (SC), and custom modules (CM) were among the various instructional delivery formats used. This variation highlights how the taxonomy helps capture the operational and instructional flexibility required in different organizational contexts.

4.2. Rationale for Classification

The rationale for each classification decision in Table 3 is described in this section. Each range was assessed for target demographic, use case, deployment model, range type, accessibility, training focus, and training format. To ensure accuracy, technical documentation and primary source material were used where available.

4.2.1. AIT Cyber Range

The AIT Cyber Range provides services to academic and commercial clients, with deployments used across Austria and in EU-level projects. Its range type was classified as Hybrid (H) because of its modular design, which integrates simulation, infrastructure provisioning (using OpenStack Heat and Terraform), and in-house scenario engines such as GameMaker. On-site (OS) deployment reflects local infrastructure management. It supports a wide variety of training use cases, ranging from incident response (IR) and malware analysis (MA) to industrial control system (ICS)-specific (IS) simulations. In-person workshops (IPD, W), incident scenarios (SC), and structured courses (CO) are examples of training formats frequently used in national and international cyber exercises [38,39].

4.2.2. CyTrONE Training System

CyTrONE was designed as an open, modular cybersecurity training system. It is publicly available (PU) and supports learners from beginner to intermediate levels in academic and government environments. The system is simulation (SM) based, with templates and YAML-based scenario definitions. CyTrONE was classified as a Hybrid (H) range type because of its adaptive content delivery and its training (TR) orientation. Its content covers basic cybersecurity (BS), defensive operations (DO), and incident response (IR), making it well suited for foundational training. The user interface facilitates role-based session setup and dynamic scenario deployment. Delivery is mostly modular and scenario-based (SO) [40].

4.2.3. CYRAN Cyber Range

CYRAN was designed for academic use (A) with an emphasis on SCADA and ICS scenarios (IS). It uses a Hybrid (H) range type, combining scenario overlays with system simulation, and supports training, testing, and research (TR, TE, R). For exercises involving cyber-physical systems, its on-site (OS) deployment model allows for controlled, physical access. In addition to using layered team structures across public, private, and educational institutions, CYRAN is known for its advanced red vs. blue (RVB) and CTF events [41].

4.2.4. CYRIN Cyber Range

CYRIN is a commercial cyber range that also supports educational institutions. It uses a Hybrid (H) model and offers scenario overlays, learning paths, and customized lab environments. Training materials cover basic security (BS), incident response (IR), malware analysis (MA), and industry-specific (IS) training. It offers course- (CO) and lab-based (L) models with automated scenario resets and progress tracking. Institutional partnerships and subscription-based licensing are used to manage access [42,43].

4.2.5. Michigan Cyber Range

The Michigan Cyber Range, operated by Merit Network, is funded by public and private partnerships and serves education, government, and private-sector (E, G, PS) purposes. It provides environments for ICS, homeland security, and legal/justice fields and is designed for training (TR), testing (TE), and certification. MCR’s deployment model is on-site (OS), and its range type is hybrid (H). The range features realistic SCADA lab environments for certification testing and infrastructure defense training (IS), providing flexible access to academic and government clients. Additionally, it offers custom CTF and red vs. blue (RVB) exercises [8].

4.2.6. The Norwegian Cyber Range

The Norwegian Cyber Range serves commercial, military, and educational purposes (E, G, PS). It offers operational simulation of intricate attacks and is known for its industry-specific (IS) training formats and scenario realism. Because it integrates both simulated and live components, its range type is Hybrid (H). The range provides offensive operations (OO) training, and deployment is on-site (OS). Additionally, it features red-teaming environments and collaborative workshops (W) built on actual infrastructure and tools [31,44,45,46].

4.2.7. The Realistic Global Cyber Environment

The Realistic Global Cyber Environment (RGCE) is used for educational (E) and private industry (PS) training purposes. It is a hybrid range type (H) installed on-site (OS). With training centered on offensive operations (OO), defensive operations (DO), incident response (IR), and industry-specific (IS) content, RGCE is organized around CTFs and red vs. blue scenarios (RVB), with standard features including personalized modules (CM) and organized workshops (W). The range is repeatable, modular, and customized for a range of user types, from university students to enterprise security teams [47,48].

4.2.8. SANS NetWars

SANS Institute’s NetWars range is a commercially focused range (C), characterized by its hybrid deployment (H) and its use for certification preparation, offensive operations (OO), and game-style learning. NetWars is delivered through public and private channels (PU, PS) and can facilitate repeatable, scenario-based exercises (SC). With leaderboards, skill tracking, and scenario replays, its user interface and scoring system are designed for extensive professional development. NetWars excels at professional credentialing and on-demand training but does not provide academic integrations [23,50].

4.2.9. The Virginia Cyber Range

The Virginia Cyber Range is an academic-only (A), cloud-deployed (CL) platform. It offers training labs (L), capture-the-flag (CTF) activities, and personalized modules (CM) that align with academic schedules, all hosted on AWS. Its range type is hybrid (H) because of the blending of virtual environments with course-linked overlays. Access is restricted to Virginia’s public universities, including collaborations with James Madison and George Mason Universities. The platform incorporates course-specific access controls, like semester tagging and CRNs, and places a strong emphasis on scalability [3,31,51].

4.2.10. The University of West Florida Cyber Range

The University of West Florida (UWF) Cyber Range is a research-driven, education-centered (E), simulation-based (SM) system. It is deployed via an on-site (OS) model and is scalable by using PowerShell and PowerCLI to automate scenario deployment, allowing for quick provisioning of customized environments. Complete cyber kill chain scenarios (OO, DO) are covered in training, along with malware analysis (MA), red vs. blue (RVB), and basic cybersecurity (BS) procedures. Zeek [66], Suricata [65], Kibana [63], and Hunt [64] are integrated into UWF’s deployment model to enable real-time monitoring and analysis during training exercises. This gives instructors and students instant insight into system-level activity and network behavior. Accessibility is managed through course metadata, such as CRNs and semester labels, which ensures a secure separation between student groups [52].

4.2.11. Summary

Each range represents a unique set of priorities and operational constraints. Some prioritize accessibility and repeatability to large audiences, while others prioritize realism, customization, or academic fidelity. The classification structure allows for direct comparison across these attributes, based on design and implementation trade-offs, to be used for future range development.

4.3. Operational Validation and Taxonomy Refinement with UWF Range Simulation

The purpose of the UWF Cyber Range simulation was to determine whether the suggested taxonomy could be directly observed, applied, and examined in a dynamic academic setting. This phase of the research extended beyond conceptual modeling and literature-based classification, offering an opportunity to assess the behavior of each taxonomy dimension in real-world teaching scenarios. Based on operational evidence, the simulation also provided the chance to pinpoint areas where the taxonomy’s definitions needed to be expanded or clarified.

4.3.1. Taxonomy Elements Validated Through Simulation

The UWF simulation successfully validated each of the main taxonomy categories: target demographic, use case, deployment model, range type, accessibility, training focus, and training format. Built on a simulation-based model, the range’s infrastructure used PowerShell and PowerCLI scripts to automate scenario provisioning [52]. This not only validated its hybrid range type classification but also demonstrated how automation can be leveraged to achieve elasticity and modularity. Accessibility was enforced through structured roster-based controls, in which course metadata, such as CRNs and semester codes, were used to assign permissions and isolate environments [52]. This structure provided concrete user segmentation and scalability across several classes without the need for manual reconfiguration.
The simulation’s instructional design supported red versus blue team exercises and hands-on training through labs [52]. These exercises were closely tied to the content taught in academic courses and were presented in a structured way that allowed attacker and defender activity to be monitored. From fundamental cybersecurity skills to reconnaissance, vulnerability exploitation, and incident response, the training material addressed a broad range of proficiency areas [52]. Each of these elements was consistent with the training focus categories established by the taxonomy and showed how the range can support multi-tiered cybersecurity learning objectives.

4.3.2. Operational Insights from the Simulation

Training format, training focus, deployment model, and architecture were the taxonomy dimensions that received the strongest operational support during the simulation. The use of tools such as Zeek [66], Suricata [65], Kibana [63], and Hunt [64] created an interactive, live experience under actual network conditions. These telemetry capabilities are not a distinct classification category of their own, but they were essential in supporting the training formats described by the taxonomy, namely red versus blue activities and hands-on labs. As students carried out reconnaissance, exploitation, and incident response operations, their network activity generated verifiable artifacts that were collected through Zeek [66] and Security Onion [60]. This same process supported research through the UWF-ZeekData series: ZeekData22 introduced the first MITRE ATT&CK-labeled Zeek/PCAP dataset [67], ZeekDataFall22 extended this to tactic-level classification of reconnaissance, discovery, and resource development [68], and ZeekData24 expanded into a controlled, enterprise-scale dataset for AI/ML benchmarking [52]. For instructors, the telemetry and supporting tools facilitated direct observation and post-event review through searchable logs and graphical timelines.
The taxonomy’s definitions of training format and deployment model were clarified as a result of these observations. First, the training format was expanded to better include simulations that generate system and network-level telemetry for instructional use, separating these from formats unsuitable for real-time or post-event examination. Second, the deployment model category was redefined to better capture infrastructure with scripted provisioning, multi-group segmentation, and instructor-accessible monitoring capabilities that were not properly captured by the original on-site/cloud/hybrid categories alone. The simulation demonstrated that instructional control and telemetry are not distinct design elements, but are rather integral to how training is actually presented and how infrastructure is executed in reality.

4.3.3. Validation Outcomes and Implications

Cross-referencing the UWF Cyber Range simulation results with the ranges examined in the case study component revealed that the UWF Cyber Range filled several gaps present in other documentation. Several ranges provided high-level descriptions of pedagogical or architectural aspects but lacked real-time evidence of logging fidelity, user isolation, or detection accuracy. The UWF Cyber Range, by contrast, demonstrated all three of these aspects in operation, providing evidence that the taxonomy is not only theoretically valid but also quantifiable. This empirical grounding improves the overall usability of the taxonomy, not only as an evaluation tool but also as a model for implementation and instructional design in cyber range facilities.

4.4. Comparative Analysis, UWF Versus Other Ranges

The UWF Cyber Range was compared to other academic, government, and commercial ranges evaluated in this entry to place it within the wider picture of cyber ranges. UWF displays clear advantages in automation, observability, and curriculum integration, even though it shares many fundamental characteristics with the other ranges.

4.4.1. Compared to Other Academic Ranges

Each of the four platforms, the UWF Cyber Range [52], the Virginia Cyber Range [3,31,51], CYRAN [41], and RGCE [47,48], is designed primarily for higher education. Each includes curriculum-mapped modules, labs, and CTF challenges, as well as other student-focused training resources. Like the Virginia Cyber Range [3,31,51] and CYRAN [41], the UWF Cyber Range [52] fits easily into academic schedules. Course metadata, such as semester codes and CRNs, are used to manage student environments and control access [52]. This provides assurance that scenarios are both technically relevant and pedagogically sound.
Where the UWF Cyber Range [52] differentiates itself from the rest of the academic ranges is in the clarity and depth of its training format and infrastructure. Detailed, real-time interaction in red versus blue and lab-based scenarios is made possible by the use of integrated tools like Zeek [66], Suricata [65], Kibana [63], and Hunt [64]. These tools provide insight into how students behave in simulations, including lateral movement, exploit execution, and system enumeration, all of which are important elements of the taxonomy’s training focus areas. Instructors can evaluate student performance during and after exercises because these activities are recorded and displayed using network telemetry and log analysis platforms [52]. Other academic ranges included in the entry do not consistently have this capacity for ongoing instructional feedback and post-scenario review, nor is it well-documented. By incorporating these tools into its on-site deployment model, the UWF Cyber Range is better equipped to provide training formats that are realistic, quantifiable, and in line with course-level objectives.
The UWF Cyber Range [52] is also at the forefront of automated deployment. With PowerCLI and PowerShell, administrators can provision group environments automatically without the need for manual configuration, reducing setup time and enabling greater scalability to many groups. Other academic ranges, like the Virginia Cyber Range [3,31,51] and RGCE [47,48], usually require manual configuration or rely on static templates, which restricts their capacity to adapt to changing instructional needs.
In terms of training scenarios, the UWF Cyber Range [52] supports customized red versus blue labs that simulate real attack profiles, allowing for both instructor direction and student discovery. These enable students to observe exploitation across segmented environments, including both east–west (internal) and north–south (external) attack paths. CYRAN [41] also employs adversary formats, but the UWF Cyber Range includes these within curriculum cycles rather than treating them as stand-alone or competition events.
Lastly, the UWF Cyber Range integrates control and visibility at the instructor level in every simulation environment. It features roster-based, role-specific access along with instructor dashboards that track VM states, telemetry, and group-specific timelines [52].

4.4.2. Compared to Government and Commercial Ranges

With its academic focus, the UWF Cyber Range differs greatly from the objectives and designs of commercial and governmental cyber ranges. While UWF is restricted to internal academic use, platforms such as SANS NetWars [23,49,50], MCR [6,7,8], and CYRIN [42,43] provide public or subscription-based access. With scenarios organized around industry, DoD, or NIST compliance standards, these external platforms are frequently used for high-intensity workforce development and certification preparation.
In contrast, the UWF range remains based on academic pedagogy. The design encourages lab-based experimentation, critical thinking, and sustained engagement. Exercises that prioritize exploration over quick scenario completion or point-based performance metrics are frequently incorporated into coursework.
Commercial ranges such as SANS [23,49,50] and CYRIN [42,43], which use features like leaderboards, scoring systems, and integrated assessments, rely primarily on gamification and user interface overlays to engage learners. The UWF Cyber Range intentionally avoids these, instead mirroring real-world tactics, techniques, and procedures (TTPs) [52]. This helps solidify technical learning, though it forgoes the additional engagement mechanisms that commercial platforms provide.
The UWF Cyber Range prioritizes depth over scale in its operations. Global access infrastructure, enterprise dashboards, and external onboarding portals are not included. It makes up for this with features that work better for educational institutions than commercial gamified tools, such as automated scenario provisioning, real-time logging, and strict curriculum alignment.

4.4.3. UWF Cyber Range: Unique Strengths Across All Ranges

The UWF Cyber Range [52] is the only platform among the ranges examined in this entry that went through experimental validation. Live simulated attacks with north–south and east–west traffic were performed in controlled settings and tested with built-in logging tools. High-fidelity logging, time-aligned visualizations, and post-exploitation tracking using both Suricata [65] and Zeek [66] were validated by observations from these simulations.
In addition, the UWF Cyber Range demonstrated operational deployment of every taxonomy category, including range type, accessibility, training format, and training focus. Group assignment, network segmentation, log collection, and the entire provisioning process were all automated, enabling smooth replication across several class sections [52].
Additionally, the platform gives teachers and students real-time visibility. Teachers can monitor simulations in progress, identify anomalies, and assess student responses without interfering with the process. Students can access logs to view their work, encouraging greater participation and transparency.
The UWF Cyber Range fills a critical gap in the cyber range literature. While most platforms describe range type or use case at a general level, few document operational practice through real-time simulation. The UWF Cyber Range offers a clear example of how to implement a taxonomy-informed range in an active educational setting.

4.4.4. Instructional Design Considerations

Although scenario-based training is used in the UWF Cyber Range to promote hands-on skills development, the current implementation of the exercises does not specifically link them to formal learning objectives or assessment frameworks. While the platform supports alignment with instructional models, many scenarios are still loosely tied to course outcomes rather than being directly mapped to specified competencies.
Linking each scenario to specific competencies based on frameworks like ECSF [56] or NIST SP 800-181 [55] can accelerate future growth by allowing instructors to directly target and assess specific skill areas such as threat detection, incident response, or secure configuration.
These improvements would allow the development of evaluation models, including instructor-based grading templates, reflective post-test measures, and automated participation grading. The range’s usefulness as a training and instructional tool for competency-based instruction, curriculum-based certification, and evidence-based teaching would be improved by these additions.
The UWF Cyber Range could become a dual-purpose facility that functions as both a technical simulation platform and an organized, curriculum-specific learning and assessment space by enhancing pedagogical integration.

4.5. Summary of Findings

The results of this entry confirmed that the proposed taxonomy is a realistic and functional model for classifying and analyzing cyber ranges in educational, government, and corporate settings. Through a comparative case study of ten cyber ranges and simulation-based validation with the UWF Cyber Range, all dimensions of the taxonomy were shown to correspond to measurable and applicable domains of cyber range design and delivery.
Classification analysis indicated several recurring patterns. The most prevalent range type was a hybrid architecture, indicating a general need for adaptability to accommodate various training models. Despite the increasing availability of cloud-based services, especially among academic institutions, on-site deployment models continued to be widely used. Although many ranges also catered to stakeholders in the public and private sectors, the educational audience was the sample’s primary target demographic. Most ranges supported several use cases, including training, research, and testing, indicating a shift toward multi-role cyber environments. Although there was a wide range of training formats and objectives, the most effective educational implementations were often seen in scenario-driven instruction, hands-on labs, and red versus blue exercises.
The UWF Cyber Range simulation provided direct support for the taxonomy. The operational observability and instructional relevance of the taxonomy were demonstrated by implementing each classification category in an actual academic setting. UWF showed how instructional and architectural choices align directly with taxonomy dimensions through the use of integrated monitoring tools, scripted provisioning, and modular scenario deployment. The simulation also highlighted underemphasized dimensions of the taxonomy, particularly the significance of training format and deployment model, where automated orchestration and telemetry-supported instruction emerged as key distinguishing factors. These categories were subsequently refined to better represent the realities of transparent and scalable cyber range operation.
The comparative analysis indicated that, although the UWF Cyber Range shares core characteristics with other academic ranges, it excels in areas such as scenario automation, pedagogical management, and alignment with curricular goals. Compared to government and commercial ranges, the UWF Cyber Range lacks universal public access and gamification delivery mechanisms; however, it compensates for this by emphasizing authenticity, flexibility, and an educational focus.
The findings confirm that not only is the taxonomy theoretically valid, but it is also operationally applicable. It may be used to structure cyber range evaluation, inform future development, and support technical infrastructure in alignment with instructional objectives. The entry demonstrates that classification frameworks based on observation and literature can be effective tools for advancing the development of cyber range environments.

5. Gaps in Current Cyber Range Research

Cyber ranges have become the core of defensive testing, threat simulation, and cybersecurity education. Their applications are extensive in government, industry, and academia, and recent studies have started to examine their architecture, instructional role, and integration in cyber readiness programs. Despite this momentum, significant gaps still exist in the areas of standardization, simulation realism, and instructional design. To support scalable, adaptive, and pedagogically sound range environments, this section identifies areas that require further research, drawing from the limitations observed in both literature and practice.
A significant gap is the limited support for concurrent, multi-user deployments. Many existing platforms lack the ability to monitor latency, resource contention, or session isolation performance. While container-based cyber ranges have shown that lightweight orchestration can improve scalability and reduce costs, adoption remains limited [31]. Few have dynamic infrastructure scaling features, such as cloud burst capacity or Kubernetes, even though modern federated range designs increasingly emphasize elastic orchestration [70]. Single-site deployments hinder collaboration, long-term work, and the simulation of diverse networks. Federated and distributed test environments, such as the Virtual Cyber-Security Testing Capability, illustrate how automated model synthesis and hybrid testbeds can scale to thousands of nodes [71]. Tools like Kubernetes and Infrastructure-as-Code provisioning (e.g., Terraform, Ansible) still lack consistent structure and common deployment standards, limiting realistic simulation of large-scale threat environments, cross-site collaboration, and ongoing testing.
A second critical gap is the lack of integration and governance. Without shared architectural standards, evaluation frameworks, and curriculum alignment, cyber ranges remain isolated and challenging to scale. Work on integrating hands-on labs with educational platforms shows that the absence of interoperability mechanisms and governance models is a persistent barrier [34]. There is limited research on the association between scenario planning or range configuration and learning outcomes, which makes measuring skill transfer, retention, or engagement challenging. Initiatives such as Open Distributed European Virtual Campus on ICT Security (DECAMP) illustrate how standardization across universities can help, but they also highlight the lack of broader frameworks to align cyber range pedagogy with outcomes [34]. Key concepts that are either underdeveloped or nonexistent include peer reflection, adaptive difficulty, and LMS-driven analytics. Without regular learning analytics, cyber ranges are isolated from classroom systems and feedback loops that promote long-term skill development [35].
As interest and research increase, cyber range platforms still face challenges because of ongoing design flaws, integration problems, and systemic coordination deficiencies. The lack of standardization in architecture, content, and policy hampers portability and scalability.

6. Conclusions

This entry presented a comprehensive taxonomy for cyber ranges, developed through a review of current classification frameworks, a structured comparative analysis, and a grounded case study. As cyber range implementations in the commercial, military, and academic sectors become increasingly diverse and complex, the taxonomy helps to clarify how these environments can be assessed, compared, and effectively designed to facilitate cybersecurity research, testing, and training.
The entry began with a historical overview of cyber range development, from early information assurance efforts to modern, mission-realistic platforms integrated into national defense and educational ecosystems. Six main frameworks were analyzed in the literature, and common elements, such as architectural and deployment model, pedagogical focus, and simulation fidelity, were identified. These dimensions served as the foundation for a synthesized taxonomy that incorporates both technical and academic viewpoints.
The taxonomy’s descriptive power and practical relevance were then evaluated by applying it to the UWF Cyber Range, a real-world example. This methodological approach made it possible to identify opportunities and gaps across current cyber range designs by combining literature synthesis, thematic classification, and applied case analysis.
Despite the successful application of the taxonomy in the UWF case study, a broader evaluation spanning several ranges is required to fully demonstrate its significance and generalizability. Testing the framework repeatedly in various academic and professional training settings, supported by structured feedback mechanisms, will help improve learning outcomes, scenario design, and user experience.
A validated and flexible framework is becoming increasingly necessary as cyber ranges continue to grow in sophistication and use. By providing a structured lens through which future cyber range design and evaluation can be approached, the taxonomy presented here offers a step toward such standardization.

7. Future Research Directions

Despite advancements in cyber range design, there are still several areas that could be explored further to support the next generation of training and testing platforms. Future research should concentrate on improving automation and orchestration, as well as using emerging technologies like AI/ML to enable ranges that are more scalable, adaptive, and tightly integrated with learning objectives.
The next generation of research should bridge innovation with governance, and infrastructure with pedagogy, to develop scalable, interoperable, and educationally grounded platforms that drive cybersecurity forward.

Author Contributions

Conceptualization, E.M., D.M., P.S., S.S.B. and S.C.B.; methodology, E.M., D.M. and P.S.; software, E.M., D.M. and P.S.; validation, E.M., D.M., P.S., S.S.B. and S.C.B.; formal analysis, E.M., D.M. and P.S.; investigation, E.M., D.M. and P.S.; resources, D.M., S.S.B. and S.C.B.; data curation, D.M. and P.S.; writing—original draft preparation, E.M., D.M. and P.S.; writing—review and editing, E.M., S.S.B. and S.C.B.; visualization, E.M., D.M. and P.S.; supervision, D.M., S.S.B. and S.C.B.; project administration, D.M., S.S.B. and S.C.B.; funding acquisition, D.M., P.S., S.S.B. and S.C.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by 2021 NCAE-C-002: Cyber Research Innovation Grant Program, Grant Number: H98230-21-1-0170. This research was also partially supported by the Askew Institute at the University of West Florida.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UWF: University of West Florida
AI: Artificial Intelligence
ML: Machine Learning
U.S.: United States
NIST: National Institute of Standards and Technology
DoD: Department of Defense
INFOSEC: Information Security
IA: Information Assurance
NASA: National Aeronautics and Space Administration
DoE: Department of Energy
MIT: Massachusetts Institute of Technology
DACUM: Develop a Curriculum
KSA: Knowledge, Skill, Ability
AT&E: Awareness, Training, and Education
NSTISSC: National Security Telecommunications and Information Systems Security Committee
AIS: Automated Information Systems
NIETP: National INFOSEC Education and Training Program
NSA: National Security Agency
DHS: Department of Homeland Security
CAE: Centers for Academic Excellence
DARPA: Defense Advanced Research Projects Agency
DOT&E: Director of Operational Test and Evaluation
SCADA: Supervisory Control and Data Acquisition
PCTE: Persistent Cyber Training Environment
NCRC: National Cyber Range Complex
CPT: Cyber Protection Team
JMETC: Joint Mission Environment Test Capability
KVM: Kernel-based Virtual Machine
VM: Virtual Machine
LMS: Learning Management System
ICS: Industrial Control System
IoT: Internet of Things
AIT: Austrian Institute of Technology
CYRAN: Cyber Range for Advanced Networking
MCR: Michigan Cyber Range
RGCE: Realistic Global Cyber Environment
CTF: Capture the Flag
ECSF: European Cybersecurity Skills Framework
CPU: Central Processing Unit
NFS: Network File Share
HDFS: Hadoop Distributed File System
TCP: Transmission Control Protocol
SMBv1: Server Message Block Version 1
HTTP: Hypertext Transfer Protocol
TTP: Tactics, Techniques, and Procedures
CORR: Cybersecurity Operations Research Range

References

  1. Orman, H. The Morris Worm: A Fifteen-Year Perspective. IEEE Secur. Priv. 2003, 1, 35–43. [Google Scholar] [CrossRef]
  2. Beerman, J.; Berent, D.; Falter, Z.; Bhunia, S. A Review of Colonial Pipeline Ransomware Attack. In Proceedings of the 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW), Bangalore, India, 1–4 May 2023; pp. 8–15. [Google Scholar]
  3. NIST. The Cyber Range: A Guide. 2023. Available online: https://www.nist.gov/system/files/documents/2023/09/29/The%20Cyber%20Range_A%20Guide.pdf (accessed on 10 July 2025).
  4. NICE. Cyber Ranges. Available online: www.nist.gov/system/files/documents/2018/02/13/cyber_ranges.pdf (accessed on 19 July 2025).
  5. ISC2. How the Economy, Skills Gap and Artificial Intelligence Are Challenging the Global Cybersecurity Workforce; ISC2 Cybersecurity Workforce Study; International Information Systems Security Certification Consortium; ISC2: Alexandria, VA, USA, 2023. [Google Scholar]
  6. Lohrmann, D. Cyber Range: Who, What, When, Where, How and Why? 2018. Available online: https://www.govtech.com/blogs/lohrmann-on-cybersecurity/cyber-range-who-what-when-where-how-and-why.html (accessed on 5 July 2025).
  7. Lohrmann, D. Introducing the Michigan Cyber Range. 2012. Available online: https://www.govtech.com/blogs/lohrmann-on-cybersecurity/introducing-the-michigan-cyber-111212.html (accessed on 11 July 2025).
  8. Priyadarshini, I. Features and Architecture of the Modern Cyber Range: A Qualitative Analysis and Survey. Master's Thesis, University of Delaware, Newark, DE, USA, 2018. [Google Scholar]
  9. Ukwandu, E.; Farah, M.A.B.; Hindy, H.; Brosset, D.; Kavallieros, D.; Atkinson, R.; Tachtatzis, C.; Bures, M.; Andonovic, I.; Bellekens, X. A Review of Cyber-Ranges and Test-Beds: Current and Future Trends. Sensors 2020, 20, 7148. [Google Scholar] [CrossRef] [PubMed]
  10. Urias, V.E.; Stout, W.M.S.; Van Leeuwen, B.; Lin, H. Cyber Range Infrastructure Limitations and Needs of Tomorrow: A Position Paper. In Proceedings of the 2018 International Carnahan Conference on Security Technology (ICCST), Montreal, QC, Canada, 22–25 October 2018; pp. 1–5. [Google Scholar]
  11. Ranka, J. National Cyber Range. 2011. Available online: https://apps.dtic.mil/sti/citations/ADA551864 (accessed on 1 July 2025).
  12. United States General Accounting Office. Computer Security: Virus Highlights Need for Improved Internet Management; GAO/IMTEC-89-57; United States General Accounting Office: Washington, DC, USA, June 1989.
  13. Borror, S. National Information Assurance Education and Training Program. 2002. Available online: https://csrc.nist.gov/csrc/media/events/csspab-june-2002-meeting/documents/borror-06-2002.pdf (accessed on 1 July 2025).
  14. National Security Telecommunications and Information Systems Security. National Training Standard for Information Systems Security (Infosec) Professionals; No. 4011. 1994. Available online: https://www.sait.fsu.edu/resources/NSTISSI-4011.pdf (accessed on 1 July 2025).
  15. Clinton, W.J. Decision Directive/NSC-63, “Subject: Critical Infrastructure Protection”. 1998. Available online: https://nsarchive.gwu.edu/document/21409-document-12 (accessed on 30 June 2025).
  16. National Security Agency/Central Security Service. NSA Designates First Centers of Academic Excellence in Information Assurance Education (1999). Available online: https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/1636090/nsa-designates-first-centers-of-academic-excellence-in-information-assurance-ed/ (accessed on 25 June 2025).
  17. National Security Agency. National Centers of Academic Excellence. Available online: https://www.nsa.gov/Academics/Centers-of-Academic-Excellence/ (accessed on 1 July 2025).
  18. Kewley, D.L.; Bouchard, J.F. DARPA Information Assurance Program Dynamic Defense Experiment Summary. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2001, 31, 331–336. [Google Scholar] [CrossRef]
  19. Ferguson, B.; Tall, A.; Olsen, D. National Cyber Range Overview. In Proceedings of the 2014 IEEE Military Communications Conference, Baltimore, MD, USA, 6–8 October 2014; pp. 123–128. [Google Scholar]
  20. DARPA. The National Cyber Range: A National Testbed for Critical Security; Obama White House. Available online: https://obamawhitehouse.archives.gov/files/documents/cyber/DARPA%20-%20NationalCyberRange_FactSheet.pdf (accessed on 3 July 2025).
  21. Christensen, P. The National Cyber Range: A Systems Engineering Resource for Cybersecurity R&D, S&T, Testing and Training. 2015. Available online: https://ndia.dtic.mil/wp-content/uploads/2015/system/17864_Christensen.pdf (accessed on 3 July 2025).
  22. Defense Acquisition University. Defense Acquisition Guidebook—Test and Evaluation. 2018. Available online: https://www.dau.edu/sites/default/files/2023-09/DAG-CH-8-Test-and-Evaluation.pdf (accessed on 7 July 2025).
  23. SANS Institute. SANS Cyber Ranges. Available online: https://www.sans.org/cyber-ranges (accessed on 8 July 2025).
  24. NIST. Cybersecurity Games: Building Tomorrow’s Workforce. 2017. Available online: https://www.nist.gov/system/files/documents/2017/04/24/cyber_games-_building_future_workforce_final_1031a_lr.pdf (accessed on 7 July 2025).
  25. Merit. Michigan Cyber Range to Debut SCADA Security Training Component—Merit. Available online: https://www.merit.edu/about/news/michigan-cyber-range-to-debut-scada-security-training-component/ (accessed on 7 July 2025).
  26. Shachtman, N. Darpa Taking Fire for Its Cyberwar Range. Available online: https://www.wired.com/2010/06/darpa-taking-fire-for-its-cyberwar-range/ (accessed on 7 July 2025).
  27. US Department of Defense. DoD Instruction 5000.02, January 7, 2015; Incorporating (Change 1 on 1/26/2017) and (Change 2 on 2/2/2017). 2017. Available online: https://www.secnav.navy.mil/rda/workforce/Documents/DoDI-5000.02.pdf (accessed on 4 July 2025).
  28. PM CT2. Persistent Cyber Training Environment (PCTE). Available online: https://www.peostri.army.mil/Project-Offices/PM-CT2/PdM-CRT/PCTE/ (accessed on 7 July 2025).
  29. NCRC. National Cyber Range Complex. Available online: https://www.trmc.osd.mil/ncrc.html (accessed on 7 July 2025).
  30. Williams, B. Cyber Flag 21-2 Showcases New CYBERCOM Training Environment—Breaking Defense. Available online: https://breakingdefense.com/2021/06/cyber-flag-21-2-showcases-new-cybercom-training-environment/ (accessed on 11 July 2025).
  31. Chouliaras, N.; Kittes, G.; Kantzavelou, I.; Maglaras, L.; Pantziou, G.; Ferrag, M.A. Cyber Ranges and Testbeds for Education, Training, and Research. Appl. Sci. 2021, 11, 1809. [Google Scholar] [CrossRef]
  32. Beauchamp, C.L.; Matusovich, H.M.; Katz, A.S.; Raymond, D.R.; Reid, K.J. Exploring Cyber Ranges in Cybersecurity Education; Virginia Tech: Blacksburg, VA, USA, 2022. [Google Scholar]
  33. Chindrus, C.; Caruntu, C.F. Securing the Network: A Red and Blue Cybersecurity Competition Case Study. Information 2023, 14, 587. [Google Scholar] [CrossRef]
  34. Soceanu, A.; Vasylenko, M.; Gradinaru, A. Improving Cybersecurity Skills Using Network Security Virtual Labs; International Association of Engineers: Hong Kong, China, 2017; p. 599. ISBN 978-988-14047-7-0. [Google Scholar]
  35. United States Cyber Command. Technical Challenge Problem Set. Available online: https://www.cybercom.mil/Portals/56/Documents/Cyber%20Command%20Problem%20Set%203rd%20Edition.pdf?ver=peIH1oSQMl5uvPQxB4Cs-g%3D%3D (accessed on 21 June 2025).
  36. Yin, R.K. Case Study Research and Applications: Design and Methods, 6th ed.; SAGE: Thousand Oaks, CA, USA, 2018; ISBN 978-1-5063-3616-9. [Google Scholar]
  37. Stake, R.E. The Art of Case Study Research; SAGE: Thousand Oaks, CA, USA, 2010; ISBN 978-0-8039-5767-1. [Google Scholar]
  38. Leitner, M.; Frank, M.; Hotwagner, W.; Langner, G.; Maurhart, O.; Pahi, T.; Reuter, L.; Skopik, F.; Smith, P.; Warum, M. AIT Cyber Range: Flexible Cyber Security Environment for Exercises, Training and Research. In Proceedings of the European Interdisciplinary Cybersecurity Conference, Rennes, France, 18 November 2020; pp. 1–6. [Google Scholar]
  39. Austrian Institute of Technology. AIT CyberRange. Available online: https://cyberrange.at/ (accessed on 21 June 2025).
  40. Beuran, R.; Pham, C.; Tang, D.; Chinen, K.; Tan, Y.; Shinoda, Y. Cytrone: An Integrated Cybersecurity Training Framework. In Proceedings of the 3rd International Conference on Information Systems Security and Privacy ICISSP, Porto, Portugal, 19–21 February 2017; Volume 2017, pp. 157–166. [Google Scholar]
  41. Hallaq, B.; Nicholson, A.; Smith, R.; Maglaras, L.; Janicke, H.; Jones, K. CYRAN: A Hybrid Cyber Range for Testing Security on ICS/SCADA Systems. In Security Solutions and Applied Cryptography in Smart Grid Communications; Ferrag, M.A., Ahmim, A., Eds.; IGI Global Scientific Publishing: Hershey, PA, USA, 2017; pp. 226–241. ISBN 978-1-5225-1829-7. [Google Scholar]
  42. CYRIN. Cyber Range CYRIN. Available online: https://cyrin.atcorp.com/ (accessed on 21 June 2025).
  43. CYRIN. The Power to Protect—Cyber Range Training for Maximum Grid Security. Available online: https://www.pressrelease.com/files/1a/9d/10d2ee7f3bf7579b5ff147f294f2.pdf (accessed on 21 June 2025).
  44. Dwyer, A.; Moog, K.; Silomon, J.; Hansel, M. On the Use and Strategic Implications of Cyber Ranges in Military Contexts: A Dual Typology. In Proceedings of the 8th International Conference on Cyber Warfare and Security (ICCWS 2023), Baltimore, MD, USA, 9–10 March 2023; pp. 57–66. [Google Scholar]
  45. Kianpour, M.; Kowalski, S.; Zoto, E.; Frantz, C.; Overby, H. Designing Serious Games for Cyber Ranges: A Socio-Technical Approach. In Proceedings of the 2019 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Stockholm, Sweden, 17–19 June 2019; pp. 85–93. [Google Scholar]
  46. RISE SICS. Software Week: Norwegian Cyber Range. YouTube, 2018. Available online: https://www.youtube.com/watch?v=nJGgc3h0TBQ (accessed on 5 July 2025).
  47. JYVSECTEC. Realistic Global Cyber Environment—Overview. Available online: https://jyvsectec.fi/en/realistic-global-cyber-environment/ (accessed on 5 July 2025).
  48. European Cyber Security Organisation. Cyber Range Features Checklist & List of European Providers. 2024. Available online: https://ecs-org.eu/cyber-range-features-checklist-list-of-european-providers-updated-2024-edition/ (accessed on 5 July 2025).
  49. WJAI Podcast—Let’s Settle This in the Cyber Range 2023. Available online: https://www.sans.org/podcasts/wait-just-an-infosec/lets-settle-this-in-the-cyber-range (accessed on 5 July 2025).
  50. Counter Hack. Cyber Range Training—Penetration Testers and Cybersecurity Experts. Available online: https://www.counterhack.com/cyber-ranges (accessed on 5 July 2025).
  51. Virginia Cyber Range. Virginia Cyber Range. Available online: https://www.virginiacyberrange.org/ (accessed on 21 June 2025).
  52. Elam, M.; Mink, D.; Bagui, S.S.; Plenkers, R.; Bagui, S.C. Introducing UWF-ZeekData24: An Enterprise MITRE ATT&CK Labeled Network Attack Traffic Dataset for Machine Learning/AI. Data 2025, 10, 59. [Google Scholar] [CrossRef]
  53. O’Neil, A.; Ahmad, A.; Maynard, S.B. Cybersecurity Incident Response in Organisations: A Meta-Level Framework for Scenario-Based Training. Available online: https://arxiv.org/abs/2108.04996 (accessed on 29 June 2025).
  54. Mourning, C.; Juedes, D.; Hallman-Thrasher, A.; Chenji, H.; Kaya, S.; Karanth, A. Reflections of Cybersecurity Workshop for K-12 Teachers and High School Students. In Proceedings of the 53rd ACM Technical Symposium on Computer Science Education, Providence, RI, USA, 3–5 March 2022; p. 1127. [Google Scholar]
  55. Petersen, R.; Santos, D.; Smith, M.C.; Wetzel, K.A.; Witte, G. Workforce Framework for Cybersecurity (NICE Framework); NIST: Gaithersburg, MD, USA, 2020. Available online: https://www.nist.gov/publications/workforce-framework-cybersecurity-nice-framework (accessed on 9 July 2025).
  56. ENISA. European Cybersecurity Skills Framework (ECSF). Available online: https://www.enisa.europa.eu/topics/skills-and-competences/skills-development/european-cybersecurity-skills-framework-ecsf (accessed on 18 September 2025).
  57. Kali Linux. Testing and Ethical Hacking Linux Distribution. Available online: https://www.kali.org/ (accessed on 17 August 2025).
  58. pfSense. Available online: https://www.pfsense.org/ (accessed on 17 August 2025).
  59. Metasploitable 2. Available online: https://docs.rapid7.com/metasploit/metasploitable-2/ (accessed on 17 August 2025).
  60. Security Onion Solutions. Available online: https://securityonionsolutions.com/ (accessed on 17 August 2025).
  61. Ubuntu. Available online: https://ubuntu.com/ (accessed on 17 August 2025).
  62. Windows Communications. Windows 10—Release Information. Available online: https://learn.microsoft.com/en-us/windows/release-health/release-information (accessed on 17 August 2025).
  63. Kibana: Explore, Visualize, Discover Data. Available online: https://www.elastic.co/kibana (accessed on 17 August 2025).
  64. Hunt. Available online: https://docs.securityonion.net/en/2.4/hunt.html (accessed on 17 August 2025).
  65. Suricata. Available online: https://suricata.io/ (accessed on 17 August 2025).
  66. Zeek Documentation—Book of Zeek (Git/Master). Available online: https://docs.zeek.org/en/master/ (accessed on 17 August 2025).
  67. Bagui, S.S.; Mink, D.; Bagui, S.C.; Ghosh, T.; Plenkers, R.; McElroy, T.; Dulaney, S.; Shabanali, S. Introducing UWF-ZeekData22: A Comprehensive Network Traffic Dataset Based on the MITRE ATT&CK Framework. Data 2023, 8, 18. [Google Scholar] [CrossRef]
  68. Bagui, S.S.; Mink, D.; Bagui, S.C.; Madhyala, P.; Uppal, N.; McElroy, T.; Plenkers, R.; Elam, M.; Prayaga, S. Introducing the UWF-ZeekDataFall22 Dataset to Classify Attack Tactics from Zeek Conn Logs Using Spark’s Machine Learning in a Big Data Framework. Electronics 2023, 12, 5039. [Google Scholar] [CrossRef]
  69. Rapid7 Metasploit. Available online: https://www.metasploit.com/ (accessed on 18 September 2025).
  70. Chouliaras, N.; Kantzavelou, I.; Maglaras, L.; Pantziou, G.; Amine Ferrag, M. A Novel Autonomous Container-Based Platform for Cybersecurity Training and Research. PeerJ Comput. Sci. 2023, 9, e1574. [Google Scholar] [CrossRef] [PubMed]
  71. Pederson, P.; Lee, D.; Shu, G.; Chen, D.; Liu, Z.; Li, N.; Sang, L. Virtual Cyber-Security Testing Capability for Large Scale Distributed Information Infrastructure Protection. In Proceedings of the 2008 IEEE Conference on Technologies for Homeland Security, Waltham, MA, USA, 12–13 May 2008; pp. 372–377. [Google Scholar]
Figure 1. Network topology of the UWF Cyber Range showing simulation layout including two student groups, instructor group, core routing infrastructure, and big data platform.
Table 1. Comparison of six cyber range frameworks across domain, focus, architecture, pedagogy, fidelity, and gaps.

| Framework | Primary Domain(s) | Taxonomy Focus | Architectural Model | Pedagogical Focus | Simulation Fidelity | Known Gaps |
|---|---|---|---|---|---|---|
| Ukwandu et al. (2020) [9] | Categorize by industry | Purpose, Roles, Deployment Type, Six-Pillar Architecture | Six functional layers | Team-based learning roles; scenario flexibility | Moderate; includes emulation focus and traffic realism, but lacks adversary modeling detail | Lacks practical guidance for fidelity implementation; unclear LMS integration; limited performance benchmarking |
| Priyadarshini (2018) [8] | Categorize by industry | Broad classification of 44 ranges by use case and deployment model | Emulation and virtualization heavy; VMs, containers, cloud feature listing without depth | General reference to scenarios and roles, not tied to learning objectives | Implied via tooling; no evaluation of threat realism or adversary emulation | No link between training structure and pedagogy; low fidelity; no assessment linkage |
| Chouliaras et al. (2021) [31] | Categorize by industry | Real-world goals, tooling perspective; classifies by implementation | Tool-focused; details on platforms but not teaching systems | Scenario design seen as deployment-driven, not pedagogically driven | Minimal; infrastructure-centric without threat realism analysis | No fidelity modeling, educational fit, or guidance for classroom integration |
| Beauchamp et al. (2022) [32] | Education | Instructional alignment, team roles, hybrid exercises, learner tracking | Learning-driven; team coordination to track development | Competency-based; hybrid scenarios; learner feedback loops and metrics | Moderate; multi-stage hybrid realism, event flow, learner response stress | No tooling detail for fidelity; technical deployment unspecified |
| Chindruș and Caruntu (2023) [33] | Education, competition environments | Competition-based training; team roles as evaluation tools | Execution-focused; red/blue team exercise shell | Stress, collaboration, decision-making; game-like learning format | Moderate; adversarial timing but lacks system diversity and realism | Overemphasizes competition; omits realism depth; lacks environmental noise |
| Soceanu et al. (2017) [34] | Education | Educational scalability and reusability | OpenStack + Moodle integration; LMS-focused labs | Template-based repeatable labs; auto provisioning for academic use | Low; task-based labs, modularity prioritized over dynamic behavior | Lacks dynamic threat modeling; ignores adversary or behavioral complexity |
Table 2. Cyber range classification schema organized by domain, subcategory, and classification criteria.

| Domain | Subcategory | Classification Criteria |
|---|---|---|
| Purpose | Target Demographic | Education, Government, Private Sector, Public |
| Purpose | Use Case | Training, Research, Testing |
| Purpose | Accessibility | Academic, Commercial, Government, Not Accessible, Private, Public |
| Architecture | Range Type | Emulation, Hybrid, Overlay, Simulation |
| Architecture | Deployment Model | On-Site, Cloud, Hybrid |
| Content | Training Focus | Advanced, Basic Cybersecurity, Beginner, Defensive Operations, Incident Response, Industry Specific, Intermediate, Malware Analysis, Offensive Operations |
| Content | Training Format | Capture the Flag, Courses, Custom Modules, In-Person Discussions, Labs, Red vs. Blue, Scenarios, Workshops |
Table 3. Taxonomy of Cyber Ranges.

| Range | Target Demographic | Use Case(s) | Accessibility | Range Type | Deployment Model | Training Focus | Training Format |
|---|---|---|---|---|---|---|---|
| AIT | E [38] | TR, R, TE [38] | C, A [38] | H [38] | OS [38] | IR, MA, IS [38] | SC, L, CO, CM, W, IPD [38,39] |
| CyTrONE | E, G, PS [40] | TR [40] | PU [40] | SM [40] | H [40] | BS, OO, DO, IR [40] | SC, CM [40] |
| CYRAN | E [41] | TR, R, TE [41] | A [41] | H [41] | OS [41] | OO, DO, IR, IS [41] | SC, RVB, CTF, CM [41] |
| CYRIN | E, PS [42] | TR [42] | CO, A [42] | H [42,43] | OS [42] | BS, OO, DO, IR, MA, IS [42] | SC, RVB, CTF, L, CO, CU, W [42] |
| MCR | E, G, PS [8] | TR, R, TE [8] | PR, PU [8] | H [8] | H [8] | BS, OO, DO, IR, IS [8] | SC, RVB, CTF, L, CO, W [8] |
| Norwegian | E, G, PS [44] | TR, R, TE [45] | PR, C, A [45] | H [31,45] | OS [44] | OO [45,46] | SC, L, CO, W [45,46] |
| RGCE | E, PS [47,48] | TR, R, TE [47,48] | A, CO [47] | H [47] | OS [47] | OO, DO, IR, IS [47] | SC, CTF, CM, W [47] |
| SANS | PS [23] | TR [23] | PU, PR, C [23] | EM [23,49] | H [23] | BS, OO, DO, IR, MA, IS [23] | SC, RVB, CTF, L, CO, CM, W [23] |
| Virginia | E [51] | TR, R [31,51] | A [51] | H [3] | CL [31,51] | BS, OO, DO, IR, MA, IS [51] | SC, CTF, L, CO, CM [51] |
| UWF | E [52] | TR, R [52] | A [52] | SM [52] | OS [52] | BS, OO, DO, MA [52] | RVB, L, CO [52] |
Legend: A: Academic; AD: Advanced; B: Beginner; BS: Basic Cybersecurity; C: Commercial; CL: Cloud; CM: Custom Modules; CO: Courses; CTF: Capture the Flag; DO: Defensive Operations; E: Education; EM: Emulation; G: Government; H: Hybrid; I: Intermediate; IPD: In-Person Discussions; IR: Incident Response; IS: Industry Specific; L: Labs; MA: Malware Analysis; NA: Not Accessible; OO: Offensive Operations; OS: On-Site; OV: Overlay; PR: Private; PS: Private Sector; PU: Public; R: Research; RVB: Red versus Blue; SC: Scenario; SM: Simulation; TE: Testing; TR: Training; W: Workshops.