Article

Automation Bias and Complacency in Security Operation Centers

by Jack Tilbury * and Stephen Flowerday *
School of Cyber Studies, The University of Tulsa, Tulsa, OK 74104, USA
* Authors to whom correspondence should be addressed.
Computers 2024, 13(7), 165; https://doi.org/10.3390/computers13070165
Submission received: 22 May 2024 / Revised: 20 June 2024 / Accepted: 27 June 2024 / Published: 3 July 2024

Abstract

The volume and complexity of alerts that security operation center (SOC) analysts must manage necessitate automation. Increased automation in SOCs amplifies the risk of automation bias and complacency whereby security analysts become over-reliant on automation, failing to seek confirmatory or contradictory information. To identify automation characteristics that assist in the mitigation of automation bias and complacency, we investigated the current and proposed application areas of automation in SOCs and discussed its implications for security analysts. A scoping review of 599 articles from four databases was conducted. The final 48 articles were reviewed by two researchers for quality control and were imported into NVivo14. Thematic analysis was performed, and the use of automation throughout the incident response lifecycle was recognized, predominantly in the detection and response phases. Artificial intelligence and machine learning solutions are increasingly prominent in SOCs, yet support for the human-in-the-loop component is evident. The research culminates by contributing the SOC Automation Implementation Guidelines (SAIG), comprising functional and non-functional requirements for SOC automation tools that, if implemented, permit a mutually beneficial relationship between security analysts and intelligent machines. This is of practical value to human automation researchers and SOCs striving to optimize processes. Theoretically, a continued understanding of automation bias and its components is achieved.

1. Introduction

To protect their organizations from cybersecurity incidents, SOCs must detect, mitigate, and respond to perilous security alerts from an over-saturated alert system in a timely manner [1]. As a result, SOCs are challenged with the “needle in a haystack” dilemma. Alert overload and the manual interrogation process have resulted in SOC analysts facing the pervasive issue of alert fatigue [2]. Amidst the rich profusion of security alerts and fatigued analysts, the literature calls for SOCs to leverage increased levels of automation to assist in identifying and triaging security threats. When correctly designed and configured, automated security tools harness the potential to significantly reduce the volume of alerts by only flagging highly critical threats and assisting with responsive countermeasures [3,4]. However, the current implementation of automation in SOCs exposes a different reality. Ref. [5] reported that even after applying automated security tools, the average SOC received 4484 daily alerts. This is because security tool vendors compete on low levels of false negatives rather than low levels of false positives [6]. Put another way, vendors will flood analysts with benign alerts for fear of a genuine threat slipping through, contributing to the volume of alerts. Therefore, the notion of increased automation to alleviate existing issues appears paradoxical in its current form. The argument’s premise states that to combat rising alert volumes, more automation must be employed. However, the sizable volume of alerts is caused, in part, by the continuous addition of automated security tools. Ref. [5] found that 39% of the 2000 security analysts interviewed attributed alert overload to tool overload. This phenomenon is further supported by industry reports [7,8], in which 34.4% of the 468 and 49% of the 1000 security analysts interviewed, respectively, reiterated tool overload as a contributing factor to ineffective operations. This demonstrates the radical approach to automation that some SOCs have adopted, seemingly over-automating and becoming over-reliant on automation. The complex threat landscape and number of alerts mean that the SOC environment necessitates intelligent automation systems. Automation in SOCs is in a transitional phase, moving from traditional forms (intrusion detection and prevention systems that constantly examine an organization’s network to identify predefined threats and prevent their proliferation where possible) to more advanced forms of automation in artificial intelligence (AI) and machine learning (ML) systems that can act autonomously, detect new threats, and extract novel insights from data.
Studies in human-machine (i.e., automation) interaction have reiterated that automation should not supplant human activity but complement it, while noting that it often results in unintended and unanticipated changes to human behavior [9,10,11,12]. Behavioral changes include automation misuse (placing increased levels of trust in automation that exceed its capabilities—over-utilization) and disuse (placing decreased trust in automation and failing to recognize its capabilities—under-utilization). One form of automation misuse is automation bias, whereby individuals fail to seek contradictory or confirmatory information beyond the automated results [11]. Automation bias is the antecedent to automation complacency, whereby human operators neither sufficiently monitor automated systems nor verify their results [13]. Irrespective of the phase of automation implementation that SOCs find themselves in, we believe that the field warrants a thorough discussion of the human factor implications of increased automation on SOC analysts. The potential adverse implications that security automation can have on SOC analysts are summarized by [14] (p. 4): “…despite the drive to place our faith in technical safeguards, they still leave us vulnerable to cyberattacks. In fact, over-trusting these controls could foster poor cybersecurity behaviors by engendering a false sense of security”. Based on a literature review, the functionality of what a SOC does and the surface-level challenges it faces are well-researched. Since 2015, SOC research has gained prominence, with a notable surge in publications since 2019 [15,16]. A finding that percolates consistently is that the problem lies in the absence of increased automation—with research advocating for increased automation as a panacea for all SOC challenges. However, few studies extend beyond this proposition. For the most part, studies remark that caution must be applied during implementation. Where these studies fall short is that they fail to consider the human factor implications of increased automation on SOC analysts. Thus, there exists a research opportunity to explore both the functional and non-functional requirements of SOC automation tools that allow for a conducive human-automation environment.
The aim of this study is to review existing literature and elicit current knowledge concerning (1) the proposed and current utilization of automation in SOCs (i.e., areas where automation can and is being applied), and (2) the implications of increased automation on security analysts.

2. Materials and Methods

Following the methodological guidance of [17], a scoping review of the relevant literature was conducted. A scoping review focuses on identifying and examining the characteristics and factors of a particular concept. This differs from a systematic literature review, which also assesses the articles’ quality and their methods [18,19]. The research design strategy comprised three phases.

2.1. Article(s) Retrieval Process

Literature from four databases was extracted: ScienceDirect, Scopus, IEEE Xplore, and ACM Digital Library. This included formulating two distinct search strings per database, adhering to the two objectives mentioned above. Search terms were applied to each database in accordance with their syntax requirements. The complete search terms for all four databases can be seen in Appendix A.

2.2. Article(s) Selection Process

The selection process began by removing 52 duplicate articles from the 599 retrieved articles. Following this, the title and abstract of each of the 547 remaining articles were examined to determine their eligibility. The PCC framework (outlined below) was leveraged to aid in determining which articles to include and exclude. Articles in this review needed to adhere to all three components of the inclusion criteria: a population of security analysts, the concept of automated security tools, and the context of SOCs. Based on this, 481 articles were excluded. The remaining 66 articles were read in full, and 28 articles were excluded on the grounds of not sufficiently addressing all three aspects. This resulted in 38 articles. During the process of full article inspection, 10 new articles that matched the inclusion criteria were found via backward searching. Backward searching refers to the process of sourcing and obtaining applicable literature from the reference lists or bibliographies of articles currently under examination [20]. This increased our final count of included articles to 48 (Table S1). Figure 1 displays our final PRISMA diagram. As per [17], the PCC framework (population, concept, context) outlines the inclusion criteria.
Population: The population criteria specify important characteristics of the participants to be reviewed within the included literature—SOC analysts. However, enterprises employ teams serving similar objectives to SOCs but under different names, e.g., computer security incident response (IR) teams (CSIRTs), which represent alternative terms for SOCs. This implies that analysts within these teams also operate under different titles. Studies where participants are cybersecurity analysts, regardless of title, were included.
Concept: The core concept of interest is the use of automated security tools in SOCs. This research explored the degree of consideration given to the human factor (SOC analysts) with respect to increased automation. Moreover, the levels of automation (LOA) applied, the tasks automated, and the potential implications for SOC analysts were considered. Despite automation bias and complacency existing in other high-automation environments (e.g., air traffic control rooms), these concepts are relatively unexplored in SOCs.
Context: Organizations that employ the services of SOCs do so in three ways—an in-house SOC, an outsourced SOC (SOC-as-a-service), or a hybrid approach. Studies focusing on all forms of SOCs were included.

2.3. Article(s) Data Analysis Process

Before importing all 48 articles into NVivo 14, a second reviewer analyzed the final set of articles. This was performed to limit subjective inclusion bias. No major discrepancies were found. Following the thematic analysis approach of [21], the full-text articles were qualitatively coded until semantic (themes directly present in the data) and latent (themes interpreted based on concepts in the data) themes emerged. Thematic analysis is defined as the process of encoding qualitative information that allows for the detection of patterns and the formulation of themes within the data. A six-phase approach developed by [21] was followed, involving familiarization with the data; initial coding; theme searching; theme reviewing (merging and deleting possible themes); theme defining; and reporting. After thoroughly engaging with the full articles, we observed a saturation point where no new themes emerged. At this point, article content was assigned to the most relevant theme from our existing list of codes. Additionally, from this we were able to quantify the qualitative data.

3. Results

A total of 477 coding references were coded against 61 codes, grouped into four main themes: (1) SOC Automation Application Areas; (2) SOC Automation Implications; (3) Human Factor Sentiment; and (4) SOC Challenges Necessitating Automation. As ample “SOC Challenges” literature exists, this theme will not be discussed despite being coded. Thematic maps were produced (see Appendix B).

3.1. SOC Automation Application Areas

The final set of included articles discusses automation currently utilized in SOCs as well as novel solutions being proposed. The thematic map is structured to distribute application areas across the traditional incident response (IR) lifecycle phases.

3.1.1. Automated Incident Detection

To contend with the volume of alerts, automated incident detection tools that present SOC analysts with top-priority alerts have been proposed [3,22,23]. In one study using an unsupervised ML algorithm, Ref. [24] developed Beehive to detect threats. When implemented, Beehive identified 784 incidents, of which only eight had previously been detected by the organization’s existing tools. A different study trained its AI alert screening framework using 593 alerts that were manually labeled as malicious from a dataset of 137 million. After training, the model was reapplied to the same dataset for validation, identifying 1646 additional alerts [25], highlighting analysts’ alert fatigue. Another study proposed an untested Super Intelligent Action Recommender Engine tool that can identify malicious behavior [26]. To reduce alert volumes, the importance of low false positive rates compared to low false negative rates was stressed [23,27,28,29].
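To make the detection approach concrete, the following is a minimal sketch of unsupervised anomaly scoring over alert/event features, in the spirit of tools such as Beehive [24]. It assumes scikit-learn; the feature set (bytes sent, distinct destinations, failed logins) and the contamination rate are illustrative assumptions, not values from any reviewed tool.

```python
# A minimal sketch of unsupervised anomaly scoring over alert features.
import numpy as np
from sklearn.ensemble import IsolationForest

events = np.array([
    [1200,   3,  0],
    [950,    2,  1],
    [1100,   4,  0],
    [98000, 57, 22],  # unusual volume and fan-out: a candidate incident
])

model = IsolationForest(contamination=0.25, random_state=0).fit(events)
scores = model.decision_function(events)  # lower = more anomalous

# Surface only the most anomalous events, mimicking detection tools
# that flag top-priority alerts instead of presenting everything.
for score, idx in sorted(zip(scores, range(len(events))))[:1]:
    print(f"event {idx}: anomaly score {score:.3f}")
```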

Automated Alert Triage

Automated tools to perform alert triage were mentioned in two studies. Based on the attributes of alerts and suspected attacks, Ref. [30] developed APIRO to automatically identify and route security analysts to the most appropriate mitigation and response security tool. Ref. [31] traced the manual triage operations of 29 security analysts before developing an automated triage agent. The process through which analysts performed their tasks was recorded to inform tool development. The tool achieved an 82% accuracy score, identifying 322 of the 394 operations recorded, implying that analysts need only focus on the remaining operations.

Automated Correlation of Security Alerts

The correlation of security alerts from multiple sources is not a new capability within SOCs and is achieved by implementing security information and event management (SIEM) tools [31,32]. However, the ability to identify malicious activity by correlating a sequence of alerts (i.e., the alerts before and after a raised alert) over a fixed period, as opposed to inspecting alerts in isolation, represents a novel contribution [27,33,34]. Ref. [34] grouped similar alerts and presented them to analysts as “clusters”, and the analysts then labeled each cluster accordingly. The authors intend for their tool to incorporate the analyst’s decision and learn from it to label future clusters automatically. This overcomes the “weak signal” challenge from [22], which states that predictive models cannot learn without linking the preceding and succeeding contextual data to raised alerts.

3.1.2. Automated Alert Reports—Explainable and Interpretable

The ability of automated security tools to deliver explanations of their actions to human analysts has led to increased levels of tool acceptance [35,36,37]. Currently, many automated tools operate in a black-box environment, not providing understandable information to SOC analysts [23]. Ref. [2] elucidates the poor quality of Security Orchestration, Automation, and Response (SOAR) tools that automatically generate alert reports, finding that analysts manually produced more complete reports. To combat this, Ref. [38] leveraged automated AI journalism to extract meaningful information from malware and generate storytelling alert reports that are more comprehensible, improving response time by 83%. This is supported by [22], who recommend an ML tool that presents analysts with an attack storyline of events. Ref. [39] found that analysts’ comfort level with automated solutions is positively associated with the explanations provided.

3.1.3. Automated Alert Prioritization

Prioritization of alerts was the main intention of select studies [23,40,41,42,43]. One study discovered that domain name alerts were subject to repeated and incomplete investigations [40]. Those same authors developed DomainPrio to automatically prioritize alerts concerning specific domain names, state whether the domain name has been previously investigated, and provide additional information on the domain name if available. Another study presented SmartValidator, a tool that validates the severity of security alerts based on correlations with cyber threat intelligence (CTI) information [42]. The tool collects and classifies CTI information in the form of indicators of compromise (IOCs) and matches it against characteristics found in alerts [42,44].
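As an illustration of this validation step, the sketch below matches alert attributes against a CTI feed of IOCs and escalates on a hit, loosely in the spirit of SmartValidator [42]. The feed contents, field names, and priority labels are illustrative assumptions.

```python
# A hypothetical sketch of CTI-driven alert prioritization: alert fields
# are matched against indicators of compromise (IOCs) from a CTI feed.
IOC_FEED = {
    "ip":     {"203.0.113.7", "198.51.100.23"},      # documentation IPs
    "domain": {"malicious.example.com"},
    "hash":   {"44d88612fea8a8f36de82e1278abb02f"},
}

def prioritize(alert: dict) -> str:
    """Return a priority label by matching alert fields against the feed."""
    hits = [f for f, values in IOC_FEED.items() if alert.get(f) in values]
    if hits:
        return f"HIGH (matched IOC fields: {', '.join(hits)})"
    return "LOW (no CTI corroboration; queue for routine triage)"

alert = {"ip": "203.0.113.7", "domain": "cdn.example.org", "hash": None}
print(prioritize(alert))  # HIGH (matched IOC fields: ip)
```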

3.1.4. Decision Support Systems (DSS)

Automated decision support systems are predominantly employed in the dynamic threat environment, in contrast with signature-based approaches where the decision to be made is rule-based. Nevertheless, automating decisions for known procedures remains a valuable use of resources if efficiency gains are to be incrementally realized.

Adaptable Decision Support Systems

Security analysts are tasked with processing excessive amounts of information daily, leading to a lack of clarity in their decision-making [45,46]. One study designed CRUSOE, a decision support tool that correlates incident alerts, maps the associated business assets at risk of being infected, and assigns an impact value to the attack [47]. A contribution of their tool is its capability to provide mitigation strategies for the analyst to review and select. If permitted, the tool can automatically conduct response actions. Ref. [48] investigated user trust in a phishing website detection DSS and found a positive association between the tool’s ability to communicate the severity of an attack and users’ trust in the tool—capabilities evident in CRUSOE. Two additional studies recognized the importance of providing analysts with DSS that match security alerts with the organizational processes affected before recommending mitigation actions [49,50].

Decision Support Systems for Known Procedures

The safest application of DSS is in traditional rule- and condition-based procedures. One study promoted the use of real-time analytics to drive automated response decisions against known threats [51]. Another study argued for the automation of playbooks, which catered to routine threats [52].

3.1.5. Automated Incident Response

Traditional forms of automation (intrusion detection and prevention systems—IDS and IPS tools) focus on incident detection, yet automation in incident response is emerging. For the most part, the LOA decreases the closer that SOCs get to implementing response strategies. However, Ref. [53] reverses this notion by presenting a conceptual system design permitting fully automated responses to known threat indicators. Their proposition is that SOCs should employ automated responses with the capability for analysts to intervene if required.

AI and ML in Incident Response

A study reviewed 10 SOAR platforms and revealed AI/ML’s use cases across the incident response lifecycle phases [54]. To clarify AI tools in our context of SOCs, the AI-related articles included in this study refer to intelligent systems that think and act like humans and can perform tasks independently. Further, the ML studies refer to how these systems develop their intelligence and are trained to identify patterns in the threat data. The importance of AI tools providing explanations to SOC analysts was referred to as explainable AI (XAI) by [55]. Of the studies included in this analysis, 60.4% (29/48) discussed AI/ML solutions, focusing on neural networks (24.14%), visualization modules (13.79%), and natural language processing (10.3%) (see Table 1). Ref. [30] cites the importance of well-written queries, showing that their tool performed better when the analysts’ query contained the correct keywords. Despite the dominant presence of AI/ML in research, Ref. [6] reported that only 10% of the analysts interviewed used ML-based tools in their SOCs. Noteworthy precautionary aspects exist, such as attacks on AI/ML training sets, the quality of the data used for training, imbalanced datasets, and a lack of context in tool results [44,47,55].

3.2. SOC Automation Implications

A total of 24 articles engaged directly with the human factor element for three reasons: to assist with tool development, to evaluate perceptions toward automation, or to validate the performance of analysts using automated tools. The other 24 papers discussed the human factor in an indirect manner. Thirteen noteworthy papers that conducted qualitative and quantitative data collection were identified. Ten of the 13 represent core papers, as they clearly identify the implications that automation has on security analysts, spanning bias, complacency, mis- and disuse, human-machine trust, and trust intentions [3,6,14,28,36,37,39,56,57].

3.2.1. Automation of Defined Processes

The automation of well-defined, low-risk, predictive, and repetitive manual tasks was a generally agreed-upon use case [2,32,52,58]. Ambiguous processes often require increased human oversight [45,46]. Looking at this sub-theme differently, Ref. [22] developed APTEmu to automate well-defined attacks in a simulation environment to assess the sufficiency of mitigation procedures. Automating attacks in a controlled environment is a growing field.

3.2.2. Levels of Automation

The ability of automated tools to dynamically adapt their LOA based on the situation appeared in two studies [46,59]. This was further echoed by security analysts, whereby Ref. [39] (p. 12) stated that “changing LOAs made it more human-like, [as] human teammates are told they can have more or less autonomy, based upon their skill level and the situation”. Ref. [60] found that the appropriate LOA provoked high levels of disagreement amongst analysts. Ref. [39] investigated comfort levels with AI agents by presenting participants with full or partial automation. Results indicated a negative relationship between participants’ comfort with AI and the LOA applied. Put differently, participants were more comfortable with partial automation and less comfortable with full automation. Analysis showed higher comfort levels in the identification phase compared to the recovery phase.
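A dynamically adapting LOA could take many forms; the following is a speculative sketch assuming a 1–5 LOA scale (1 = manual, 5 = fully automated) and normalized inputs for analyst skill, threat novelty, and risk. The thresholds and inputs are illustrative assumptions, not values derived from the reviewed studies.

```python
# A speculative sketch of a dynamically adapting level of automation (LOA).
def select_loa(analyst_skill: float, threat_novelty: float, risk: float) -> int:
    # Novel or high-risk situations hand the decision back to the analyst,
    # echoing the lower comfort with full automation in later response
    # phases reported by [39].
    if risk > 0.8 or threat_novelty > 0.7:
        return 1 if analyst_skill >= 0.5 else 2
    # Routine, well-understood threats tolerate higher autonomy.
    if threat_novelty < 0.2 and risk < 0.3:
        return 5
    return 3  # partial automation as the default middle ground

print(select_loa(analyst_skill=0.9, threat_novelty=0.1, risk=0.2))  # -> 5
```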

3.2.3. Automation Bias

Various studies recognize the potential of automation bias, yet no studies have conducted empirical work to measure the construct. Multiple studies warn against becoming over-reliant on automated tools due to integrity issues [45,46,59,60]. In eliciting feedback from analysts on the adequacy of their automated IOC generation tool, one study acknowledged the potential for automation bias in that they did not inform participants that the IOCs were automatically generated [28]. Another study suggested that the speed of automation introduces bias, whereby analysts value quick response times over seeking confirmatory/contradictory information [52]. Ref. [35] asserted that over-explanation may put analysts in a position where they deem automation to be superior and fail to consider its correctness, and Ref. [55] comments that bias could originate from misleading explanations.

Automation Complacency

One study found that participants who placed a high level of trust in anti-virus systems demonstrated an inferior ability to differentiate between phishing and legitimate emails, resulting in complacent behavior [14]. Additionally, Ref. [61] found that participants operating at a higher LOA performed worse than those operating at lower levels—measured by a quantifiable hit rate of accurately detecting threats. Another study commented that alert and tool overload results in a habituation effect, leading to complacent behavior whereby less attention is paid to warnings [1].

Automation Mis- and Disuse

One study found that analysts’ frustration with automated tools led to automation disuse [62]. Further, Ref. [6] points out that Target’s data breach of 2013 arose due to disuse, as a lack of tool confidence meant that the automatic capability to delete malicious files had been disabled. Four other studies [14,56,57,61] mentioned that neither over-utilization nor under-utilization is preferred; however, limited studies measured the effect of these concepts in SOCs.

3.2.4. Trust in Automation

Trust in automation is improved when tools are transparent in how they operate and justify their actions [35,49,55]. When evaluating the impact of false positives and negatives from automated tools, Ref. [56] found that false positives lower trust more than their counterpart. Two articles investigated trust in the automated patching of vulnerabilities [3,57]. In their survey, a general distrust toward automated patching tools was found, with higher levels of distrust correlating with years of experience [3]. To counter the issue of low-quality CTI information, Ref. [37] developed an automated CTI report tool and validated its performance with analysts operating with and without automation. Results highlighted that analysts were more confident in automation’s predictions and that those using the tool outperformed those not using it. To improve tool explanations and maintain tool usage, the tool disclosed the confidence it had in its identification of CTI information.

Human-Machine Trust

One study surveyed 20 analysts, asking three human-machine teaming questions centered on assisting analysts in the incident detection and response phases [6]. All three questions had a mode answer of 4 or 5 (on a scale from 0–5). Follow-up interviews revealed that a level of oversight is needed to develop trust. Ref. [57] conducted two experiments, employing experienced and inexperienced software developers, with the purpose of garnering trust perceptions toward automated vulnerability patching. All participants received automated code repairs but were informed that half of the solutions were human-generated. Both groups showed greater levels of trust toward the automated code repairs. Over the course of the experiment, inexperienced developers increased their trust in the automated solution and decreased their trust in the “human” repairs. Experienced developers showed decreases in both, albeit only a slight decrease in trust in the automated solution. Trust must be calibrated so that well-performing tools are used when needed and poor-performing tools are treated with caution [6,48,56,61].

3.3. Human Factor Sentiment

A human factor sentiment analysis was conducted once the articles had been read in full and coded. This revealed overwhelming support for the continued inclusion of SOC analysts (42 of the 48 articles, or 87.5%). The sentiment analysis involved tagging the articles as either positive or negative. This activity was performed based on our interpretation of how the articles under examination discussed the involvement and inclusion of security analysts. It is worth noting that one article did not provide a clear enough indication of its sentiment toward security analysts and was therefore not tagged [62].

3.3.1. Human-in-the-Loop (HITL)

Within the positively tagged articles, sentiment was graded as moderately (42.9%) or very (57.1%) positive. For instance, very positive human-factor sentiment codes included the following:
  • Ref. [29] (p. 2119) states that “… these tools can seldom replicate the decision-making process of an analyst”.
  • Ref. [35] (p. 22) states that “Although many researchers have focused on the algorithmic aspects of protecting against data exfiltration, human analysts remain at the core of what are effectively socio-technical systems”.
An example of a moderately positive human-factor sentiment code includes the following:
  • Ref. [2] (p. 5) states that “the cybersecurity domain requires some level of human oversight due to the inherent uncertainty”.
Three general sub-themes emerged regarding the importance of security analysts: decision-makers, human-AI teaming, and tacit domain knowledge.

Decision-Makers

The act of decision-making is predominantly the responsibility of the security analyst, driven by threat complexity, the possibility for automation errors, and high levels of risk. When testing CRUSOE, Ref. [47] (p. 16) asserted that “… it is too risky to let the machine decide and act autonomously”.

Human-AI Teaming

Nine studies endorsed the human-AI teaming approach. AI-assisting agents can act as team members to assist with analyzing large amounts of data, identifying nuanced threats, and guiding analysts through mitigation procedures [25,39,45,49,54,63].

Tacit Domain Knowledge

The knowledge of the external environment, organizational assets, and business processes emerged as an area where security analysts outperform automated tools [6,31,38,59]. Ref. [2] reminds us that “…we cannot automate what we do not know and, therefore, can never completely remove humans from the loop because uncertainty is a given”.

3.3.2. Human-out-of-the-Loop (HOTL)

Five studies (10.42%) were tagged as negative [4,26,32,44,53]. Negatively tagged human-factor sentiment codes included the following:
  • Ref. [53] (p. 3) states that “… it would be ideal if systems that receive TI could also respond quickly and efficiently, i.e., as fully automated as possible”.
  • Ref. [4] (p. 3) states the aim of “… developing a completely automated response mechanism that can combat new emerging attacks without considerable human intervention”.

Fully Autonomous SOCs

Two studies promoted fully autonomous SOCs [26,44], with three others indicating a similar approach in the future [32,46,53]. In a detailed technical discussion, Ref. [44] validated their lambda-architecture-based SOC against a real-world dataset. All five classifiers achieved an F1 score of 0.96 and above when classifying malware traffic samples. Ref. [26] marks the only study to use the term “human-out-of-the-loop” when referring to their tool’s ability to directly apply recommended response actions.

4. Discussion

SOC automation is a fast-growing field of research, yet limited evidence-based synthesis works (scoping or systematic literature reviews) have been published, with only two being identified [64,65]. Four other SOC literature reviews were discovered, with three of them focusing on SOC challenges [16,66,67] and one discussing the integration of network operation centers (NOCs) with SOCs [15]. The discussion below comprises four parts: (1) the findings as to where automation is applied in the IR lifecycle; (2) noticeable characteristics of automation; (3) the prominence of AI and ML in SOCs; and (4) the human-in-the-loop approach.

4.1. Automation in the Incident Response Lifecycle

Based on our results, the predominant application area for automation is incident detection, which is closely followed by incident response. These findings are further supported by [68]. Traditionally, automation has dominated the earlier phases of the incident response lifecycle, but our study showcases the advancements in automation’s use cases. This finding is congruent with [64], who reviewed automation tools as part of the SOAR framework and highlighted the inclusion of automation in a response and mitigation capacity. However, in these latter phases, the levels of automation tend to decrease with a corresponding increase in security analyst reliance. Put differently, making critical decisions and implementing response strategies is the core responsibility of the human analyst.
In the detection phase, tools that redefined the correlation of alerts represent a novel finding that makes way for new discussions on how SOCs define alert correlation. Typically, alert correlation refers to grouping the same isolated alert across various sources. We term this vertical correlation. However, the novel approach discovered in this study refers to sequential correlation, whereby an alert is examined in context with its preceding and succeeding alerts. This provides a new level of analysis, as the causality and effect of the surrounding alerts can significantly impact understanding of the alert being investigated. We refer to this as horizontal correlation.
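The distinction can be made concrete with a short sketch: vertical correlation groups identical alerts across sources, while horizontal correlation gathers the sequence surrounding a raised alert within a time window. The alert records, field names, and window size below are illustrative assumptions.

```python
# A minimal sketch contrasting vertical and horizontal alert correlation.
from collections import defaultdict
from datetime import datetime, timedelta

alerts = [
    {"id": 1, "sig": "port-scan",   "src": "10.0.0.5", "ts": datetime(2024, 5, 1, 9, 0)},
    {"id": 2, "sig": "port-scan",   "src": "10.0.0.5", "ts": datetime(2024, 5, 1, 9, 1)},
    {"id": 3, "sig": "brute-force", "src": "10.0.0.5", "ts": datetime(2024, 5, 1, 9, 4)},
    {"id": 4, "sig": "exfil",       "src": "10.0.0.5", "ts": datetime(2024, 5, 1, 9, 9)},
]

# Vertical correlation: group identical alerts across sources/time.
by_signature = defaultdict(list)
for a in alerts:
    by_signature[(a["sig"], a["src"])].append(a["id"])

# Horizontal correlation: for a raised alert, gather the surrounding
# sequence within a fixed window so cause and effect stay visible.
def horizontal_context(raised, window_minutes=10):
    lo = raised["ts"] - timedelta(minutes=window_minutes)
    hi = raised["ts"] + timedelta(minutes=window_minutes)
    return [a["id"] for a in alerts if lo <= a["ts"] <= hi]

print(dict(by_signature))             # {('port-scan', '10.0.0.5'): [1, 2], ...}
print(horizontal_context(alerts[2]))  # [1, 2, 3, 4]: scan -> brute-force -> exfil
```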
Most of the automated tools in the response and implementation phase are decision support systems or recommender engines. This is supported by [65], who reviewed the growing number of “action recommender tools” in SOCs to assist analysts in threat response initiatives. It is encouraging to see the development and deployment of real-time, adaptable decision support systems—tools that are more intelligent than static-based approaches and can assist SOC analysts in coping with dynamic threats. Over time, these systems will continue to improve as they learn from analysts’ final decisions. However, whether SOC analysts feel comfortable providing complete autonomy to these tools in responding to cyber incidents remains unknown. Evidence from this review suggests that SOCs are not moving toward that point.

4.2. Characteristics of Automation

Three specific characteristics of automation appeared important: (1) levels of false positives, (2) the degree of explanation given by automated tools, and (3) disclosing automation’s reliability levels to analysts. The prevalent challenge of false positives indicates that current industry-adopted tools focus on limiting false negatives—supported by [15]. This is evident, as analysts are still flooded with alerts even after applying automation. Based on the articles included in this review, it is promising to see recent academic advancements in this field reverse this notion, with an explicit focus on reporting the low false positive rates being achieved. Ultimately, this results in fewer alerts being presented to analysts, with the premise that these alerts represent those most critical. The influence that false positives have on analysts’ trust in automation (more false positives lead to greater distrust), discussed by [56], was particularly insightful. The authors stated that this is because false positives are more noticeable than false negatives and, therefore, are more frustrating. However, this finding could also be interpreted as analysts’ willingness to accept performance issues (initially, at least) if it means that vendors start to compete on the right metrics to reduce alert volume.
As indicated in the results, analysts are seeking automated tools that explain why alerts are deemed threatening and why certain decisions have been made. We believe that this explanation needs to adopt a two-fold approach. Firstly, tools must explain the security alert itself: why it is believed to be malicious, what business assets it affects, contextual information as to where it originated, its trajectory, remediation strategies, alternative remediation options, and any additional information required. Secondly, tools must explain how they themselves operate (i.e., the foundational structure of the tool). This is done at a higher level than the alert explanation, whereby the inner workings of the tool are delineated: what information is collected, what sources are monitored, how information is condensed, what the characteristics of malicious or benign events are, what levels of automation are employed and in what phase of the incident response lifecycle, and any additional information required. This gives analysts an understanding of the tool’s core responsibility boundary.
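One way to picture this two-fold structure is as a report schema with separate alert-level and tool-level explanation objects. The sketch below is a hypothetical illustration; the field names are our assumptions, not a published standard.

```python
# A hypothetical schema for the two-fold explanation proposed above.
from dataclasses import dataclass

@dataclass
class AlertExplanation:
    """Why this specific alert is deemed malicious (alert-level)."""
    reason: str                # e.g., matched IOC or anomalous sequence
    affected_assets: list      # business assets at risk
    origin: str                # where the alert originated
    remediation_options: list  # primary and alternative strategies

@dataclass
class ToolExplanation:
    """How the tool itself operates (tool-level)."""
    sources_monitored: list
    detection_logic: str       # e.g., "supervised classifier on labeled alerts"
    loa_by_phase: dict         # IR lifecycle phase -> level of automation

@dataclass
class ExplainableReport:
    alert: AlertExplanation
    tool: ToolExplanation
```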
Lastly, disclosing automation’s performance to analysts—i.e., reliability and confidence level metrics and justifiable decisions—impacts their trust. The trustworthiness of automation was also discussed by [64]. While trust is an important factor in assessing analysts’ intention to use a tool and their reliance on it, it alone does not fully encapsulate the human factor relationship with automation in SOCs. In certain instances, increased trust showed increased complacency. It can also be inferred that increased support for human analysts may result in less trust toward automation, thereby confirming that trust must be calibrated. Across the articles reviewed, the constructs of automation bias, complacency, and trust were discussed, albeit in a disjointed manner. Despite the dynamic connections between these constructs, no study comprehensively investigated their joint implications for SOC analysts. Rather, trust in automation was empirically measured in isolation; complacency was indirectly measured based on analyst performance metrics, and no study empirically measured the construct of automation bias. Furthermore, many of these discussions simply suggested bias and complacency to be negative outcomes based on qualitative data. No studies quantitatively measured these relationships with SOC analysts, representing a legitimate gap for further exploration.

4.3. The Emergence of AI and ML in SOCs

There is a clear trend toward using AI and ML in the SOC domain. Given the surge of large language models (LLMs) since the introduction of ChatGPT, this was expected. A frequent decision regarding how the tools are trained is whether they are subject to supervised or unsupervised machine learning algorithms. Put simply, supervised ML models use labeled datasets in training. In our context, this refers to datasets whereby security analysts have labeled alerts as malicious or benign and labeled the characteristics (or IOCs) of both types of alerts. These models clearly understand the relationship between input (a given alert with a set of characteristics) and output (the given alert is malicious or benign based on its characteristics) [69]. Conversely, unsupervised learning models do not require human input. These models discover patterns and insights independently, without instructions [36,55]. Evidently, there are trade-offs to be made. While supervised learning models are more manually intensive, as they require the laborious process of SOC analysts labeling large datasets, they tend to provide more accurate predictions. The benefits of supervised models extend beyond detection accuracy in that they assist in maintaining the cognitive skills of analysts through inclusion. Here, security analysts are constantly engaging with datasets of alerts, investigating their properties, and labeling them accordingly. Moreover, the analysts gain first-hand insights into how these tools operate, as they play a direct role in developing them. In their discussion on explainable AI systems, Ref. [55] (p. 112407) stated that “… one promising technique is to design X-IDS with a human-in-the-loop approach”. On the other hand, unsupervised learning may discover novel threat patterns and be more efficient in its deployment, yet anomalous or “extremely rare pattern” behavior discovered by unsupervised models does not always directly imply actionable threats. For instance, Ref. [25] compared supervised to unsupervised learning models and found that the former outperformed the latter, with unsupervised models having much higher false positive rates. For as long as semi-supervised ML tools are integrated into SOCs, the reliance on skilled SOC analysts to label data remains [35]. This contrasts with the views of [36], who are more critical of supervised learning methods. Other authors were also strongly in favor of unsupervised learning models [32,44,70]. Ref. [55] discusses the benefits and drawbacks of supervised and unsupervised learning models and the training techniques used for both. We deduce that the severity of missed alerts necessitates well-trained supervised learning models that can guarantee higher accuracy in threat detection. However, security analysts would also benefit from interacting with unsupervised models that can aid in threat detection, given the unpredictable nature of cybersecurity and the complexity of alerts. Therefore, we recommend a semi-supervised approach.
Irrespective of the learning approach taken, both types of models come unstuck on the imbalanced datasets on which they are trained. This is because more data and information exist on benign alerts than on malicious alerts. To combat this issue, a common technique known as oversampling is applied. Here, the minority class (true positives) is sampled with replacement until its frequency matches that of the majority class (true negatives and false positives). This can lead to overfitting, which results in precise models that struggle to detect variations in new threat data, causing a roundabout effect. Thus, oversampling solves one challenge but creates another. Additionally, Ref. [71] exhibits the advancement of ML tools by utilizing ensemble methods, whereby systems employ multiple learning algorithms to obtain improved predictive performance that would otherwise not be achieved if the learning algorithms were implemented in isolation. By utilizing ensemble algorithms, Ref. [71] showcased that their tool boosts detection accuracy while considering computational complexity, two qualities pertinent to SOCs.
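To illustrate the oversampling step described above, the following minimal sketch duplicates minority-class records with replacement until the classes balance; the data and class counts are illustrative. Duplicating identical minority rows is precisely what invites the overfitting noted above.

```python
# A minimal sketch of naive random oversampling on an imbalanced alert set.
import random

benign    = [{"bytes": b, "label": 0} for b in range(1000)]        # majority
malicious = [{"bytes": 10000 + b, "label": 1} for b in range(20)]  # minority

random.seed(0)
oversampled = malicious + [
    random.choice(malicious)  # sample WITH replacement
    for _ in range(len(benign) - len(malicious))
]
balanced = benign + oversampled
print(len(benign), len(oversampled))  # 1000 1000
```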
Overall, when interpreting the results from our study, the problems that call for AI and ML are oftentimes the very problems that impact AI and ML models: the tools’ lack of environmental context, the lack of quality data with which to train the tools, and imbalanced datasets that largely comprise benign alerts compared to malicious threats. Thus, there is room for continued discussion in this area.

4.4. The Human-in-the-Loop Approach

It was evident yet surprising to see the level of support for SOC analysts given the coinciding increase in AI and ML tools. None of the six reviews mentioned contradicts this finding, with a near-unanimous “human-in-the-loop” outcome. Only one review explored the relationship between security analysts and automation, concluding that the latter should serve as an intelligence assistant to the former [64]. Harnessing a collaborative relationship between the two was also endorsed by [16] (p. 227774), who stated that current automation is “not advanced enough to deliver on the expectations and hype they have created”. Overall, given the support for the analyst-in-the-loop, SOCs must move away from the unjustified belief that human analysts can only assist by handing off tasks and responsibilities to automation (the human-first and automation-final strategy). Instead, this should be reversed by adopting a machine-first approach whereby automation is applied to alerts so that analysts are presented with the most critical threats and associated information from which they can act and make mitigation decisions (the automation-first and human-final strategy). Results from our study show the value of skilled SOC analysts as the key decision-makers, allowing us to confidently state that despite the presence of increased automation, the human-final strategy remains.

5. Findings

The SOC Automation Implementation Guidelines (SAIG) represent the main contribution of this study. As per [17], the purpose of a scoping review is to delineate the body of literature in a given field, mapping correlations where applicable. Further, identifying and analyzing knowledge gaps represent a core reason for conducting such a study. Based on our results, we recognized that the development and discussion of functional and non-functional SOC automation requirement recommendations were absent from the existing literature. As such, SAIG marks our contribution to the body of SOC automation research, as shown in Figure 2. As per [72], software engineering guidelines are intended to foster an environment of stimulated thinking that can inform various approaches to systems development rather than dictate it. Based on our study, we believe, firstly, that a mutually beneficial relationship between security analysts and intelligent machines would be achieved if these requirements are followed (i.e., if these requirements are implemented in SOC automation tools). Secondly, and most importantly, these requirements would aid in alleviating automation bias and complacency.

5.1. SOC Automation Functional Requirements

Functional requirements are defined as features that systems possess to assist users in completing their tasks, i.e., what the system does [73]. In Figure 2, these are denoted as the gray blocks. Ten core functional requirements across five high-level groups were identified and are discussed below. The discussion will follow the format of [group]—[requirement]: [explanation]. For a summarized version of these requirements, as well as results informing each requirement (derived from the thematic maps), please see Appendix C.
  • Automated Scanning of the Threat Environment—Collection: Automated tools must constantly scan the threat landscape and collect cyber threat information. This can be achieved by scraping vulnerability databases and tracking emerging trends from verified sources.
  • Automated Scanning of the Threat Environment—Analysis: Automated tools convert collected information into cyber threat intelligence (CTI). CTI is more granular than general threat information, providing threat-actor-, vector-, and characteristic-specific intelligence.
  • Automated Scanning of the Threat Environment—Validation: The analyzed CTI is mapped against the indicators of compromise (IOCs) present in alerts, providing security analysts with validated threats that can be marked as high priority.
  • Automated Scanning of the Threat Environment—Generation: If the analyzed CTI cannot be mapped against any IOC present in a SOC’s alerts (i.e., there is no indication of that specific threat), new IOCs can be generated, and detection tools can be updated accordingly.
The above four requirements are grouped together, following a sequential approach to implementation. These requirements involve the external environment, as marked on the right-hand side of Figure 2. This is because threat information from outside the organization is collected. These four requirements, centered around information collection and analysis, follow the notion that “we cannot flag an alert that we do not know should be an alert in the first place”. For these reasons, these requirements should be implemented first, as this provides an understanding of the threat environment. This understanding leads to the following:
5. Contextual Alert Investigations—Sequential Alert Correlation: Automated tools must employ horizontal alert correlation, collecting alerts together with their preceding and succeeding (i.e., surrounding) alerts.
6. Contextual Alert Investigations—Sequential Alert Analysis: Automated tools must employ horizontal alert analysis, whereby alerts are analyzed and investigated in context with their preceding and succeeding (i.e., surrounding) alerts.
7. Security to Business Associations—Potential Threat Implications: Automated tools must map threatening alerts to the associated business processes and organizational assets they will affect if not resolved.
The above three requirements begin to factor in the internal business environment, as marked on the right-hand side of Figure 2. Requirements five and six are grouped together, following a sequential approach to implementation. It is recommended that requirements five and six are implemented in tandem with requirement seven to provide context-specific alerts. This detailed triage process allows for the following:
8. Automation Explainability—Alert Report Interpretability: Automated tools must leverage the information above, outline alert characteristics, and explain their maliciousness.
Requirement eight is a product of requirements one to seven occurring. Requirement eight occurs specifically at the SOC level, as marked on the right-hand side of Figure 2. Here, security analysts are presented with human-readable and alert-explainable reports, permitting them to enhance their threat-hunting and mitigation skills rather than following black-box orders from automation. Automated tools of this nature (those that factor in the eight requirements above) are foundationally driven by the following:
9. Automation Explainability—Tool Operation Interpretability: Automated tools must explain how they operate and arrive at decisions, providing security analysts with logical reasoning.
10. Human Automation Teaming—Semi-Supervised Learning: Automated tools that leverage AI and ML approaches must leverage semi-supervised learning models that are trained on the expertise of SOC analysts but also have the freedom to discover new insights independently (see the sketch after this list).
Requirements nine and ten must be adhered to in the internal technical environment during the automated tool development stage. This is marked on the right-hand side of Figure 2.
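As a minimal illustration of requirement ten, the sketch below uses scikit-learn’s self-training wrapper: a small set of analyst-labeled alerts seeds a supervised base model, and unlabeled alerts (marked −1) are pseudo-labeled iteratively. The features, labels, and model choice are illustrative assumptions.

```python
# A minimal sketch of semi-supervised learning via self-training.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X = np.array([[0.10, 0.20], [0.90, 0.80], [0.15, 0.25],
              [0.85, 0.90], [0.20, 0.10], [0.80, 0.85]])
# Analysts labeled the first two alerts (0 = benign, 1 = malicious);
# -1 marks alerts the model must pseudo-label itself.
y = np.array([0, 1, -1, -1, -1, -1])

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(model.predict([[0.88, 0.92]]))  # likely [1] (malicious)
```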

5.2. SOC Automation Non-Functional Requirements

Non-functional requirements are defined as system properties, behaviors, and constraints [73,74]. These requirements typically describe users’ implicit system expectations (for example, the system will provide an efficient response). According to [74], non-functional requirements are classified as security, reliability, usability, efficiency, and portability requirements. In Figure 2, these are denoted as red blocks and are connected to specific functional requirements. Six core non-functional requirements were identified and are discussed below. For a summarized version of these requirements, as well as results informing each requirement (derived from the thematic maps), please see Appendix D.
  • Automation Competes on Low False Positive Rates: Given the volume of alerts that SOC analysts must triage, automated tools must reduce the number of alerts rather than add to them. While false negatives are critical and cannot be afforded, false positives represent a legitimate area for improvement.
  • Decision Support Systems Provide Options and Alternatives: Security analysts are presented with all viable paths for threat mitigation and restoration activities.
  • Decision Support Systems Disclose Options and Alternatives Not Taken: Security analysts are presented with all non-viable paths for threat mitigation and restoration activities (i.e., those deemed not feasible) to showcase that automation has given due consideration to countering legitimate attacks.
  • Decision Support Systems Provide Justification: Automated tools justify why certain decisions have been recommended (or executed) by providing logical reasoning based on the CTI and IOC information provided.
  • Automation Tool Reliability Levels Disclosed: Security analysts must be given a clear view of the accuracy and reliability of automated tools based on performance history. This requirement pertains to the tool’s performance over time, not to an individual event/alert capacity.
  • Automation Recommendation Confidence Levels Disclosed: When recommendations for action are provided, security analysts must be given a clear view of the confidence that automated tools have in their decisions. Unlike the requirement above, confidence levels pertain to an individual event/alert capacity, whereby automation expresses how confident it is in a specific action being taken (the distinction is sketched below).
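A hypothetical disclosure object makes the last two requirements concrete: historical reliability is reported at the tool level, while confidence attaches to each individual recommendation. All field names and figures below are illustrative assumptions.

```python
# A hypothetical disclosure object: tool-level reliability (historical)
# versus per-recommendation confidence (event-level).
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float      # event-level: confidence in this specific action
    justification: str     # CTI/IOC reasoning behind the recommendation

@dataclass
class ToolDisclosure:
    historical_precision: float   # tool-level: performance over time
    historical_recall: float
    recommendation: Recommendation

disclosure = ToolDisclosure(
    historical_precision=0.94,
    historical_recall=0.89,
    recommendation=Recommendation(
        action="quarantine host 10.0.0.5",
        confidence=0.71,
        justification="outbound traffic matched two IOCs in the CTI feed",
    ),
)
print(disclosure.recommendation.confidence)  # 0.71
```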
To conclude, SOCs represent an automation-rich environment. However, an increased reliance on security analysts as the final decision-makers still exists. Thus, not only are SOC analysts relied upon, but they are specifically called upon at the most critical time of threat mitigation. Therefore, it is important that the tools that security analysts interact with enhance their abilities. As per [75], “automation in a SOC is meant to act as a force multiplier, meaning that it must strengthen and augment the skills of the SOC analyst”. These requirements lean toward benefiting the security analyst. In their work on augmenting SOCs, Ref. [68] propose a mutual human-automation collaboration strategy. It is our recommendation that the functional and non-functional requirements mentioned above be implemented in conjunction with their matrix [68].

6. Limitations and Future Work

The first limitation is that only English-language databases and papers were utilized. Secondly, the term SOC is not unanimously adopted within the industry and academic literature, implying that the search strings needed to consider alternative terms for SOCs. Thus, we cannot assume that our search strings fully encompassed the landscape of terms. Furthermore, four databases were selected based on their domain relevance, and articles not indexed in them would have been omitted. Gray literature and industry reports were also excluded from the analysis. Additionally, article selection during the screening process is subjective and may have introduced selection bias. To mitigate this impact, the final set of 48 articles was reviewed by another researcher. Scoping reviews differ from systematic literature reviews in that they do not assess the methodological quality of included articles [17], and considering the results in light of this fact is recommended. Future work will include developing a research instrument and empirically measuring automation bias and complacency with security analysts.

7. Conclusions and Contributions

This review highlighted the application areas of automation in security operation centers and the impact of increased automation on security analysts. Automation is advancing in its capabilities, and this review showcased its applicability across all phases of the incident response lifecycle. A dominant factor for tool success is how interpretable its results are, stressing the importance of human-readable information. While limited, there were papers that investigated analyst trust in automation, complacency levels resulting from automation, and the ideal level of automation to apply. Our findings culminated in the SOC Automation Implementation Guidelines (SAIG) contribution, comprising functional and non-functional requirements for automated tools. We believe that a mutually beneficial relationship between security analysts and intelligent machines would be established if these requirements are implemented, and that this would alleviate the potential for automation bias and complacency. To the best of our knowledge, SAIG represents the first set of automation system guidelines to factor in the human element within the SOC context. Further, no prior research has studied the theory of automation bias in our context. Although there exist reviews that examine currently implemented automation in SOCs, our study extends beyond these in three ways:
  • Firstly, our review includes proposed automated tools and solutions discussed in academic literature and validated in real-world SOCs (or against real-world datasets).
  • Secondly, our review analyzes the human factor implications of increased automation. Although human factor challenges within SOCs have been reported, no review discusses the implications in terms of bias, complacency, and trust.
  • Lastly, our review conducted a human-factor sentiment analysis against all 48 included articles, yielding insightful results in a field dominated by technical approaches.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/computers13070165/s1, A spreadsheet titled Table S1: Scoping Review Literature Overview Per Paper. Table S1 contains a summarized description of all 48 articles included in the final selection. Articles are summarized by the following headers: Problem, Automation Aspect, Automation Aspect Explained, Contribution/Findings, Technology Implemented, Human Factor Sentiment, and Discussed the Role of the SOC Analyst.

Author Contributions

Conceptualization, J.T. and S.F.; methodology, J.T. and S.F.; software, J.T.; validation, J.T. and S.F.; formal analysis, J.T.; investigation, J.T.; resources, J.T. and S.F.; data curation, J.T.; writing—original draft preparation, J.T.; writing—review and editing, J.T. and S.F.; visualization, J.T.; supervision, S.F.; project administration, S.F. Both researchers contributed equally to this study, playing an integral role throughout the course of the data collection and analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding author/s.

Acknowledgments

We want to acknowledge the University of Tulsa Cyber Fellows initiative.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Search Strings by Database

Table A1 displays the two search strings that were applied to each database. Each search string corresponds to one of the secondary research objectives.
Table A1. Search Strings by Database.
Database: ScienceDirect — 199 results in total; 193 after removing duplicates.
   Secondary Objective #1 (39 results). Title, abstract or author-specified keywords: (“Security Operations Center” OR “Network Operation Center” OR “Cybersecurity Operation” OR “Cyber Security Operation” OR “Incident Response”) AND (“Automate” OR “Automation” OR “Decision Support” OR “Decision Aid”)
   Secondary Objective #2 (160 results). Title, abstract or author-specified keywords: (“Security Operations Center” OR “Network Operations Center” OR “Cybersecurity” OR “Cyber Security”) AND (“SOC Analyst” OR “Analyst” OR “Security Analyst” OR “Human”) AND Find articles with these terms: (“Automate” OR “Automation” OR “Decision Support” OR “Decision Aid” OR “Technical Control”) AND (“Complacency” OR “Bias” OR “Trust”)
Database: Scopus — 183 results in total; 161 after removing duplicates.
   Secondary Objective #1 (140 results): TITLE-ABS-KEY(“Security Operation Center” OR “Security Operations Center” OR “Network Operation Center” OR “Network Operations Center” OR “Cybersecurity Operation” OR “Cyber Security Operation”) AND ALL(“Automate” OR “Automated” OR “Automation” OR “Automated Security Tools” OR “Decision Aid*” OR “Decision Support”)
   Secondary Objective #2 (43 results): TITLE-ABS-KEY((“Security Operation* Center” OR “Network Operation* Center” OR “Cybersecurity Operation” OR “Cyber Security Operation” OR “Incident Response”) AND (“SOC Analyst” OR “Analyst” OR “Security Analyst” OR “Human”)) AND ALL((“Automate” OR “Automation” OR “Decision Support” OR “Decision Aid” OR “Technical Control”) AND (“Complacency” OR “Bias” OR “Trust” OR “Reliance”))
Database: IEEE Xplore — 189 results in total; 176 after removing duplicates.
   Secondary Objective #1 (127 results): (“All Metadata”: “Security Operation Centre” OR “All Metadata”: “Security Operations Centre” OR “All Metadata”: “Security Operation Center” OR “All Metadata”: “Security Operations Center” OR “All Metadata”: “Network Operation Center” OR “All Metadata”: “Network Operations Center” OR “All Metadata”: “Cybersecurity Operation” OR “All Metadata”: “Cyber Security Operation” OR “All Metadata”: “Incident Response”) AND (“All Metadata”: “Automate” OR “All Metadata”: “Automated” OR “All Metadata”: “Automation” OR “All Metadata”: “Decision Support” OR “All Metadata”: “Decision Aid”)
   Secondary Objective #2 (62 results): (“All Metadata”: “Security Operation Centre” OR “All Metadata”: “Security Operations Centre” OR “All Metadata”: “Security Operation Center” OR “All Metadata”: “Security Operations Center” OR “All Metadata”: “Network Operation Center” OR “All Metadata”: “Network Operations Center” OR “All Metadata”: “Cybersecurity” OR “All Metadata”: “Cyber Security” OR “All Metadata”: “Incident Response”) AND (“All Metadata”: “SOC Analyst” OR “All Metadata”: “Analyst” OR “All Metadata”: “Security Analyst” OR “All Metadata”: “Human” OR “All Metadata”: “Operator”) AND (“All Metadata”: “Automate” OR “All Metadata”: “Automated” OR “All Metadata”: “Automation” OR “All Metadata”: “Decision Support” OR “All Metadata”: “Decision Aid” OR “All Metadata”: “Technical Control”) AND (“Full Text & Metadata”: “Complacency” OR “Full Text & Metadata”: “Bias” OR “Full Text & Metadata”: “Trust” OR “Full Text & Metadata”: “Reliance”)
Database: ACM Digital Library — 128 results in total; 69 after removing duplicates.
   Secondary Objective #1 (82 results): [[All: “security operation center”] OR [All: “security operations center”] OR [All: “security operation centre”] OR [All: “security operations centre”]] AND [[All: “automate”] OR [All: “automation”] OR [All: “automated”] OR [All: “technical controls”] OR [All: “decision support system”] OR [All: “security automation”] OR [All: “automated security”]]
   Secondary Objective #2 (46 results): [[All: “security operation center”] OR [All: “security operations center”]] AND [[All: “soc analyst”] OR [All: “analyst”] OR [All: “security analyst”] OR [All: “human”]] AND [[All: “automate”] OR [All: “automated”] OR [All: “automation”]] AND [[All: “complacency”] OR [All: “bias”] OR [All: “trust”] OR [All: “reliance”]]
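For illustration, the per-database totals in Table A1 can be reproduced by merging each database’s two objective result sets and removing duplicates. The following is a minimal Python sketch assuming naive title-based matching; the actual deduplication tooling used in the review is not prescribed here, and the record titles are hypothetical.

```python
# Minimal sketch: merging per-objective search results and removing
# duplicates, as reported in Table A1. The matching rule (lowercased,
# trimmed titles) is an illustrative assumption.

def merge_and_dedupe(objective_1: list[str], objective_2: list[str]) -> list[str]:
    """Union of two result sets, keeping the first occurrence of each title."""
    seen: set[str] = set()
    merged: list[str] = []
    for title in objective_1 + objective_2:
        key = title.strip().lower()  # naive normalization for matching
        if key not in seen:
            seen.add(key)
            merged.append(title)
    return merged

# e.g., ScienceDirect returned 39 + 160 = 199 records across the two
# objectives, reduced to 193 once the 6 overlapping records were removed.
```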

Appendix B. Thematic Maps

Figure A1. Automation Application Areas: The thematic map displays the SOC automation application areas, discussed in relation to the incident response lifecycle. Please note that the number [1] in the figure denotes the number of the thematic map and is not a citation; that is, the figure represents the first thematic map.
Figure A2. Automation Implications on Analysts: The thematic map displays the implications of increased automation on SOC analysts. As above, the number [2] marks this as the second thematic map and does not pertain to a citation.
Figure A3. Automation Human Factor Sentiment: The thematic map displays the human factor sentiment discovered in the articles. As above, the number [3] marks this as the third thematic map and does not pertain to a citation.

Appendix C. SOC Automation Functional Requirements

Table A2 displays the functional requirements for SOC automation, representing desirable features that should be considered and implemented.
Table A2. SOC Automation Functional Requirements.
Requirement Group: Automated Scanning of the Threat Environment (results informing these features: Automation Application Areas; SOC Automation Implications; Human Factor Sentiment)
   - Cyber threat information collection: Automation collects cyber threat information.
   - Cyber threat intelligence analysis: Automation converts cyber threat information into CTI.
   - Cyber threat intelligence and IOC validation: Automation maps CTI to the associated IOCs flagged in alerts.
   - IOC generation: Automation generates new IOCs where necessary.
Requirement Group: Contextual Alert Investigations (results informing these features: Automation Application Areas)
   - Sequential alert correlation: Alerts are collected together with their preceding and succeeding alerts.
   - Sequential alert analysis: Alerts are analyzed in context with their preceding and succeeding alerts.
Requirement Group: Security to Business Associations (results informing this feature: Automation Application Areas; Human Factor Sentiment)
   - Potential threat implications: Threatening alerts are mapped to the associated business processes and organizational assets they will affect.
Requirement Group: Automation Explainability (results informing these features: Automation Application Areas; SOC Automation Implications)
   - Alert report interpretability: Automation explains why alerts are malicious and what characteristics they possess.
   - Tool operation interpretability: Automated tools explain how they operate and arrive at decisions.
Requirement Group: Human Automation Teaming (results informing this feature: Automation Application Areas)
   - Semi-supervised machine learning: Tools are trained on the insights and expertise of SOC analysts.
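To make the Contextual Alert Investigations group concrete, the sketch below pairs each alert with the alerts immediately preceding and succeeding it on the same host, giving the analyst (or a downstream analysis step) local context. The Alert fields, the host-based grouping, and the window size are illustrative assumptions rather than SAIG prescriptions.

```python
# Minimal sketch of sequential alert correlation: each alert is grouped
# with the alerts that immediately precede and succeed it on the same
# host. All field names and the window size are assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    timestamp: float
    signature: str

def correlate_sequential(alerts: list[Alert], window: int = 2) -> dict[int, list[Alert]]:
    """Map each alert (by its time-sorted index) to its surrounding alerts on the same host."""
    ordered = sorted(alerts, key=lambda a: a.timestamp)
    by_host: dict[str, list[tuple[int, Alert]]] = defaultdict(list)
    for i, alert in enumerate(ordered):
        by_host[alert.host].append((i, alert))

    context: dict[int, list[Alert]] = {}
    for host_alerts in by_host.values():
        for pos, (i, _alert) in enumerate(host_alerts):
            lo, hi = max(0, pos - window), pos + window + 1
            context[i] = [a for _, a in host_alerts[lo:hi]]  # the alert plus its neighbors
    return context
```

The analysis counterpart of this requirement would then operate on each context window rather than on isolated alerts.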

Appendix D. SOC Automation Non-Functional Requirements

Table A3 displays the non-functional requirements for SOC automation, representing desirable traits that SOC analysts would expect from their automated tools.
Table A3. SOC Automation Non-Functional Requirements.
Requirement: Automation competes on low false positive rates.
   Details: The premise is to reduce alert volume.
   Results informing this requirement: Automation Application Areas; SOC Automation Implications.
Requirement: Automation reliability levels disclosed.
   Details: The accuracy of automated tools is made clear to analysts.
   Results informing this requirement: SOC Automation Implications.
Requirement: Automation recommendation confidence levels disclosed.
   Details: The confidence of automation’s decisions is provided to analysts.
   Results informing this requirement: SOC Automation Implications.
Requirement: Decision support systems provide options and alternatives.
   Details: The different possible paths to restoration are provided to analysts.
   Results informing this requirement: Automation Application Areas; SOC Automation Implications; Human Factor Sentiment; the Tacit Domain Knowledge section.
Requirement: Decision support systems disclose options and alternatives not taken.
   Details: All options, even those deemed infeasible, should be presented to SOC analysts.
   Results informing this requirement: as above.
Requirement: Decision support systems provide justification.
   Details: Automation justifies its reasoning.
   Results informing this requirement: as above.
Regarding Appendix C and Appendix D, the “Results Informing Requirement” entries refer to the results sections of this paper. For example, to understand the results informing the non-functional requirement “automation reliability levels disclosed”, see Section 3.2.2, Section 3.2.3 and Section 3.2.4.
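As a concrete illustration of how several non-functional requirements could surface together in a tool’s output (reliability disclosure, confidence disclosure, and justified alternatives, including those not taken), the sketch below shows one possible shape for a decision-support recommendation. All field names and values are our assumptions, not a schema mandated by SAIG.

```python
# Minimal sketch of a decision-support output satisfying several
# non-functional requirements at once: tool reliability and
# per-recommendation confidence are disclosed, and alternative
# response options (including those not recommended) are listed
# with justifications. The schema is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class ResponseOption:
    action: str           # e.g., "isolate host", "block IP"
    justification: str    # why the tool does (or does not) recommend it
    recommended: bool

@dataclass
class Recommendation:
    alert_id: str
    tool_reliability: float          # historical accuracy, disclosed up front
    confidence: float                # confidence in this specific decision
    options: list[ResponseOption] = field(default_factory=list)

# Hypothetical example output presented to an analyst:
rec = Recommendation(
    alert_id="ALERT-1042",
    tool_reliability=0.94,
    confidence=0.71,
    options=[
        ResponseOption("isolate host", "C2 beaconing pattern matched a known IOC", True),
        ResponseOption("block destination IP", "IP is shared infrastructure; risk of collateral blocking", False),
    ],
)
```

Surfacing all of these fields, rather than a bare verdict, is what allows analysts to calibrate their reliance instead of deferring to the tool by default.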

References

  1. Basyurt, A.S.; Fromm, J.; Kuehn, P.; Kaufhold, M.-A.; Mirbabaie, M. Help Wanted—Challenges in Data Collection, Analysis and Communication of Cyber Threats in Security Operation Centers. In Proceedings of the 17th International Conference on Wirtschaftsinformatik, WI, Nuremberg, Germany, 21–23 February 2022; Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85171997510&partnerID=40&md5=30a02b455898c7c2c9d2421d82606470 (accessed on 15 September 2023).
  2. Bridges, R.A.; Rice, A.E.; Oesch, S.; Nichols, J.A.; Watson, C.; Spakes, K.; Norem, S.; Huettel, M.; Jewell, B.; Weber, B.; et al. Testing SOAR Tools in Use. Comput. Secur. 2023, 129, 103201. [Google Scholar] [CrossRef]
  3. Dietrich, C.; Krombholz, K.; Borgolte, K.; Fiebig, T. Investigating System Operators’ Perspective on Security Misconfigurations. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 1272–1289. [Google Scholar] [CrossRef]
  4. Hughes, K.; McLaughlin, K.; Sezer, S. Dynamic Countermeasure Knowledge for Intrusion Response Systems. In Proceedings of the 2020 31st Irish Signals and Systems Conference, ISSC, Letterkenny, Ireland, 11–12 June 2020. [Google Scholar] [CrossRef]
  5. Vectra AI. 2023 State of Threat Detection—The Defenders’ Dilemma; Vectra AI: San Jose, CA, USA, 2023; pp. 1–15. Available online: https://www.vectra.ai/resources/2023-state-of-threat-detection (accessed on 15 January 2024).
  6. Alahmadi, B.A.; Axon, L.; Martinovic, I. 99% False Positives: A Qualitative Study of SOC Analysts’ Perspectives on Security Alarms. In Proceedings of the 31st Usenix Security Symposium, Boston, MA, USA, 10–12 August 2022; Usenix—The Advanced Computing Systems Association: Berkeley, CA, USA, 2022. [Google Scholar]
  7. Devo. 2021 Devo SOC Performance Report; Ponemon Institute: Cambridge, MA, USA, 14 December 2021; pp. 1–43. Available online: https://www.devo.com/blog/2021-devo-soc-performance-report-soc-leaders-and-staff-are-not-aligned/ (accessed on 22 August 2023).
  8. Tines. Voice of the SOC Analyst. Tines: San Francisco, CA, USA, 2022; pp. 1–39. Available online: https://www.tines.com/reports/voice-of-the-soc-analyst (accessed on 10 September 2023).
  9. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
  10. Mosier, K.L.; Skitka, L.J.; Heers, S.; Burdick, M. Automation Bias: Decision Making and Performance in High-Tech Cockpits. Int. J. Aviat. Psychol. 1998, 8, 47–63. [Google Scholar] [CrossRef]
  11. Parasuraman, R.; Manzey, D.H. Complacency and Bias in Human Use of Automation: An Attentional Integration. Hum. Factors 2010, 52, 381–410. [Google Scholar] [CrossRef]
  12. Skitka, L.J.; Mosier, K.L.; Burdick, M. Does Automation Bias Decision-Making? Int. J. Hum.-Comput. Stud. 1999, 51, 991–1006. [Google Scholar] [CrossRef]
  13. Parasuraman, R.; Sheridan, T.B.; Wickens, C.D. A Model for Types and Levels of Human Interaction with Automation. IEEE Trans. Syst. Man Cybern. A 2000, 30, 286–297. [Google Scholar] [CrossRef]
  14. Butavicius, M.; Parsons, K.; Lillie, M.; McCormac, A.; Pattinson, M.; Calic, D. When believing in technology leads to poor cyber security: Development of a trust in technical controls scale. Comput. Secur. 2020, 98, 102020. [Google Scholar] [CrossRef]
  15. Shahjee, D.; Ware, N. Integrated Network and Security Operation Center: A Systematic Analysis. IEEE Access 2022, 10, 27881–27898. [Google Scholar] [CrossRef]
  16. Vielberth, M.; Bohm, F.; Fichtinger, I.; Pernul, G. Security Operations Center: A Systematic Study and Open Challenges. IEEE Access 2020, 8, 227756–227779. [Google Scholar] [CrossRef]
  17. Peters, M.D.J.; Marnie, C.; Tricco, A.C.; Pollock, D.; Munn, Z.; Alexander, L.; McInerney, P.; Godfrey, C.M.; Khalil, H. Updated Methodological Guidance for the Conduct of Scoping Reviews. JBI Evid. Synth. 2020, 18, 2119–2126. [Google Scholar] [CrossRef]
  18. Peters, M.D.J.; Godfrey, C.; McInerney, P.; Khalil, H.; Larsen, P.; Marnie, C.; Pollock, D.; Tricco, A.C.; Munn, Z. Best Practice Guidance and Reporting Items for the Development of Scoping Review Protocols. JBI Evid. Synth. 2022, 20, 953–968. [Google Scholar] [CrossRef] [PubMed]
  19. Arksey, H.; O’Malley, L. Scoping Studies: Towards a Methodological Framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32. [Google Scholar] [CrossRef]
  20. Haddaway, N.R.; Grainger, M.J.; Gray, C.T. Citationchaser: A Tool for Transparent and Efficient Forward and Backward Citation Chasing in Systematic Searching. Res. Synth. Methods 2022, 13, 533–545. [Google Scholar] [CrossRef] [PubMed]
  21. Braun, V.; Clarke, V. Using Thematic Analysis in Psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  22. Chen, C.; Lin, S.C.; Huang, S.C.; Chu, Y.T.; Lei, C.L.; Huang, C.Y. Building Machine Learning-Based Threat Hunting System from Scratch. Digit. Threat. 2022, 3, 1–21. [Google Scholar] [CrossRef]
  23. Oprea, A.; Li, Z.; Norris, R.; Bowers, K. MADE: Security Analytics for Enterprise Threat Detection. In ACSAC ’18: Proceedings of the 34th Annual Computer Security Applications Conference, San Juan, PR, USA, 3–7 December 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 124–136. [Google Scholar] [CrossRef]
  24. Yen, T.-F.; Oprea, A.; Onarlioglu, K.; Leetham, T.; Robertson, W.; Juels, A.; Kirda, E. Beehive: Large-Scale Log Analysis for Detecting Suspicious Activity in Enterprise Networks. In ACSAC ’13: Proceedings of the 29th Annual Computer Security Applications Conference, New Orleans, LA, USA, 9–13 December 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 199–208. [Google Scholar] [CrossRef]
  25. Ban, T.; Samuel, N.; Takahashi, T.; Inoue, D. Combat Security Alert Fatigue with AI-Assisted Techniques. In CSET ’21: Proceedings of the 14th Cyber Security Experimentation and Test Workshop, Virtual, 9 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; Volume 21, pp. 9–16. [Google Scholar] [CrossRef]
  26. Altamimi, S.; Altamimi, B.; Côté, D.; Shirmohammadi, S. Toward a Superintelligent Action Recommender for Network Operation Centers Using Reinforcement Learning. IEEE Access 2023, 11, 20216–20229. [Google Scholar] [CrossRef]
  27. Hassan, W.U.; Guo, S.; Li, D.; Chen, Z.; Jee, K.; Li, Z.; Bates, A. NoDoze: Combatting Threat Alert Fatigue with Automated Provenance Triage. In Proceedings of the 2019 Network and Distributed System Security Symposium, San Diego, CA, USA, 24–27 February 2019. [Google Scholar] [CrossRef]
  28. Kurogome, Y.; Otsuki, Y.; Kawakoya, Y.; Iwamura, M.; Hayashi, S.; Mori, T.; Sen, K. EIGER: Automated IOC Generation for Accurate and Interpretable Endpoint Malware Detection. In ACSAC ’19: Proceedings of the 35th Annual Computer Security Applications Conference, San Juan, PR, USA, 9–13 December 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 687–701. [Google Scholar] [CrossRef]
  29. Ndichu, S.; Ban, T.; Takahashi, T.; Inoue, D. A Machine Learning Approach to Detection of Critical Alerts from Imbalanced Multi-Appliance Threat Alert Logs. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 2119–2127. [Google Scholar] [CrossRef]
  30. Sworna, Z.T.; Islam, C.; Babar, M.A. APIRO: A Framework for Automated Security Tools API Recommendation. ACM Trans. Softw. Eng. Methodol. 2023, 32, 1–42. [Google Scholar] [CrossRef]
  31. Zhong, C.; Yen, J.; Lui, P.; Erbacher, R. Learning From Experts’ Experience: Toward Automated Cyber Security Data Triage. IEEE Syst. J. 2019, 13, 603–614. [Google Scholar] [CrossRef]
  32. González-Granadillo, G.; González-Zarzosa, S.; Diaz, R. Security Information and Event Management (SIEM): Analysis, Trends, and Usage in Critical Infrastructures. Sensors 2021, 21, 4759. [Google Scholar] [CrossRef]
  33. Akinrolabu, O.; Agrafiotis, I.; Erola, A. The Challenge of Detecting Sophisticated Attacks: Insights from SOC Analysts. In Proceedings of the 13th International Conference on Availability, Reliability and Security, Hamburg, Germany, 27–30 August 2018; pp. 1–9. [Google Scholar] [CrossRef]
  34. van Ede, T.; Aghakhani, H.; Spahn, N.; Bortolameotti, R.; Cova, M.; Continella, A.; Steen, M.v.; Peter, A.; Kruegel, C.; Vigna, G. DEEPCASE: Semi-Supervised Contextual Analysis of Security Events. In Proceedings of the 2022 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 23–25 May 2022; pp. 522–539. [Google Scholar] [CrossRef]
  35. Chung, M.-H.; Yang, Y.; Wang, L.; Cento, G.; Jerath, K.; Raman, A.; Lie, D.; Chignell, M.H. Implementing Data Exfiltration Defense in Situ: A Survey of Countermeasures and Human Involvement. ACM Comput. Surv. 2023, 55, 1–37. [Google Scholar] [CrossRef]
  36. Goodall, J.R.; Ragan, E.D.; Steed, C.A.; Reed, J.W.; Richardson, D.; Huffer, K.; Bridges, R.; Laska, J. Situ: Identifying and Explaining Suspicious Behavior in Networks. IEEE Trans. Vis. Comput. Graph. 2019, 25, 204–214. [Google Scholar] [CrossRef]
  37. Strickson, B.; Worsley, C.; Bertram, S. Human-Centered Assessment of Automated Tools for Improved Cyber Situational Awareness. In Proceedings of the 2023 15th International Conference on Cyber Conflict: Meeting Reality (CyCon), Tallinn, Estonia, 30 May–2 June 2023; pp. 273–286. [Google Scholar] [CrossRef]
  38. Afzaliseresht, N.; Miao, Y.; Michalska, S.; Liu, Q.; Wang, H. From Logs to Stories: Human-Centred Data Mining for Cyber Threat Intelligence. IEEE Access 2020, 8, 19089–19099. [Google Scholar] [CrossRef]
  39. Hauptman, A.I.; Schelble, B.G.; McNeese, N.J.; Madathil, K.C. Adapt and Overcome: Perceptions of Adaptive Autonomous Agents for Human-AI Teaming. Comput. Hum. Behav. 2023, 138, 107451. [Google Scholar] [CrossRef]
  40. Chiba, D.; Akiyama, M.; Otsuki, Y.; Hada, H.; Yagi, T.; Fiebig, T.; Van Eeten, M. DomainPrio: Prioritizing Domain Name Investigations to Improve SOC Efficiency. IEEE Access 2022, 10, 34352–34368. [Google Scholar] [CrossRef]
  41. Gupta, N.; Traore, I.; de Quinan, P.M.F. Automated Event Prioritization for Security Operation Center using Deep Learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5864–5872. [Google Scholar] [CrossRef]
  42. Islam, C.; Babar, M.A.; Croft, R.; Janicke, H. SmartValidator: A Framework for Automatic Identification and Classification of Cyber Threat Data. J. Network Comput. Appl. 2022, 202, 103370. [Google Scholar] [CrossRef]
  43. Renners, L.; Heine, F.; Kleiner, C.; Rodosek, G.D. Adaptive and Intelligible Prioritization for Network Security Incidents. In Proceedings of the 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security), Oxford, UK, 3–4 June 2019; pp. 1–8. [Google Scholar] [CrossRef]
  44. Demertzis, K.; Tziritas, N.; Kikiras, P.; Sanchez, S.L.; Iliadis, L. The next Generation Cognitive Security Operations Center: Adaptive Analytic Lambda Architecture for Efficient Defense against Adversarial Attacks. Big Data Cogn. Comput. 2019, 3, 6. [Google Scholar] [CrossRef]
  45. Andrade, R.O.; Yoo, S.G. Cognitive Security: A Comprehensive Study of Cognitive Science in Cybersecurity. J. Inf. Secur. Appl. 2019, 48, 102352. [Google Scholar] [CrossRef]
  46. Chamberlain, L.B.; Davis, L.E.; Stanley, M.; Gattoni, B.R. Automated Decision Systems for Cybersecurity and Infrastructure Security. In Proceedings of the 2020 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 18–20 May 2020; pp. 196–201. [Google Scholar] [CrossRef]
  47. Husák, M.; Sadlek, L.; Špaček, S.; Laštovička, M.; Javorník, M.; Komárková, J. CRUSOE: A Toolset for Cyber Situational Awareness and Decision Support in Incident Handling. Comput. Secur. 2022, 115, 102609. [Google Scholar] [CrossRef]
  48. Chen, Y.; Zahedi, F.M.; Abbasi, A.; Dobolyi, D. Trust Calibration of Automated Security IT Artifacts: A Multi-Domain Study of Phishing-Website Detection Tools. Inf. Manag. 2021, 58, 103394. [Google Scholar] [CrossRef]
  49. Erola, A.; Agrafiotis, I.; Happa, J.; Goldsmith, M.; Creese, S.; Legg, P.A. RicherPicture: Semi-Automated Cyber Defence Using Context-Aware Data Analytics. In Proceedings of the 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA), London, UK, 19–20 June 2017; pp. 1–8. [Google Scholar] [CrossRef]
  50. Happa, J.; Agrafiotis, I.; Helmhout, M.; Bashford-Rogers, T.; Goldsmith, M.; Creese, S. Assessing a Decision Support Tool for SOC Analysts. Digit. Threat. Res. Pract. 2021, 2, 1–35. [Google Scholar] [CrossRef]
  51. Naseer, A.; Naseer, H.; Ahmad, A.; Maynard, S.B.; Masood Siddiqui, A. Real-Time Analytics, Incident Response Process Agility and Enterprise Cybersecurity Performance: A Contingent Resource-Based Analysis. Int. J. Inf. Manag. 2021, 59, 102334. [Google Scholar] [CrossRef]
  52. van der Kleij, R.; Schraagen, J.M.; Cadet, B.; Young, H. Developing Decision Support for Cybersecurity Threat and Incident Managers. Comput. Secur. 2022, 113, 102535. [Google Scholar] [CrossRef]
  53. Amthor, P.; Fischer, D.; Kühnhauser, W.E.; Stelzer, D. Automated Cyber Threat Sensing and Responding: Integrating Threat Intelligence into Security-Policy-Controlled Systems. In ARES ’19: Proceedings of the 14th International Conference on Availability, Reliability and Security, Canterbury, UK, 26–29 August 2019; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
  54. Kinyua, J.; Awuah, L. Ai/Ml in Security Orchestration, Automation and Response: Future Research Directions. Intell. Autom. Soft Comp. 2021, 28, 527–545. [Google Scholar] [CrossRef]
  55. Neupane, S.; Ables, J.; Anderson, W.; Mittal, S.; Rahimi, S.; Banicescu, I.; Seale, M. Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. IEEE Access 2022, 10, 112392–112415. [Google Scholar] [CrossRef]
  56. Chen, J.; Mishler, S.; Hu, B. Automation Error Type and Methods of Communicating Automation Reliability Affect Trust and Performance: An Empirical Study in the Cyber Domain. IEEE Trans. Hum.-Mach. Syst. 2021, 51, 463–473. [Google Scholar] [CrossRef]
  57. Ryan, T.J.; Alarcon, G.M.; Walter, C.; Gamble, R.; Jessup, S.A.; Capiola, A.; Pfahler, M.D. Trust in Automated Software Repair: The Effects of Repair Source, Transparency, and Programmer Experience on Perceived Trustworthiness and Trust. In Proceedings of the HCI for Cybersecurity, Privacy and Trust; Moallem, A., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; Volume 11594, pp. 452–470. [Google Scholar] [CrossRef]
  58. Husák, M.; Čermák, M. SoK: Applications and Challenges of Using Recommender Systems in Cybersecurity Incident Handling and Response. In ARES ’22: Proceedings of the 17th International Conference on Availability, Reliability and Security, Vienna, Austria, 23–26 August 2022; Association for Computing Machinery: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
  59. Gutzwiller, R.S.; Fugate, S.; Sawyer, B.D.; Hancock, P.A. The Human Factors of Cyber Network Defense. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Los Angeles, CA, USA, 26–30 October 2015; Volume 59, pp. 322–326. [Google Scholar] [CrossRef]
  60. Kokulu, F.B.; Soneji, A.; Bao, T.; Shoshitaishvili, Y.; Zhao, Z.; Doupé, A.; Ahn, G.-J. Matched and Mismatched SOCs: A Qualitative Study on Security Operations Center Issues. In CCS ’19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1955–1970. [Google Scholar] [CrossRef]
  61. Brown, P.; Christensen, K.; Schuster, D. An Investigation of Trust in a Cyber Security Tool. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Washington, DC, USA, 19–23 September 2016; Volume 60, pp. 1454–1458. [Google Scholar] [CrossRef]
  62. Butavicius, M.; Taib, R.; Han, S.J. Why People Keep Falling for Phishing Scams: The Effects of Time Pressure and Deception Cues on the Detection of Phishing Emails. Comput. Secur. 2022, 123, 102937. [Google Scholar] [CrossRef]
  63. Baroni, P.; Cerutti, F.; Fogli, D.; Giacomin, M.; Gringoli, F.; Guida, G.; Sullivan, P. Self-Aware Effective Identification and Response to Viral Cyber Threats. In Proceedings of the 13th International Conference on Cyber Conflict (CyCon), Tallinn, Estonia, 25–28 May 2021; Jancarkova, T., Lindstrom, L., Visky, G., Zotz, P., Eds.; NATO CCD COE Publications; CCD COE: Tallinn, Estonia, 2021; Volume 2021, pp. 353–370. [Google Scholar] [CrossRef]
  64. Islam, C.; Babar, M.A.; Nepal, S. A Multi-Vocal Review of Security Orchestration. ACM Comput. Surv. 2019, 52, 1–45. [Google Scholar] [CrossRef]
  65. Pawlicka, A.; Pawlicki, M.; Kozik, R.; Choraś, R.S. A Systematic Review of Recommender Systems and Their Applications in Cybersecurity. Sensors 2021, 21, 5248. [Google Scholar] [CrossRef]
  66. Agyepong, E.; Cherdantseva, Y.; Reinecke, P.; Burnap, P. A Systematic Method for Measuring the Performance of a Cyber Security Operations Centre Analyst. Comput. Secur. 2023, 124, 102959. [Google Scholar] [CrossRef]
  67. Ofte, H.J.; Katsikas, S. Understanding Situation Awareness in SOCs, a Systematic Literature Review. Comput. Secur. 2023, 126, 103069. [Google Scholar] [CrossRef]
  68. Tilbury, J.; Flowerday, S. Humans and Automation: Augmenting Security Operation Centers. J. Cybersecur. Priv. JCP 2024, 4, 388–409. [Google Scholar] [CrossRef]
  69. Yang, W.; Lam, K.-Y. Automated Cyber Threat Intelligence Reports Classification for Early Warning of Cyber Attacks in Next Generation SOC. In Information and Communications Security: 21st International Conference; Zhou, J., Luo, X., Shen, Q., Xu, Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; Volume 11999, pp. 145–164. [Google Scholar] [CrossRef]
  70. Liu, J.; Zhang, R.; Liu, W.; Zhang, Y.; Gu, D.; Tong, M.; Wang, X.; Xue, J.; Wang, H. Context2Vector: Accelerating Security Event Triage via Context Representation Learning. Inf. Softw. Technol. 2022, 146, 106856. [Google Scholar] [CrossRef]
  71. John, A.; Isnin, I.F.B.; Madni, S.H.H.; Faheem, M. Cluster-Based Wireless Sensor Network Framework for Denial-of-Service Attack Detection Based on Variable Selection Ensemble Machine Learning Algorithms. Intell. Syst. Appl. 2024, 22, 200381. [Google Scholar] [CrossRef]
  72. Keating, C.B.; Padilla, J.J.; Adams, K. System of Systems Engineering Requirements: Challenges and Guidelines. Eng. Manag. J. 2008, 20, 24–31. [Google Scholar] [CrossRef]
  73. Kurtanovic, Z.; Maalej, W. Automatically Classifying Functional and Non-Functional Requirements Using Supervised Machine Learning. In Proceedings of the 2017 IEEE 25th International Requirements Engineering Conference (RE), Lisbon, Portugal, 4–8 September 2017; pp. 490–495. [Google Scholar]
  74. Eckhardt, J.; Vogelsang, A.; Fernández, D.M. Are “Non-Functional” Requirements Really Non-Functional? An Investigation of Non-Functional Requirements in Practice. In Proceedings of the 38th International Conference on Software Engineering, Austin, TX, USA, 14–22 May 2016; pp. 832–842. [Google Scholar]
  75. Tilbury, J.; Flowerday, S. The Rationality of Automation Bias in Security Operation Centers. J. Inf. Syst. Secur. 2024, 20, 87–107. [Google Scholar]
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagram.
Figure 2. SOC Automation Implementation Guidelines.
Table 1. Articles by Technology Type.

Technology Type: Number of Articles
AI/ML: 29
   Not Classified (simple AI/ML): 12
   Neural Networks: 7
   Visualization Modules: 4
   Natural Language Processing: 3
   Deep Learning: 2
   Reinforcement Learning: 1
IDS: 3
SIEM/SOAR: 3
Other: 5
N/A: 8
