Automation Bias and Complacency in Security Operation Centers
Abstract
1. Introduction
2. Materials and Methods
2.1. Article(s) Retrieval Process
2.2. Article(s) Selection Process
2.3. Article(s) Data Analysis Process
3. Results
3.1. SOC Automation Application Areas
3.1.1. Automated Incident Detection
Automated Alert Triage
Automated Correlation of Security Alerts
3.1.2. Automated Alert Reports—Explainable and Interpretable
3.1.3. Automated Alert Prioritization
3.1.4. Decision Support Systems (DSS)
Adaptable Decision Support Systems
Decision Support Systems for Known Procedures
3.1.5. Automated Incident Response
AI and ML in Incident Response
3.2. SOC Automation Implications
3.2.1. Automation of Defined Processes
3.2.2. Levels of Automation
3.2.3. Automation Bias
Automation Complacency
Automation Mis- and Disuse
3.2.4. Trust in Automation
Human-Machine Trust
3.3. Human Factor Sentiment
3.3.1. Human-in-the-Loop (HITL)
- Ref. [29] (p. 2119) states that “… these tools can seldom replicate the decision-making process of an analyst”.
- Ref. [35] (p. 22) states that “Although many researchers have focused on the algorithmic aspects of protecting against data exfiltration, human analysts remain at the core of what are effectively socio-technical systems”.
- Ref. [2] (p. 5) states that “the cybersecurity domain requires some level of human oversight due to the inherent uncertainty”.
Decision-Makers
Human-AI Teaming
Tacit Domain Knowledge
3.3.2. Human-out-of-the-Loop (HOTL)
- Ref. [53] (p. 3) states that “… it would be ideal if systems that receive TI could also respond quickly and efficiently, i.e., as fully automated as possible”.
- Ref. [4] (p. 3) states that “… developing a completely automated response mechanism that can combat new emerging attacks without considerable human intervention”.
Fully Autonomous SOCs
4. Discussion
4.1. Automation in the Incident Response Lifecycle
4.2. Characteristics of Automation
4.3. The Emergence of AI and ML in SOCs
4.4. The Human-in-the-Loop Approach
5. Findings
5.1. SOC Automation Functional Requirements
- Automated Scanning of the Threat Environment—Collection: Automated tools must constantly scan the threat landscape and collect cyber threat information. This can be achieved by scraping vulnerability databases and tracking emerging trends from verified sources.
- Automated Scanning of the Threat Environment—Analysis: Automated tools convert collected information into cyber threat intelligence (CTI). CTI is more meticulous than general threat information, providing threat actor, vector, and characteristic-specific intelligence.
- Automated Scanning of the Threat Environment—Validation: The analyzed CTI is mapped against the indicators of compromise (IOCs) present in alerts, providing security analysts with validated threats that can be marked as high priority.
- Automated Scanning of the Threat Environment—Generation: If the analyzed CTI cannot be mapped against any IOC present in a SOC’s alerts (i.e., there is no indication of that specific threat), new IOCs can be generated, and detection tools can be updated accordingly.
- Contextual Alert Investigations—Sequential Alert Correlation: Automated tools must employ horizontal alert correlation, collecting alerts together with their preceding and succeeding (i.e., surrounding) alerts.
- Contextual Alert Investigations—Sequential Alert Analysis: Automated tools must employ horizontal alert analysis, whereby alerts are analyzed and investigated in context with their preceding and succeeding (i.e., surrounding) alerts.
- Security to Business Associations—Potential Threat Implications: Automated tools must map threatening alerts to the associated business processes and organizational assets they will affect if not resolved.
- Automation Explainability—Alert Report Interpretability: Automated tools must leverage the information above, outline alert characteristics, and explain their maliciousness.
- Automation Explainability—Tool Operation Interpretability: Automated tools must explain how they operate and arrive at decisions, providing security analysts with logical reasoning.
- Human Automation Teaming—Semi-Supervised Learning: Automated tools that use AI and ML must employ semi-supervised learning models that are trained on the expertise of SOC analysts but retain the freedom to discover new insights independently.
5.2. SOC Automation Non-Functional Requirements
- Automation Competes on Low False Positive Rates: Given the volume of alerts that SOC analysts must triage, automated tools should reduce the number of alerts rather than add to them. While false negatives are critical and cannot be tolerated, false positives represent a legitimate area for improvement.
- Decision Support Systems Provide Options and Alternatives: Security analysts are presented with all viable paths for threat mitigation and restoration activities.
- Decision Support Systems Disclose Options and Alternatives Not Taken: Security analysts are presented with all non-viable paths for threat mitigation and restoration activities (i.e., those deemed not feasible) to show that the automation has given due consideration to countering legitimate attacks.
- Decision Support Systems Provide Justification: Automated tools justify why certain decisions have been recommended (or executed) by providing logical reasoning based on the CTI and IOC information provided.
- Automation Tool Reliability Levels Disclosed: The accuracy and reliability of automated tools, based on performance history, must be made clear to security analysts. This requirement pertains to the tool’s performance over time, not to an individual event/alert.
- Automation Recommendation Confidence Levels Disclosed: When automated tools recommend an action, the confidence they have in that decision must be made clear to security analysts. Unlike the requirement above, confidence levels pertain to an individual event/alert, whereby automation expresses how confident it is in a specific action being taken.
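The decision-support requirements above could surface to an analyst roughly as follows. This is a hypothetical sketch (the `Recommendation` fields, function name, and report keys are assumed), illustrating how per-alert confidence is kept separate from long-run tool reliability:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    viable: bool        # non-viable alternatives are disclosed, not hidden
    confidence: float   # confidence in this specific action (per event/alert)
    justification: str  # reasoning tied to the CTI/IOC evidence

def present_to_analyst(recommendations, tool_history):
    """Assemble a DSS report: viable options, alternatives not taken (each
    carrying its own justification and confidence), plus the tool's long-run
    reliability (performance over time, distinct from per-alert confidence)."""
    reliability = sum(tool_history) / len(tool_history) if tool_history else None
    return {
        "options": [r for r in recommendations if r.viable],
        "alternatives_not_taken": [r for r in recommendations if not r.viable],
        "tool_reliability": reliability,
    }
```

Disclosing both figures lets an analyst calibrate trust: a tool may be reliable overall yet express low confidence in one specific recommendation, which is exactly the case where human judgment should dominate.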
6. Limitations and Future Work
7. Conclusions and Contributions
- Firstly, our review includes proposed automated tools and solutions discussed in academic literature and validated in real-world SOCs (or against real-world datasets).
- Secondly, our review analyzes the human factor implications of increased automation. Although human factor challenges within SOCs have been reported, no review discusses the implications in terms of bias, complacency, and trust.
- Lastly, our review conducted a human-factor sentiment analysis against all 48 included articles, yielding insightful results in a field dominated by technical approaches.
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Search Strings by Database
| Database | Search String | Results |
|---|---|---|
| ScienceDirect (199) | Secondary Objective #1 — Title, abstract or author-specified keywords: (“Security Operations Center” OR “Network Operation Center” OR “Cybersecurity Operation” OR “Cyber Security Operation” OR “Incident Response”) AND (“Automate” OR “Automation” OR “Decision Support” OR “Decision Aid”) | 39 |
| | Secondary Objective #2 — Title, abstract or author-specified keywords: (“Security Operations Center” OR “Network Operations Center” OR “Cybersecurity” OR “Cyber Security”) AND (“SOC Analyst” OR “Analyst” OR “Security Analyst” OR “Human”) AND Find articles with these terms: (“Automate” OR “Automation” OR “Decision Support” OR “Decision Aid” OR “Technical Control”) AND (“Complacency” OR “Bias” OR “Trust”) | 160 |
| | Total after removing duplicates: 193 | |
| Scopus (183) | Secondary Objective #1 — (TITLE-ABS-KEY((“Security Operation Center” OR “Security Operations Center” OR “Network Operation Center” OR “Network Operations Center” OR “Cybersecurity Operation” OR “Cyber Security Operation”) AND ALL (“Automate” OR “Automated” OR “Automation” OR “Automated Security Tools” OR “Decision Aid*” OR “Decision Support”)) | 140 |
| | Secondary Objective #2 — TITLE-ABS-KEY((“Security Operation* Center” OR “Network Operation* Center” OR “Cybersecurity Operation” OR “Cyber Security Operation” OR “Incident Response”) AND (“SOC Analyst” OR “Analyst” OR “Security Analyst” OR “Human”)) AND ALL ((“Automate” OR “Automation” OR “Decision Support” OR “Decision Aid” OR “Technical Control”) AND (“Complacency” OR “Bias” OR “Trust” OR “Reliance”)) | 43 |
| | Total after removing duplicates: 161 | |
| IEEE Xplore (189) | Secondary Objective #1 — (“All Metadata”: “Security Operation Centre” OR “All Metadata”: “Security Operations Centre” OR “All Metadata”: “Security Operation Center” OR “All Metadata”: “Security Operations Center” OR “All Metadata”: “Network Operation Center” OR “All Metadata”: “Network Operations Center” OR “All Metadata”: “Cybersecurity Operation” OR “All Metadata”: “Cyber Security Operation” OR “All Metadata”: “Incident Response”) AND (“All Metadata”: “Automate” OR “All Metadata”: “Automated” OR “All Metadata”: “Automation” OR “All Metadata”: “Decision Support” OR “All Metadata”: “Decision Aid”) | 127 |
| | Secondary Objective #2 — (“All Metadata”: “Security Operation Centre” OR “All Metadata”: “Security Operations Centre” OR “All Metadata”: “Security Operation Center” OR “All Metadata”: “Security Operations Center” OR “All Metadata”: “Network Operation Center” OR “All Metadata”: “Network Operations Center” OR “All Metadata”: “Cybersecurity” OR “All Metadata”: “Cyber Security” OR “All Metadata”: “Incident Response”) AND (“All Metadata”: “SOC Analyst” OR “All Metadata”: “Analyst” OR “All Metadata”: “Security Analyst” OR “All Metadata”: “Human” OR “All Metadata”: “Operator”) AND (“All Metadata”: “Automate” OR “All Metadata”: “Automated” OR “All Metadata”: “Automation” OR “All Metadata”: “Decision Support” OR “All Metadata”: “Decision Aid” OR “All Metadata”: “Technical Control”) AND (“Full Text & Metadata”: “Complacency” OR “Full Text & Metadata”: “Bias” OR “Full Text & Metadata”: “Trust” OR “Full Text & Metadata”: “Reliance”) | 62 |
| | Total after removing duplicates: 176 | |
| ACM Digital Library (128) | Secondary Objective #1 — [[All: “security operation center”] OR [All: “security operations center”] OR [All: “security operation centre”] OR [All: “security operations centre”]] AND [[All: “automate”] OR [All: “automation”] OR [All: “automated”] OR [All: “technical controls”] OR [All: “decision support system”] OR [All: “security automation”] OR [All: “automated security”]] | 82 |
| | Secondary Objective #2 — [[All: “security operation center”] OR [All: “security operations center”]] AND [[All: “soc analyst”] OR [All: “analyst”] OR [All: “security analyst”] OR [All: “human”]] AND [[All: “automate”] OR [All: “automated”] OR [All: “automation”]] AND [[All: “complacency”] OR [All: “bias”] OR [All: “trust”] OR [All: “reliance”]] | 46 |
| | Total after removing duplicates: 69 | |
Appendix B. Thematic Maps
Appendix C. SOC Automation Functional Requirements
| Requirement Group | Requirement | Details | Results Informing Feature |
|---|---|---|---|
| Automated Scanning of the Threat Environment | Cyber threat information collection | Automation collects cyber threat information. | Automation Application Areas |
| | Cyber threat intelligence analysis | Automation converts cyber threat information into CTI. | |
| | Cyber threat intelligence and IOC validation | Mapping CTI against the IOCs flagged in alerts. | |
| | IOC generation | Generating new IOCs if necessary. | |
| Contextual Alert Investigations | Sequential alert correlation | Alerts are collected with their preceding and succeeding alerts. | Automation Application Areas |
| | Sequential alert analysis | Alerts are analyzed in context with their preceding and succeeding alerts. | |
| Security to Business Associations | Potential threat implications | Threatening alerts are mapped to the associated business processes and organizational assets they will affect. | Automation Application Areas |
| Automation Explainability | Alert report interpretability | Automation explains why alerts are malicious and what characteristics they possess. | Automation Application Areas |
| | Tool operation interpretability | Automated tools explain how they operate and arrive at decisions. | Automation Application Areas |
| Human Automation Teaming | Semi-supervised machine learning | Tools are trained on the expertise and insights of SOC analysts. | Automation Application Areas |
Appendix D. SOC Automation Non-Functional Requirements
| Requirement | Details | Results Informing Requirement |
|---|---|---|
| Automation competes on low false positive rates | The premise is to reduce alert volume. | Automation Application Areas |
| Automation reliability levels disclosed | The accuracy of automated tools is made clear to analysts. | SOC Automation Implications |
| Automation recommendation confidence levels disclosed | The confidence of automation’s decisions is provided to analysts. | |
| Decision support systems provide options and alternatives | The different possible paths to restoration are provided to analysts. | Automation Application Areas |
| Decision support systems disclose options and alternatives not taken | All decisions, even those not feasible, should be presented to SOC analysts. | |
| Decision support systems provide justification | Automation justifies its reasoning. | |
References
- Basyurt, A.S.; Fromm, J.; Kuehn, P.; Kaufhold, M.-A.; Mirbabaie, M. Help Wanted—Challenges in Data Collection, Analysis and Communication of Cyber Threats in Security Operation Centers. In Proceedings of the 17th International Conference on Wirtschaftsinformatik, WI, Nuremberg, Germany, 21–23 February 2022; Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85171997510&partnerID=40&md5=30a02b455898c7c2c9d2421d82606470 (accessed on 15 September 2023).
- Bridges, R.A.; Rice, A.E.; Oesch, S.; Nichols, J.A.; Watson, C.; Spakes, K.; Norem, S.; Huettel, M.; Jewell, B.; Weber, B.; et al. Testing SOAR Tools in Use. Comput. Secur. 2023, 129, 103201. [Google Scholar] [CrossRef]
- Dietrich, C.; Krombholz, K.; Borgolte, K.; Fiebig, T. Investigating System Operators’ Perspective on Security Misconfigurations. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 1272–1289. [Google Scholar] [CrossRef]
- Hughes, K.; McLaughlin, K.; Sezer, S. Dynamic Countermeasure Knowledge for Intrusion Response Systems. In Proceedings of the 2020 31st Irish Signals and Systems Conference, ISSC, Letterkenny, Ireland, 11–12 June 2020. [Google Scholar] [CrossRef]
- Vectra AI. 2023 State of Threat Detection—The Defenders’ Dilemma; Vectra AI: San Jose, CA, USA, 2023; pp. 1–15. Available online: https://www.vectra.ai/resources/2023-state-of-threat-detection (accessed on 15 January 2024).
- Alahmadi, B.A.; Axon, L.; Martinovic, I. 99% False Positives: A Qualitative Study of SOC Analysts’ Perspectives on Security Alarms. In Proceedings of the 31st Usenix Security Symposium, Boston, MA, USA, 10–12 August 2022; Usenix—The Advanced Computing Systems Association: Berkeley, CA, USA, 2022. [Google Scholar]
- Devo. 2021 Devo SOC Performance Report; Ponemon Institute: Cambridge, MA, USA, 14 December 2021; pp. 1–43. Available online: https://www.devo.com/blog/2021-devo-soc-performance-report-soc-leaders-and-staff-are-not-aligned/ (accessed on 22 August 2023).
- Tines. Voice of the SOC Analyst. Tines: San Francisco, CA, USA, 2022; pp. 1–39. Available online: https://www.tines.com/reports/voice-of-the-soc-analyst (accessed on 10 September 2023).
- Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
- Mosier, K.L.; Skitka, L.J.; Heers, S.; Burdick, M. Automation Bias: Decision Making and Performance in High-Tech Cockpits. Int. J. Aviat. Psychol. 1998, 8, 47–63. [Google Scholar] [CrossRef]
- Parasuraman, R.; Manzey, D.H. Complacency and Bias in Human Use of Automation: An Attentional Integration. Hum. Factors 2010, 52, 381–410. [Google Scholar] [CrossRef]
- Skitka, L.J.; Mosier, K.L.; Burdick, M. Does Automation Bias Decision-Making? Int. J. Hum.-Comput. Stud. 1999, 51, 991–1006. [Google Scholar] [CrossRef]
- Parasuraman, R.; Sheridan, T.B.; Wickens, C.D. A Model for Types and Levels of Human Interaction with Automation. IEEE Trans. Syst. Man Cybern. A 2000, 30, 286–297. [Google Scholar] [CrossRef]
- Butavicius, M.; Parsons, K.; Lillie, M.; McCormac, A.; Pattinson, M.; Calic, D. When believing in technology leads to poor cyber security: Development of a trust in technical controls scale. Comput. Secur. 2020, 98, 102020. [Google Scholar] [CrossRef]
- Shahjee, D.; Ware, N. Integrated Network and Security Operation Center: A Systematic Analysis. IEEE Access 2022, 10, 27881–27898. [Google Scholar] [CrossRef]
- Vielberth, M.; Bohm, F.; Fichtinger, I.; Pernul, G. Security Operations Center: A Systematic Study and Open Challenges. IEEE Access 2020, 8, 227756–227779. [Google Scholar] [CrossRef]
- Peters, M.D.J.; Marnie, C.; Tricco, A.C.; Pollock, D.; Munn, Z.; Alexander, L.; McInerney, P.; Godfrey, C.M.; Khalil, H. Updated Methodological Guidance for the Conduct of Scoping Reviews. JBI Evid. Synth. 2020, 18, 2119–2126. [Google Scholar] [CrossRef]
- Peters, M.D.J.; Godfrey, C.; McInerney, P.; Khalil, H.; Larsen, P.; Marnie, C.; Pollock, D.; Tricco, A.C.; Munn, Z. Best Practice Guidance and Reporting Items for the Development of Scoping Review Protocols. JBI Evid. Synth. 2022, 20, 953–968. [Google Scholar] [CrossRef] [PubMed]
- Arksey, H.; O’Malley, L. Scoping Studies: Towards a Methodological Framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32. [Google Scholar] [CrossRef]
- Haddaway, N.R.; Grainger, M.J.; Gray, C.T. Citationchaser: A Tool for Transparent and Efficient Forward and Backward Citation Chasing in Systematic Searching. Res. Synth. Methods 2022, 13, 533–545. [Google Scholar] [CrossRef] [PubMed]
- Braun, V.; Clarke, V. Using Thematic Analysis in Psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
- Chen, C.; Lin, S.C.; Huang, S.C.; Chu, Y.T.; Lei, C.L.; Huang, C.Y. Building Machine Learning-Based Threat Hunting System from Scratch. Digit. Threat. 2022, 3, 1–21. [Google Scholar] [CrossRef]
- Oprea, A.; Li, Z.; Norris, R.; Bowers, K. MADE: Security Analytics for Enterprise Threat Detection. In ACSAC ’18: Proceedings of the 34th Annual Computer Security Applications Conference, San Juan, PR, USA, 3–7 December 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 124–136. [Google Scholar] [CrossRef]
- Yen, T.-F.; Oprea, A.; Onarlioglu, K.; Leetham, T.; Robertson, W.; Juels, A.; Kirda, E. Beehive: Large-Scale Log Analysis for Detecting Suspicious Activity in Enterprise Networks. In ACSAC ’13: Proceedings of the 29th Annual Computer Security Applications Conference, New Orleans, LA, USA, 9–13 December 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 199–208. [Google Scholar] [CrossRef]
- Ban, T.; Samuel, N.; Takahashi, T.; Inoue, D. Combat Security Alert Fatigue with AI-Assisted Techniques. In CSET ’21: Proceedings of the 14th Cyber Security Experimentation and Test Workshop, Virtual, 9 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 9–16. [Google Scholar] [CrossRef]
- Altamimi, S.; Altamimi, B.; Côté, D.; Shirmohammadi, S. Toward a Superintelligent Action Recommender for Network Operation Centers Using Reinforcement Learning. IEEE Access 2023, 11, 20216–20229. [Google Scholar] [CrossRef]
- Hassan, W.U.; Guo, S.; Li, D.; Chen, Z.; Jee, K.; Li, Z.; Bates, A. NoDoze: Combatting Threat Alert Fatigue with Automated Provenance Triage. In Proceedings of the 2019 Network and Distributed System Security Symposium, San Diego, CA, USA, 24–27 February 2019. [Google Scholar] [CrossRef]
- Kurogome, Y.; Otsuki, Y.; Kawakoya, Y.; Iwamura, M.; Hayashi, S.; Mori, T.; Sen, K. EIGER: Automated IOC Generation for Accurate and Interpretable Endpoint Malware Detection. In ACSAC ’19: Proceedings of the 35th Annual Computer Security Applications Conference, San Juan, PR, USA, 9–13 December 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 687–701. [Google Scholar] [CrossRef]
- Ndichu, S.; Ban, T.; Takahashi, T.; Inoue, D. A Machine Learning Approach to Detection of Critical Alerts from Imbalanced Multi-Appliance Threat Alert Logs. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 2119–2127. [Google Scholar] [CrossRef]
- Sworna, Z.T.; Islam, C.; Babar, M.A. APIRO: A Framework for Automated Security Tools API Recommendation. ACM Trans. Softw. Eng. Methodol. 2023, 32, 1–42. [Google Scholar] [CrossRef]
- Zhong, C.; Yen, J.; Lui, P.; Erbacher, R. Learning From Experts’ Experience: Toward Automated Cyber Security Data Triage. IEEE Syst. J. 2019, 13, 603–614. [Google Scholar] [CrossRef]
- González-Granadillo, G.; González-Zarzosa, S.; Diaz, R. Security Information and Event Management (SIEM): Analysis, Trends, and Usage in Critical Infrastructures. Sensors 2021, 21, 4759. [Google Scholar] [CrossRef]
- Akinrolabu, O.; Agrafiotis, I.; Erola, A. The Challenge of Detecting Sophisticated Attacks: Insights from SOC Analysts. In Proceedings of the 13th International Conference on Availability, Reliability and Security, Hamburg, Germany, 27–30 August 2018; pp. 1–9. [Google Scholar] [CrossRef]
- van Ede, T.; Aghakhani, H.; Spahn, N.; Bortolameotti, R.; Cova, M.; Continella, A.; Steen, M.v.; Peter, A.; Kruegel, C.; Vigna, G. DEEPCASE: Semi-Supervised Contextual Analysis of Security Events. In Proceedings of the 2022 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 23–25 May 2022; pp. 522–539. [Google Scholar] [CrossRef]
- Chung, M.-H.; Yang, Y.; Wang, L.; Cento, G.; Jerath, K.; Raman, A.; Lie, D.; Chignell, M.H. Implementing Data Exfiltration Defense in Situ: A Survey of Countermeasures and Human Involvement. ACM Comput. Surv. 2023, 55, 1–37. [Google Scholar] [CrossRef]
- Goodall, J.R.; Ragan, E.D.; Steed, C.A.; Reed, J.W.; Richardson, D.; Huffer, K.; Bridges, R.; Laska, J. Situ: Identifying and Explaining Suspicious Behavior in Networks. IEEE Trans. Vis. Comput. Graph. 2019, 25, 204–214. [Google Scholar] [CrossRef]
- Strickson, B.; Worsley, C.; Bertram, S. Human-Centered Assessment of Automated Tools for Improved Cyber Situational Awareness. In Proceedings of the 2023 15th International Conference on Cyber Conflict: Meeting Reality (CyCon), Tallinn, Estonia, 30 May–2 June 2023; pp. 273–286. [Google Scholar] [CrossRef]
- Afzaliseresht, N.; Miao, Y.; Michalska, S.; Liu, Q.; Wang, H. From Logs to Stories: Human-Centred Data Mining for Cyber Threat Intelligence. IEEE Access 2020, 8, 19089–19099. [Google Scholar] [CrossRef]
- Hauptman, A.I.; Schelble, B.G.; McNeese, N.J.; Madathil, K.C. Adapt and Overcome: Perceptions of Adaptive Autonomous Agents for Human-AI Teaming. Comput. Hum. Behav. 2023, 138, 107451. [Google Scholar] [CrossRef]
- Chiba, D.; Akiyama, M.; Otsuki, Y.; Hada, H.; Yagi, T.; Fiebig, T.; Van Eeten, M. DomainPrio: Prioritizing Domain Name Investigations to Improve SOC Efficiency. IEEE Access 2022, 10, 34352–34368. [Google Scholar] [CrossRef]
- Gupta, N.; Traore, I.; de Quinan, P.M.F. Automated Event Prioritization for Security Operation Center using Deep Learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5864–5872. [Google Scholar] [CrossRef]
- Islam, C.; Babar, M.A.; Croft, R.; Janicke, H. SmartValidator: A Framework for Automatic Identification and Classification of Cyber Threat Data. J. Network Comput. Appl. 2022, 202, 103370. [Google Scholar] [CrossRef]
- Renners, L.; Heine, F.; Kleiner, C.; Rodosek, G.D. Adaptive and Intelligible Prioritization for Network Security Incidents. In Proceedings of the 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security), Oxford, UK, 3–4 June 2019; pp. 1–8. [Google Scholar] [CrossRef]
- Demertzis, K.; Tziritas, N.; Kikiras, P.; Sanchez, S.L.; Iliadis, L. The next Generation Cognitive Security Operations Center: Adaptive Analytic Lambda Architecture for Efficient Defense against Adversarial Attacks. Big Data Cogn. Comput. 2019, 3, 6. [Google Scholar] [CrossRef]
- Andrade, R.O.; Yoo, S.G. Cognitive Security: A Comprehensive Study of Cognitive Science in Cybersecurity. J. Inf. Secur. Appl. 2019, 48, 102352. [Google Scholar] [CrossRef]
- Chamberlain, L.B.; Davis, L.E.; Stanley, M.; Gattoni, B.R. Automated Decision Systems for Cybersecurity and Infrastructure Security. In Proceedings of the 2020 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 18–20 May 2020; pp. 196–201. [Google Scholar] [CrossRef]
- Husák, M.; Sadlek, L.; Špaček, S.; Laštovička, M.; Javorník, M.; Komárková, J. CRUSOE: A Toolset for Cyber Situational Awareness and Decision Support in Incident Handling. Comput. Secur. 2022, 115, 102609. [Google Scholar] [CrossRef]
- Chen, Y.; Zahedi, F.M.; Abbasi, A.; Dobolyi, D. Trust Calibration of Automated Security IT Artifacts: A Multi-Domain Study of Phishing-Website Detection Tools. Inf. Manag. 2021, 58, 103394. [Google Scholar] [CrossRef]
- Erola, A.; Agrafiotis, I.; Happa, J.; Goldsmith, M.; Creese, S.; Legg, P.A. RicherPicture: Semi-Automated Cyber Defence Using Context-Aware Data Analytics. In Proceedings of the 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA), London, UK, 19–20 June 2017; pp. 1–8. [Google Scholar] [CrossRef]
- Happa, J.; Agrafiotis, I.; Helmhout, M.; Bashford-Rogers, T.; Goldsmith, M.; Creese, S. Assessing a Decision Support Tool for SOC Analysts. Digit. Threat. Res. Pract. 2021, 2, 1–35. [Google Scholar] [CrossRef]
- Naseer, A.; Naseer, H.; Ahmad, A.; Maynard, S.B.; Masood Siddiqui, A. Real-Time Analytics, Incident Response Process Agility and Enterprise Cybersecurity Performance: A Contingent Resource-Based Analysis. Int. J. Inf. Manag. 2021, 59, 102334. [Google Scholar] [CrossRef]
- van der Kleij, R.; Schraagen, J.M.; Cadet, B.; Young, H. Developing Decision Support for Cybersecurity Threat and Incident Managers. Comput. Secur. 2022, 113, 102535. [Google Scholar] [CrossRef]
- Amthor, P.; Fischer, D.; Kühnhauser, W.E.; Stelzer, D. Automated Cyber Threat Sensing and Responding: Integrating Threat Intelligence into Security-Policy-Controlled Systems. In Proceedings of the ARES ’19: Proceedings of the 14th International Conference on Availability, Reliability and Security, Canterbury, UK, 26–29 August 2019; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
- Kinyua, J.; Awuah, L. AI/ML in Security Orchestration, Automation and Response: Future Research Directions. Intell. Autom. Soft Comp. 2021, 28, 527–545. [Google Scholar] [CrossRef]
- Neupane, S.; Ables, J.; Anderson, W.; Mittal, S.; Rahimi, S.; Banicescu, I.; Seale, M. Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. IEEE Access 2022, 10, 112392–112415. [Google Scholar] [CrossRef]
- Chen, J.; Mishler, S.; Hu, B. Automation Error Type and Methods of Communicating Automation Reliability Affect Trust and Performance: An Empirical Study in the Cyber Domain. IEEE Trans. Hum.-Mach. Syst. 2021, 51, 463–473. [Google Scholar] [CrossRef]
- Ryan, T.J.; Alarcon, G.M.; Walter, C.; Gamble, R.; Jessup, S.A.; Capiola, A.; Pfahler, M.D. Trust in Automated Software Repair: The Effects of Repair Source, Transparency, and Programmer Experience on Perceived Trustworthiness and Trust. In Proceedings of the HCI for Cybersecurity, Privacy and Trust; Moallem, A., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; Volume 11594, pp. 452–470. [Google Scholar] [CrossRef]
- Husák, M.; Čermák, M. SoK: Applications and Challenges of Using Recommender Systems in Cybersecurity Incident Handling and Response. In ARES ’22: Proceedings of the 17th International Conference on Availability, Reliability and Security, Vienna, Austria, 23–26 August 2022; Association for Computing Machinery: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
- Gutzwiller, R.S.; Fugate, S.; Sawyer, B.D.; Hancock, P.A. The Human Factors of Cyber Network Defense. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Los Angeles, CA, USA, 26–30 October 2015; Volume 59, pp. 322–326. [Google Scholar] [CrossRef]
- Kokulu, F.B.; Soneji, A.; Bao, T.; Shoshitaishvili, Y.; Zhao, Z.; Doupé, A.; Ahn, G.-J. Matched and Mismatched SOCs: A Qualitative Study on Security Operations Center Issues. In Proceedings of the CCS ’19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1955–1970. [Google Scholar] [CrossRef]
- Brown, P.; Christensen, K.; Schuster, D. An Investigation of Trust in a Cyber Security Tool. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Washington, DC, USA, 19–23 September 2016; Volume 60, pp. 1454–1458. [Google Scholar] [CrossRef]
- Butavicius, M.; Taib, R.; Han, S.J. Why People Keep Falling for Phishing Scams: The Effects of Time Pressure and Deception Cues on the Detection of Phishing Emails. Comput. Secur. 2022, 123, 102937. [Google Scholar] [CrossRef]
- Baroni, P.; Cerutti, F.; Fogli, D.; Giacomin, M.; Gringoli, F.; Guida, G.; Sullivan, P. Self-Aware Effective Identification and Response to Viral Cyber Threats. In Proceedings of the 13th International Conference on Cyber Conflict (CyCon), Tallinn, Estonia, 25–28 May 2021; Jancarkova, T., Lindstrom, L., Visky, G., Zotz, P., Eds.; NATO CCD COE Publications; CCD COE: Tallinn, Estonia, 2021; Volume 2021, pp. 353–370. [Google Scholar] [CrossRef]
- Islam, C.; Babar, M.A.; Nepal, S. A Multi-Vocal Review of Security Orchestration. ACM Comput. Surv. 2019, 52, 1–45. [Google Scholar] [CrossRef]
- Pawlicka, A.; Pawlicki, M.; Kozik, R.; Choraś, R.S. A Systematic Review of Recommender Systems and Their Applications in Cybersecurity. Sensors 2021, 21, 5248. [Google Scholar] [CrossRef]
- Agyepong, E.; Cherdantseva, Y.; Reinecke, P.; Burnap, P. A Systematic Method for Measuring the Performance of a Cyber Security Operations Centre Analyst. Comput. Secur. 2023, 124, 102959. [Google Scholar] [CrossRef]
- Ofte, H.J.; Katsikas, S. Understanding Situation Awareness in SOCs, a Systematic Literature Review. Comput. Secur. 2023, 126, 103069. [Google Scholar] [CrossRef]
- Tilbury, J.; Flowerday, S. Humans and Automation: Augmenting Security Operation Centers. J. Cybersecur. Priv. JCP 2024, 4, 388–409. [Google Scholar] [CrossRef]
- Yang, W.; Lam, K.-Y. Automated Cyber Threat Intelligence Reports Classification for Early Warning of Cyber Attacks in Next Generation SOC. In Information and Communications Security: 21st International Conference; Zhou, J., Luo, X., Shen, Q., Xu, Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; Volume 11999, pp. 145–164. [Google Scholar] [CrossRef]
- Liu, J.; Zhang, R.; Liu, W.; Zhang, Y.; Gu, D.; Tong, M.; Wang, X.; Xue, J.; Wang, H. Context2Vector: Accelerating Security Event Triage via Context Representation Learning. Inf. Softw. Technol. 2022, 146, 106856. [Google Scholar] [CrossRef]
- John, A.; Isnin, I.F.B.; Madni, S.H.H.; Faheem, M. Cluster-Based Wireless Sensor Network Framework for Denial-of-Service Attack Detection Based on Variable Selection Ensemble Machine Learning Algorithms. Intell. Syst. Appl. 2024, 22, 200381. [Google Scholar] [CrossRef]
- Keating, C.B.; Padilla, J.J.; Adams, K. System of Systems Engineering Requirements: Challenges and Guidelines. Eng. Manag. J. 2008, 20, 24–31. [Google Scholar] [CrossRef]
- Kurtanovic, Z.; Maalej, W. Automatically Classifying Functional and Non-Functional Requirements Using Supervised Machine Learning. In Proceedings of the 2017 IEEE 25th International Requirements Engineering Conference (RE), Lisbon, Portugal, 4–8 September 2017; pp. 490–495. [Google Scholar]
- Eckhardt, J.; Vogelsang, A.; Fernández, D.M. Are “Non-Functional” Requirements Really Non-Functional? An Investigation of Non-Functional Requirements in Practice. In Proceedings of the 38th International Conference on Software Engineering, Austin, TX, USA, 14–22 May 2016; pp. 832–842. [Google Scholar]
- Tilbury, J.; Flowerday, S. The Rationality of Automation Bias in Security Operation Centers. J. Inf. Syst. Secur. 2024, 20, 87–107. [Google Scholar]
Technology Type | Number of Articles |
---|---|
AI/ML | 29 |
Not Classified (simple AI/ML) | 12 |
Neural Networks | 7 |
Visualization Modules | 4 |
Natural Language Processing | 3 |
Deep Learning | 2 |
Reinforcement Learning | 1 |
IDS | 3 |
SIEM/SOAR | 3 |
Other | 5 |
N/A | 8 |
Share and Cite
Tilbury, J.; Flowerday, S. Automation Bias and Complacency in Security Operation Centers. Computers 2024, 13, 165. https://doi.org/10.3390/computers13070165