Article

Preliminary Perspectives on Information Passing in the Intelligence Community

by Jeremy E. Block 1,*,†, Ilana Bookner 2, Sharon Lynn Chu 1, R. Jordan Crouser 3, Donald R. Honeycutt 1,†, Rebecca M. Jonas 4,†, Abhishek Kulkarni 1,†, Yancy Vance Paredes 5,† and Eric D. Ragan 1

1 Department of Computer & Information Science & Engineering, University of Florida, Gainesville, FL 32611, USA
2 United States Department of Defense, Fort Meade, MD 20755, USA
3 Computer Science Department, Smith College, Northampton, MA 01063, USA
4 College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA
5 Department of Computer Science, North Carolina State University, Raleigh, NC 27606, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Analytics 2023, 2(2), 509-529; https://doi.org/10.3390/analytics2020028
Submission received: 31 January 2023 / Revised: 28 April 2023 / Accepted: 23 May 2023 / Published: 15 June 2023

Abstract

Analyst sensemaking research typically focuses on individuals or small groups conducting intelligence tasks. Such work has advanced our understanding of information retrieval tasks and of how people communicate information. As part of the grand challenge of the Summer Conference on Applied Data Science (SCADS) to build a system that can generate tailored daily reports (TLDR) for intelligence analysts, we conducted a qualitative interview study with analysts to deepen understanding of information passing in the intelligence community. While our results are preliminary, we expect this work to contribute to a better understanding of the information ecosystem of the intelligence community, of how institutional dynamics affect information passing, and of the implications for a TLDR system. This work describes our involvement in and work completed during SCADS. Although our findings are preliminary, we identify that information passing is both a formal and an informal process and that it often follows professional networks, owing especially to the small population of analysts and the specialization of their work. We call attention to the need for future analysis of information ecosystems to better support tailored information retrieval features.

1. Introduction

Intelligence analysts are tasked with a broad set of information-finding and sensemaking objectives to generate meaningful knowledge from large volumes of unstructured data [1,2]. Given the data-centric nature of analysis, computational support is essential for efficiency and quality in intelligence work. Although computation paired with big data promises to transform and improve the productivity of intelligence work, it also introduces new challenges, such as identifying knowledge gaps to infer threats’ intents or communicating shared understanding across community members [2]. The vision of human–machine teaming [3] has highlighted an opportunity for new technological solutions that take advantage of artificial intelligence (AI) and machine learning (ML). For example, recommender systems can help analysts find relevant information they might not know to look for [4], and natural language text summarization can greatly reduce the time needed for human interpretation of large collections of text [5]. Complementary advancements in identifying patterns, anomalies, and relationships among entities promise to reveal intelligence findings that might otherwise be missed [6,7,8]. Given these developments, the grand challenge of the Summer Conference on Applied Data Science (SCADS), an 8-week program hosted by the Laboratory for Analytic Sciences (LAS) at North Carolina State University, is to use AI/ML to generate tailored daily reports (TLDR) for intelligence analysts, increasing the efficiency and efficacy of their work [9].
Research in HCI has repeatedly demonstrated that, despite impressive computational capabilities, the practical value of new tools and technologies depends on the ability to integrate them into appropriate environments [10]. Practical integration is not trivial. Intelligence analysts conduct analysis within a complex sociotechnical system [11,12]. Understanding the information ecosystem within which intelligence analysts work is crucial for the successful development and implementation of new tools for their workflow [10,13,14]. This involves gaining an awareness of what kinds of information they work with, with whom they work, where their information comes from, how it is transformed, and where it is distributed [10,15]. Analysts work with numerous tools and process varying types of information, and analysis also requires human judgment and creativity in deciding how best to utilize the available tools [16]. At the same time, analysts frequently communicate with others, either synchronously or asynchronously [11,17,18]. Requests for information can come in different forms and be initiated by different types of customers [11]. Aspects of the workflow can also be highly collaborative, where communication with colleagues or subject matter experts is essential for filling knowledge gaps [11,19]. The high variability and complexity of analysis operations make it challenging for researchers and software engineers to maximize the benefit of their contributions. One possible solution is to develop a system-level model of intelligence analysts’ information ecosystem and the processes of information passing. In this work, we conducted an interview study with analysts to contribute toward this goal.
In this paper, we describe a study we conducted at the 2022 SCADS to develop an understanding of intelligence analysts’ information ecosystem. We aim to extend prior research that has studied individual sensemaking and decision-making processes [20,21] by focusing on characterizing information passing among multiple people, systems, and data sources within the context of intelligence analysis. Additionally, while prior work has often used participants without professional intelligence analysis experience as proxies [22], we had a unique opportunity to recruit participants with significant intelligence analyst experience in an unclassified environment, allowing us to validate and build upon prior findings. One of the main goals of the research is to understand the different types and units of information generated, processed, and shared through analysis operations. The research questions guiding our interview study are:
  • What kinds of information do intelligence analysts engage with in their analysis work?
  • How does information flow in the analysis work of intelligence analysts?
  • What factors influence how intelligence analysts engage with information in their analysis work?
Although still preliminary, this study contributes to the understanding and modeling of both the interplay between information inputs and outputs and the collaborative process of intelligence analysis. As this was the inaugural SCADS, a primary goal was to generate data for use by attendees of later editions of the summer conference. We understood going into the conference that 8 weeks would be too tight for a full interview study, from idea conception to full data analysis, but that any work left in an “in progress” stage might be taken up by future attendees in pursuit of the multiyear grand challenge. We anticipate that the resulting model will be useful in ensuring the appropriateness of the TLDR developed at future SCADS for its intended users. We also expect the model to contribute value to the broader community of intelligence researchers in its depiction of intelligence analysis information flows, based on empirical work conducted with actual intelligence analysts. In addition to these anticipated future contributions, we compiled an annotated survey of existing literature on intelligence analysts and analytic workflows through this work.

2. Literature Survey

To broaden our understanding of intelligence analysis and analysts’ work environments, we conducted a survey of existing literature on these topics. This section describes the methodology we used to identify papers, our system for organizing papers and annotations, a description of our annotations, our process for identifying and analyzing existing models of analysts’ work, and the common themes in the literature that emerged from this exercise. These findings informed the research questions that guided the interview study we conducted. Our selection process was intentionally broad, to expose ourselves to what was known about analysis workflows in the intelligence community and sensemaking at large. The annotated bibliography we compiled is also a contribution in itself, as we believe it will be useful for future SCADS participants in familiarizing themselves with the wide scope of the literature on intelligence analysis and analysts.

2.1. Search Strategy

To broaden the information captured by our literature survey, we used the following approaches:
  • Keyword search: We performed a brainstorming session and identified an initial set of keywords and terms to nonexhaustively represent the research space we wanted to explore. These terms were generated based on our existing knowledge of the research space as well as the terms we heard from the talks and discussions that occurred during the initial weeks of SCADS. These keywords were used as search terms on Google Scholar.
  • Author and Publication search: We created a list of key authors and publication venues that covered the identified topics. The publication record of these authors was examined and relevant papers were added to our spreadsheet.
  • Citation-chaining: Recorded papers were labeled as either relevant or not relevant by the researchers who read them, and papers of particular interest were flagged for the whole group to read. Papers identified as relevant were used to find further papers by exploring their references, the lists of papers that cited them, and other work by the same authors (pictured schematically in the sketch below).
As this was an unstructured review, the relevance of each paper was determined on a subjective basis. In total, 143 papers were collected. Out of these, 82 were most relevant to the intelligence community and the themes about information passing we later uncovered.
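Citation-chaining can be pictured as a breadth-first expansion over a citation graph, where each relevant paper queues up its references, its citers, and its authors’ other work for review. The following is a minimal sketch of that bookkeeping in Python; the paper IDs and the relevance check are hypothetical, and our actual process was manual, using Google Scholar and subjective judgment.

```python
from collections import deque

# Hypothetical citation graph: paper -> related papers
# (references, citing papers, and other work by the same authors).
related = {
    "seed": ["p1", "p2"],
    "p1": ["p3"],
    "p2": ["p3"],
    "p3": [],
}

def citation_chain(seed, is_relevant):
    """Expand outward from a seed paper, following links only
    from papers a reader has judged relevant."""
    collected, queue = set(), deque([seed])
    while queue:
        paper = queue.popleft()
        if paper in collected or not is_relevant(paper):
            continue
        collected.add(paper)
        queue.extend(related.get(paper, []))
    return collected

print(citation_chain("seed", lambda p: True))  # {'seed', 'p1', 'p2', 'p3'}
```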

2.2. Themes

The papers we collected ranged in publication year from 1991 to 2022 (mean 2012, median 2006). A spreadsheet was created to collate and analyze all the papers collected. A full description of the spreadsheet, as well as a link to it, can be found in Table A2. We extracted the following citation data for each paper: Title, Author(s), Year published, Abstract, Citation, and URL. Before delving deeper into the synthesis of the collected papers, the researchers identified four key themes of individual interest with which to tag the papers:
  • Information: Addresses information (or proxy information) that analysts deal with, how they deal with it, and how the information flows between different entities.
  • Process: Addresses workflows or processes that analysts use in their day-to-day work. Also includes workflows or processes that are specific to certain situations.
  • Job: Addresses how analysts perceive their jobs and occupational job scope.
  • Collaboration: Addresses collaboration among analysts, intelligence agencies, and other relevant individuals or groups.
The themes were developed after reviewing a subset of collected papers to identify common focal elements. The categories and their definitions were based on common features identified by comparing the properties of the papers (i.e., thematic analysis) and individual researcher interests. The goal was to identify what aspects of the intelligence ecosystem the literature was especially focused on. Each paper could be tagged with any number of themes as applicable. Any paper that we could not associate with one of the themes was deemed not applicable to the present research and was grayed out in our spreadsheet. The breakdown of the number of papers per theme is shown in Figure 1.
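Because each paper can carry several theme tags, per-theme counts (as in Figure 1) can sum to more than the number of papers. Below is a minimal Python sketch of this multi-label tally, using invented paper titles rather than our actual spreadsheet data.

```python
from collections import Counter

# Invented examples; each paper may be tagged with any number of themes.
papers = {
    "Paper A": {"Information", "Process"},
    "Paper B": {"Process"},
    "Paper C": {"Job", "Collaboration"},
    "Paper D": set(),  # no applicable theme: grayed out in the spreadsheet
}

theme_counts = Counter(tag for tags in papers.values() for tag in tags)
not_applicable = [title for title, tags in papers.items() if not tags]

print(theme_counts)     # per-theme totals; papers counted once per tag
print(not_applicable)   # papers deemed not applicable to this research
```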

2.3. Patterns in the Literature

We now describe the patterns we identified from the literature survey. In the survey, we paid special attention to the visual models proposed by various authors. Section 2.3.2 describes the patterns identified from these models.

2.3.1. Insights Gained

As noted in Section 2.2, four key themes were identified to categorize and sort the collected papers. Having tagged the papers with these themes, the researchers then analyzed the papers in each category. Below, we describe some insights gained from this analysis:
There was relatively little prior work focused on the Information that analysts deal with in their work. Prior work on this theme included contributions related to the tacit knowledge that intelligence analysts develop [23], approaches to information parsing [24,25,26], and the development of ontologies for intelligence analysis [11,27]. We found a significant gap in the literature in identifying information types and the ways that information flows in analysts’ work. Additionally, little prior work on the topic of information more broadly was conducted with real analysts. We therefore found that we could add significant value to the field by conducting empirical work with analyst participants on the topic of information types and flows. This aspect of intelligence analysts’ work also intertwines with the TLDR concept, since understanding the types of information that would need to be summarized, and their sources, is crucial for developing an information summarization tool.
Papers about Process were the most common and included a broad range of participants, contributions, and methodologies. For example, papers uncovering and defining analysis processes included participant studies with practicing analysts [15,16,28,29], analysts in training [22], and nonanalyst proxy participants [30,31]. Papers included literature reviews, empirical studies, and conceptual work, resulting in contributions such as models of analytic processes [32,33], evaluations of analysis tools [8], design implications [22,34], and theoretical contributions [35,36]. While there are still unexplored research topics regarding analyst processes and workflows, we felt that we could have a more novel contribution by focusing on other areas, considering how much prior work has been conducted on this topic.
There were fewer papers addressing analysts’ Jobs, with only 13 papers significantly relating to this theme. We used this theme to identify papers that took a holistic approach to describing the broader work environment of intelligence analysts and the ways that they perceive themselves within this larger environment [37,38,39,40]. We found that the scope of this research area was larger than our planned interview study could cover in its allotted time. Two of the works associated with this research question were large-scale ethnographic studies conducted at intelligence agencies [11,12]. We therefore felt that existing gaps in this research area could likely not be addressed by the kind of study we would be able to conduct in the limited timeframe of the summer conference.
There were a significant number of papers addressing Collaboration; however, we found it was often supplemental to another research area. For example, some papers focused heavily on analytic processes, including collaborative processes, but were not primarily about collaboration [8,16]. Other papers focused on limited aspects of collaboration, such as the relationship between analysts and their managers, rather than on collaboration more holistically [41]. We found collaboration to be a common theme in work about intelligence analysis due to perceptions of relatively poor collaboration within the intelligence community. Thus, collaboration often came up as a finding related to intelligence failures, where information silos between agencies stymied effective information sharing [11,12,19,42]. This led us to include in our interview study supplemental questions about collaboration as it relates to how analysts work with information and how information flows in analysis workspaces.

2.3.2. Model Identification

From the literature survey, we gathered the visual models proposed by various authors and grouped them on a shared Google Jamboard. These visual models consisted of boxes and lines designed to communicate theoretical patterns. They communicate simplified understandings of a phenomenon (e.g., the sensemaking loop by Pirolli and Card [20], or the analyst workflow proposed by Ahrend, Jirotka, and Jones [23]). Once the models were gathered, we began a typical qualitative review by affinity diagramming them, focusing on aspects of the intelligence process. Affinity diagramming is a process in which details of interest percolate out of the data: data points are laid out visually, with the freedom to rearrange and compare them, until groups form at an appropriate level of detail [43].
One observation we made during our literature survey was the high number of models presented and the wide variety of foci in the intelligence process. We identified three major distinctions between the identified models: study population, individual vs. general focus, and analysis methodology. For the study population, the main difference between the models was whether the study was performed using intelligence analysts [15,23,44], or a more generalized population [45,46]. For individual vs. general focus, we found that some models represented the process of an individual analyst [8,45,46,47,48], while others were intended to describe generalizable processes [23,37,44,49]. For analysis methodology, some models were based on empirical research [15,23,47,48,50], while other models represented the author’s synthesis of the space without performing empirical studies [8,42,45,51,52].
From this process, we found that nearly all of the models could be described as either being about sensemaking, workflow, or decision-making. The distributions of these topics were roughly similar across the different category groups and led us to understand analyst work practices more holistically.

3. Study Design

This section describes how our interview study was conceptualized, designed, and implemented in the eight-week span of SCADS.

3.1. Study Conceptualization

After identifying information passing as our area of research for the interviews, we first drafted a study concept document, which can be found in Appendix A. This document helped us formalize the research question, identify desired outcomes, and inform the interview study and its questions.
Initially, the questions were divided into two broad themes: analysis background and information flow. After reviewing these questions, certain themes began to emerge, which provided a clearer grouping of questions. Thus, in the second iteration, nine themes were identified: background, topic area/experience, cooperation, team formation, inputs, information types, interruptions, outputs, and feedback. However, the number of themes and associated questions would have taken too long to cover sufficiently in an hour-long interview. Thus, in the third and final iteration, the themes were prioritized and coalesced into an hour-long interview guide. This iteration retained all but the “interruptions” theme from the previous one, with more focused questions and clear goals for each theme. An introduction and debrief script were also added to the interview guide during this iteration to help cover important aspects of the approved consent form. The final interview guide can be found in Appendix D.

3.2. Interview Themes

The overarching goal of these interviews was to understand how information is passed along in an analyst’s workflow. Additionally, we aimed to look into information passing both within and outside of an intelligence organization. This interview focus would allow for comparisons of how information flows in the intelligence community and could inform templates to help “tailor” what analysts expect from a tool like the TLDR. For instance, without asking participants about expected features, we can collect data on what analysts work with regularly and build the TLDR to improve on the current experience through iterative design.
Three key themes guided our interview: information inputs, information outputs, and cooperation.
Inputs
One key aspect of understanding how information flows in analysts’ work is understanding the sources of triggers that lead to an analysis task and the interactions with those sources. Analysts’ “triggers”, or the things that cue them to begin an analysis task, could be considered a starting point of their information flows. Regarding triggers, we asked questions about where triggers come from, from whom, and in what form they come. We also asked about how analysts understand the intentions behind a given trigger and what someone may want from a given trigger that is not explicitly stated. For example, we asked how much interaction analysts have with those who send triggers, and how much reframing, redirection, and/or discussion is involved to understand what someone really wants to know based on their request. These questions were intended, in part, to help us map out whether information flows are largely linear or are more cyclical (i.e., complicated and go beyond a simple input–process–output model). We also wanted to explore any other information “inputs” that analysts receive that may not necessarily cue them to begin an analysis task but still enter their workflows.
Outputs
Thinking about the other end of the information flow, we wanted to understand the information outputs that analysts produce. We wanted to understand both formal outputs, such as finished intelligence reporting, and informal outputs, such as conversations where information is communicated between coworkers. Several outputs fall somewhere between formal and informal, such as notes that analysts take that are recorded and serialized in databases. We also asked questions regarding how analysts might tailor their information outputs based on what they know about the intended recipients of the information. For example, if an analyst is preparing the same information for three different recipients, how might that information output vary for each recipient? These questions will help us better understand how information is modified throughout an analyst’s information flow.
Cooperation
We wanted to investigate cooperation as its own theme, particularly as it pertains to an analyst’s workflow. From prior work and conversations with other analysts, we understood that there were barriers to cooperation among intelligence analysts [12]. Factors such as the frequency of disseminated reports and the amount of work completed under one’s own name affect individual promotions, which may lead to more hostile and competitive work practices. Others commented on how many individuals were involved in their individual analysis process or in preparing raw intelligence for top-of-the-line products like the presidential daily brief. To understand these anecdotal perspectives more completely, we wanted to hear how analysts experienced working cooperatively in a formal setting.

3.3. Interview Methodology

All interviews were facilitated by an assigned pair of members from the research team: one member served as the interviewer and the other as the notetaker. Since no audio or video recordings were captured, it was important to have a dedicated notetaker present to document participants’ responses. To promote consistency in note-taking, a template (see Appendix E) was created using the questions and broad themes listed in the interview guide; this served as the starting point for data collection in every interview. Additionally, we kept the same interviewer–notetaker pairs for the pilot and the actual interviews so that each pair could gain familiarity with each other’s interviewing and notetaking styles and preferences.
An interview was conducted only after explicit verbal consent from the participant, who also had the option of stopping the interview at any time without consequence. The interviews were semistructured and approximately an hour long. Our style of semistructured interview leaned toward the more structured end of the spectrum, as we had a detailed interview guide with prompts for each question (see Appendix D). Conducting the interviews as semistructured allowed us to reword and reorder questions in response to the natural conversational flow, while providing leeway to explore interesting tangents within the scope of the research question.

3.4. Pilot Interviews

To further calibrate the study design, we conducted six pilot interviews with SCADS participants who self-identified as nonanalysts but had worked adjacent to analysts and were relatively aware of an analyst’s work. We chose participants who were somewhat familiar with intelligence work to help us find the appropriate wording, since the goal of these conversations was to ensure that our questions were clear and that we had sufficient time to ask everything we wanted to ask. Of these pilot participants, we asked three to role-play as analysts during the interview, based on their knowledge of analysts, fabricating details and answers to our interview questions as necessary. We asked two participants to answer our questions as they related to their own jobs. These approaches allowed us to test our questions both in the context of analyst work and in realistic scenarios with proxy participants. We conducted one additional pilot interview with a participant who had previously worked as an analyst but was outside our recruitment pool. Based on the pilot interviews, we discovered the benefits of inviting participants to draw out the components of their analysis process; we therefore included a prompt asking interviewees to visualize their answers on a blank sheet of paper where possible. We also became more familiar with analysts’ language, which was reflected in the final interview guide.

3.5. Participant Recruitment

This research study was approved by North Carolina State University’s Institutional Review Board (IRB) and passed a review by the Department of Defense. We recruited participants by putting up advertisement flyers around the workspace and by directly reaching out to analysts at the LAS. Furthermore, participants at SCADS who had analyst experience were made aware of the study. We interviewed analysts who had worked in various roles, including participants with experience working stateside and those deployed internationally in the field. Descriptions of work roles included signals intelligence analyst, discovery analyst, language analyst, cyber intelligence analyst, and geospatial intelligence analyst. Participants’ experience working as analysts ranged from 6 months to 26 years.

3.6. Analysis Plan

As described in Section 3.3, two interviewer–notetaker pairs conducted the interviews and collected data in the form of digital notes. Immediately following the interview, each pair sat down to clean up the digital notes and add context where necessary. The cleaned-up and contextualized notes were then used for qualitative analysis.
With a team of six researchers and tight time constraints, we took a collaborative approach to analyzing the interview data. To allow for diverse perspectives and enhanced evaluation of contextual understanding, we introduced a third party into the qualitative coding process. This third party, although part of the research team, was not privy to the contents of the interview they were coding and saw the notes for the first time during the coding process. Thus, the six researchers were shuffled into pairs such that each pair had one person who had been part of the interview being coded (either interviewer or notetaker) and one who was the third party as defined above. This also provided some control for notetaking styles between the two pairs that conducted the interviews.
The team followed an open coding approach. The codes applied to the notes took a descriptive form, as an inductive approach was followed: there was no preconceived notion of what the codes would be, and codes were allowed to emerge through discussion between the two coders. During the period of SCADS, we were able to complete the first iteration of this coding process. Some themes identified through this iteration are reported in Section 4; however, these are preliminary findings. We expect to code the same data iteratively, as needed, to allow for more analytical rigor and the emergence of insightful themes. After completing the required iterations and merging codes, a common coding scheme will be developed, which we expect to be one of the meaningful contributions of this work.
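To illustrate the pairing constraint described above, the sketch below (with placeholder researcher names, not our actual assignments or tooling) pairs one “insider” who attended a given interview with one “outsider” seeing the notes for the first time:

```python
import random

# Placeholder team of six; each interview records who attended it.
TEAM = {"R1", "R2", "R3", "R4", "R5", "R6"}
attended = {
    "interview_01": {"R1", "R2"},  # interviewer and notetaker
    "interview_02": {"R3", "R4"},
}

def coding_pair(interview_id):
    """Return (insider, outsider): one coder who was present at the
    interview and one who sees the notes for the first time."""
    insiders = attended[interview_id]
    insider = random.choice(sorted(insiders))
    outsider = random.choice(sorted(TEAM - insiders))
    return insider, outsider

print(coding_pair("interview_01"))  # e.g., ('R1', 'R5')
```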

4. Preliminary Findings

At the time of writing, analysis is still ongoing. Thus, we describe only a few common anecdotes, each discussed in at least one interview, that may reflect phenomena related to our research questions. Full findings will be published once a systematic analysis of the interview data is complete. While there is a desire to quote conversations and participants directly in a formal way, this section describes only some of the features and common threads from our interviews with analysts and our first iteration of descriptive codes.
First, analysts discussed both formal and informal types of information that they referenced to do their work. For example, some analysts relied on emails and formal requests to know what to work on in a day, while others experienced “hair on fire” walk-bys that reprioritized their work for the entire day. That is to say, we captured a variety of “inputs” involved in analysis workflows. This diversity in workflows can be seen as a challenge for a tool like the TLDR to support.
We also heard repeatedly that there is too much information to consume. This was especially apparent in how finished intelligence was delivered to the community. As described by our participants, formal reporting and tweetlike short summaries (called Squawks) are serialized and sometimes delayed by bureaucracy and secondary checks. Additionally, many reports “die” in the dissemination phase: they exist in a database, but notifications do not go out to, or are missed by, interested parties. Some people never learn that new and, more importantly, relevant information has been released. On more than one occasion, we heard that finished intelligence was lost and often required a follow-up phone call or conversation to make sure the right people were aware when they needed to be.
Finally, these observations reinforce the importance of informal conversations within the intelligence community. We heard that information sharing appears to follow professional networks. For example, informal conversations (e.g., phone calls, discussions across cubicle walls) occur within the community, and individuals will verbally notify others when information changes or updates become available. In part because the intelligence community is so small, many analysts know each other and rely on this familiarity to know what others are interested in. The small community also helps with distributing information accurately, because analysts will call interested parties, walk them through released reports, and help explain sections of documents to each other. Unfortunately, we also found that without these informal conversations, some reports are missed by the very people to whom they would be relevant.
Throughout our involvement with SCADS, we were told that every analyst’s activity is unique, often in the form of the quote “If you speak to one analyst, you have spoken to one analyst”. While intended as an off-hand witticism about the diversity of methods analysts employ to complete their work, our preliminary results appear to shed light on potential paradoxes established by this de facto expectation. For example, while many analysts insisted that they worked alone and had their own unique methods, our preliminary analysis reveals that the intelligence community relies on many informal conversations to disseminate information, and that there are some common experiences around how analysts begin and end their days. We do confirm variety in how analysts begin and adjust their days. This emphasizes the need for a TLDR tool to be adaptable to many different work practices and especially tuned to augment or support important informal conversations between analysts.

5. Conclusions and Future Directions

In summary, this report detailed a two-part effort undertaken by the HCI team at SCADS: the annotated bibliography and the interview study. Synthesis of the literature survey led to the formation of the research questions, which guided the design of the interview study. The findings indicate potentially valuable outcomes in understanding information flows and cooperation in the intelligence community. However, we acknowledge that these findings are preliminary: the interviews were conducted with a limited sample of intelligence analysts who may not represent the entire intelligence community. Furthermore, we did not investigate an entire intelligence analysis organization and the interplay among its various parties, which limits the scope of our findings. Even considering these limitations, there is still much to be learned about the intelligence community. There is a desire to understand how technologies like the imagined TLDR could influence user behaviors [53]. Future work would benefit from examining how new technology can affect users’ work and how job roles may shift and change when technologies are introduced. Among the general population, there is hesitation around the potential harm that artificial intelligence tools could have on human work [54]. A better understanding of information passing in the current intelligence ecosystem can serve as a baseline against which to contrast later work and can surface techniques and factors that ameliorate concerns regarding future technology use cases.
Furthermore, we have identified potential future directions for the study. First, given the exploratory approach undertaken in this study, another iteration of qualitative data analysis needs to be conducted to draw substantiated findings. This will consist of thematic analysis to elicit more specific and attributable understandings of how intelligence operators create and share information in the intelligence ecosystem. Second, given the uniqueness of the coded interview notes dataset, other research questions can be developed. Future SCADS participants will have the data available for further analysis, and we are interested to see what additional questions can be answered from these data. Finally, there is potential for data triangulation using the interview data and other related datasets. Given the existence of visual analytic datasets such as the annual VAST challenge datasets (e.g., [55,56,57]) and the UKentucky dataset [58], questions can be asked that bridge the gap between how analysts behave in a controlled environment and their described workflows. We hope this report catalyzes further research into understanding the human side of intelligence analysis and leads to a significant contribution toward building the TLDR.

Author Contributions

Conceptualization, J.E.B., I.B., S.L.C., R.J.C., D.R.H., R.M.J., A.K., Y.V.P. and E.D.R.; methodology, J.E.B., I.B., S.L.C., R.J.C., D.R.H., R.M.J., A.K., Y.V.P. and E.D.R.; formal analysis, J.E.B., I.B., D.R.H., R.M.J., A.K. and Y.V.P.; investigation, I.B., D.R.H., R.M.J. and A.K.; resources, J.E.B.; data curation, J.E.B., I.B., D.R.H., R.M.J., A.K. and Y.V.P.; writing—original draft preparation, A.K. and Y.V.P.; writing—review and editing, J.E.B., R.M.J. and A.K.; visualization, A.K.; supervision, S.L.C., R.J.C. and E.D.R.; project administration, J.E.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Laboratory for Analytic Sciences during the 2022 Summer Conference on Applied Data Science at North Carolina State University.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of North Carolina State University (protocol code 25181 initially approved on 15 July 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Restrictions apply to the availability of these data. Data were obtained through participation in the Summer Conference on Applied Data Science and are available to future researchers with the permission of the Laboratory for Analytic Sciences.

Acknowledgments

We would like to thank all the participants who took the time to share their experiences in the intelligence community, including those who offered feedback during the pilot study phases. We are also grateful to Elizabeth, Jascha, Christine, Sean, Stephen, Emily, Brent, and those who advised on getting this work approved by the North Carolina State University’s Institutional Review Board and the Human Resources within the Department of Defense. This material is based upon work done, in whole or in part, in coordination with the Department of Defense (DoD). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DoD and/or any agency or entity of the United States Government.

Conflicts of Interest

One of the authors (R. Jordan Crouser) is guest editing this special issue and will recuse himself from any adjudication of this work. The funders had no role in the design of the study; in the analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. This unique opportunity for data collection was made possible by the funders (LAS).

Appendix A. Study Design Document

The study design document outlines the motivation for and results of an iterative discussion about the interview study. The specific goals of the study, the target population, the research questions, the sample size, the analysis plan, and the expected outcomes are all specified. This document serves as grounding for the interview study and may be useful for future SCADS participants seeking context about our work.
  • Problem/Motivation:
  • Intelligence analysts work with a plethora of information coming from various sources.
  • This information informs the outcomes of their sensemaking process and may have actionable insights that get passed forward.
  • IAs need to communicate different kinds of information to different kinds of audiences, formally and informally.
  • Assembling and preparing information to be communicated to other parties requires nontrivial efforts from the IAs.
  • It is thus critical to understand how information flows from different sources in and out of an analyst’s workflow.
  • Understanding what types of and how (even in abstract terms fit for an unclassified scenario) information is acquired and passed on in the intelligence analysis community may help to design systems that can help to ease the burden of information preparation for communication.
  • For instance, identifying what format of input information leads to what format of output information can ultimately help recommend relevant information through the TLDR.
  • In the literature, there has been a focus on collaborative practices, or the lack thereof; however, there is a gap in understanding how cooperative or asynchronous collaboration happens in the intricate hierarchy of an intelligence community.
  • Specific Study Goals:
  • Through this study, we aim to interview analysts to:
  • Understand their perceptions of their work in relation to others,
  • Generalize how information flows, both into their work and outward to others (a more holistic view)
  • Understand the impacts of various information types, trustworthiness, completeness, etc. on analyst workflows (treating the analyst as the central node of the information flow)
We hope this helps us address the gap in the literature and enhance the stakeholders’ understanding of cooperative work/information passing in the intelligence community.
  • Population: Intelligence Analysts
  • This will need to be more specific when we find out more about the participants
  • Details may include years of experience, how long since they’ve last worked as an analyst, area of focus (HumInt, SigInt, GeoInt, etc.)? Other common performance indicators of the individual? Personality factors?
  • What agency were they affiliated with (if we are allowed to get that information)?
  • Research Questions/Objective:
  • What are the critical dimensions of personalization for information passing requirements of intelligence analysts?
  • Situated context of analysis and how the people share information around the network.
  • Please add other peripheral areas of foci if appropriate.
  • Sample Size:
N = 8–10
  • Analysis Plan
How do you plan to answer each research question?
  • Grounded theory approach to help us start analysis as and when we get data
  • Expected Outcomes
  • Greater understanding of an analyst’s workflow in the context of the information that they work with
  • A holistic representation of how information flows through the hierarchy of a (to be determined) intelligence community/organization
  • Add more if you think we can get anything more from the analysis—might be good to let certain contributions surface on their own
  • Triangulation of findings across the literature (common design considerations; thematic?) and with what we find out from the interviews, to see whether it contributes further to identifying a list of design specifications for the envisioned TLDR system.

Appendix B. Project Provenance

This section aims to explain the provenance of our research projects and the critical decision points that helped inform how we arrived at our final output. As with any large research project, plans are bound to change and adapt throughout the time frame. This section helps to tell the story of how the research was conducted. An overview of all the activities is listed in Table A1.
Table A1. HCI team’s SCADS timeline.

Week | Goal | Outcome
1 (6/13–6/17) | Settle into NC; day-in-the-life conversations with analysts | Learning about the intelligence community
2 (6/20–6/24) | Review interface designs by design students; day-in-the-life conversations with analysts | Team formation; an initial study design (Appendix A) proposed; literature to read identified
3 (6/27–7/1) | Research literature discussion and area identification | Literature survey spreadsheet created; IRB submitted
4 (7/4–7/8) | Review literature models for patterns (v1) |
5 (7/11–7/15) | Review literature models (v2 & v3); conversations with analyst researchers | Finding patterns in literature models; interview guide draft v1
6 (7/18–7/22) | Pilot testing interview guide (v1–v5) | IRB approval to run study
7 (7/25–7/29) | Running interviews/collecting data | Finished majority of interviews
8 (8/01–8/05) | Complete interviews and write report | Report written

Appendix B.1. Week 1

In the summer of 2022, the authors were invited to the Summer Conference on Applied Data Science at North Carolina State University. During this week, we were introduced to the grand challenge: a tailored daily report (also referred to in the program as TLDR, for “too long; didn’t read”) that gives analysts the information they need, when they need it, to make their work more efficient and reliable. The early programming of the summer conference was filled with group meetings with key users (i.e., “day in the life” briefings from analysts) and descriptions of the datasets gathered for us, to better understand the work that analysts do and the challenges we were attempting to solve with the grand challenge solution. This week, a shared document was created and distributed inviting people to write biographies describing their personal interests and experiences. By the end of this “introduction by fire-hose” week, we were more familiar with the intelligence community and the possible roles and ways information was being shared.

Appendix B.2. Week 2

In the second week, we started team-building activities by reviewing the biographies individuals had shared about what experiences they brought to the conference and what they wanted to work on. We also had design students give a presentation about their work uncovering challenges faced by analysts. These presentations were the culmination of a semester of paper-prototype and artifact interviews with Laboratory for Analytic Sciences (LAS) analysts, leading to high-fidelity prototypes and personas for a TLDR. The presentations were helpful for understanding what people had already asked analysts, and they led to more conversation about what we might want to do with the remaining six weeks of the conference. This week, we also formalized our group and started developing an IRB request for our proposed research based on our preliminary study design (see Appendix A). The IRB request focused on running interviews with LAS analysts to extract what kinds of information they worked with. We also used this week to hear from Deborah Littlejohn, an associate professor of Graphic & Experience Design at NCSU and the coordinator of the design students’ projects, to better understand what design research had already been done with analysts at LAS.

Appendix B.3. Week 3

Continuing to identify what had been done, we started exploring the literature. By the third week, we had developed a spreadsheet structure to organize our annotations and the research we were reading.
We submitted the IRB application for the interview study at the end of week 3. We met with researchers from LAS, SCADS attendees from academia, and LAS analysts and workshop coordinators throughout the week to finalize components of the application. For example, LAS workshop coordinators helped us craft example interview questions, submitted with the IRB interview guide, at the level of detail appropriate for discussion in an unclassified environment.

Appendix B.4. Week 4

We identified and extracted visual models from the existing literature that we gathered (see Table A2). We compiled these models into a Google Jamboard, a virtual whiteboard tool, which allowed us to collaboratively arrange images of the models and aided in identifying common patterns across them. We also marked on the literature survey spreadsheet whether we had found a model in each paper, to maintain a link between the spreadsheet and the Jamboard. The models we extracted included visual representations and tables describing different aspects of analysts’ workflow, process, and working environment. We analyzed these models, tagging and categorizing them based on what they represented. For example, we found that many represented processes or categories related to analysts’ work. We also differentiated models drawn from work focused specifically on analysts from those drawn from work on analysis in a more general sense. Within those categories, we subcategorized models as representing workflow, sensemaking, or decision-making, and we identified whether they were based on empirical or theoretical work. This work informed the discussion in Section 2.3.2 and helped us understand how saturated the existing literature was on these different aspects of analysts’ work, as we saw significant value in our intended interview study being empirical work with actual analysts.

Appendix B.5. Week 5

We continued our review and synthesis of models found in the literature and, from this, developed new paths to follow for additional literature collection. For example, some of the additional search terms we decided warranted more exploration from this review were: “cooperative/collaborative analysis”, “information/knowledge sharing”, and “distributed/asynchronous collaboration”. These additional literature synthesis and exploration activities helped us focus on the specific goals and intentions of our planned interview study. This led us to develop a more specific version of the interview guide, building upon the example guide prepared for IRB approval. We also met with other researchers and LAS affiliates with experience working with analysts to gain additional insight into what may be most valuable for us to explore in our interviews with analysts. Finally, during this week, we came up with three options for contingency plans to follow in case the IRB was not approved in time for us to run our planned interview study. These plans were:
  • A more systematic literature review
  • Analyst interaction log visualization and representation
  • A design study to develop design goals for the TLDR
A systematic literature review would involve formalizing the literature review process we had already begun by documenting search terms and inclusion/exclusion criteria. This would result in a comprehensive and systematic overview of existing literature on intelligence analysts/analysis.
Interaction log analysis would involve analyzing interaction logs of people engaging in analysis tasks from existing datasets provided as a part of SCADS (e.g., UKentucky [58], VAST [55,56,57]).
A design study would involve conducting a study with LAS and SCADS participants with analyst experience wherein the subject of the study is an artifact they create, such as a design of a tool, which would not warrant IRB review.
We did not move forward with these contingency plans as our study was approved, but these plans may be interesting to pursue in future years of SCADS.

Appendix B.6. Week 6

We continued to refine our interview guide, and we ran six pilot interviews with SCADS participants and LAS affiliates who had experience working alongside analysts but were not eligible to participate in our study. These pilot interviews allowed us to rehearse our interviewing techniques and to test our interview questions, identifying any that did not elicit responses related to our study goals. The six pilot interviews helped us refine the interview guide into its final version, which ended up being the fifth iteration of the guide.
By the end of this week, we received approval to run the study and began scheduling interviews for week 7.

Appendix B.7. Week 7

We ran 14 interviews with LAS employees and SCADS attendees with experience working as analysts. Two to four interviews occurred each day of this week. A dedicated member of the research team coordinated scheduling between participants and interview pairs. Two interviewer/notetaker pairs from our team ran the interviews. Interviews were one hour long and were immediately followed by the interviewer/notetaker pair reviewing the notes for accuracy. The rapidity of getting the study approved and coordinating schedules with experts was made possible by the academic and government collaborative opportunity unique to a program like SCADS.
During this week, we also finalized our analysis process for our first round of qualitative coding on the interview notes. We decided to do open coding on the notes, conducted by one person (either interviewer or notetaker) who was present at the interview to provide context, and one outside person who was not present in the interview (any of the other four team members) to provide an outside perspective into the interview content. We began this open coding process in week 7 and completed the first round of analysis on approximately half of our data.

Appendix B.8. Week 8

We ran two additional interviews early in week 8, bringing our total participant count to 16. By the end of this week, we finished the first round of coding the notes from all 16 interviews. We also had these notes deidentified to facilitate sharing with future SCADS researchers and laid the groundwork for how we could continue to work on the data after SCADS, when we would no longer be affiliated with NCSU (where the IRB was approved). Finally, during the last week of SCADS, we finalized our report and presentation.

Appendix C. Literature Survey Spreadsheet Fields

Table A2. The following table enumerates and describes the metadata extracted from the papers during the literature survey. We report on the status of the survey as completed by the group to open up the possibility for future SCADS conferences to build upon the work. At the end of SCADS 2022, 52 papers were completely read and analyzed, 34 were read and annotated but not fully analyzed, and 55 were left to be reviewed. The papers in the “To read” category have value for future SCADS participants as an already compiled list of relevant papers on the broad topic of intelligence analysis/analysts’ work. The papers in the “Complete” and “To review” categories have additional value to future participants as they also include our annotations. We labeled 18 papers as recommended for all members of our team to read because they were deemed to be critical to informing our research. The list of papers can be accessed at https://docs.google.com/spreadsheets/d/1ri_JC0fu98o2Q3EYzVKOI_e5OjGZLBTW4h37yhIT1OM (accessed on 1 June 2023).
  • Paper type: One of the following categories: Application/Design Study, System, Technique, Evaluation, Dissertation/Thesis, Review, Theory/Model, Book. Note: These categories are based on the VIS Area Model used to characterize the types of visualization research contributions of papers (http://ieeevis.org/year/2021/info/call-participation/area-model (accessed on 27 June 2022)).
  • Thrust/Thesis: A short summary (less than 100 characters) of what the paper did; the Takeaway column is more about contributions.
  • Details: More relevant details from the paper.
  • Population: If empirical work, who and how big was the sampled population? Note: We added this column later in our collection process once it became apparent that it would be useful to distinguish which papers used an analyst population in their study and which did not.
  • Relevant to RQ: Which of our four research question areas (Info, Process, Job, Collab) the paper was most relevant to.
  • How Relevant to RQ: A brief description of how the paper is relevant to the selected RQ(s).
  • Contribution: What the authors claim to contribute with their work/key takeaways.
  • Limitations: Areas where we believe the paper was limited in its contribution, methodology, scope, etc., particularly as compared to our intended work (not necessarily the limitations that a paper may list).
  • Takeaway: A short summary of the main lessons learned from this work; the Thrust/Thesis column is more about motivation and method.
  • Unique ID: An index number to refer to citations within team discussions.
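As a convenience for future participants who may want to process the shared spreadsheet programmatically, here is a minimal sketch of the schema above as a Python dataclass. The field names mirror Table A2; the class names, the enum, and the type choices are our illustrative assumptions, since the survey itself lives in a spreadsheet rather than in code.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class PaperType(Enum):
    # Categories from the VIS Area Model (see the Paper type row above).
    APPLICATION_DESIGN_STUDY = "Application/Design Study"
    SYSTEM = "System"
    TECHNIQUE = "Technique"
    EVALUATION = "Evaluation"
    DISSERTATION_THESIS = "Dissertation/Thesis"
    REVIEW = "Review"
    THEORY_MODEL = "Theory/Model"
    BOOK = "Book"

@dataclass
class SurveyEntry:
    unique_id: int              # index used to refer to citations in team discussions
    paper_type: PaperType
    thrust_thesis: str          # <100 characters: what the paper did
    details: str
    relevant_to_rq: list[str]   # subset of {"Info", "Process", "Job", "Collab"}
    how_relevant_to_rq: str
    contribution: str
    limitations: str
    takeaway: str
    population: Optional[str] = None  # described only for empirical work
```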

Appendix D. Key Interview Questions

Table A3. The following table lists sample questions used to structure our semi-structured interviews, organized by theme and goal. The full set of questions, in the order used by the notetaker, appears in Appendix E.
  • Background. Goal: Understand where the analyst is coming from and how his/her perspectives may have been formed. Questions: How long have you been with LAS? How long have you been an analyst? When did you last work as an analyst?
  • Topic Area/Experience. Goal: Understand what the analyst does/did at a general level, as well as on a day-to-day basis. Questions: What types of analysis work have you done/do you currently do? What were your role(s)?
  • Visualizing Information Flow. Goal: Model the flow of analyst information inputs/outputs. Prompt: Drawing yourself in the center, we are going to create a diagram as a discussion aid.
  • Inputs. Goal: Understand the sources of triggers that lead to an analysis task and the interactions with those sources. Questions: Please draw and describe the things that may cue you to begin an analysis task. Who do these triggers typically come from? <if customer> At a high level, what types of customers are you serving? What form do these triggers usually come in? Imagine that you received a [mentioned trigger]: how do you go about understanding what someone may want from that [trigger]? How much interaction takes place between [requester] and the analyst in the process?
  • Outputs. Goal: Understand the outputs of an analysis task and how they are delivered. Questions: Describe the form(s) your output(s) take. How is it communicated? Imagine you are providing the same information to three different recipients: how do you change your information output for each recipient?
  • Cooperation. Goal: Understand the kinds of interactions that the analyst has/had with others; identify types, frequency, amount, and targets of interaction (and any other features of the interactions that may seem relevant). Questions: Explain how your position involves coordinating with others. What methods do you currently use to understand the work being done by other analysts?
  • Information Types/Process. Goal: Understand the data and processes used when doing an analysis task. Questions: In a general sense, what forms of data do you work with to answer a [request/customer]? Why do you work with these particular forms of data?
  • Process Variability. Goal: Understand the dynamic information flow during the sensemaking process. Questions: Tell us about your workflow: when information comes in, is it one static piece or a continuous stream? What kinds of factors impact your process? In what ways?
  • Feedback with Requester. Goal: Understand the interactions with the recipients of analysis outputs. Questions: What form does feedback take? Does it impact your outputs? Do you ever send feedback to your [requester] (e.g., to improve the efficiency of future analysis)?

Appendix E. Notetaking Template

The following template was used to help the notetaker track which question the interviewer was asking and to organize notes as the interview took place.
  • Interviewer:
  • Notetaker:
  • Interview ID:
1a. How long have you been at LAS?
1b. How have you been enjoying SCADS?
1c. How long have you been an analyst?
1d. When did you last work as an analyst?
2a. What types of analysis work have you done/do you currently do? (e.g., working as an analyst or managing analysts)
2b. What were your roles?
2c. Did you manage other analysts?
4a. Please describe the triggers that start an analysis process.
4b. Where or who do these triggers typically come from?
4c. <If customer> At a high level, what types of customers are you serving?
4d. Are they typically external customers, entire institutions, or other analysts? What is the ratio?
4e. What form do these triggers usually come in?
4f. Imagine that you received a [mentioned trigger]. How do you go about understanding what someone may want from that [trigger]?
4g. How much reframing is involved in the process to understand what the client really wants to know based on their request?
4h. How much interaction takes place between [requester] and the analyst in the process?
4i. How much redirection and discussion is there, if any?
7a. Describe the form(s) your output(s) (finished intelligence) take.
7b. Small pieces or all at once?
7c. How is it communicated?
7d. Is it structured, unstructured, textual, conversational?
7e. Imagine you are providing the same information to three different recipients—how do you change your information output for each recipient?
7f. How do you adapt your output based on the specific needs or intentions of the customer?
3a. Explain how your position involves coordinating with others.
3b. For example, did analysts work in teams?
3c. If not, why did you work independently?
3d. What are the challenges you faced in coordination with others?
3e. Are these collaborations self-initiated or directed?
3f. Is there overlap with similar requests with other analysts?
3g. What methods do you currently use to understand the work being done by other analysts?
5a. In a general sense, what forms of data do you work with to answer a [request/customer]?
5b. Why do you work with these particular forms of data?
5c. What are the types of information that you deal with during your work?
5d. Textual? Physical and virtual conversations? Signals? HUMINT?
<OPEN NOTES FOR PILOT ONLY WHILE WE WORK OUT WHAT THIS IS>
6a. How dynamic is the information flow in your analysis process?
6b. How does information come in? Is it one big thing or lots of little things within one task?
6c. Is there a typical sequence for the process from a [trigger] to output? If not, how does it vary?
8a. What form does feedback take?
8b. How does it impact your outputs?
8c. Do you ever send feedback to your [requester]? (e.g., to improve the efficiency of future analysis).

References

  1. Patterson, E.; Woods, D.; Tinapple, D.; Roth, E.; Finley, J.; Kuperman, G. Aiding the Intelligence Analyst: From Problem Definition to Design Concept Exploration; Report AFRL-HE-WP-TR-2001-0116; Ohio State University: Columbus, OH, USA, 2001. Available online: https://apps.dtic.mil/sti/pdfs/ADA397621.pdf (accessed on 12 July 2022).
  2. Symon, P.B.; Tarapore, A. Defense Intelligence Analysis in the Age of Big Data. Jt. Force Q. 2015, 79, 8.
  3. Wynne, K.T.; Lyons, J.B. An integrative model of autonomous agent teammate-likeness. Theor. Issues Ergon. Sci. 2018, 19, 353–374.
  4. Gadepally, V.N.; Hancock, B.J.; Greenfield, K.B.; Campbell, J.P.; Campbell, W.M.; Reuther, A.I. Recommender systems for the department of defense and intelligence community. Linc. Lab. J. 2016, 22, 74–89.
  5. Papadopoulos, T.; Charalabidis, Y. What do governments plan in the field of artificial intelligence? Analysing national AI strategies using NLP. In Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance, Athens, Greece, 23–25 September 2020; pp. 100–111.
  6. Komlodi, A.; Rheingans, P.; Ayachit, U.; Goodall, J.; Joshi, A. A user-centered look at glyph-based security visualization. In Proceedings of the IEEE Workshop on Visualization for Computer Security (VizSEC 05), Minneapolis, MN, USA, 26 October 2005; pp. 21–28.
  7. Walchshofer, C.; Hinterreiter, A.; Xu, K.; Stitz, H.; Streit, M. Provectories: Embedding-based Analysis of Interaction Provenance Data. IEEE Trans. Vis. Comput. Graph. 2021.
  8. Pioch, N.J.; Everett, J.O. POLESTAR: Collaborative knowledge management and sensemaking tools for intelligence analysts. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, CIKM ’06, Arlington, VA, USA, 6–11 November 2006; p. 513.
  9. Kershaw, K. Creating a “TLDR” for Knowledge Workers—Ncsu-las.org. 2022. Available online: https://ncsu-las.org/2022/08/scads-tldr-for-knowledge-workers/ (accessed on 25 January 2023).
  10. Hepenstal, S.; Zhang, L.; Wong, B.L.W. An analysis of expertise in intelligence analysis to support the design of Human-Centered Artificial Intelligence. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021; pp. 107–112.
  11. Johnston, R. Analytic Culture in the US Intelligence Community: An Ethnographic Study; Number 14; Central Intelligence Agency: Washington, DC, USA, 2005.
  12. Nolan, B.R. Information Sharing and Collaboration in the United States Intelligence Community: An Ethnographic Study of the National Counterterrorism Center. Ph.D. Thesis, University of Pennsylvania, Philadelphia, PA, USA, 2013.
  13. Fink, G.A.; North, C.L.; Endert, A.; Rose, S. Visualizing cyber security: Usable workspaces. In Proceedings of the 2009 6th International Workshop on Visualization for Cyber Security, Atlantic City, NJ, USA, 11 October 2009; pp. 45–56.
  14. Varga, M.; Winkelholz, C.; Träber-Burdin, S. An Exploration of Cyber Symbology. In Proceedings of the 2019 IEEE Symposium on Visualization for Cyber Security (VizSec), Vancouver, BC, Canada, 23 October 2019; pp. 1–5.
  15. Groenewald, C.; Wong, B.L.W.; Attfield, S.; Passmore, P.; Kodagoda, N. How Analysts Think: How Do Criminal Intelligence Analysts Recognise and Manage Significant Information? In Proceedings of the 2017 European Intelligence and Security Informatics Conference (EISIC), Athens, Greece, 11–13 September 2017; pp. 47–53.
  16. Chin, G.; Kuchar, O.A.; Wolf, K.E. Exploring the analytical processes of intelligence analysts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; Association for Computing Machinery: New York, NY, USA, 2009; pp. 11–20.
  17. Straus, S.; Parker, A.; Bruce, J. The Group Matters: A Review of Processes and Outcomes in Intelligence Analysis. Group Dyn. Theory Res. Pract. 2011, 15, 128–146.
  18. Keel, P.E. Collaborative Visual Analytics: Inferring from the Spatial Organization and Collaborative Use of Information. In Proceedings of the 2006 IEEE Symposium on Visual Analytics Science and Technology, Baltimore, MD, USA, 31 October–2 November 2006; pp. 137–144.
  19. Vogel, K.M.; Tyler, B.B. Interdisciplinary, cross-sector collaboration in the US Intelligence Community: Lessons learned from past and present efforts. Intell. Natl. Secur. 2019, 34, 851–880.
  20. Pirolli, P.; Card, S. The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. In Proceedings of the International Conference on Intelligence Analysis, McLean, VA, USA, 2–6 May 2005.
  21. Klein, G.; Phillips, J.K.; Rall, E.L.; Peluso, D.A. A data-frame theory of sensemaking. In Expertise out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2007; pp. 113–155.
  22. Kang, Y.; Stasko, J. Characterizing the intelligence analysis process: Informing visual analytics design through a longitudinal field study. In Proceedings of the 2011 IEEE Conference on Visual Analytics Science and Technology (VAST), Providence, RI, USA, 23–28 October 2011; pp. 21–30.
  23. Ahrend, J.M.; Jirotka, M.; Jones, K. On the collaborative practices of cyber threat intelligence analysts to develop and utilize tacit Threat and Defence Knowledge. In Proceedings of the 2016 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), London, UK, 13–14 June 2016; pp. 1–10.
  24. Connors, E.S.; Craven, P.L.; McNeese, M.D.; Jefferson, T., Jr.; Bains, P.; Hall, D.L. An application of the AKADAM approach to intelligence analyst work. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, New Orleans, LA, USA, 20–24 September 2004; SAGE Publications: Los Angeles, CA, USA, 2004; Volume 48, pp. 627–630.
  25. Hammell, R.J.; Hanratty, T.; Heilman, E. Capturing the value of information in complex military environments: A fuzzy-based approach. In Proceedings of the 2012 IEEE International Conference on Fuzzy Systems, Brisbane, Australia, 10–15 June 2012; pp. 1–7.
  26. Newcomb, E.A.; Hammell, R.J. Examining the Effects of the Value of Information on Intelligence Analyst Performance. In Proceedings of the Conference on Information Systems Applied Research, New Orleans, LA, USA, 1–4 November 2012.
  27. Dragos, V. Developing a core ontology to improve military intelligence analysis. Int. J. Knowl.-Based Intell. Eng. Syst. 2013, 17, 29–36.
  28. Dhami, M.K.; Careless, K. Intelligence analysts’ strategies for solving analytic tasks. Mil. Psychol. 2019, 31, 117–127.
  29. Wright, W.; Schroh, D.; Proulx, P.; Skaburskis, A.; Cort, B. The Sandbox for analysis: Concepts and methods. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’06, Montreal, QC, Canada, 22–27 April 2006; Association for Computing Machinery: New York, NY, USA, 2006; pp. 801–810.
  30. Browne, G.J.; Rogich, M.B. An Empirical Investigation of User Requirements Elicitation: Comparing the Effectiveness of Prompting Techniques. J. Manag. Inf. Syst. 2001, 17, 223–249.
  31. Kang, Y.; Görg, C.; Stasko, J. Evaluating visual analytics systems for investigative analysis: Deriving design principles from a case study. In Proceedings of the 2009 IEEE Symposium on Visual Analytics Science and Technology, Atlantic City, NJ, USA, 12–13 October 2009; pp. 139–146.
  32. Chopra, K.; Haimson, C. Information Fusion for Intelligence Analysis. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 3–6 January 2005; p. 111a.
  33. Ayoub, P.J.; Petrick, I.J.; McNeese, M.D. Weather Systems: A New Metaphor for Intelligence Analysis. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2007, 51, 313–317.
  34. Franklin, L.; Pirrung, M.; Blaha, L.; Dowling, M.; Feng, M. Toward a visualization-supported workflow for cyber alert management using threat models and human-centered design. In Proceedings of the 2017 IEEE Symposium on Visualization for Cyber Security (VizSec), Phoenix, AZ, USA, 2 October 2017; pp. 1–8.
  35. Moore, D.T. Critical Thinking and Intelligence Analysis; National Defense Intelligence College, Center for Strategic Intelligence Research: Washington, DC, USA, 2007.
  36. Dimara, E.; Stasko, J. A Critical Reflection on Visualization Research: Where Do Decision Making Tasks Hide? IEEE Trans. Vis. Comput. Graph. 2022, 28, 1128–1138.
  37. Buchanan, L.; D’Amico, A.; Kirkpatrick, D. Mixed method approach to identify analytic questions to be visualized for military cyber incident handlers. In Proceedings of the 2016 IEEE Symposium on Visualization for Cyber Security (VizSec), Baltimore, MD, USA, 24 October 2016; pp. 1–8.
  38. Madanagopal, K.; Ragan, E.D.; Benjamin, P. Analytic Provenance in Practice: The Role of Provenance in Real-World Visualization and Data Analysis Environments. IEEE Comput. Graph. Appl. 2019, 39, 30–45.
  39. Toniolo, A.; Preece, A.D.; Webberley, W.; Norman, T.J.; Sullivan, P.; Dropps, T. Conversational intelligence analysis. In Proceedings of the 17th International Conference on Distributed Computing and Networking, ICDCN ’16, Singapore, 4–7 January 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1–6.
  40. Wen, Z.; Zhou, M.X.; Aggarwal, V. Context-Aware, adaptive information retrieval for investigative tasks. In Proceedings of the 12th International Conference on Intelligent User Interfaces, IUI ’07, Honolulu, HI, USA, 28–31 January 2007; Association for Computing Machinery: New York, NY, USA, 2007; pp. 122–131.
  41. Gentry, J.A. Managers of Analysts: The Other Half of Intelligence Analysis. Intell. Natl. Secur. 2016, 31, 154–177.
  42. Schmidt, M.; Vogel, K.M. Algorithms that Empower? Platformization in U.S. Intelligence Analysis. In Proceedings of the 2020 IEEE International Symposium on Technology and Society (ISTAS), Virtual, 12–15 November 2020; pp. 1–9.
  43. Harrington, H.J. Affinity Diagrams. In The Innovation Tools Handbook; Productivity Press: New York, NY, USA, 2016; Volume 2, pp. 45–54.
  44. Hossain, M.S.; Andrews, C.; Ramakrishnan, N.; North, C. Helping intelligence analysts make connections. In Proceedings of the Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 7–11 August 2011.
  45. Blaha, L.M. Interactive OODA processes for operational joint human-machine intelligence. In Proceedings of the NATO IST-160 Specialist’s Meeting: Big Data and Military Decision Making, Bordeaux, France, 30 May–1 June 2018.
  46. Antunes, P.; Herskovic, V.; Ochoa, S.F.; Pino, J.A. Reviewing the quality of awareness support in collaborative applications. J. Syst. Softw. 2014, 89, 146–169.
  47. Elm, W.; Potter, S.; Tittle, J.; Woods, D.; Grossman, J.; Patterson, E. Finding decision support requirements for effective intelligence analysis tools. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Orlando, FL, USA, 26–30 September 2005; SAGE Publications: Los Angeles, CA, USA, 2005; Volume 49, pp. 297–301.
  48. Roth, E.M.; Pfautz, J.D.; Mahoney, S.M.; Powell, G.M.; Carlson, E.C.; Guarino, S.L.; Fichtl, T.C.; Potter, S.S. Framing and contextualizing information requests: Problem formulation as part of the intelligence analysis process. J. Cogn. Eng. Decis. Mak. 2010, 4, 210–239.
  49. Johnston, R. Developing a Taxonomy of Intelligence Analysis Variables; Technical Report; Central Intelligence Agency Center for the Study of Intelligence: Washington, DC, USA, 2003.
  50. Wong, B.L.W.; Kodagoda, N. How analysts think: Inference making strategies. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2015, 59, 269–273.
  51. Kustermann, A. Using Information-Sharing Exchange Techniques from the Private Sector to Enhance Information Sharing between Domestic Intelligence Organizations; Technical Report; Naval Postgraduate School: Monterey, CA, USA, 2013.
  52. Chang, W.; Berdini, E.; Mandel, D.R.; Tetlock, P.E. Restructuring structured analytic techniques in intelligence. Intell. Natl. Secur. 2018, 33, 337–356.
  53. Döbler, N.; Bartnik, C. Normative Affordances Through and By Technology: Technological Mediation and Human Enhancement. Int. J. Interact. Multimed. Artif. Intell. 2022, 7, 14–23.
  54. López Rivero, A.J.; Beato, M.E.; Muñoz Martínez, C.; Cortiñas Vázquez, P.G. Empirical Analysis of Ethical Principles Applied to Different AI Uses Cases. Int. J. Interact. Multimed. Artif. Intell. 2022, 7, 105.
  55. Whiting, M.; Cook, K.; Grinstein, G.; Liggett, K.; Cooper, M.; Fallon, J.; Morin, M. VAST challenge 2014: The Kronos incident. In Proceedings of the 2014 IEEE Conference on Visual Analytics Science and Technology (VAST), Paris, France, 25–31 October 2014; pp. 295–300.
  56. VAST Challenge 2019: Disaster at St. Himark! Virtual, 21 October 2019. Available online: https://vast-challenge.github.io/2019/index.html (accessed on 2 January 2023).
  57. Cook, K.; Grinstein, G.; Whiting, M. The VAST Challenge: History, scope, and outcomes: An introduction to the Special Issue. Inf. Vis. 2014, 13, 301–312.
  58. Harrison, B.; Vinogradov, A.; Li, C.; Ware, S.G. Insider Threat Analyst Workflow Dataset. December 2020. Available online: https://s3.amazonaws.com/las.scads.public/6G4CDK8SYD/UKy_Data.zip (accessed on 2 January 2023).
Figure 1. The number of surveyed papers focusing on each aspect of the intelligence workflow, classified using the definitions in Section 2.2. Many papers from the survey were not categorized; of those we did label, the majority described a process in the intelligence ecosystem.