Future Internet
  • Review
  • Open Access

18 May 2021

Trust, but Verify: Informed Consent, AI Technologies, and Public Health Emergencies

IT Innovation, Electronics and Computing, University of Southampton, University Road, Southampton SO17 1BJ, UK
This article belongs to the Special Issue Human–Computer Interaction Models and Experiences for Internet of Things Systems and Edge Computing

Abstract

To use technology or engage with research or medical treatment typically requires user consent: agreeing to terms of use with technology or services, or providing informed consent for research participation, for clinical trials and medical intervention, or as one legal basis for processing personal data. Introducing AI technologies, where explainability and trustworthiness are focus items for both government guidelines and responsible technologists, imposes additional challenges. Understanding enough of the technology to be able to make an informed decision, or consent, is essential but involves an acceptance of uncertain outcomes. Further, the contribution of AI-enabled technologies, not least during the COVID-19 pandemic, raises ethical concerns about the governance associated with their development and deployment. Using three typical scenarios—contact tracing, big data analytics and research during public health emergencies—this paper explores a trust-based alternative to consent. Unlike existing consent-based mechanisms, this approach sees consent as a typical behavioural response to perceived contextual characteristics. Decisions to engage derive from the assumption that all relevant stakeholders, including research participants, will negotiate on an ongoing basis. Accepting dynamic negotiation between the main stakeholders as proposed here introduces a specifically socio-psychological perspective into the debate about human responses to artificial intelligence. This trust-based consent process leads to a set of recommendations for the ethical use of advanced technologies as well as for the ethical review of applied research projects.

1. Introduction

Although confusion over informed consent is not specific to a public health emergency, the COVID-19 pandemic has brought into focus issues with consent across multiple areas often affecting different stakeholders. Consent, or Terms of Use for technology artefacts including online services, is intended to record the voluntary willingness to engage. Further, it is assumed to be informed: that individuals understand what is being asked of them or that they have read and understood the Terms of Use. It is often unclear, however, what this entails. For the user, how voluntary is such consent, and for providers, how much of their technology can they represent to their users? As an example from health and social care, contact tracing—a method to track transmission and help combat COVID-19—illustrates some of the confusion. Regardless of the socio-political implications of non-use, signing up for the app would imply a contract between the user and the service provider based on appropriate use of the app and limiting the liability of the provider. However, since it would typically involve the processing of personal data, there may also be a request for the user (now a data subject) to agree to that processing. In the latter case, consent is one possible legal basis under data protection law for the collection and exploitation of personal data. In addition, though, the service provider may collaborate with researchers and wish to share app usage and user data with them. This too is referred to as (research) consent, that is the willingness to take part in research. Finally, in response to an indication that the app user has been close to someone carrying the virus, they may be invited for a test; they would need to provide (clinical) consent for the clinical intervention, namely undergoing the test. It is unclear whether individuals are aware of these different, though common, meanings of consent, or of the implications of each. Added to that, there may be a societal imperative for processing data about individual citizens, which implies that there is a balance to be struck between individual and community rights.
Such examples emerge in other domains as well. Big Tech companies, for instance, may request user consent to process their personal data under data protection law. They may, however, intend to share that data with third parties to target advertising, which involves some degree of profiling and is only permitted under European data protection regulation in specific circumstances. Although legally responsible for the appropriate treatment of their users’ data, the service provider may not understand enough of the technology to meet their obligations. Irrespective of technology, the user too may struggle to identify which purpose or purposes they are providing consent for. With social media platforms, the platform provider must similarly request data protection consent to store and process their users’ personal data. They may also offer researchers access to the content generated on their platform or to digital behaviour traces for research purposes. This would come under research consent rather than specifically data protection consent. In these two cases, the user must first identify the different purposes, under the same consent, for which their (service) data may be used; secondly, they may need to review different types of consent covering their data as used to provide the service versus the content they generate or the activities they engage in as used for research.
In this paper, I will explore the confusions around consent in terms of common social scientific models. This provides a specifically behavioural conception of the dialogue associated with consent contextualised within an ecologically valid presentation of the underlying mechanisms. As such, it complements and extends the discussion on explainable artificial intelligence (AI). Instead of focusing on specific AI technology, though, this discussion centres on the interaction of users with technologies from a perspective of engagement and trust rather than specifically focusing on explainability.

Overview of the Discussion

The following discussion is organised as follows. Section 2 provides an overview of responsible and understandable AI as perceived by specific users, and in related government attempts to guide the development of advanced technologies. In Section 3, I introduce behavioural models describing general user decision forming and action. Section 4 covers informed consent, including how it applies in research ethics in Section 4.1 (For the purpose of this article, ethics is used as an individual, subjective notion of right and wrong; moral, by contrast, would refer to more widely held beliefs of what is acceptable versus what is not []). Specific issues with consent in other areas are described in Section 4.2, including technology acceptance (Section 4.3). Section 4 finishes with an introduction to trust in Section 4.4 which I develop into an alternative to existing Informed Consent mechanisms.
Having presented different contexts for consent, Section 5 considers a trust-based approach applied to three different scenarios: Contact Tracing, Big Data Analytics and Public Health Emergencies. These scenarios are explored to demonstrate trust as an explanatory mechanism to account for the decision to engage with a service, share personal data or participate in research. As such, unlike existing consent-based mechanisms, a trust-based approach introduces an ecologically sound alternative to Informed Consent free from any associated confusion, and one derived from an ongoing negotiated agreement between parties.

2. Responsible and Explainable AI

Technological advances have seen AI components introduced across multiple domains such as transportation, healthcare, finance, the military and legal assessment []. At the same time, user rights to interrogate and control their personal data as processed by these technologies (Art 18, 21 and 22 []) call for a move away from a black-box implementation [,] to greater transparency and explainability (i.a., []). Explainable AI as a concept has been around at least since the beginning of the millennium [] and has been formalised in programs such as DARPA in the USA []. In this section, I will consider some of the research and regulatory aspects as they relate to the current discussion.
The DARPA program defines explainable AI in terms of:
"… systems that can explain their rationale to a human user, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future"
[]
Anthropomorphising technology in this way has implications for technology acceptance (see []; and Section 4.3 below). Surveys by Adadi and Berrada [] and Arrieta and his colleagues [] focus primarily on mapping out the domain from recent research. Although both conclude there is a lack of consistency, common factors include explainability, transparency, fairness and accountability. Whilst they recognise those affected by the outcomes, those using the technology for decision support, regulators and managers as important stakeholders, Arrieta et al. ultimately focus on developers and technologists with a call to produce “Responsible AI” []. Much of this was found previously in our own 2018 Delphi consultation with domain experts. In confirming accountability in technology development and use, however, experts also called for a new type of ethics and encouraged ethical debate []. Adadi and Berrada meanwhile emphasise the different motivations for and types of explainability: for control, to justify outcomes, to enable improvement, and finally to provide insights into human behaviours (from control to discovery) []. Došilović and his colleagues highlight a need for formalised measurement of subjective responses to explainability and interpretability [], whereas Samek et al. propose a formalised, objective method to evaluate at least some aspects of algorithm performance []. Meanwhile, Khrais sought to investigate the research understanding of explainability, discovering not only terms like explanation, model and use, which might be expected, but also more human-centric concepts like emotion, interpret and control [].
Looking beyond the interpretability of AI technologies themselves, other studies seek to explore the implications of explainability for stakeholders, and especially for those dependent on its output (for instance, patients and clinicians using an AI-enabled medical decision-support system). The DARPA program seeks to support “explanation-informed acceptance” via an understanding of the socio-cognitive context of explanation []. Picking up on such a human-mediated approach, Weitz and her colleagues demonstrate how even simple methods, in their case the use of an avatar-like component, encourage and enhance perceptions of understanding the technology []. Taking this further and echoing [] on trust, Israelsen and Ahmed focus on trust-enhancing “algorithmic assurances” which echo traditional constructs like trustworthiness indicators in the trust literature (see Section 4.4) []. All of this comes together in positioning AI explainability as a co-construction of understanding between explainer (the advanced AI-enabled technology) and explainee (the user) []. This ongoing negotiation around explainability echoes my own trust-based alternative to the dialogue around informed consent below (Section 4.4).
Much of the research above makes explicit a link between the motivation towards explainable or responsible AI and regulation and data subject rights [,,,,,]. With specific regard to big data, the Toronto Declaration puts the onus on data scientists and, to some degree, governance structures to protect individual rights []. However, human rights conventions often balance individual rights with what is right for the community. For example, although the first paragraph of Art. 8 on privacy upholds individual rights and expectations, the second provides for exceptions where required by the community []. Individual versus community rights are significant for contact tracing and similar initiatives associated with the pandemic. While calling upon technologists for transparency and fairness in their use of data, the UK Government Digital Services guidance also tries to balance community needs with individual human rights []. The UK Department of Health and Social Care introduces the idea that both clinicians and patients, that is multiple stakeholders, need to be involved and to understand the technology []. Similarly, the EU stresses that developers, organisations deploying a given technology, and end-users should all share some responsibility in specifying and managing AI-enabled technologies, without considering how such technologies might disrupt existing relationships [].
The focus on transparency and explainability within the (explainable) AI literature is relevant to the idea that consent should be informed. Although the focus is often on technologists [,], this assumes that all stakeholders—those affected by the output of the AI component, those using it for decision support, and those developing it (cf. [])—share responsibility for the consent process. Even where studies have focused on stakeholder interactions and the co-construction of explainability [], there is an evident need to consider the practicalities of the negotiation between parties to that process. For contact tracing, for example, who is responsible to the app user for the use and perhaps sharing of their data? Initially, the service provider would assume this role and make available appropriate terms of use, a privacy notice and a privacy policy. However, surely the data scientist providing algorithms or models for such a system at least needs to explain the technology? A Service Level Agreement (SLA) for a machine-learning component would not typically involve detail about how a model was generated or its longer-term performance. If it is not clear what stakeholders are responsible for, it becomes problematic to identify who should be informing the app user or participant of what to expect. Further, with advanced, AI-enabled technologies, not all stakeholders may be able to explain the technology. A clinician, for instance, is focused on care for their patients; they would not necessarily know how a machine-learning model had been generated or what the implications would be. There would perhaps have to be a paradigm shift before they would consider trying to understand AI technologies.
Leading on from studies which situate explainable AI within a behavioural context ([,,]), I take the more general discussion about the use and effects of advanced technologies into the context of planned behaviour (in Section 3) and extend discussions of trust [,] into the practical consideration of informed consent in a number of different domains. Starting with contact tracing and similar applications of AI technologies (Section 5), this discussion seeks to explore the confusion around consent in a practical context, evaluate the feasibility of transparency, and review the responsible stakeholders for different scenarios. Consent to engage with advanced technologies therefore highlights the impact of AI rather than specifically how explainable the technology might be. Focusing on a new kind of ethics, this leads to the proposal for a trust-based alternative to consent.

3. Behaviour and Causal Models

The Theory of Planned Behavior (TPB) assumes that a decision to act precedes the action or the actual activity itself. The separation between a decision to act and the action itself is important: we may decide to do something, but not actually do it. The decision to act is the result of a response to a given context. This is summarised in Figure 1. The context construct may include characteristics of the individual, of the situation itself, of the activity they are evaluating or of any other background factors. For instance, Figure 2 provides interpretations of Terms of Use ((a) the upper half of the figure) and Research Consent ((b) in the lower half) as behavioural responses.
Figure 1. Schematic representation of the Theory of Planned Behavior [].
Figure 2. TPB Interpretation of Behaviours associated with (a) Terms of Use and (b) Research Consent.
Someone wishing to sign up to an online service, for instance, would be presented with the Choice to use the service or not, which may depend on the Information they are provided about the service provider and the perceived Utility they might derive from using the service. The context for Terms of Use therefore comprises Choice, Information and Utility. By contrast, a potential research participant would decide whether or not to take part (develop a Willingness to Engage) based on the Respect shown to them by the researcher, whether the researcher is well disposed towards them (Benevolence), and whether research outcomes will be shared equitably across the community (Justice).
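As a purely illustrative aid, the short Python sketch below expresses the TPB reading of Figure 2 in schematic form: a context of perceived characteristics (Choice, Information and Utility for Terms of Use; Respect, Benevolence and Justice for Research Consent) informs a decision to act, which remains distinct from the behaviour itself. The numeric scores, equal weighting and threshold are assumptions made solely for this sketch and are not part of the theory.

```python
from dataclasses import dataclass

# Illustrative only: the constructs come from Figure 2; the numeric scores,
# equal weighting and 0.5 threshold are assumptions made for this sketch.

@dataclass
class TermsOfUseContext:
    """(a) Terms of Use interpreted as a behavioural response."""
    choice: float        # perceived freedom to use the service or not
    information: float   # information provided about the service provider
    utility: float       # perceived usefulness derived from the service

@dataclass
class ResearchConsentContext:
    """(b) Research Consent interpreted as a behavioural response."""
    respect: float       # respect shown to the participant by the researcher
    benevolence: float   # is the researcher well disposed towards them?
    justice: float       # will outcomes be shared equitably across the community?

def intention(context) -> float:
    """The decision to act is a response to perceived contextual characteristics."""
    values = list(vars(context).values())
    return sum(values) / len(values)

def behaviour(context, threshold: float = 0.5) -> bool:
    """The decision to act precedes, and is distinct from, the action itself."""
    return intention(context) >= threshold

if __name__ == "__main__":
    sign_up = TermsOfUseContext(choice=0.4, information=0.3, utility=0.9)
    take_part = ResearchConsentContext(respect=0.8, benevolence=0.7, justice=0.6)
    print("Willingness to use the service:", behaviour(sign_up))
    print("Willingness to engage in research:", behaviour(take_part))
```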

5. Scenarios

Notwithstanding any legal obligations under clinical practice and data protection regulations, to evaluate the concept of trust-based research consent, this section considers three different scenarios with specific relevance to the COVID-19 pandemic and beyond. Throughout the discussion above, I have used contact tracing as a starting point. Using such technology to identify the transmission paths of contagious diseases has been around as a concept for some time []. From a consent perspective, however, this has potential implications for privacy, including the inadvertent and unconsented disclosure of third parties. It has been noted that human rights instruments make provision for when community imperatives supersede individual rights (Art. 8 §2, []); and trust in government has had implications for the acceptance and success of tracing applications, for instance [,]. For representative research behaviours, therefore, it is important to consider the implications of current informed consent procedures in research ethics as well as from the perspective of the trust-based consent introduced above.
  • Contact tracing: During the COVID-19 pandemic, there has been some discussion about the technical implementation [] and how tracing fits within a larger socio-technical context []. Introduction of such applications is not without controversy in socio-political terms [,]. At the same time, there is a balance to be struck between individual rights and the public good []; in the case of the COVID-19 pandemic, the social implications of the disease are almost as important as its impact on public and individual health []. Major challenges include:
    Public Opinion;
    Inadvertent disclosure of third party data;
    Public/Individual responses to alerts.
  • Big Data Analytics: this includes exploiting the vast amounts of data available typically via the Internet to attempt to understand behavioural and other patterns [,]. Such approaches have already shown much promise in healthcare [], and with varying degrees of success for tracing the COVID-19 pandemic []. There are, however, some concerns about the impact of big data on individuals and society [,]. Major challenges include:
    Identification of key actors;
    Mutual understanding between those actors;
    Influence of those actors on processing (and results).
  • Public Health Emergency Research: multidisciplinary efforts to understand, inform and ultimately control the transmission and proliferation of disease (see for instance []) as well as social impacts [,], and to consider the long-term implications of the COVID-19 pandemic and other PHEs []. Major challenges include:
    Changes in research focus;
    Changes introduced as research outcomes become available;
    Respect for all potential groups;
    Balancing individual and community rights;
    Unpredicted benefits of research data and outcomes (e.g., in future).
Table A1 in Appendix A summarises the informed consent and trust-based consent perspectives for the three activities described above: contact tracing, big data and issues pertinent to research during a PHE. Each of these scenarios needs to be contextualised within a different perspective: the broader socio-political context, the wider delivery ecosystem, and historical and community-benefit aspects, respectively. Traditional informed consent for research would be problematic for different reasons in each case, as summarised. If research consent is run in connection with or as part of data protection informed consent, research participants stopping their participation may result in the withdrawal of research data unless a different legal basis for processing can be found.
In all three cases, it is apparent that a simple exchange between researcher and research participant is not possible. There are other contextual factors which must be taken into account and which may well introduce additional stakeholders. There are also external factors—contemporary context, a relevant underlying ecosystem setting expectations, and a dynamic and historical perspective which may introduce both types of factors from the other two scenarios—which would indicate at the very least that each contextualised agreement must be re-validated, and that the consent cannot be assumed to remain stable as external factors influence the underlying perceptions of the actors involved. Trust would allow for such contextualisation and implies a continuous negotiation.
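To make the need for re-validation concrete, the following minimal sketch (an illustration only; the event names, numeric perceptions and decision rule are assumptions, not findings) treats trust-based consent as an agreement that is re-checked whenever external factors shift the trustor's perceptions of competence, integrity or benevolence, and that can be repaired through ongoing communication.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch: trust-based consent as an ongoing, re-validated agreement.
# The perceptions (competence, integrity, benevolence) come from the trust
# literature cited above; the event names and update rule are assumptions.

@dataclass
class TrustRelationship:
    competence: float
    integrity: float
    benevolence: float
    history: List[str] = field(default_factory=list)

    def engaged(self) -> bool:
        # Willingness to engage persists only while all perceptions stay positive.
        return min(self.competence, self.integrity, self.benevolence) > 0.5

    def contextual_event(self, description: str, impact: dict) -> None:
        """External factors (media reports, policy changes, new research outcomes)
        shift the trustor's perceptions, so the agreement must be re-validated."""
        self.history.append(description)
        for factor, delta in impact.items():
            setattr(self, factor, max(0.0, min(1.0, getattr(self, factor) + delta)))

    def reaffirm(self, boost: float = 0.2) -> None:
        """Ongoing communication by the trustee can maintain or repair trust."""
        self.history.append("trustee re-affirmed intentions and reported progress")
        self.benevolence = min(1.0, self.benevolence + boost)
        self.integrity = min(1.0, self.integrity + boost)

if __name__ == "__main__":
    consent = TrustRelationship(competence=0.8, integrity=0.7, benevolence=0.7)
    consent.contextual_event("media report alleging data misuse",
                             {"integrity": -0.3, "benevolence": -0.2})
    print("Still engaged after event?", consent.engaged())
    consent.reaffirm()
    print("Still engaged after re-affirmation?", consent.engaged())
```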

6. Discussion and Recommendations

The existing informed consent process clearly poses several problems, not least the potential to confuse research participants about what they are agreeing to: use of an app, the processing of their personal data, undergoing treatment, or taking part in a research study. This situation would be exacerbated where several such activities co-occur. Indeed, it is not unusual for research studies to include collection of personal data as part of the research protocol. However, there are more challenging issues. Where the researcher is unable to describe exactly what should happen, what the outcomes might be, and how data or participant engagement will be used, then it is impossible to provide sufficient information for any consent to be fully informed. The literature in this area provides some evidence too that research participants may well wish to engage without being overwhelmed with detail they do not want or may not understand. There is an additional complication where multiple stakeholders, not just the researcher, may be involved in handling and interpreting research outcomes. Any such stakeholders should be involved in or at least represented as part of the discussion with the research participant. All of this suggests that there needs to be some willingness to accept risk: participants must trust researchers and their intentions.

6.1. Recommendations for Research Ethics Review

Such a trust-based approach would, however, affect how RECs/IRBs review research submissions. Most importantly, reviewers need to consider the main actors involved in any research and their expectations. This suggests a number of main considerations during review, illustrated schematically after the list:
  • The research proposal should first describe in some detail the trustworthiness basis for the research engagement. I have used characteristics from the literature—integrity, benevolence, and competence—though others may be more appropriate such as reputation and evidence of participant reactions in related work.
  • The context of the proposed research should be disclosed, including the identification of the types of contextual effects which might be expected. These may include the general socio-political environment, existing relationships that the research participant might be expected to be aware of (such as clinician–patient), and any dynamic effects, such as implications for other cohorts, including future cohorts. Any such contextual factors should be explained, justified and appropriately managed by the researcher.
  • The proposed dialogue between researcher and research participant should be described, how it will be conducted, what it will cover, and how frequently the dialogue will be repeated. This may depend, for example, on when results start to become available. The frequency and delivery channel of this dialogue should be simple for the potential research participant. This must be justified, and the timescales realistic. This part of the trust-based consent process might also include how the researcher will manage research participant withdrawal.
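As a schematic illustration of the three considerations above (not a prescribed submission format; the field names and the completeness check are assumptions made for this sketch), a trust-based research submission might be captured roughly as follows:

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: one way a REC/IRB submission could make the three
# review considerations explicit. The field names follow the list above; the
# completeness check is an assumption, not an actual review procedure.

@dataclass
class TrustBasedConsentPlan:
    # 1. Trustworthiness basis for the research engagement
    trustworthiness_basis: List[str]      # e.g., integrity, benevolence, competence
    # 2. Context of the proposed research
    contextual_factors: List[str]         # socio-political setting, existing relationships, dynamic effects
    # 3. Proposed dialogue with participants
    dialogue_channel: str                 # how the exchange will be conducted
    dialogue_topics: List[str]            # what it will cover
    dialogue_frequency: str               # how often it will be repeated, with justification
    withdrawal_handling: str = "not specified"

    def review_gaps(self) -> List[str]:
        """Return the considerations a reviewer would flag as missing."""
        gaps = []
        if not self.trustworthiness_basis:
            gaps.append("trustworthiness basis not described")
        if not self.contextual_factors:
            gaps.append("contextual effects not identified")
        if not (self.dialogue_channel and self.dialogue_topics and self.dialogue_frequency):
            gaps.append("participant dialogue not fully specified")
        return gaps

if __name__ == "__main__":
    plan = TrustBasedConsentPlan(
        trustworthiness_basis=["integrity", "benevolence", "competence"],
        contextual_factors=["clinician-patient relationship", "implications for future cohorts"],
        dialogue_channel="email summary with opt-in follow-up call",
        dialogue_topics=["interim results", "changes to data use"],
        dialogue_frequency="at each results milestone, justified in the protocol",
        withdrawal_handling="data withdrawn unless another legal basis applies",
    )
    print(plan.review_gaps() or "no gaps flagged")
```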
The intention with such an approach would be to move away from the burdensome governance described in the literature (see [,], for instance), instead focusing on what is of practical importance to enter into a trust relationship and what might encourage a more natural and familiar communicative exchange with participants. Traditional information such as the assumed benefits of the research outcomes should be confined to the research ethics approval submission; it may not be clear to a potential research participant how relevant that may be for them to make a decision to engage. Review ultimately must consider the Context (see Section 3 above) within which a participant develops a Willingness to Engage.
The ethics review process thereby becomes not simply a consideration of the typical cost–benefit to the research participant, but rather an evaluation of how researcher and research participant are likely to engage with one another, collaborating effectively on an equal footing and sharing the risks of failure. The participant then becomes a genuine actor within the research protocol rather than simply a subject of observation.

6.2. Recommendations for the Ethical Use of Advanced Technologies

Official guidance tends to focus on data governance [,] or on the obligations of technologists to provide robust, reliable and transparent operation [,]. However, I have emphasised in the previous discussion that it is essential to consider the entire ecosystem where advanced, AI-enabled technologies are deployed. These technologies are an integral part of a broader socio-technical system.
The data scientist providing the technology to a service provider and the service provider themselves must take into account a number of factors (a simple illustrative sketch follows the list):
  • Understand who the main actors are. Each domain (healthcare, eCommerce, social media, and so forth) will often be regulated with specific obligations. More importantly though, I maintain, would be the interaction between end user and provider, and the reliance of the provider on the data scientist or technologist. These actors would all influence the trust context. So how they contribute needs to be understood.
  • Understand what their expectations are. Once the main actors have been identified, their individual expectations will influence how they view their own responsibilities and how they believe the other actors will behave. This will contextualise what each expects from the service or interaction, and from one another.
  • Reinforce competence, integrity and benevolence (from []). As the defining characteristics of a trust relationship outlined above, each of the actors has a responsibility to support that relationship, and to avoid actions which would affect trust. Inadvertent or unavoidable problems can be dealt with ([,]). Further, occasional (though infrequent []) re-affirmation of the relationship is advantageous. So, ongoing communication between the main actors is important in maintaining trust (see also []).
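The first two factors can be illustrated with a minimal sketch of an actor map: identify the main actors, record what each expects and relies upon, and flag any actor relied upon but never identified. The actor names and expectations below are assumptions drawn loosely from the contact tracing example, not a prescribed ontology.

```python
from typing import Dict, List

# Illustrative sketch: mapping the main actors in an AI-enabled service ecosystem
# to their mutual expectations, as a first step towards maintaining trust.

ecosystem: Dict[str, Dict[str, List[str]]] = {
    "end user": {
        "expects": ["clear terms of use", "no unforeseen data sharing"],
        "relies on": ["service provider"],
    },
    "service provider": {
        "expects": ["reliable, explainable models", "clarity on model limitations"],
        "relies on": ["data scientist"],
    },
    "data scientist": {
        "expects": ["well-defined purpose for the model", "feedback on real-world performance"],
        "relies on": ["service provider", "domain experts"],
    },
}

def unmet_reliances(eco: Dict[str, Dict[str, List[str]]]) -> List[str]:
    """Flag any actor relied upon but not identified in the ecosystem itself."""
    known = set(eco)
    return sorted({dep for actor in eco.values() for dep in actor["relies on"]} - known)

if __name__ == "__main__":
    # Any unidentified actor signals an expectation that nobody is managing.
    print("Actors relied upon but not yet identified:", unmet_reliances(ecosystem))
```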
Just as a trust-based approach is proposed as an alternative to the regulatory constraint of existing deontological consent processes, I suggest that the main actors share a responsibility to invest in a relationship. In ethical terms, this is more consistent with Floridi’s concept of entropy []: each actor engages with the high-level interaction (e.g., contact tracing) in support of common beliefs. Rather than trying to balance individual rights and the common good, this assumes engagement by the main actors willing to expose themselves to vulnerability (because outcomes are not necessarily predictable at the outset) and therefore invest jointly towards the success of the engagement.

7. Future Research Directions

Based on existing research across multiple domains, I have presented here a trust-based approach to consent. This assumes an ongoing dialogue between trustor (data subject, service user, research participant, patient) and trustee (data controller, service provider, researcher, clinician). To a large extent, this echoes what Rohlfing and her colleagues describe as a co-constructed negotiation around explainability in AI between explainer and explainee []. However, my trust-based approach derives from social psychological terms and therefore accepts vulnerability. None of the stakeholders are assumed to be infallible. Any risk to the engagement is shared across them all. This would now benefit from empirical validation.
Firstly, and following some of the initial work by Wiles and her colleagues [], trustors of different and representative categories could provide at least two types of response: their attitudes towards and perceptions of current consent processes, backed up with ethnographic observation of how they currently engage with those processes. Secondly, expanding on proposals by Richards and Hartzog [] as applied not only in the US but also in other jurisdictions, service providers, researchers and clinicians could be asked to provide their perspective on how they currently use the consent process and what a trust-based negotiation would mean for them in offering services or engaging with trustors as described here. Thirdly, it is important to compare the co-construction of explainability for AI technologies (which assumes understanding is enough for acceptability) with the negotiation of shared risk implied by a trust-based approach to consent. If understanding the technology alone proves insufficient, then informed consent to formalise the voluntary agreement to engage is not enough either.
Synthesising these findings would provide concrete proposals for policy makers, as well as a basis to critically evaluate existing guidance on data sharing and the development and deployment of advanced technologies.

8. Conclusions

In this paper, I have suggested a different approach to negotiating ongoing consent (including terms of use) from the traditional process of informed consent or unwitting acceptance of terms of use, based on the definition of trust from the social psychology literature pertaining to person-to-person interactions. This was motivated by four sets of observations: firstly, that informed consent has different implications in different situations, such as data protection, clinical trials or interventions, or research, and that there are known issues with terms of use for online services. Secondly, the research literature highlights multiple cases where the assumptions relating to informed consent do not hold, and terms of use are typically imposed rather than informed and freely given. Thirdly, there may be contexts which are more complex than a simple exchange between two actors: researcher and research participant, or service user and service provider. Finally, even explainability for AI technologies may rely on a co-constructed understanding of outputs between the main stakeholders. Reviewing common activities during the COVID-19 pandemic, but also relevant to any Public Health Emergency, I have stressed that the broader socio-political context, the socio-technical environment within which big data analytics are implemented, and the historical relevance of PHE research complicate a straightforward informed consent process. Further, researchers may simply not be in a position to predict or guarantee expected research outcomes, making fully informed consent problematic. I suggest that this might better be served by a trust-based approach. Trust, in traditional definitions in the behavioural sciences, is based on an acceptance of vulnerability to unknown outcomes and a shared responsibility for those outcomes. In consequence, a more dynamic trust-based negotiation in response to situational changes over time is called for. This, I suggest, could be handled with a much more communication-focused approach, with implications for research ethics review as well as for AI-enhanced services. Moving forward, there needs to be discussion with relevant stakeholders, especially potential research participants and researchers themselves, to understand their expectations and thereby validate the arguments presented here by exploring how a trust-based consent process might meet their requirements. Finally, although I have contextualised the discussion here against the background of the coronavirus pandemic, other test scenarios need to be explored to evaluate whether the same factors apply.

Funding

This work was funded in part by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 780495 (project BigMedilytics). Disclaimer: Any dissemination of results here presented reflects only the author’s view. The Commission is not responsible for any use that may be made of the information it contains. It was also supported, in part, by the Bill & Melinda Gates Foundation [INV-001309]. Under the grant conditions of the Foundation, a Creative Commons Attribution 4.0 Generic License has already been assigned to the Author Accepted Manuscript version that might arise from this submission.

Data Availability Statement

Not applicable; the study does not report any data.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Table A1. Summary of Issues across Domains.
Domain: Contact Tracing
Challenges: The socio-political context within which the app is used or research is carried out. Media reporting, including fake news, can influence public confidence.
Informed Consent: One-off consent on research engagement or upon app download may not be sufficient as context changes. Retention may be challenging depending on trustworthiness perceptions of public authorities and responses to media reports, leading to app/research study abandonment (i.e., the impact and relevance of context which may have nothing to do with the actual app/research).
Trust-Based Consent: Researchers (app developers) may need to demonstrate integrity and benevolence on an ongoing basis, and specifically when needed in response to any public concerns around data protection, and to any misuse or unforeseen additional use of data. Researchers must therefore communicate their own trustworthiness and position themselves appropriately within a wider socio-political context for which they may feel they have no responsibility. It is their responsibility, however, to maintain the relationship with relevant stakeholders, i.e., to develop and maintain trust.

Domain: Big Data Analytics
Challenges: The potential disruption to an existing ecosystem—e.g., the actors who are important for delivery of a service, such as patient and clinician for healthcare, or research participant and researcher for Internet-based research. Technology may therefore be disruptive to any such existing relationship. Further, unless the main actors are identified, it would be difficult to engage with traditional approaches to consent.
Informed Consent: The researcher (data scientist) may not be able to disclose all information necessary to make a fully informed decision, not least because they may only be able to describe expected outcomes (and how data will be used) in general terms. The implications of supervised and unsupervised learning may not be understood. Not all beneficiaries can engage with an informed consent process (e.g., clinicians would not be asked to consent formally to data analytics carried out on their behalf; for Internet-based research, it may be impractical or ill-advised for researchers to contact potential research participants).
Trust-Based Consent: Data scientists need to engage in the first instance with domain experts in other fields who will use their results (e.g., clinicians in healthcare, web scientists for Internet-based modelling, etc.) to understand each other's expectations and any limitations. For a clinician or other researcher dependent on the data scientist, this will affect the perception of their own competence. This will also form part of trust-based engagement with a potential research participant. Ongoing communication between participants, data scientists and the other relevant domain experts should continue to maintain perceptions of benevolence and integrity.

Domain: Public Health Emergency
Challenges: The difficulty in identifying the scope of research (in terms of what is required and who will benefit now, and especially in the future) and therefore in identifying the main stakeholders, not just participants providing (clinical) data directly.
Informed Consent: The COVID-19 pandemic has demonstrated that research understanding changed significantly over time: the research community, including clinicians, had to adapt. Policy decisions struggled to keep pace with the results. Informed consent would need constant review and may be undermined if research outcomes/policy decisions are not consistent. In the latter case, this may result in withdrawal of research participants. Further, research from previous pandemics was not available to inform current research activities.
Trust-Based Consent: A PHE highlights the need to balance individual rights and the imperatives for the community (the common good). As well as the effects of fake news, changes in policy based on research outcomes may lead to concern about competence: do the researchers know what they are doing? However, there needs to be an understanding of how the research is being conducted and why things do change. So, there will also be a need for ongoing communication around integrity and benevolence. This may advantageously extend existing public engagement practices, but would also need to consider future generations and who might represent their interests. There is a clear need for an ongoing dialogue including participants where possible, but also other groups with a vested interest in the research data and any associated outcomes, including those who may have nothing to do with the original data collection or circumstances.

References

  1. Walker, P.; Lovat, T. You Say Morals, I Say Ethics—What’s the Difference? In The Conversation; IMDb: Seattle, WA, USA, 2014. [Google Scholar]
  2. Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
  3. European Commission. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, 2016; European Commission: Brussels, Belgium, 2016. [Google Scholar]
  4. Samek, W.; Wiegand, T.; Müller, K.R. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. arXiv 2017, arXiv:1708.08296. [Google Scholar]
  5. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
  6. Gunning, D.; Aha, D.W. DARPA’s Explainable Artificial Intelligence Program. AI Mag. 2019, 40, 44–58. [Google Scholar]
  7. Weitz, K.; Schiller, D.; Schlagowski, R.; Huber, T.; André, E. “Do you trust me?”: Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design. In Proceedings of the IVA ’19: 19th ACM International Conference on Intelligent Virtual Agents, Paris, France, 2–5 July 2019; ACM: New York, NY, USA, 2019; pp. 7–9. [Google Scholar] [CrossRef]
  8. Taylor, S.; Pickering, B.; Boniface, M.; Anderson, M.; Danks, D.; Følstad, A.; Leese, M.; Müller, V.; Sorell, T.; Winfield, A.; et al. Responsible AI—Key Themes, Concerns & Recommendations For European Research and Innovation; HUB4NGI Consortium: Zürich, Switzerland, 2018. [Google Scholar] [CrossRef]
  9. Došilović, F.K.; Brčić, M.; Hlupić, N. Explainable Artificial Intelligence: A Survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 0210–0215. [Google Scholar] [CrossRef]
  10. Khrais, L.T. Role of Artificial Intelligence in Shaping Consumer Demand in E-Commerce. Future Internet 2020, 12, 226. [Google Scholar] [CrossRef]
  11. Israelsen, B.W.; Ahmed, N.R. “Dave …I can assure you …that it’s going to be all right …” A Definition, Case for, and Survey of Algorithmic Assurances in Human-Autonomy Trust Relationships. ACM Comput. Surv. 2019, 51, 113. [Google Scholar] [CrossRef]
  12. Rohlfing, K.J.; Cimiano, P.; Scharlau, I.; Matzner, T.; Buhl, H.M.; Buschmeier, H.; Eposito, E.; Grimminger, A.; Hammer, B.; Häb-Umbach, R.; et al. Explanation as a social practice: Toward a conceptual framework for the social design of AI systems. IEEE Trans. Cogn. Dev. Syst. 2020. [Google Scholar] [CrossRef]
  13. Amnesty International and AccessNow. The Toronto Declaration: Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems. 2018. Available online: https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/ (accessed on 14 May 2021).
  14. Council of Europe. European Convention for the Protection of Human Rights and Fundamental Freedoms, as Amended by Protocols Nos. 11 and 14; Council of Europe: Strasbourg, France, 2010. [Google Scholar]
  15. UK Government Digital Services. Data Ethics Framework. 2020. Available online: https://www.gov.uk/government/publications/data-ethics-framework (accessed on 14 May 2021).
  16. Department of Health and Social Care. Digital and Data-Driven Health and Care Technology; Department of Health and Social Care: London, UK, 2021.
  17. European Commission. Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019. [Google Scholar]
  18. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  19. Murray, P.M. The History of Informed Consent. Iowa Orthop. J. 1990, 10, 104–109. [Google Scholar]
  20. USA v Brandt Court. The Nuremberg Code (1947). Br. Med. J. 1996, 313, 1448. [Google Scholar] [CrossRef]
  21. World Medical Association. WMA Declaration of Helsinki—Ethical Principles for Medical Research Involving Human Subjects; World Medical Association: Ferney-Voltaire, France, 2018. [Google Scholar]
  22. Lemley, M.A. Terms of Use. Minn. Law Rev. 2006, 91, 459–483. [Google Scholar]
  23. Richards, N.M.; Hartzog, W. The Pathologies of Digital Consent. Wash. Univ. Law Rev. 2019, 96, 1461–1504. [Google Scholar]
  24. Luger, E.; Moran, S.; Rodden, T. Consent for all: Revealing the hidden complexity of terms and conditions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 2687–2696. [Google Scholar]
  25. Belmont. The Belmont Report: Ethical Principles and Guidelines for The Protection of Human Subjects of Research; American College of Dentists: Gaithersburg, MD, USA, 1979. [Google Scholar]
  26. Beauchamp, T.L. History and Theory in “Applied Ethics”. Kennedy Inst. Ethics J. 2007, 17, 55–64. [Google Scholar] [CrossRef] [PubMed]
  27. Muirhead, W. When four principles are too many: Bloodgate, integrity and an action-guiding model of ethical decision making in clinical practice. Clin. Ethics 2011, 38, 195–196. [Google Scholar] [CrossRef]
  28. Rubin, M.A. The Collaborative Autonomy Model of Medical Decision-Making. Neurocrit. Care 2014, 20, 311–318. [Google Scholar] [CrossRef]
  29. The Health Service (Control of Patient Information) Regulations 2002. 2002. Available online: https://www.legislation.gov.uk/uksi/2002/1438/contents/made (accessed on 14 May 2021).
  30. Hartzog, W. The New Price to Play: Are Passive Online Media Users Bound By Terms of Use? Commun. Law Policy 2010, 15, 405–433. [Google Scholar] [CrossRef]
  31. Beauchamp, T.L.; Childress, J.F. Principles of Biomedical Ethics, 8th ed.; Oxford University Press: Oxford, UK, 2019. [Google Scholar]
  32. OECD. Frascati Manual 2015; OECD: Paris, France, 2015. [Google Scholar] [CrossRef]
  33. BPS. Code of Human Research Ethics; BPS: Leicester, UK, 2014. [Google Scholar]
  34. Herschel, R.; Miori, V.M. Ethics & Big Data. Technol. Soc. 2017, 49, 31–36. [Google Scholar] [CrossRef]
  35. Floridi, L.; Taddeo, M. What is data ethics? Philos. Trans. R. Soc. 2016. [Google Scholar] [CrossRef]
  36. Carroll, S.R.; Garba, I.; Figueroa-Rodríguez, O.L.; Holbrook, J.; Lovett, R.; Materechera, S.; Parsons, M.; Raseroka, K.; Rodriguez-Lonebear, D.; Rowe, R.; et al. The CARE Principles for Indigenous Data Governance. Data Sci. J. 2020, 19, 1–12. [Google Scholar] [CrossRef]
  37. Thomson, J.J. The Trolley Problem. Yale Law J. 1985, 94, 1395–1415. [Google Scholar] [CrossRef]
  38. Parsons, T.D. Ethical Challenges in Digital Psychology and Cyberpsychology; Cambridge University Press: Cambridge, UK, 2019. [Google Scholar]
  39. Murove, M.F. Ubuntu. Diogenes 2014, 59, 36–47. [Google Scholar] [CrossRef]
  40. Ess, C. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee; IGI Global: Hershey, PA, USA, 2002. [Google Scholar]
  41. Markham, A.; Buchanan, E. Ethical Decision-Making and Internet Research: Recommendations from the Aoir Ethics Working Committee (Version 2.0). 2002. Available online: https://aoir.org/reports/ethics2.pdf (accessed on 14 May 2021).
  42. Sugarman, J.; Lavori, P.W.; Boeger, M.; Cain, C.; Edson, R.; Morrison, V.; Yeh, S.S. Evaluating the quality of informed consent. Clin. Trials 2005, 2, 34–41. [Google Scholar] [CrossRef]
  43. Biros, M. Capacity, Vulnerability, and Informed Consent for Research. J. Law Med. Ethics 2018, 46, 72–78. [Google Scholar] [CrossRef]
  44. Tam, N.T.; Huy, N.T.; Thoa, L.T.B.; Long, N.P.; Trang, N.T.H.; Hirayama, K.; Karbwang, J. Participants’ understanding of informed consent in clinical trials over three decades: Systematic review and meta-analysis. Bull. World Health Organ. 2015, 93, 186H–198H. [Google Scholar] [CrossRef] [PubMed]
  45. Falagas, M.E.; Korbila, I.P.; Giannopoulou, K.P.; Kondilis, B.K.; Peppas, G. Informed consent: How much and what do patients understand? Am. J. Surg. 2009, 198, 420–435. [Google Scholar] [CrossRef] [PubMed]
  46. Nusbaum, L.; Douglas, B.; Damus, K.; Paasche-Orlow, M.; Estrella-Luna, N. Communicating Risks and Benefits in Informed Consent for Research: A Qualitative Study. Glob. Qual. Nurs. Res. 2017, 4. [Google Scholar] [CrossRef]
  47. Wiles, R.; Crow, G.; Charles, V.; Heath, S. Informed Consent and the Research Process: Following Rules or Striking Balances? Sociol. Res. Online 2007, 12. [Google Scholar] [CrossRef]
  48. Wiles, R.; Charles, V.; Crow, G.; Heath, S. Researching researchers: Lessons for research ethics. Qual. Res. 2006, 6, 283–299. [Google Scholar] [CrossRef]
  49. Naarden, A.L.; Cissik, J. Informed Consent. Am. J. Med. 2006, 119, 194–197. [Google Scholar] [CrossRef] [PubMed]
  50. Al Mahmoud, T.; Hashim, M.J.; Almahmoud, R.; Branicki, F.; Elzubeir, M. Informed consent learning: Needs and preferences in medical clerkship environments. PLoS ONE 2018, 13, e0202466. [Google Scholar] [CrossRef]
  51. Nijhawan, L.P.; Janodia, M.D.; Muddukrishna, B.S.; Bhat, K.M.; Bairy, K.L.; Udupa, N.; Musmade, P.B. Informed consent: Issues and challenges. J. Adv. Pharm. Technol. Res. 2013, 4, 134–140. [Google Scholar] [CrossRef]
  52. Kumar, N.K. Informed consent: Past and present. Perspect. Clin. Res. 2013, 4, 21–25. [Google Scholar] [CrossRef]
  53. Hofstede, G. Cultural Dimensions. 2003. Available online: www.geerthofstede.com (accessed on 12 May 2021).
  54. Hofstede, G.; Hofstede, J.G.; Minkov, M. Cultures and Organizations: Software of the Mind, 3rd ed.; McGraw-Hill: New York, NY, USA, 2010. [Google Scholar]
  55. Acquisti, A.; Brandimarte, L.; Loewenstein, G. Privacy and human behavior in the age of information. Science 2015, 347, 509–514. [Google Scholar] [CrossRef] [PubMed]
  56. McEvily, B.; Perrone, V.; Zaheer, A. Trust as an Organizing Principle. Organ. Sci. 2003, 14, 91–103. [Google Scholar] [CrossRef]
  57. Milgram, S. Behavioral study of obedience. J. Abnorm. Soc. Psychol. 1963, 67, 371–378. [Google Scholar] [CrossRef] [PubMed]
  58. Haney, C.; Banks, C.; Zimbardo, P. Interpersonal Dynamics in a Simulated Prison; Wiley: New York, NY, USA, 1972. [Google Scholar]
  59. Reicher, S.; Haslam, S.A. Rethinking the psychology of tyranny: The BBC prison study. Br. J. Soc. Psychol. 2006, 45, 1–40. [Google Scholar] [CrossRef] [PubMed]
  60. Reicher, S.; Haslam, S.A. After shock? Towards a social identity explanation of the Milgram ’obedience’ studies. Br. J. Soc. Psychol. 2011, 50, 163–169. [Google Scholar] [CrossRef]
  61. Beauchamp, T.L. Informed Consent: Its History, Meaning, and Present Challenges. Camb. Q. Healthc. Ethics 2011, 20, 515–523. [Google Scholar] [CrossRef]
  62. Ferreira, C.M.; Serpa, S. Informed Consent in Social Sciences Research: Ethical Challenges. Int. J. Soc. Sci. Stud. 2018, 6, 13–23. [Google Scholar] [CrossRef]
  63. Hofmann, B. Broadening consent - and diluting ethics? J. Med Ethics 2009, 35, 125–129. [Google Scholar] [CrossRef]
  64. Steinsbekk, K.S.; Myskja, B.K.; Solberg, B. Broad consent versus dynamic consent in biobank research: Is passive participation an ethical problem? Eur. J. Hum. Genet. 2013, 21, 897–902. [Google Scholar] [CrossRef] [PubMed]
  65. Sreenivasan, G. Does informed consent to research require comprehension? Lancet 2003, 362, 2016–2018. [Google Scholar] [CrossRef]
  66. O’Neill, O. Some limits of informed consent. J. Med. Ethics 2003, 29, 4–7. [Google Scholar] [CrossRef] [PubMed]
  67. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 319–340. [Google Scholar] [CrossRef]
  68. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  69. McKnight, H.; Carter, M.; Clay, P. Trust in technology: Development of a set of constructs and measures. In Proceedings of the Digit, Phoenix, AZ, USA, 15–18 December 2009. [Google Scholar]
  70. McKnight, H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manag. Inf. Syst. (TMIS) 2011, 2, 12. [Google Scholar] [CrossRef]
  71. Thatcher, J.B.; McKnight, D.H.; Baker, E.W.; Arsal, R.E.; Roberts, N.H. The Role of Trust in Postadoption IT Exploration: An Empirical Examination of Knowledge Management Systems. IEEE Trans. Eng. Manag. 2011, 58, 56–70. [Google Scholar] [CrossRef]
  72. Hinch, R.; Probert, W.; Nurtay, A.; Kendall, M.; Wymant, C.; Hall, M.; Lythgoe, K.; Cruz, A.B.; Zhao, L.; Stewart, A. Effective Configurations of a Digital Contact Tracing App: A Report to NHSX. 2020. Available online: https://cdn.theconversation.com/static_files/files/1009/Report_-_Effective_App_Configurations.pdf (accessed on 14 May 2021).
  73. Parker, M.J.; Fraser, C.; Abeler-Dörner, L.; Bonsall, D. Ethics of instantaneous contact tracing using mobile phone apps in the control of the COVID-19 pandemic. J. Med. Ethics 2020, 46, 427–431. [Google Scholar] [CrossRef]
  74. Walrave, M.; Waeterloos, C.; Ponnet, K. Ready or Not for Contact Tracing? Investigating the Adoption Intention of COVID-19 Contact-Tracing Technology Using an Extended Unified Theory of Acceptance and Use of Technology Model. Cyberpsychol. Behav. Soc. Netw. 2020. [Google Scholar] [CrossRef]
  75. Velicia-Martin, F.; Cabrera-Sanchez, J.-P.; Gil-Cordero, E.; Palos-Sanchez, P.R. Researching COVID-19 tracing app acceptance: Incorporating theory from the technological acceptance model. PeerJ Comput. Sci. 2021, 7, e316. [Google Scholar] [CrossRef]
  76. Rowe, F.; Ngwenyama, O.; Richet, J.-L. Contact-tracing apps and alienation in the age of COVID-19. Eur. J. Inf. Syst. 2020, 29, 545–562. [Google Scholar] [CrossRef]
  77. Roache, R. Why is informed consent important? J. Med. Ethics 2014, 40, 435–436. [Google Scholar] [CrossRef] [PubMed]
  78. Eyal, N. Using informed consent to save trust. J. Med. Ethics 2014, 40, 437–444. [Google Scholar] [CrossRef]
  79. Eyal, N. Informed consent, the value of trust, and hedons. J. Med. Ethics 2014, 40, 447. [Google Scholar] [CrossRef] [PubMed]
  80. Tännsjö, T. Utilitarianism and informed consent. J. Med. Ethics 2013, 40, 445. [Google Scholar] [CrossRef] [PubMed]
  81. Bok, S. Trust but verify. J. Med. Ethics 2014, 40, 446. [Google Scholar] [CrossRef]
  82. Rousseau, D.M.; Sitkin, S.B.; Burt, R.S.; Camerer, C. Not so different after all: A cross-discipline view of trust. Acad. Manag. Rev. 1998, 23, 393–404. [Google Scholar] [CrossRef]
  83. Robbins, B.G. What is Trust? A Multidisciplinary Review, Critique, and Synthesis. Sociol. Compass 2016, 10, 972–986. [Google Scholar] [CrossRef]
  84. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An Integrative Model of Organizational Trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  85. Weber, L.R.; Carter, A.I. The Social Construction of Trust; Clinical Sociology: Research and Practice; Springer Science+Business Media: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  86. Ferrin, D.L.; Bligh, M.C.; Kohles, J.C. Can I Trust You to Trust Me? A Theory of Trust, Monitoring, and Cooperation in Interpersonal and Intergroup Relationships. Group Organ. Manag. 2007, 32, 465–499. [Google Scholar] [CrossRef]
  87. Schoorman, F.D.; Mayer, R.C.; Davis, J.H. An integrative model of organizational trust: Past, present, and future. Acad. Manag. Rev. 2007, 32, 344–354. [Google Scholar] [CrossRef]
  88. Fuoli, M.; Paradis, C. A model of trust-repair discourse. J. Pragmat. 2014, 74, 52–69. [Google Scholar] [CrossRef]
  89. Lewicki, R.J.; Wiethoff, C. Trust, Trust Development, and Trust Repair. Handb. Confl. Resolut. Theory Pract. 2000, 1, 86–107. [Google Scholar]
  90. Bachmann, R.; Gillespie, N.; Priem, R. Repairing Trust in Organizations and Institutions: Toward a Conceptual Framework. Organ. Stud. 2015, 36, 1123–1142. [Google Scholar] [CrossRef]
  91. Bansal, G.; Zahedi, F.M. Trust violation and repair: The information privacy perspective. Decis. Support Syst. 2015, 71, 62–77. [Google Scholar] [CrossRef]
  92. Memery, J.; Robson, J.; Birch-Chapman, S. Conceptualising a Multi-level Integrative Model for Trust Repair. In Proceedings of the EMAC, Hamburg, Germany, 28–31 May 2019. [Google Scholar]
  93. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [Google Scholar] [CrossRef]
  94. Lee, J.-H.; Song, C.-H. Effects of trust and perceived risk on user acceptance of a new technology service. Soc. Behav. Personal. Int. J. 2013, 41, 587–598. [Google Scholar] [CrossRef]
  95. Cheshire, C. Online Trust, Trustworthiness, or Assurance? Daedalus 2011, 140, 49–58. [Google Scholar] [CrossRef] [PubMed]
  96. Pettit, P. Trust, Reliance, and the Internet. Inf. Technol. Moral Philos. 2008, 26, 161. [Google Scholar]
  97. Stewart, K.J. Trust Transfer on the World Wide Web. Organ. Sci. 2003, 14, 5–17. [Google Scholar] [CrossRef]
  98. Eames, K.T.D.; Keeling, M.J. Contact tracing and disease control. Proc. R. Soc. Lond. 2003, 270, 2565–2571. [Google Scholar] [CrossRef]
  99. Jetten, J.; Reicher, S.D.; Haslam, S.A.; Cruwys, T. Together Apart: The Psychology of COVID-19; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2020. [Google Scholar]
  100. Ahmed, N.; Michelin, R.A.; Xue, W.; Ruj, S.; Malaney, R.; Kanhere, S.S.; Seneviratne, A.; Hu, W.; Janicke, H.; Jha, S.K. A Survey of COVID-19 Contact Tracing Apps. IEEE Access 2020, 8, 134577–134601. [Google Scholar] [CrossRef]
  101. Kretzschmar, M.E.; Rozhnova, G.; Bootsma, M.C.; van Boven, M.J.; van de Wijgert, J.H.; Bonten, M.J. Impact of delays on effectiveness of contact tracing strategies for COVID-19: A modelling study. Lancet Public Health 2020, 5, e452–e459. [Google Scholar] [CrossRef]
  102. Bengio, Y.; Janda, R.; Yu, Y.W.; Ippolito, D.; Jarvie, M.; Pilat, D.; Struck, B.; Krastev, S.; Sharma, A. The need for privacy with public digital contact tracing during the COVID-19 pandemic. Lancet Digit. Health 2020, 2, e342–e344. [Google Scholar] [CrossRef]
  103. Abeler, J.; Bäcker, M.; Buermeyer, U.; Zillessen, H. COVID-19 Contact Tracing and Data Protection Can Go Together. JMIR Mhealth Uhealth 2020, 8, e19359. [Google Scholar] [CrossRef] [PubMed]
  104. Van Bavel, J.J.; Baicker, K.; Boggio, P.S.; Caprano, V.; Cichocka, A.; Cikara, M.; Crockett, M.J.; Crum, A.J.; Douglas, K.M.; Druckman, J.N.; et al. Using social and behavioural science to support COVID-19 pandemic response. Nat. Hum. Behav. 2020, 4, 460–471. [Google Scholar] [CrossRef] [PubMed]
  105. Ackland, R. Web Social Science: Concepts, Data and Tools for Social Scientists in the Digital Age; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2013. [Google Scholar]
  106. Papacharissi, Z. A Networked Self and Platforms, Stories, Connections; Routledge: London, UK, 2018. [Google Scholar]
  107. Raghupathi, W.; Raghupathi, V. Big data analytics in healthcare: Promise and potential. Health Inf. Sci. Syst. 2014, 2, 1–10. [Google Scholar] [CrossRef]
  108. Agbehadji, I.E.; Awuzie, B.O.; Ngowi, A.B.; Millham, R.C. Review of Big Data Analytics, Artificial Intelligence and Nature-Inspired Computing Models towards Accurate Detection of COVID-19 Pandemic Cases and Contact Tracing. Int. J. Environ. Res. Public Health 2020, 17, 5330. [Google Scholar] [CrossRef]
  109. Cheney-Lippold, J. We Are Data: Algorithms and the Making of Our Digital Selves; New York University Press: New York, NY, USA, 2017. [Google Scholar]
  110. O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown: New York, NY, USA, 2016. [Google Scholar]
  111. Austin, C. RDA COVID-19 Zotero Library—March 2021. Available online: https://www.rd-alliance.org/group/rda-covid19-rda-covid19-omics-rda-covid-19-epidemiology-rda-covid19-clinical-rda-covid19 (accessed on 14 May 2021).
  112. Norton, A.; Sigfrid, L.; Aderoba, A.; Nasir, N.; Bannister, P.G.; Collinson, S.; Lee, J.; Boily-Larouche, G.; Golding, J.P.; Depoortere, E.; et al. Preparing for a pandemic: Highlighting themes for research funding and practice—Perspectives from the Global Research Collaboration for Infectious Disease Preparedness (GloPID-R). BMC Med. 2020, 18, 273. [Google Scholar] [CrossRef] [PubMed]
  113. Floridi, L. On the intrinsic value of information objects and the infosphere. Ethics Inf. Technol. 2002, 4, 287–304. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
