Article

From Research “Involving” Humans to Research “Affecting” Humans: A Proposal for a Principled Expansion of Research Ethics’ Jurisdiction to Create Traction for a Philosophy of Technology

by
Madelaine Saginur
Faculty of Law, Common Law Section, University of Ottawa, Fauteux Hall, 57 Louis Pasteur Street, Ottawa, Ontario K1N 6N5, Canada
Laws 2014, 3(3), 509-528; https://doi.org/10.3390/laws3030509
Submission received: 29 April 2014 / Revised: 8 July 2014 / Accepted: 28 July 2014 / Published: 4 August 2014
(This article belongs to the Special Issue Technology, Social Media and Law)

Abstract

The field of research ethics offers a new approach to addressing the issues created by the unchecked development of technology. Research ethics could make a contribution, both substantively and procedurally, to help create a framework for reviewing the social and political consequences of actual or proposed technological developments. This paper puts forth a proposal for a principled expansion of research ethics’ jurisdiction, specifically a move from “Research Involving Humans” to “Research Affecting Humans”, and undertakes a case study of “Web 2.0” to analyze whether a philosophy of technology based on research ethics might work.

1. Introduction

In his seminal book “The Whale and the Reactor: A Search for Limits in an Age of High Technology” [1], Langdon Winner sets out the generally unaddressed (or, at the very least, generally under-addressed) social and political consequences of unchecked technological development. He discusses some of the factors that may have contributed to the lack of a “philosophy of technology”, for example: a taken-for-granted assumption that “the only reliable sources for improving the human condition stem from new machines, techniques, and chemicals” ([1], p. 5); a value system where efficiency and economics are considered paramount ([1], pp. 44–47); a view that the human relationship to technical things is too apparent to warrant any meaningful reflection ([1], p. 5); and a systemic separation of “makers” and “users” ([1], pp. 5–6).
The field of research ethics may offer a new approach to addressing the issues created by the unchecked development of technology. Research ethics could make a contribution, both substantively, in the development of a philosophy of technology, and procedurally, to help create a framework for reviewing the social and political consequences of actual or proposed technological developments. Concurrently, the suggested approach to broadening the mandate of research ethics through incorporating technological development within its scope may help address concerns of “ethics creep”; not all ethics issues, particularly ethics issues regarding technological development, are best dealt with through the research ethics board review mechanism.
The modern field of research ethics was born largely out of two horrific research undertakings: the research program of the Nazis and the Tuskegee syphilis study ([2], pp. 4–5). These studies did not occur in a context in which research ethics principles were generally respected; research studies that now seem clearly unethical were not uncommon as recently as the 1950s and 1960s.
Research ethics developed initially to protect research participants from (1) forced or uninformed participation in research; and (2) physical or psychological harm caused by participation in research [3]. Over time, as science and scientific inquiry have developed, and as our understanding of what constitutes harm to research subjects has evolved, the scope of research ethics’ jurisdiction has expanded too. It now covers genetic research, in which the connection between research and research subject is weakened; the field has also taken on the task of creating an international clinical trials registry, in order to address publication bias in the clinical trials literature.
The expansion of research ethics’ jurisdiction, however, has been piecemeal. This paper puts forth a proposal for a principled expansion of research ethics’ jurisdiction, specifically a move from “research involving humans” to “research affecting humans”.
The paper begins with an analysis of why having a philosophy of technology is important, drawing principally on the work of Langdon Winner. It then moves on to a brief history of research ethics in order to provide context for why the field developed as it did, and then sets out how research ethics works in practice. Discussions about research ethics focus primarily on the regime in place in Canada, the jurisdiction with which the author is most familiar, with references to other jurisdictions and international policy documents when illustrative.
The paper then describes and analyzes two areas where the scope of “research involving humans”—the foundational justification for research ethics’ purpose and jurisdiction—is already expanding and where mechanisms other than research ethics board review have come into play. Through an analysis of genetics, this paper demonstrates that many of the ideas that Winner espouses do now have a precedent; the ethical regime governing genetics looks to broader social implications. Through an examination of the new clinical trials registry, we find a procedural structure that could be used as a model for putting into practice a review mechanism for technology.
The paper then undertakes a case study of “Web 2.0” to analyze whether a philosophy of technology based on a research ethics model might work. Specifically, the case study examines the possibility of having anticipated the inherent risks present in creating an infrastructure that allows for anonymous (or apparently anonymous) content production. The analysis demonstrates that an ethics oversight mechanism could have played a positive role in spearheading a public discussion about the social and ethical concerns related to the creation of an interactive internet and in providing information to academics, policy makers, and the general public earlier on, thereby facilitating study, discussion and appropriate regulation of the results of these research endeavors.
The paper then sets out the limitations of this proposal before concluding.

2. The Need for a Philosophy of Technology

Technology has been defined in a variety of ways, including “artificial aids to human activity” ([1], p. 4), “human modification of the environment for a useful purpose” ([4], para. 6) and “scientifically-based, systematic thought and discourse about the responsible deployment of matter, energy, and information for human purposes” ([5], p. 95). There is something curious about all these definitions, something particularly obvious in the last one: built into them is the requirement that technology is good, or at least constructive. That is, to qualify as technology, by definition a thing must aid us, serve a useful purpose, or deploy something responsibly and for our purposes. The impetus for these definitions is likely a desire to distinguish technology from artistic creations, which do not serve an obviously “useful purpose” in the same way that our intuition suggests technology does. These kinds of definitions betray bias and interfere with a critical analysis of technology. I will thus work with a more neutral definition: for the purposes of this paper, the word technology applies to any scientifically-based human modification of the environment.

2.1. Technologies Are Not Neutral

Although the definition of technology must be neutral to achieve a comprehensive analysis, Winner identifies a valid problem: technologies themselves are not neutral because they play a major role in structuring human activity, sometimes intentionally and sometimes inadvertently ([1], p. 6). Technologies affect the quality of personal relationships and societal structuring, and are “enduring frameworks of social and political action” ([1], p. x).
Thus, a proposal to take into account the social and political consequences of technology is not a gratuitous hampering of innovation; rather, it is necessary to ensure that the consequences of technologies—direct and indirect, intended and unintended—are anticipated and analyzed as early as possible in a technology’s development.

2.2. Regulation after the Fact Is Insufficient

Social scientists often step in after a new technology is introduced to study its consequences. At that point, it is often too late to control its impact ([1], p. 10). Changes are not necessarily possible after a structure for a technology has been chosen: “choices tend to become strongly fixed in material equipment, economic investment and social habit, [so] the original flexibility vanishes for all practical purposes once the initial commitments are made” ([1], p. 29).
Winner states his central thesis as follows:
Faced with any proposal for a new technological system, citizens or their representatives would examine the social contract implied by building that system in a particular form. They would ask: How well do the proposed conditions match our best sense of who we are and what we want this society to be? Who gains and who loses power in the proposed change? Are the conditions produced by the change compatible with equality, social justice, and the common good? ... [T]he heretofore concealed importance of technological choices would become a matter for explicit study and debate.
([1], pp. 55–56)
Some theorists would likely doubt the feasibility of Winner’s thesis. Cockfield, for example, holds that technologies’ social consequences are unpredictable ([4], para. 9; [6]). Thus his law and technology theory is focused on regulating consequences, and considers the following kinds of questions: should we enforce a shrink-wrap license? Should we enforce a contract term permitting a vendor to unilaterally modify a contract if it posts notice of these changes on its corporate web site? Cockfield also asserts that we might achieve better regulation by waiting until we really understand the implications of a technology before attempting to regulate it ([4], para. 49).
Regulating technology’s consequences is important, and well-informed regulation is indeed likely to be better regulation. However, there are limitations to the regulatory options available when we address regulation only after introducing the technology. Once a technology has been implemented, the regulatory question generally focuses on how to control its use. For obvious reasons, the question “should this technology be allowed in the first place?” is significantly harder to address when considering an entrenched technology used and relied upon by individuals and companies throughout the world. Similarly, it is significantly harder even to pose the broader questions that Winner asks about whether the technological system supports the society we want to be, how power is redistributed as a result of the technology, and how distributive justice is affected.
Further, as I will explore in more detail later on, technology’s consequences are not as unpredictable as some critics suggest. Turning to the appropriate scientific or social science literature can provide a great deal of information useful in anticipating the potential impact of a new technology. The earlier the analysis of social and political consequences begins, the sooner we as a society will be able to understand those consequences and to regulate, educate or otherwise play a role in ensuring that technological developments do not have unacceptable (or unanticipated) effects. This process should begin in the early developmental stages of all technology. Essentially, what I propose is a system that would allow for the application of a version of the precautionary principle [7] to technological development.

3. Research Ethics

3.1. Impetus and Early Beginnings

Research ethics is a relatively new field, with origins in the 1930s. The codes founding the field developed as a response to medical research projects that shocked the general public when they became known: the Nazis’ medical experiments during the Holocaust, the Tuskegee Syphilis Study and others.
The Nazi doctors committed barbaric acts against concentration camp prisoners. From exposing unconsenting research subjects to extremely low pressure and cold temperatures, to forcing them to drink dangerously “treated” sea water to see if sea water could be made potable, to injecting them with life-threatening infectious diseases (typhus and infectious jaundice) and mutilating them and grafting limbs onto them, the Nazi medical experiments qualified as war crimes and crimes against humanity [8].
The United States Public Health Service began the Tuskegee Syphilis Study in 1932 in order to study the natural history of untreated syphilis. The research subjects were hundreds of African American men already infected with the disease. During the first stages of the study in the 1930s, the only available treatments for syphilis were arsenicals and heavy metals, “treatments” with a low cure rate and highly toxic side effects. However, the research continued into the 1970s, decades after the Nazi medical experiments came to light and after penicillin had become the drug of choice for syphilis in the mid-1940s: this antibiotic had a high cure rate and much more limited side effects than the earlier treatments [9]. The start of the use of penicillin to successfully treat syphilis is of key importance, as “[t]he use of placebo, or no treatment, is acceptable in studies where no current proven intervention exists” ([10], art. 32). Despite the researchers’ knowledge of penicillin’s effectiveness in treating and curing their research subjects’ disease, the researchers deceived these men, telling them that they were receiving free treatment for “bad blood”. After penicillin was known to be a successful treatment for syphilis, withholding treatment became infinitely worse: not only were the men being deceived; they were also being denied treatment. In fact, they were simply undergoing physical assessments by researchers interested in how syphilis acts on the human body. The study ended only in 1972, after the mainstream media reported on it ([11], p. 1646).
Unethical research studies were not uncommon: Henry Beecher’s famous 1966 article “Ethics and Clinical Research” detailed twenty-two studies that were unethical for a variety of reasons, including withholding known effective treatments; subjecting children suffering from mental health issues to therapies that caused permanent liver damage; studying the toxicology of a drug known to cause aplastic anemia and associated morbidity and mortality; and injecting cancer cells into human subjects who were told only that they were receiving “some cells”, with no mention of cancer [12].
Out of the horrors of the Nazi medical experiments came the Nuremberg Code, which sets out basic principles of research ethics, such as informed consent, minimal harm, proportionality of risk to benefit, and a right to withdraw, among other things [13]. Following the Tuskegee Syphilis Study, the United States government published the Belmont Report, which set out similar ethical principles for the protection of human subjects in research [14]. A shift in social norms had to accompany these documents: as Beecher stated almost fifty years ago, “There is a belief prevalent in some sophisticated circles that attention to these matters [protecting research subjects] would ‘block progress’” ([12], p. 367). He cites Pope Pius XII: “science is not the highest value to which all other orders of values... should be subordinated” ([12], p. 367).

3.2. Research Ethics in Practice: Canada

Research ethics in Canada is entrenched in our research environment. A significant part of the research ethics system is the research ethics board (REB). Procedurally, in Canada, review of research protocols is done by a duly constituted REB. Every institution that houses researchers who apply for funding from one of the Tri-Councils, namely, the Canadian Institutes of Health Research, the Social Sciences and Humanities Research Council of Canada or the Natural Sciences and Engineering Research Council of Canada, must ensure that all research projects “involving humans” conducted within their jurisdiction or under their auspices are reviewed by an REB that the institution establishes or appoints ([15], art. 6.1). REBs must include experts in the field of study, in law and in ethics, as well as at least one representative of the public ([15], art. 6.4). The REB serves a gatekeeper role: until REB approval is obtained, the research project may not begin. While this section focuses on Canada, it is notable that research ethics board review prior to medical research is also an international norm ([10], art. 15).

3.3. Research Ethics Review: The Scope of “Involving Humans” Has Already Expanded

Initially, the “harm” that was being addressed and prevented through research ethics requirements and by REBs was primarily forced or uninformed participation in research, and physical or psychological harm from participation in research. However, research can lead to many more harms than just these. Fortunately, over time, the scope of the “jurisdiction” of research ethics has expanded to address new harms that have emerged both through scientific innovation and through advances in understanding of what harm is. At the same time, there has been a shift in our understanding of who does research ethics: research ethics is not synonymous with REB review. Genetics research and clinical trials are two areas where these two evolutions in research ethics—what needs to be subject to some sort of ethical oversight, and what research ethics procedurally entails—are taking place. Examining the ethical practices surrounding these two areas of research demonstrates many of the additional harms that research can cause, and also demonstrates that calling research ethics a field concerned exclusively with “research involving humans” has become something of a misnomer.

3.3.1. Genetics

Examining developments in research ethics as applied to genetic research is illustrative. Uniquely, genetic research is not done on a research subject directly; rather, the research is done in a lab, on a blood sample. The direct harm to the research subject consists of a pin-prick to obtain the blood sample: the risk of this procedure is de minimis.
Researchers often also wish to access the personal health information of the samples’ donors, as more robust research results when clinical information is available to the researchers. However, it is widely recognized that the profound impacts of genetic research cut two ways:
Research may help us better understand the human genome, and genetic contributions to health and disease. It may lead to new approaches to preventing and treating disease. Individuals may benefit from learning about their genetic predispositions, if intervention strategies are available to prevent or minimize disease onset and mitigate symptoms, or to otherwise promote health. Genetic research also has the potential, however, to stigmatize individuals, communities or groups, who may experience discrimination or other harms because of their genetic status, or may be treated unfairly or inequitably ([15], p. 181).
Thus, when our research ethics doctrines require the research subject’s informed consent in the context of genetic research, the consent must not be restricted to consent to prick the research subject’s skin and obtain a blood sample. The subsequent uses of the blood sample, and the safeguards in place to protect the information obtained, should be (and are) part of research ethics as well. For example, in Canada, to use any human biological materials for research, researchers must inform prospective research participants of “the intended uses of the biological materials”, “the measures employed to protect the privacy of and minimize risks to participants”, and “any anticipated linkage of biological materials with information about the participant” ([15], art. 12.2(c),(d),(f)). In other words, as a starting point, the centrality of voluntary consent remains undisputed, and voluntary consent requires “sufficient knowledge and comprehension of the elements of the subject matter involved, as to enable [the research subject] to make an understanding and enlightened decision” ([13], art. 1; [16], art. 6; [17]).
It could be (and has been) argued that the connection between research and research subject has been weakened to such an extent that it is no longer necessary to have a research ethics framework to govern this type of research: in other words, that when the research takes place on organic matter in a lab, it no longer falls under the purview of human subject research [18]. At the time research ethics as a field was first developing, “research involving humans” necessarily meant research directly on a human being. This conception reflected the state of medical research at the time. The expansion of research ethics to cover genetics, to use Lawrence Lessig’s framework, is truly a triumph of translation over originalism ([19], p. 360).
Sometimes the second phase of a genetics research project cannot properly be conceptualized until the first phase has been completed. There can be samples left over after genetic research projects that can be successfully used for follow-up or additional research projects. Going back to the initial donors to ask them to consent to their sample being used for another study is at best costly and at worst impossible. It is also not necessarily desirable, given that not everyone who once provided a sample of blood would wish to hear from researchers every time that blood may be used in a new study. Therefore, certain aspects of the consent process have been modified. Genetic research—specifically, in the context of biobanking, or the creation of a bank of biological material samples (blood samples, tumors, tissues, etc.)—has added some flexibility to recognize that, while there are new and different harms present in genetic research, some of the harms present in “traditional” human subject research, specifically the risk of physical harm, illness or death, do not exist in research about genetics. Thus, in some jurisdictions, and when certain conditions are met, it is possible to waive the requirement of obtaining consent for the subsequent research project using the sample. In Canada, some of the required conditions for waiving consent to subsequent research include the following: that it is impossible or impracticable to obtain consent, that any harm to the individual is unlikely, that privacy is protected, and that all previously expressed preferences of the individual will be respected ([15], art. 5.5). Also, as always, the research project must be reviewed by an REB, whose role it is to protect the research participant by putting into practice the Tri-Council Policy Statement and its core principles: respect for persons, concern for welfare, and justice [15].
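Purely as an illustration of how these conditions operate together, the waiver test can be sketched as a single predicate. The field names below are my paraphrase of the conditions just quoted, not terms drawn from the policy itself:

```python
from dataclasses import dataclass

# A minimal sketch (author's paraphrase of the TCPS art. 5.5 conditions,
# not official terminology): consent to secondary use of a sample may be
# waived only if every condition holds, and REB review of the new project
# is still required in all cases.
@dataclass
class SecondaryUseRequest:
    consent_impossible_or_impracticable: bool
    harm_to_individual_unlikely: bool
    privacy_protected: bool
    prior_preferences_respected: bool
    reb_approval_obtained: bool

def consent_may_be_waived(req: SecondaryUseRequest) -> bool:
    return all([
        req.consent_impossible_or_impracticable,
        req.harm_to_individual_unlikely,
        req.privacy_protected,
        req.prior_preferences_respected,
        req.reb_approval_obtained,
    ])

# Example: a biobank follow-up study where re-contacting donors is impracticable.
request = SecondaryUseRequest(True, True, True, True, True)
print(consent_may_be_waived(request))  # True
```

The conjunctive structure is the point: the flexibility added for biobanking is narrow, and failure of any one condition restores the default requirement of fresh consent.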
Further, research ethics recognizes that “[g]enetic information has implications beyond the individual because it may reveal information about biological relatives and others with whom the individual shares genetic ancestry” ([15], chap. 13), or might provide “scientific evidence” that either supports or refutes the stories and histories that a community shares [20]. In some instances, not only must the consent of the person who supplies genetic material be obtained, but a broader community consultation is also recommended in order to meet ethical standards ([15], art. 13.6).
Research in genetics provides a precedent for determining that certain research activities must not move forward for ethical reasons, despite their potential for advancing scientific innovation. There is, for example, an outright moratorium on human cloning that applies internationally. UN Resolution 59/280 bans “all forms of human cloning inasmuch as they are incompatible with human dignity and the protection of human life” [21]. UNESCO’s Universal Declaration on the Human Genome and Human Rights similarly bans “[p]ractices which are contrary to human dignity, such as reproductive cloning of human beings” ([22], art. 11). In Canada, it is illegal to knowingly produce a human clone ([23], s.5). Creating a human clone is probably scientifically possible; we successfully cloned a sheep, Dolly, in 1996 ([5], p. 77). Despite this, the international community has simply held such activity to be contrary to human dignity and therefore impermissible.
The second moratorium in the field of genetics is on genetic modifications that are passed on to future generations. UN Resolution 59/280 bans “the application of genetic engineering techniques that may be contrary to human dignity” ([21], (d)). Canada, for example, bans knowingly “alter[ing] the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants” ([23], s.5 (1)(f)). Genetic modifications that are not passed on to future generations do not hold the same intrinsic moral problems. Somatic cell gene transfer, as a non-inheritable genetic modification is called, can be used to cure pathologic conditions through the alteration of a single gene or combination of genes in the non-germ cells (i.e., cells that are not passed on to offspring) of the body ([5], p. 95).
There are still ethical issues to address when researchers use the somatic cell gene transfer technique: whether it is safe and effective; whether the patient or research subject obtains sufficient understandable information to be able to provide informed consent; and how to determine which conditions or “diseases” it is appropriate to target ([5], p. 95). Genetic modification of a germline, the recipient’s inheritable material, is much more ethically fraught. Ethical arguments against inheritable genetic modification (“IGM”) include safety concerns, both known and unknown and to both research participants and their children; the fact that subsequent generations who will be affected by the modification have not consented; concerns about the ethics of altering the genome; and worries about the slippery slope towards genetic enhancement ([24], p. 132). Notably, it seems that many of the more recent scholars writing about IGM hold that the moratorium is not ethically justified: that the safety concerns are no longer warranted; that “society entrusts parents to act in the best interests of their children” ([25], p. 161); and that worries about the slippery slope towards genetic enhancement can be addressed “by controlling use of the technology rather than banning it altogether” ([26], p. 195). Also, some experts feel that not allowing research in the field of germline genetic modifications would itself be the problem; James Watson, co-discoverer of the structure of DNA, expressed this point of view at a conference ([27], p. 317). Thus the possibility of inheritable genetic modification introduced through research participation could be construed as a situation similar to that of any new technology, with two options: move forward and identify and address the consequences later, or hold back, identify and address the consequences first, and then decide whether to proceed. In contrast to most technologies, the latter option was selected.
To summarize, what we have with genetics research is a substantive ethical framework that applies (under the rubric of “research involving humans”) to samples of blood or tissue completely separate from the person from whom the sample originated. In Canada, how those samples are used in research must always be reviewed by an REB to ensure that the research complies with ethical norms. Most of the time (with the exception of subsequent uses when certain criteria are met) the consent of the sample donor must be sought for the research for which the sample will be used. Sometimes, broader community consultation may be appropriate because the research findings may affect the broader community. There is precedent for a moratorium on certain research paths or procedures; not only when there seems to be broad consensus that it is unacceptable, as in the case of human cloning, but also when there seems to be ongoing debate regarding whether it is acceptable, as in the case of inheritable genetic modification. The moratorium on inheritable genetic modification is distinguished from the permissible non-inheritable genetic modification, showing that the threshold that must be met to allow for changes that will carry on to future generations is higher.
The framework that has been set up for research in genetics could serve as a model for a broader philosophy of technology. The substantive precedent is there: in genetics, “innovation” comes at a potentially high social cost, and so mechanisms have been created to ensure that this innovation progresses only insofar as the broader social costs are anticipated, discussed and minimized.

3.3.2. Clinical Trials Registration

The World Health Organization (WHO) established the International Clinical Trials Registry Platform in 2005 [28,29]. The goal of this registry is to address the publication bias in the clinical trials literature [30,31]. More specifically, there is a well-documented phenomenon that clinical trials that are “successful” are more likely to be published than clinical trials that are “unsuccessful”. Thus, if an epidemiologist or statistician were to perform a meta-analysis [32] of all the literature on a particular drug or procedure, she would obtain biased results; the drug or procedure would appear more effective and less risky than it really is, because the studies where the drug or procedure appeared ineffective or where the side-effects were more toxic are less likely to be published. The WHO International Clinical Trials Registry Platform seeks “to ensure that a complete view of research is accessible to all those involved in health care decision making” [28].
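To make the mechanics of this bias concrete, consider the following minimal simulation. It is a toy illustration under assumed publication rates, not data from any real literature: a drug with no true effect appears effective when only published trials are pooled.

```python
import random
import statistics

# Toy model of publication bias: 200 trials of a drug whose true effect is
# zero. "Positive" trials are published far more often (rates below are
# assumptions for illustration only), so a meta-analysis of the published
# literature alone overstates the drug's effectiveness.
random.seed(1)

TRUE_EFFECT = 0.0          # the drug actually does nothing
N_TRIALS = 200
PUBLISH_IF_POSITIVE = 0.9  # assumed probability of publishing a positive result
PUBLISH_IF_NEGATIVE = 0.3  # assumed probability of publishing a negative result

all_results, published_results = [], []
for _ in range(N_TRIALS):
    observed = random.gauss(TRUE_EFFECT, 1.0)  # sampling noise in a single trial
    all_results.append(observed)
    p_publish = PUBLISH_IF_POSITIVE if observed > 0 else PUBLISH_IF_NEGATIVE
    if random.random() < p_publish:
        published_results.append(observed)

# A registry of *all* trials lets a meta-analyst compute the first, unbiased
# number; the published literature alone yields only the inflated second one.
print(f"Pooled effect over all {len(all_results)} trials: "
      f"{statistics.mean(all_results):+.3f}")
print(f"Pooled effect over {len(published_results)} published trials: "
      f"{statistics.mean(published_results):+.3f}")
```

The registry does not itself publish results, but by recording that every trial happened, it reveals exactly how much of the evidence the published literature is missing.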
This is not just a scientific issue. It is an ethical issue as well. Just as research that either is bad science or lacks social value is unethical because it puts research participants at risk for no reason, good research may prove valueless if the results are not communicated to the public. Non-publication of negative results means that the risks to research subjects “are not redeemed by the social value of the knowledge produced” ([33], para. 28; [34,35]). As stated in the TCPS:
There are compelling ethical reasons for the registration of all clinical trials. Registration improves researchers’ awareness of similar trials so that they may avoid unnecessary duplication and thereby reduce the burden on participants. Registration also improves researchers’ ability to identify potential collaborators and/or gaps in research so that they may pursue new avenues of inquiry with potential benefits to participants and to society. Perhaps of most concern is the danger that some researchers or sponsors may only report trials with favorable outcomes.
([15], chap. 11)
The Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects [10], a document that has been called “the primary source and arbiter of research ethics worldwide” [36], specifically calls for the registration of all clinical trials before the first subject is recruited, and holds that researchers, authors, sponsors, editors and publishers are ethically obligated to publish and disseminate the results of all research ([10], arts. 8, 9). The information required for registration includes registration and other identifying numbers, sponsor(s) identity(ies), contacts for public and scientific queries, countries where study subjects will be recruited, health conditions being studied, description of the intervention(s) being performed, and the outcomes, or events that are being measured because they are thought to be influenced by the intervention [28]. In Canada, the Tri-Council Policy Statement requires that all clinical trials be registered with a duly recognized and easily web-accessible registry before the researchers recruit the first research subject ([15], art. 11.3). While the WHO International Clinical Trials Registry Platform is voluntary, the TCPS makes registration mandatory for publicly funded research in Canada.
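As a sketch of what one such record holds, the registration fields listed above can be modeled as a simple data structure. The field names and sample values are mine, for illustration; they paraphrase the list above rather than reproducing the WHO’s official Trial Registration Data Set labels.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: one way to model the registration fields
# described in the text. Field names are the author's paraphrase.
@dataclass
class TrialRegistration:
    registry_ids: List[str]           # registration and other identifying numbers
    sponsors: List[str]               # sponsor(s) identity(ies)
    public_contact: str               # contact for public queries
    scientific_contact: str           # contact for scientific queries
    recruitment_countries: List[str]  # where study subjects will be recruited
    health_conditions: List[str]      # conditions being studied
    interventions: List[str]          # intervention(s) being performed
    outcomes: List[str]               # events thought to be influenced by the intervention

# Hypothetical entry; every value below is a placeholder.
entry = TrialRegistration(
    registry_ids=["ISRCTN00000000"],
    sponsors=["Example Pharma Inc."],
    public_contact="info@example.org",
    scientific_contact="pi@example.org",
    recruitment_countries=["Canada"],
    health_conditions=["hypertension"],
    interventions=["drug X, 10 mg daily vs. placebo"],
    outcomes=["systolic blood pressure at 12 weeks"],
)
print(entry.registry_ids[0])
```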
The registry’s value may not be immediately apparent: it communicates the existence of a trial, not the findings of the trial. Its value comes from the fact that it is an important step in increasing transparency in clinical trials. Ensuring that the existence of every clinical trial is public knowledge makes publication bias and selective reporting by pharmaceutical companies visible, and thereby increases the ability of patients, practitioners and policy-makers to make well-informed decisions about healthcare [30].
Academic journals are just beginning to take steps to require access to the raw findings of all trials [37]. The British Medical Journal’s Editor-in-Chief has praised the pharmaceutical company GlaxoSmithKline for its recent announcement that it would allow access to anonymized patient-level data from its clinical trials when the requirements of a “reasonable scientific question, a protocol, and a commitment from the researchers to publish their results” are met. The company has tasked an independent panel with assessing all such requests. The Editor-in-Chief does temper her praise, noting that we will have to see whether the process actually works well in practice; still, providing data evidences a further move towards transparency. The British Medical Journal has also decided to limit its publication of clinical trials of drugs and devices (whether industry funded or not) to studies undertaken by companies or organizations that commit to make the relevant anonymized patient-level data available on reasonable request [37].
In the modern research climate, clinical trials registration is described as an ethical, as well as a scientific, imperative. However, it is not, strictly speaking, about the research subject; rather, it recognizes that research to which research subjects contributed should not be hidden, and should be performed in a way that benefits the wider public in an ongoing way.
Arguments against the registration of clinical data are ultimately outweighed by the arguments for registration. The general arguments against include freedom of research, protection of private data and protection of the financial interests of the research sponsor (the pharmaceutical company) ([34], p. 278). However, while freedom of research is an important principle ([22], art. 12(b)), invoking freedom of research in a context where that “freedom” exploits research subjects and puts future patients at risk is offensive, as is protecting a “financial interest” that depends on deception of this sort. The argument about protection of private data has more legitimacy for anyone concerned about legal protection for innovators, and justifiably gives pause. It is understandable that companies would not wish to, for example, disclose information that would prevent them from later obtaining a patent. Ultimately, as in many complicated areas where interests compete, an acceptable solution will be reached through a values-based approach applied to this specific situation. So long as we recognize both the value of allowing innovators the option of pursuing patents and the importance of providing access to information of significance to the public, I have confidence that we can find a way to protect both.
Procedurally, the registry paradigm offers parallels to the kind of public discussion about technological developments that Winner finds lacking. Winner is a proponent of open information and transparency in technological development. In order to meet his exhortation to “examine critically the nature and significance of artificial aids to human activity” ([1], p. 4), and to have this critical examination be “a topic widely discussed by scholars and technical professionals, a lively field of inquiry often chosen by students at our universities and technical institutes” ([1], p. 3), there needs to be a way to access sufficient information about ongoing technological developments to ground this discussion.

4. From Research Involving Humans to Research Affecting Humans

While research ethics started out as a conceptual and principled set of norms, it quickly moved to focus in practice on REB review. As the above examples demonstrate, it is starting to broaden again. And the areas described above—genetics and clinical trials registration—are not the only examples of the broadening of research ethics’ scope. For example, there have been calls for research ethics to address justice issues in international clinical trials [38,39].
Research ethics oversight can play a role in compelling scientists to pay attention to issues related to their research that go beyond the scientific, and to provide information to academics, policy makers, and the general public early in the scientific process, thereby facilitating study, discussion and appropriate regulation of the results of these research endeavors. The creation of a mechanism to request further information regarding a technology would also allow for follow-up; not only would we create a process for ethically evaluating technologies, we would be able to access the information necessary for gauging their effectiveness. Both the ethical framework in place for genetic research and the international clinical trials registry demonstrate that, although research ethics is still officially concerned with research involving humans, there are instances where it involves itself in areas that simply do not fall under the traditional ambit of research involving humans. These ethical mandates are addressing something broader.
There is no overall consensus about whether this broadening is a positive or negative development. Some critics claim that the ethical conduct of research should be the responsibility of researchers, “except possibly in that small proportion of cases where prospective research participants may be so intrinsically vulnerable that their well-being may need to be overseen” ([40], p. 1). There is a growing literature on why “ethics creep”—the increasing bureaucratization of REBs and their expanding reach—is a negative development [41]. However, there is no consensus (even among scholars who believe that ethics creep is a problem) regarding how to delineate the boundaries of what research needs to be reviewed [42]. The ethics creep literature generally focuses on the jurisdiction of the REB (or equivalent). A mechanism such as the clinical trials registry, which does not slow down research or increase its cost, might not even be considered problematic by those concerned with ethics creep.
Ironically, perhaps, there is empirical evidence that scientists are not good at considering the ethical and societal implications of their research. A recent empirical study found a lack of awareness among scientists of the ethical and social issues related to their research; a belief that ethical and social issues were not relevant to their research; (perhaps misplaced) confidence that they could manage any issues that might arise; and an incapacity to include ethical and societal considerations in their daily scientific research practice due to a number of factors, including scientific culture and pressures to publish ([43], p. 44).
There is another body of literature which suggests that research ethics does not go far enough, and that since “no aspect of research with humans…is devoid of ethical significance” [44], ethical oversight should be involved in all aspects of research involving humans. With a criticism that parallels Langdon Winner’s, the authors of “Research Ethics Broadly Writ: Beyond REB Review” find that there is much more to research ethics than what is currently reviewed by REBs. They argue that “health research is a multi-stage process and each stage has unique ethical implications that may or may not fall under the mandate of the REB” ([33], para. 1). However, “key stakeholders lack the foresight and political will to make the necessary changes” to the ethics review process to take this into account ([33], para. 1). The authors put forth a lifecycle for health research involving human subjects (HRIHS) that contains twelve elements:
(1) priority setting;
(2) education (scientific and ethical);
(3) protocol design;
(4) funding review;
(5) ethics review;
(6) recruitment;
(7) informed consent;
(8) monitoring;
(9) study termination;
(10) data analysis;
(11) knowledge transfer (KT); and,
(12) quality assurance and quality improvement (QA/QI) (of all relevant processes) ([33], para. 7).
Research ethics should, according to this model, take into account principles such as distributional justice and resource allocation, which the REB is not equipped to evaluate [33,45]. Broader societal involvement is required to meet the ideals of this model.
Thus research ethics is accused both of going too far and of not going far enough. Still, it might provide a transplantable framework for putting into practice the philosophy of technology that Winner proposes, particularly if the oversight procedure does not implicate REB review. Consider how the process of technological development and commercialization follows its own “life cycle”: priority setting, research, modeling, development, testing, monitoring, commercialization and user feedback, for example, are all steps that occur as a technology is developed.
Breaking the process into segments gives us the opportunity to consider each one independently. The authors of “Research Ethics Broadly Writ: Beyond REB Review” argue for expansion and re-thinking in a world which, on the whole, has already accepted the utility and necessity of ethical consideration. If we apply the same analysis to the technology setting, which lacks a pre-existing system of ethics oversight or discussion, we can achieve a more beneficial model from the beginning.
Only in the simplest scenario might REBs prove useful in putting into place a review process for technologies: namely, when the development of the technology already involves human research as part of its process. Take, for example, the regulation of medical devices. At some point in the development of medical devices, in-hospital clinical trials will need to occur, involving REBs. In this context, REBs tend to examine the consent form to make sure potential research subjects are adequately informed about the study. They also examine the evidence to ensure that the new device appears to be at least as good as whatever the current treatment for the condition in question is, and the design of the study to make sure that appropriate oversight and stopping rules are in place so that, if the new device causes more adverse events than anticipated, there will be a quick feedback mechanism to the investigators and the study will be stopped. If the REB (and researchers, for that matter) treated the review of medical devices similarly to the review of genetic research projects, other obligations could be placed on the investigators and other social implications of the device could be examined. Consider the example of cochlear implants. When obtaining a cochlear implant, a person must consent not just to treatment, but to the product’s terms of service. Warranty provisions can be voided if the device is used in the wrong way or used with an unauthorized competitor [46]. If a new cochlear implant (or other implantable device) were at the clinical trials stage, perhaps REBs would request information regarding downstream conditions for the use of the device (should it ultimately be approved) and set conditions for these, if necessary. In other words, REBs would be involved with technological development.
I am not advocating, however, for a review process for all technological developments that mirrors REB review. Procedurally, what I would advocate is a system resembling the clinical trials registry: publication of the existence of a technological development, with some basic information about it, in a publicly accessible registry, coupled with the provision of further information upon request when certain conditions are met (as GSK announced it would do voluntarily, and as the BMJ is making a pre-condition of publication). With some basic information about what technological development is happening, and the possibility of obtaining further information upon request, social scientists, ethicists and others could start a conversation about risks and benefits earlier on.
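A hypothetical sketch of this two-part mechanism follows: a minimal public registry record for a technology under development, plus a panel-style check that releases further information when conditions analogous to GSK’s (a reasonable scientific question, a protocol, a commitment to publish) are met. Every name, field and condition here is my illustration of the proposal; no such system exists.

```python
from dataclasses import dataclass

# Hypothetical sketch of the proposal in the text, not an existing system:
# a basic, public registry entry plus an independent-panel gate for
# further information, modeled on the GSK conditions described above.
@dataclass
class TechnologyRegistration:
    developer: str
    technology: str
    intended_use: str
    development_stage: str  # e.g., "modeling", "testing", "commercialization"

@dataclass
class InformationRequest:
    requester: str
    scientific_question: str
    has_protocol: bool
    commits_to_publish: bool

def review_request(req: InformationRequest) -> bool:
    """Panel check mirroring the GSK-style conditions: question, protocol, publication."""
    return bool(req.scientific_question) and req.has_protocol and req.commits_to_publish

registry = [
    TechnologyRegistration(
        developer="Example Platform Co.",
        technology="anonymous commenting infrastructure",
        intended_use="user-generated content",
        development_stage="testing",
    )
]

request = InformationRequest(
    requester="social science research group",
    scientific_question="Does perceived anonymity increase antisocial posting?",
    has_protocol=True,
    commits_to_publish=True,
)
print("Release further information:", review_request(request))  # True
```

The design point is that the public layer stays deliberately thin, so registration imposes little burden, while the gated layer lets researchers with a legitimate question obtain enough detail to study a technology’s social consequences before it is entrenched.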

5. Case Study: Web 2.0 and the Appearance of Anonymity

The case study this paper will consider is the development of internet platforms where users can add content and interact with each other without disclosing their identity, and with the appearance of anonymity.
Philip Zimbardo, a renowned psychologist, conducted experiments in the late 1960s and 1970s which provide persuasive evidence that the appearance of anonymity increases people’s propensity to hurt others. More specifically, Zimbardo devised a situation in which, in pairs, one woman had reason to shock the other [47]. In half the instances, the women were anonymized: they were given hoods and were not named, but instead given a number. In the other half of cases, the women wore name tags and were called by their names. “[T]he women who were made to feel anonymous, in a group setting, given permission to inflict pain on someone else, exerted twice as much pain … as did the women who were identifiable” ([48], p. 164). This study was ground-breaking and spawned much additional research on “deindividuation” and aggression. For example, in another study, conducted by a former graduate student of Zimbardo, trick-or-treating children were observed at a series of homes participating in the research study. Inside each entranceway were a bowl of candy and a bowl of money. An experimenter greeted the children and told them that they could take one candy. She then said she had to get back to work and left the room. At half the houses, the experimenter asked the children their names and where they lived; at the other half, the children remained anonymous. Groups of children of different sizes came trick-or-treating that Halloween night. The results showed that the “anonymous” children were more likely to steal candy and/or money, and that children in bigger groups were more likely to steal. When both of these factors were present—anonymity and group presence—children were most likely to steal ([48], p. 165). There is a strong body of research evidence that anonymity is “an antecedent of antisocial behavior”, particularly when this anonymity occurs in a group setting ([49], p. 181).
The internet, with its early beginnings linked strongly to universities, had different architecture at different universities regarding the ability to be anonymous online. For example, at the University of Chicago in the mid-1990s, any computer plugged into an Ethernet jack on campus could access the internet, and the user remained anonymous. This was a conscious decision of then-Provost Geoffrey Stone, a free speech scholar ([19], p. 33). At Harvard, only registered machines could access the internet when plugged into a campus Ethernet jack. All online activity on the network was “monitored and tracked to a particular machine” ([19], p. 33). Lessig uses this contrast to show how different architectures, chosen to embody different values, “differ in the extent to which they make behavior within each network regulable” ([19], p. 34). At Harvard it remains easy to track online behavior to an individual; at the University of Chicago in the mid-1990s, it was very difficult.
Looking at the social science research described above, however, this same contrast could show another way in which the different architectural choices may have been important. At Harvard, users’ identities were fully known; users may therefore have been less likely to engage in antisocial behavior online. At the University of Chicago, antisocial behavior was likely more common because users were anonymous. There are many architectural features of the internet which make it possible to find out who did what: tracing, cookies, and Single Sign-on (SSO) technology, among others ([19], pp. 47–50). Thus, it is possible to regulate after the fact. However, if, due to other architectural features, people think that they are anonymous, they will behave as though their online comments and activity are, indeed, untraceable.
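The difference in “regulability” between the two architectures can be reduced to a toy model (my illustration, not Lessig’s): the same attribution query succeeds only on the network designed to register machines.

```python
# Toy model of the contrast described above: a Harvard-style network keeps a
# registry mapping machines to known users; a Chicago-style network, by design,
# records nothing about who is behind a machine. Addresses and names below
# are placeholders.
registered_machines = {"aa:bb:cc:dd:ee:ff": "student_0421"}

def attribute_activity(mac_address: str, registration_required: bool) -> str:
    """Answer 'who did this?' for activity traced to a machine on the network."""
    if registration_required:
        # Harvard-style: every machine must be registered, so activity is attributable.
        return registered_machines.get(mac_address, "unregistered machine: no access")
    # Chicago-style: the network never learns who the user is.
    return "anonymous"

print(attribute_activity("aa:bb:cc:dd:ee:ff", registration_required=True))   # student_0421
print(attribute_activity("aa:bb:cc:dd:ee:ff", registration_required=False))  # anonymous
```

The value-laden choice is made before any user logs on: whether the lookup table exists at all is a design decision, and once the architecture is fixed, so is the range of regulation it permits.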
A prominent and recent example of this phenomenon is the internet troll named Violentacrez, who posted a huge quantity of disgusting material—pornography, racism, gore—on the website Reddit. An editor at Gawker discovered his identity and outed him [50]. Presumably, Violentacrez, whom we now know to be Michael Brutsch, thought his activity online would not be traced back to him. He also had a “group” (a large number of followers on the site). After being identified publicly, Brutsch was fired from his job and said that he regretted not having stopped his actions sooner [51].
What I hope to communicate through this case study is not that anonymity online is always bad. Anonymity plays an important role in allowing whistleblowers to act, in facilitating political discourse, particularly in countries without free speech protections, and in allowing people to explore sensitive topics such as sexual orientation before they are comfortable doing so in an identifiable manner. However, it is not always clear whose input was sought when online architectures were being developed. Did technology developers make all the decisions? At what point in technological development were social scientists able to obtain sufficient information to study its broader social consequences? The social science evidence existed. Some of the problems that have arisen because of the feeling of anonymity could have been predicted. Even in recent case law regarding when the identity of an anonymous online commenter should be disclosed, there appears to be no discussion of this social science evidence. Rather, the analysis focuses on a balancing of the values of free speech and protection of reputation (defamation) [52,53].

6. Limitations of a Philosophy of Technology Based on Research Ethics

There are significant limitations that would pose challenges if a technological reporting/oversight system were implemented based on our current research ethics system. As a preliminary matter, a philosophy of technology that is ready to be applied does not yet exist. However, as information about developing technologies became accessible, it would provide the raw data from which a philosophy of technology could be developed. Over time, a body of literature would develop; national or international commissions could be struck.
Also, for technological development that requires research using human subjects, the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans applies only to university and related research, not to the private sector. Much technological development happens in the private sector. And part of the problem with technological development is that it has “too often been hijacked by, and subordinated to, crassly commercial and military purposes” ([5], p. 94). This is a problem for research ethics as well: the problem of how to ensure that private-sector research (most notably research taking place in the pharmaceutical industry) abides by the same high ethical standards as publicly funded research. The problem of engaging private sector actors would likely be exacerbated should the research ethics framework be extended to all technological development.
Next, presumably, in a majority of cases, implementation of such a system would require a process analogous to the clinical trials registration system or GlaxoSmithKline’s commitment to making available anonymized patient-level data from its clinical trials. This system makes sense for clinical trials because the trial sponsors are usually huge pharmaceutical companies that have the capacity, the legal expertise and the resources to register their studies. It is also easy enough to communicate the rules to the target audience. However, some technological developments that have had huge social impacts were not conceptualized by big companies. Facebook, for example, was initially built by Mark Zuckerberg, then a university student, in his dorm room. There are currently many app developers who are children. Creating a technology registration system would need to address these issues. For example, perhaps it could apply only to incorporated companies; also, the onus could be placed on companies like Apple and Google to register the apps on their platforms.
Finally, a registration system (or a make-information-available-upon-legitimate-request system) would have to be reconciled with our intellectual property system. To obtain a patent, an invention must be new; prior disclosure of enough information might invalidate a future patent application. Relatedly, trade secrets are extremely valuable to companies. Having to disclose certain information about a technology prior to bringing it to market could cost an innovative company its competitive edge. Any research ethics system would have to acknowledge and accommodate these concerns.

7. Conclusions

There are similarities between, on the one hand, the research that is the focus of the field of research ethics and, on the other hand, the technological development that is the focus of Winner’s sought-after philosophy of technology: notably, a value system where efficiency and economics are extremely important, as well as a systemic separation of “makers” and “users”. However, there are also differences between the two that explain why pre-emptive, involved oversight was developed in one context without much push-back, and yet has not even been adequately considered in the other context: essentially, the shock factor. Recall that it was horrific experimentation on humans that led to the development of research ethics in the first place.
While the potential harm in the research context appears obvious and the harm from technological development may appear less so (or perhaps less directly linked to the technology), the social and political impact of technology can be profound. Langdon Winner’s concerns about the apparent lack of attention to these consequences are warranted, and demand action from those of us concerned about unchecked technological development. As in the research process, no aspect of the technology development process is devoid of ethical significance. To create a means of putting into practice Winner’s philosophy of technology, we can apply research ethics not only to “research involving humans”, but also to “research affecting humans”. This would mean that all technological development would need to be assessed by people beyond the technological developers. Research ethics would provide a basic framework (both substantive and procedural) for the way forward as we contemplate how to consider and address the social and political consequences of our rapid technological development.

Conflicts of Interest

The author declares no conflict of interest.

References and Notes

  1. Langdon Winner. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: The University of Chicago Press, 1986. [Google Scholar]
  2. Though a predecessor field, medical ethics, has roots from much further back, e.g., the Hippocratic Oath. See: Helga Kuhse, and Peter Singer. “What is bioethics? A historical introduction.” In A Companion to Bioethics. Edited by Helga Kuhse and Peter Singer. Oxford: Blackwell Publishing, 2001, pp. 3–11. [Google Scholar]
  3. Interestingly, both the Nazi research program and the Tuskegee syphilis study have racism at their root.
  4. Arthur J. Cockfield. “Towards a Law and Technology Theory.” Manitoba Law Journal 30 (2004): 383–415. [Google Scholar]
  5. Denis Kenny. “Inheritable Genetic Modification as moral Responsibility in a Creative Universe.” In The Ethics of Inheritable Genetic Modification: A Dividing Line? Edited by John E.J. Rasko, Gabrielle M O’Sullivan and Rachel A Ankeny. Cambridge, UK: Cambridge University Press, 2006, pp. 77–102. [Google Scholar]
  6. While Cockfield states that “technological developments determine certain paths and influence human behaviour, often in unanticipated ways” (para. 7), he later states that “inattention to technological developments leads to an increased risk that unanticipated adverse social outcomes will take place” (para. 32). This suggests that technologies’ consequences may be more predictable than he at first implies.
  7. Joakim Zander. The Application of the Precautionary Principle in Practice: Comparative Dimensions. Cambridge, UK: Cambridge University Press, 2010. [Google Scholar]
  8. Alexander Mitscherlich, Fred Mielke, and Heinz Norden. Doctors of Infamy: The Story of the Nazi Medical Crimes. New York: Henry Schuman, 1949. [Google Scholar]
  9. Centers for Disease Control and Prevention. “U.S. Public Health Service Syphilis Study at Tuskegee: The Tuskegee Timeline.” Available online: www.cdc.gov/tuskegee/timeline.htm (accessed on 29 July 2014).
  10. World Medical Association. “Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects (1964 [revised 1975, 1983, 1989, 1996, 2000, 2002, 2004, 2008]).” Available online: http://www.wma.net/en/30publications/10policies/b3/ (accessed on 29 July 2014).
  11. Susan M. Reverby. “Listening to Narratives from the Tuskegee Syphilis Study.” The Lancet 377 (2011): 1646–47. [Google Scholar] [CrossRef]
  12. Henry K. Beecher. “Ethics and Clinical Research.” The New England Journal of Medicine 274 (1966): 367–72. [Google Scholar] [CrossRef]
  13. National Institutes of Health. “The Nuremberg Code.” Available online: history.nih.gov/research/downloads/Nuremberg.pdf (accessed on 29 July 2014).
  14. U.S. Department of Health & Human Services. “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research.” The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Available online: www.hhs.gov/ohrp/humansubjects/guidance/belmont.html (accessed on 29 July 2014).
  15. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans, 2nd ed. Ottawa: Interagency Advisory Panel on Research Ethics, 2010. Available online: http://www.pre.ethics.gc.ca/pdf/eng/tcps2/TCPS_2_FINAL_Web.pdf (accessed on 31 July 2014).
  16. United Nations Educational, Scientific and Cultural Organization. “Universal Declaration on Bioethics and Human Rights.” Available online: http://portal.unesco.org/en/ev.php-URL_ID=31058&URL_DO=DO_TOPIC&URL_SECTION=201.html (accessed on 29 July 2014).
  17. Human Genome Organization Ethics Committee. “Statement on Human Genomic Databases.” Available online: http://www.hugo-international.org/img/genomic_2002.pdf (accessed on 29 July 2014).
  18. Kevin D. Haggerty. “Ethics Creep: Governing Social Science Research in the Name of Ethics.” Qualitative Sociology 27 (2004): 391–414. [Google Scholar] [CrossRef]
  19. Lawrence Lessig. Code: Version 2.0. New York: Basic Books, 2006.
  20. Dena S. Davis. “Genetic Research & Communal Narratives.” Hastings Center Report 34 (2004): 40–49. Available online: http://www.thehastingscenter.org/pdf/publications/hcr_jul_aug_2004_article1.pdf (accessed on 29 July 2014). [Google Scholar]
  21. United Nations Declaration on Human Cloning, GA Res 59/280 (2005).
  22. United Nations Educational, Scientific and Cultural Organization. “Universal Declaration on the Human Genome and Human Rights.” Available online: http://portal.unesco.org/en/ev.php-URL_ID=13177&URL_DO=DO_TOPIC&URL_SECTION=201.html (accessed on 29 July 2014).
  23. Assisted Human Reproduction Act SC 2004, c 2 at s 5.
  24. Françoise Baylis, and Jason Scott Robert. “Radical rupture: Exploring biological sequelae of volitional inheritable genetic modification.” In The Ethics of Inheritable Genetic Modification: A Dividing Line? Edited by John E.J. Rasko, Gabrielle M O’Sullivan and Rachel A. Ankeny. Cambridge, UK: Cambridge University Press, 2006, pp. 131–48. [Google Scholar]
  25. Rosemary Tong. “Traditional and feminist bioethical perspectives on gene transfer.” In The Ethics of Inheritable Genetic Modification: A Dividing Line? Edited by John E.J. Rasko, Gabrielle M O’Sullivan and Rachel A. Ankeny. Cambridge, UK: Cambridge University Press, 2006, pp. 159–73. [Google Scholar]
  26. Ruth Chadwick. “Gene therapy.” In A Companion to Bioethics. Edited by Helga Kuhse and Peter Singer. Oxford: Blackwell Publishing, 2001, pp. 189–97. [Google Scholar]
  27. Meredith Wadman. “Germline Gene Therapy must be Spared Excessive Regulation.” Nature 392 (1998): 317. [Google Scholar] [CrossRef]
  28. World Health Organization. “International Clinical Trials Registry Platform.” Available online: http://www.who.int/ictrp/about/en/ (accessed on 29 July 2014).
  29. Davina Ghersi, and Tikki Pang. “En route to international clinical trial transparency.” The Lancet 372 (2008): 1531–32. [Google Scholar] [CrossRef]
  30. Davina Ghersi, Mike Clarke, Jesse A. Berlin, A. Metin Gulmezoglu, Rebecca D. Kush, Pisake Lumbiganon, David Moher, Frank W. Rockhold, Ida Sim, and Elizabeth Wager. “Reporting the Findings of Clinical Trials: A Discussion Paper.” Bulletin of the World Health Organization 86 (2008): 492–93. Available online: http://www.who.int/bulletin/volumes/86/6/08-053769.pdf (accessed on 29 July 2014). [Google Scholar] [CrossRef]
  31. Cécile Pino, Isabelle Boutron, and Philippe Ravaud. “Inadequate description of educational interventions in ongoing randomized controlled trials.” Trials 13 (2012): 1–8. [Google Scholar] [CrossRef]
  32. “The Oxford English Dictionary.” Available online: http://www.oed.com/ (accessed on 29 July 2014), sub verbo “meta-analysis”: Analysis of data from a number of independent studies of the same subject (published or unpublished), esp. in order to determine overall trends and significance; an instance of this.
  33. James A. Anderson, Brenda Sawatzky-Girling, Michael McDonald, Daryl Pullman, Raphael Saginur, Heather A. Sampson, and Donald J. Willison. “Research Ethics Broadly Writ: Beyond REB Review.” Health Law Review 19 (2011): 12–24. [Google Scholar]
  34. Daniel Strech. “Normative arguments and new solutions for the unbiased registration and publication of clinical trials.” Journal of Clinical Epidemiology 65 (2012): 276–81. [Google Scholar] [CrossRef]
  35. Rosario M. Isasi, and Thu Minh Nguyen. “The Rationale for a Registry of Clinical Trials Involving Human Stem Cell Therapies.” Health Law Review 16 (2008): 56–68. [Google Scholar]
  36. Michael D.E. Goodyear, Trudo Lemmens, Dominique Sprumont, and Godfrey Tangwa. “Does the FDA Have the Authority to Trump the Declaration of Helsinki?” British Medical Journal 338 (2009). [Google Scholar] [CrossRef]
  37. Fiona Godlee. “Clinical Trial Data for All Drugs in Current Use Must be Made Available for Independent Scrutiny.” British Medical Journal 345 (2012): 1–2. [Google Scholar] [CrossRef]
  38. Solomon R. Benatar, and Peter A. Singer. “A new look at international research ethics.” British Medical Journal 321 (2000): 824–26. [Google Scholar] [CrossRef]
  39. Bridget Pratt, Deborah Zion, Khin Maung Lwin, Phaik Yeong Cheah, Francois Nosten, and Bebe Loff. “Linking international clinical research with stateless populations to justice in global health.” BMC Medical Ethics 15 (2014): 1–18. [Google Scholar] [CrossRef]
  40. Murray Dyck, and Gary Allen. “Is mandatory research ethics reviewing ethical?” Journal of Medical Ethics 39 (2012): 517–20. [Google Scholar] [CrossRef]
  41. Adrian Guta, Stephanie A. Nixon, and Michael G. Wilson. “Resisting the seduction of ‘ethics creep’: Using Foucault to surface complexity and contradiction in research ethics review.” Social Science & Medicine 98 (2013): 301–10. [Google Scholar] [CrossRef]
  42. Mark Israel. “Rolling back the bureaucracies of ethics review.” Journal of Medical Ethics 39 (2012): 525–26. [Google Scholar] [CrossRef]
  43. Jennifer Blair McCormick, Angie M. Boyce, Jennifer M. Ladd, and Mildred Cho. “Barriers to Considering Ethical and Societal Implications of Research: Perceptions of Life Scientists.” AJOB Primary Research 3 (2012): 40–50. [Google Scholar] [CrossRef]
  44. Kathleen Cranley Glass. “In Memoriam: Benjamin Freedman.” McGill Reporter. 24 April 1997. Available online: http://reporter-archive.mcgill.ca/Rep/r2915/freedman.html (accessed on 29 July 2014).
  45. In the United States, Institutional Review Boards are explicitly disallowed from considering broader social goals; see 45 CFR 46.111(a)(2): “...The IRB should not consider possible long-range effects of applying knowledge gained in the research (for example, the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility.”
  46. Ian Kerr. “The Repo Men Reductio: Body EULAs, Unfair Terms and Security of the Person.” Presentation delivered at the IP Scholars Workshop, University of Ottawa, Ottawa, Canada, 25 May 2012.
  47. I was unable to determine why the experiment used only women and not men as research subjects.
  48. Scott Drury, Scott A. Hutchens, Duane E. Shuttlesworth, and Carole L. White. “Philip G. Zimbardo on his Career and the Stanford Prison Experiment’s 40th Anniversary.” History of Psychology 15 (2012): 161–70. [Google Scholar] [CrossRef]
  49. Edward Diener, Scott C. Fraser, Arthur L. Beaman, and Roger T. Kelem. “Effects of Deindividuation Variables on Stealing among Halloween Trick-or-Treaters.” Journal of Personality and Social Psychology 33 (1976): 178–83. [Google Scholar] [CrossRef]
  50. Adrian Chen. “Unmasking Reddit’s Violentacrez, The Biggest Troll on the Web.” Gawker. 12 October 2012. Available online: http://gawker.com/5950981/unmasking-reddits-violentacrez-the-biggest-troll-on-the-web (accessed on 29 July 2014).
  51. David Fitzpatrick, and Drew Griffin. “Man Behind ‘Jailbait’ posts exposed, loses job.” CNN. 19 October 2012. Available online: http://www.cnn.com/2012/10/18/us/internet-troll-apology/index.html?hpt=hp_c1 (accessed on 29 July 2014).
  52. Warman v Fournier et al, 2010 ONSC 2126 (CanLII).
  53. Morris v Johnson, 2011 ONSC 3996 (CanLII).
