American Institutional Review Boards: Safeguards or Censorship?

By SRAI JRA posted 03-15-2018 12:00 AM

Volume XLIX, Number 1

Authors 

  • Kristi N. Hottenstein, Ph.D., University of Michigan-Flint

Introduction

In 2010, the United States government spent over $16.5 billion on human subject research conducted at institutions of higher education and other non-governmental institutions (AAUP, 2013). Over 55,000 human subject research projects were conducted by the 18 U.S. federal agencies that same year. By 2015, 98,352 clinical trials were being conducted in the United States, putting the U.S. at the forefront of biomedical research (U.S. National Institutes of Health, 2016). Human subject research in the United States is regulated through the Common Rule within the Code of Federal Regulations, a uniform set of rules for the protection of human subjects. The need to protect human subjects is crucial, but some argue that current government regulations are too stringent and impede legitimate research. Reviews among administrators and academicians are mixed, possibly as a result of a shift from individual to collective responsibility in the protection of human subjects, and this shift has left many within the research community feeling uneasy about IRBs.

The ethical treatment of human subjects dates as far back as 1760 B.C.E., when government penalties for medical error were acknowledged in Hammurabi’s Code (Sanders & Ballengee-Morris, 2008). The Hippocratic oath, required of physicians as early as the late 4th century B.C.E., states “to abstain from all intentional wrong-doing and harm, especially from abusing the bodies of man or woman, bond or free” (Sanders & Ballengee-Morris, 2008, p. 313). The Nuremberg Code, established in 1947, was a result of unethical medical experiments conducted on concentration camp prisoners in Germany and in German-occupied countries. The Nuremberg Code helped place human subject research on the U.S. governmental agenda (U.S. Department of Health and Human Services, 2016a). Although the code never became law in either Germany or the United States, it was the basis for the Code of Federal Regulations, Title 45, Part 46, which was adopted in 1991, over 40 years later (White, 2007). In 1953, the National Institutes of Health (NIH) opened its Clinical Center in response to the growth in clinical research following World War II (Bankert & Amdur, 2006). It was also during this time that social scientists experienced a funding boom in the United States, increasing the volume of social and behavioral research being conducted (Stark, 2007). Social and behavioral research’s use of human subjects gained attention in 1961 with Stanley Milgram’s study on obedience to authority (Stark, 2007). This well-known and highly debated experiment was the first outside the biomedical venue to receive such publicity. In 1966, the U.S. Surgeon General became involved in the regulation of human subject research by requiring reviews of studies receiving funding from the U.S. Public Health Service.

On July 26, 1972, the New York Times broke a story on one of the most highly publicized studies in history, the Tuskegee Syphilis experiment. This federally funded study, which ran for 40 years before the U.S. Department of Health, Education, and Welfare put an end to it, knowingly withheld information and treatment from syphilis patients. The story was high profile, intensely political, and quickly spread throughout the country. As a result, federal regulations on human subject research moved from the governmental agenda to the decision agenda. It took only two years for the National Research Act to be passed in 1974. The act required institutions to have diverse boards of at least five members to review federally funded research on human subjects, thus beginning the history of institutional review boards (IRBs) in the United States. Figure 1 presents an evolutionary timeline of human subject research regulations in the United States, and the following section examines the timeline through the lens of John Kingdon’s Multiple Streams Theory of policy creation.

Figure 1. An evolutionary timeline of human subject research regulations in the United States.

Kingdon’s Multiple Streams Theory

John Kingdon (2010), a scholar of American politics, describes the political structure of the United States as more fragmented than that of any other country. Policy proposals are developed according to their own incentives and selection criteria, whether or not they are solutions to problems or responsive to political considerations. Kingdon’s Multiple Streams Theory is based on the premise that three independently flowing streams of problems, policies, and politics may converge at any point to create a policy. Figure 2 below illustrates Kingdon’s (2010) Multiple Streams Theory.

Figure 2. A diagram representing Kingdon’s (2010) Multiple Streams Theory. 

In what Kingdon (2010) calls “policy primeval soup” (p. 35), ideas float around, bumping into one another, encountering new ideas, and forming combinations and re-combinations. If the right combination of problems, policies, and politics comes together, government takes action and regulations or laws are formed (Kingdon, 2010). In the subsections below, Kingdon’s streams are further defined and then used to analyze the creation of a policy window for human subject research regulations.

The three streams

Problem recognition occurs in the problem stream and is critical to agenda setting. Large-magnitude events such as disasters or crises catch officials’ attention and can draw it to some items more than others. Kingdon illustrates the difference between conditions and problems by noting how conditions become problems when they violate important values. He offers the following example: a lack of public transportation can be viewed as a transportation problem or as a civil rights problem, and the treatment of the subject varies greatly based on how the problem is classified.

Different policy versions develop within the policy stream. It is in this stream that ideas float around, change, re-form, combine, and await implementation.

The political stream refers to the willingness and ability of politicians or other actors to implement a policy change. Kingdon (2010) discusses how powerfully changes in administration or shifts in national mood shape agenda setting and policy making.

The policy window

The policy window refers to the convergence of the aforementioned streams. Applying Kingdon’s model to human subject research regulations, one could assert that the Tuskegee Syphilis experiment provided just such a window for federal regulations on human subject research. A condition (a lack of human subject research regulations) became a problem when it violated the rights of human subjects. Additionally, the magnitude of the media coverage and negative publicity forced politicians to focus their attention on human subject research regulations. At the time the Tuskegee experiment became public, some human subject research regulations had been implemented while others remained floating in the policy soup. Figure 3 below presents a modified version of Kingdon’s (2010) Multiple Streams Theory illustrating how his theory can inform the creation of human subject research policy following the Tuskegee Syphilis experiment.

Figure 3. A modified version of Kingdon’s (2010) Multiple Streams Theory illustrating how his theory can be used to inform the creation of human subject research public policy.

The National Research Act [1974] was codified five years before the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, a federal advisory committee on the topic, issued the Belmont Report, the first handbook for IRBs. That the act was passed and implemented prior to (rather than in conjunction with) the release of a supporting handbook for researchers and institutions lends support to the argument that IRBs were created in reaction to the policy window opened at the convergence of the three streams rather than implemented proactively. By 1986, the Applied Research Ethics National Association, a professional association for those working with IRBs, was formed to provide leadership and guidance to the IRB community, but by this time the umbrella of IRBs had already ballooned (Bankert & Amdur, 2006). IRBs were responsible for oversight of any research involving human subjects that was designed to produce generalizable knowledge. In 1991, 14 other federal departments and agencies joined the Department of Health and Human Services in adopting the Common Rule from the Code of Federal Regulations. The Common Rule is identical to Subpart A of 45 CFR Part 46 of the Health and Human Services regulations (U.S. Department of Health and Human Services, 2016a, para. 3). Throughout the following decade, many institutions of higher education adopted the Common Rule not only for federally funded research but for all research involving human subjects (White, 2007). By the turn of the 21st century, the National Bioethics Advisory Commission, established by executive order in 1995 to advise government entities on bioethical issues, had proposed even more guidelines, including the certification of IRB members. These regulations were not decided on but rather returned to the policy stream, a place White (2007) refers to as bureaucratic limbo, until another policy window opens.

The Common Rule has remained generally unchanged since 1991. This is surprising in some respects given increased legal pressure on IRBs. In the early 2000s, individual IRB members were named as defendants in the Robertson v. McGee lawsuit [2002] involving cancer research. Suit was also filed against individual IRB members over a lack of informed consent in Townsend v. University Hospital [2002], and the Kennedy Krieger Institute was named as a defendant in 2001 in Grimes v. Kennedy Krieger Institute, again over informed consent (Beasley, 2009). Arguably, advances in science, research, inquiry, and methodology warrant updates to the human subject research regulations governing them. While the Office for Human Research Protections has issued Advance Notices of Proposed Rulemaking (ANPRM), Draft Guidance for Institutional Review Boards, and other recommendations, changes have not been implemented. Amending these regulations appears to have proceeded in fits and starts over the last decade, but upon closer review, what looks like a spasmodic pattern of revision attempts may actually be an emerging pattern of discovery, policy review, and, finally, resistance. When the Department of Health and Human Services published an ANPRM in 2011 and solicited feedback, it received over 1,100 responses (AAUP, 2013), a clear indicator of the importance of IRB policy within the research community. Few will argue against the need for the protection of human subjects, but many believe a handful of isolated unethical practices has created a spiral of knee-jerk reactions resulting in a loss of academic freedom and a laundry list of other problems for researchers. Chadwick and Dunn (2000) sum up the last 50 years of IRB evolution by saying, “Like many highway projects, the IRB system was sound when it was designed, but became out-of-date and overloaded almost from the start” (p. 21).

The Office for Human Research Protections database now contains over 10,500 records of registered IRBs (U.S. Department of Health and Human Services, 2016b). Additional data from the Office for Human Research Protections indicate that the United States leads the world in the number of clinical trials. The commission report Moral Science: Protecting Participants in Human Subjects Research (2011), completed by the Presidential Commission for the Study of Bioethical Issues, further illuminates the national stage on which human subject research has been placed. While America appears to be at the forefront of assuring that policy is in place for the ethical treatment of human subjects, there remains a level of dissonance between the federal government, which creates and hands down the regulations, and the researchers who must abide by their parameters. The following sections highlight a number of critical issues surrounding the IRB debate in the United States.

Critical viewpoints

Research suggests a number of concerns regarding IRBs (AAUP, 2006; Bosk & De Vries, 2004; Feeley, 2007; Grady, 2010; Howard, 2006; Stark, 2007; White, 2007). Academic freedom, power, the appeal process, and terminology were specifically listed as concerns by the American Association of University Professors (AAUP, 2006). Social and behavioral sciences professors have noted concerns specific to their professional areas, including cookie-cutter biomedical and clinical best practices being imposed on social science research. American ethnographers have taken issue with the impact IRBs have had on their research, noting that IRB processes slow their research and impede interviews (Jaschik, 2008). Over 300 publications and presentations highlighting concerns with IRBs have appeared in the last 20 years (van den Hoonaard & Hamilton, 2016).

Academic freedom infringement

The literature suggests IRBs can infringe on academic freedom (American Association of University Professors, 2006; Feeley, 2007; Howard, 2006; Stark, 2007). Academic researchers postulate that IRBs infringe not only on academic freedom but on the very principle of freedom on which America was founded (Feeley, 2007; Stark, 2007). Safeguards for academic freedom evolved during the 1930s and were codified in the American Association of University Professors’ 1940 Statement of Principles on Academic Freedom and Tenure. The 1960s and 1970s brought federal regulations that protected the rights of human subjects. Ironically, there is contention over which of these two liberties should supersede the other.

The literature indicates problems with IRBs have only been widely discussed in the last few decades. Malcolm Feeley, in his 2006 presidential address to the Law and Society Association, sparked great debate over the topic by stating that IRBs “represent a failure of law” (p. 2). This address created momentum, especially in the humanities, among those opposed to IRB regulations. The Oral History Association, the primary membership organization for oral historians, voted that same year (2006) to endorse the AAUP’s report on academic freedom, specifically asking institutions to exempt outright IRB applications “whose methodology consists entirely of collecting data by survey, conducting interviews, or observing behavior in public areas” (Howard, 2006, p. 1).

To some, IRBs represent governmental sanctioning of research, which violates freedom of expression and thus the American Constitution (Feeley, 2007). Stark (2007) argued that the regulations aimed at protecting the rights of human subjects actually violate the rights of researchers. While the federal government issues no official license for IRBs, it does (via its agencies) create and hand down the regulations. The notion that researchers with IRB approval are “licensed” to do research, and those without approval are not, makes for a plausible argument that IRBs function as a governmental licensing agent.

Terminology

The subjectivity of regulatory terms used to define the scope of practice for institutional review boards has been scrutinized in the literature. One example is the term risk. Risk is a concept built on harm, yet another very subjective term. According to federal regulations, risk should be assessed based on whether the potential harm to participants is reasonable in relation to the benefits of the research (Hemmings, 2006). This subjectivity requires IRB members to form their own assessment of risk and then approve or deny based on their opinions. In its report Research on Human Subjects: Academic Freedom and the Institutional Review Board, the AAUP (2006) speaks out strongly against this, stating, “there could hardly be a more obvious potential threat to academic freedom” (p. 1). Even in the most well-thought-out cost-benefit calculations there is room for subjectivity. Many IRB members are not qualified to assess risk and often rely on a “no risk” line in the sand (White, 2007). Dingwall (2016) argues, “the risk to human subjects must be balanced against the wider societal benefits to sick people in the future” (p. 31), but the literature suggests that IRBs often struggle to find this balance, erring too heavily on the side of caution because of vaguely defined regulatory terms.

Inconsistency

With over 10,000 IRBs nationwide and a vague set of federal regulations, interpretation of the regulations is not consistent. In addition to the sheer number of IRBs, consistency problems are compounded by the rotation and appointment of IRB members. Feeley (2007) states that ongoing turnover leads to a lack of institutional memory and consistency. Yanow and Schwartz-Shea (2008) shed light on the inconsistencies of IRBs, revealing that discrepancies in approvals for identical research submitted at multiple site locations are not uncommon. O’Neill (2016) notes inconsistencies in IRB decisions resulting from differing assessments of risk. Again, this differing assessment of risk is likely a product of the subjectivity of the term itself. He also notes that inconsistencies exist in whom the board is protecting from risk. Some boards view risk only through the lens of risk to the participant, while other boards consider risk to others involved in the study, including observers, researchers, and even institutions.

Mission creep and self-sustainment

Early initiatives such as the Belmont Report were specific to federally funded research. Over the past several decades, regulations have been applied to non-federally funded research and have branched out into areas for which, some argue, they were not initially intended. This phenomenon has been referred to throughout the literature as mission creep or mission drift (Sullivan, 2011; Trimmer, 2016; van den Hoonaard & Hamilton, 2016; White, 2007). White (2007) defined mission drift as “the process of co-opting a successful and well-conceived process, then gradually and mindlessly expanding it until it is no longer capable of performing its original function” (p. 548). In their recent book, The Ethics Rupture: Exploring Alternatives to Formal Research-Ethics Review, van den Hoonaard and Hamilton (2016) state, “perhaps the most problematic and pervasively noted complaint about IRBs is their expansion of their scope” (p. 77). The mission of IRBs is to protect human subjects; the Code of Federal Regulations lists “protecting the rights and welfare of human subjects of research” as fundamental to assuring regulatory compliance (U.S. Department of Health and Human Services, 1999, p. 5). Yet Grady (2010) claims that over time IRBs have shifted to protect institutions as much as individual subjects. From a societal context, this aligns with the cultural shift from individual to collective responsibility over the past several decades, and it is certainly plausible that IRB mission drift is an effect of institutions, IRBs, and individual IRB members being named in lawsuits.

Feeley (2007) asserts that self-appointed protectors of ethics will gravitate toward such positions. He concludes that IRBs confirm role theory: when a censor role is created and someone is appointed to it, that person will most likely fulfill the role. Role theory is based on the premise that individuals act within the socially defined category (or role) they are fulfilling. Furthermore, colleges and universities have created positions, departments, and entire divisions dedicated to IRBs and human subject research. One explanation may be that mission creep is a product of IRB self-sustainment. It is also plausible that the subjectivity of regulatory terms and of the scope of IRB practice contributes to IRB mission creep.

Power

Institutional review boards are not advisory boards. They are often viewed as authoritarian in nature, working against, rather than in collaboration with, the researcher. The IRB dictates to the researcher what he or she can and cannot do, and there is often no appeal process; where a process exists, it affords the IRB ultimate decision authority. This puts all the power in the hands of the board. The arrangement is contrary to most shared governance or faculty governance structures with which academicians may be accustomed, further contributing to the dissonance between researchers and IRBs. Howard (2006) notes that, in practice, IRBs may expand their scope beyond federal regulations, setting mandates at their discretion. There is no outside monitoring or assessment to evaluate a board’s performance. This seems ironic, as a key function of the board is to monitor processes, yet no consistent set of checks and balances exists for the IRB itself. Brainard (2006) noted this problem as it relates to IRB members having conflicts of interest with the research being proposed to them. Specifically, IRB members did not disclose when they had financial conflicts with proposed research, and some did not fully understand the meaning of such conflicts. Other IRB members acknowledged voting on proposals where a conflict of interest existed, for example, relationships with companies sponsoring the research or with competing companies (Brainard, 2006).

Regulating outside the purview of biomedical models

IRBs face a number of criticisms, including being largely based on quantitative biomedical models. Some of the strongest critics of IRBs come from the social sciences and humanities, because they view their research, which is predominantly qualitative, as misunderstood by IRB members (Howard, 2006). Social scientists have reported instances where IRBs expect all research questions, questionnaires, and the like to be presented to the IRB up front. This is not always conducive to qualitative inquiry. Oral historians, for example, conduct interviews in which much of what is asked of subjects is based on their answers to previous questions. The nature of oral history research is interactive and open-ended. Because biomedical models prefer to see complete protocols, including set questionnaires, up front during the review process, research of this nature is often frowned upon or delayed (Jaschik, 2008). Qualitative researchers argue that, in many cases, their methodologies are outside the IRB’s scope of knowledge, yet still within its scope of practice. Furthermore, Dingwall (2016) argues there is “no historical evidence of the abuse of human subjects [in the social sciences] that is in any way comparable to that perpetuated by biomedical researchers in the last 150 years” (p. 27), yet social science research is still largely bound by the same regulations. This has been a particularly contentious topic among oral historians as it relates to privacy issues and informed consent. Historians report being asked to delete tapes and shred transcripts instead of archiving them for future use. Oral historians fought diligently to make their concerns heard, and as a result the Office for Human Research Protections (OHRP) granted most oral history research an exclusion status in 2003 (White, 2007).

This article has explored the creation of the American IRB system through the lens of John Kingdon’s Multiple Streams Theory and examined the critical viewpoints surrounding the discord between researchers and IRBs. The literature suggests these viewpoints are oftentimes problematically interwoven, creating a muddied implementation and practice of human subject research regulations for end users. The following section summarizes the article and makes recommendations for future research.

Conclusion

Institutional review boards were codified to protect human subjects, an ethical and noble concern, but arguably the regulations were hastened in response both to a highly publicized research experiment and to political considerations. The Tuskegee Syphilis experiment opened the public policy window for human subject research regulations almost 50 years ago, and researchers and lawmakers have been at odds ever since over whether IRBs are safeguards or censorship. Advances in science and technology certainly warrant updates to existing regulations, but there is clear apprehension within the academic community, stemming from uncertainty about how new or revised regulations will affect research practices. For many in the academic world of “publish or perish,” time is of the essence, and additional regulations have implications for research timelines. Furthermore, the reactionary manner in which human subject research regulations were originally put into practice created a significant learning curve for implementation, which remains a cause for concern among researchers. Whether these concerns rise to the level of censorship is still widely debated.

The literature on institutional review boards leaves a number of unanswered questions. Are the issues surrounding IRBs a result of individual IRBs, of the federal regulations, or of a lack of clarity and scope with regard to implementation? Sizable research gaps exist within these areas. Future research on these topics may help clarify, for IRBs and researchers alike, the scope of practice for IRBs going forward. A research gap also exists with regard to the training and accountability of IRBs. To this end, more formalized and systematic IRB training for researchers and IRB members alike is key to producing quality, ethical research. Appropriately aligned training for both parties is a proactive approach to ensuring a common understanding of human subject research regulations.

Literature on the effectiveness of IRBs leaves much to be desired. Despite a vast amount of research on, and opinions of, IRBs, little is actually known about their impact on the protection of human subjects. One suggestion would be to fill the research gap concerning the effectiveness of IRBs in preventing the unethical treatment of human research participants. Such a study would inform not only IRBs and researchers but policy makers as well.

Like it or not, it seems institutional review boards are a permanent part of higher education research. Positions, departments, and entire divisions governing human subject research now exist on college and university campuses. Moreover, if IRB mission creep continues as it has since the creation of IRBs, more disciplines and departments will find their research subject to IRB approval. If human subject research and IRBs must go hand in hand, additional studies to fill existing research gaps are critical to ensuring amicable relationships between IRBs and researchers.

References

American Association of University Professors [AAUP]. (2006). Research on human subjects: Academic freedom and the institutional review board. Retrieved April 29, 2017 from https://www.aaup.org/report/research-human-subjects-academic-freedom-and... 

American Association of University Professors [AAUP]. (2013). Regulation of research on human subjects: Academic freedom and the institutional review board. Retrieved February 2017 from https://www.aaup.org/file/IRB-Final-Report.pdf

Bankert, E. A., & Amdur, R. J. (2006). Institutional review board management and function (2nd ed.). Sudbury, MA: Jones and Bartlett Publishers.

Beasley, D. C. (2009). Coupling responsibility with liability: Why institutional review board liability is good public policy. Northern Kentucky Law Review, 36(1), 46-65.

Bosk, C. L., & De Vries, R. G. (2004, September). Bureaucracies of mass deception: Institutional review boards and the ethics of ethnographic research. The Annals of the American Academy of Political and Social Science, 595(1), 249-263. https://doi.org/10.1177/0002716204266913

Brainard, J. (2006, December 8). Study finds conflicts of interest in many research-review boards. The Chronicle of Higher Education, p. A22.

Chadwick, G. L., & Dunn, C. M. (2000). Institutional review boards: Changing with the times? Journal of Public Health Management and Practice, 6(6), 19-27. doi:10.1097/00124784-200006060-00005

Dingwall, R. (2016). The social costs of ethics regulation research. In W. C. van den Hoonaard & A. Hamilton (Eds.), The ethics rupture: Exploring alternatives to formal research-ethics review (pp. 25-42). Toronto: University of Toronto Press.

Feeley, M. (2007). Legality, social research, and the challenge of institutional review boards. Law & Society Review, 41(4), 757-776. https://doi.org/10.1111/j.1540-5893.2007.00322.x

Grady, C. (2010). Do IRBs protect human research participants? JAMA, 304(10), 1122- 1123. doi:10.1001/jama.2010.1304

Hemmings, A. (2006). Great ethical divides: Bridging the gap between institutional review boards and researchers. Educational Researcher, 35(4), 12-18. https://doi.org/10.3102/0013189X035004012

Howard, J. (2006, November 10). Oral history under review. The Chronicle of Higher Education, pp. A14-A17.

Jaschik, S. (2008). Threat seen to oral history. Inside Higher Ed. Retrieved January 3, 2017 from https://www.insidehighered.com/news/2008/01/03/history

Kingdon, J. W. (2010). Agendas, alternatives, and public policies (Updated 2nd ed.). New York, NY: Longman.

O’Neill, P. (2016). Assessing risk in psychological research. In W. C. van den Hoonaard & A. Hamilton (Eds.), The ethics rupture: Exploring alternatives to formal research-ethics review (pp. 119-132). Toronto: University of Toronto Press.

Presidential Commission for the Study of Bioethical Issues. (2011, December). Moral science: Protecting participants in human subjects research. Retrieved February 2016 from http://bioethics.gov/sites/default/files/Moral%20Science%20June%202012.pdf

Sanders, J. H., & Ballengee-Morris, C. (2008). Troubling the IRB: Institutional review boards’ impact on art educators conducting social science research involving human subjects. Studies in Art Education, 49(4), 311-327. https://doi.org/10.1080/00393541.2008.11518744

Stark, L. (2007). Victims in our own minds? IRBs in myth and practice. Law & Society Review, 41(4), 777-786. https://doi.org/10.1111/j.1540-5893.2007.00323.x

Sullivan, G. (2011). Education research and human subject protection: Crossing the IRB quagmire. Journal of Graduate Medical Education, 3(1), 1-4. https://doi.org/10.4300/JGME-D-11-00004.1 

Trimmer, K. (2016). Political pressure on educational and social research. New York, NY: Routledge.

U.S. Department of Health and Human Services. (1999). Code of Federal Regulations. Retrieved March 28, 2017 from http://www.hhs.gov/opa/grants-and-funding/grant-forms-and-references/45-...

U.S. Department of Health and Human Services. (2016a). Information on protection of human subjects in research funded or regulated by U.S. government. Retrieved February 2017 from https://www.hhs.gov/1946inoculationstudy/protection

U.S. Department of Health and Human Services. (2016b). Office for Human Research Protections database. Retrieved November 15, 2017 from https://ohrp.cit.nih.gov/search/irbsearch.aspx

U.S. National Institutes of Health. (2016). Clinicaltrials.gov. Retrieved January 30, 2017 from https://www.clinicaltrials.gov/

Van den Hoonaard, W., & Hamilton, A. (Eds.). (2016). The ethics rupture: Exploring alternatives to formal research-ethics review. Toronto: University of Toronto Press.

White, R. F. (2007). Institutional review board mission creep: The common rule, social science, and the nanny state. Independent Review, 11(4), 547-564. Retrieved from http://www.jstor.org/stable/24562415

Yanow, D., & Schwartz-Shea, P. (2008). Reforming institutional review board policy: Issues in implementation and field research. PS: Political Science and Politics, 41(3), 483-494. https://doi.org/10.1017/S1049096508080864

Keywords

Institutional review board, federal regulations, public policy, human subject research, academic freedom
