Volume LVII, Number 1
Jennifer E. Taylor, Ph.D., MBA
Rush MD Anderson Cancer Center
Rush University Medical Center
Abstract
Clinical trials are essential for continuously improving patient care. However, managing clinical trials remains complex and lacks consistent support. Using a review method similar to a scoping study of recent literature, yet going beyond standard search parameters to include published papers, reports, and recommendations from select national groups, this article offers an overview and synthesis of current knowledge from multiple sources. Its goal is to identify critical challenges and promising opportunities shaping the landscape of clinical trial management. A core aim is to uncover strategic themes and evidence-based recommendations that could motivate further research in this understudied area. The article also seeks to help research administrators understand the root causes and nature of the barriers they face, and to provide guidance on resources and practices that may help them address the core issues discussed here: administrative burden, delays, and, at times, failures in clinical trial management.
Keywords: clinical trials, research administration, academic medical centers, workforce development, IRB, regulatory reform, CTSA, operational efficiency
Introduction
Clinical trials are essential building blocks for the development of evidence-based medicine (Institute of Medicine [IOM], 2010; National Academies of Sciences, Engineering, and Medicine [NASEM], 2021). They are a critical step that helps move us from fundamental scientific discoveries to applied, phased clinical trials and, if successful, to informing the enhancement of what becomes state-of-the-art care (NASEM, 2021; Sherman et al., 2016; Tannock et al., 2016). Unfortunately, research administration and clinical trial management are often under-resourced while facing multiple organizational and systemic barriers. Research administrators frequently confront increasing and sometimes discrepant regulatory requirements, inconsistent and confusing financial policies, and a rapidly changing scientific and technological environment (Buntin et al., 2024; Dorsey & Topol, 2020). Despite the importance of these administrative teams for advancing healthcare practice, the work has received relatively little attention in the scholarly literature (Schneider et al., 2022).
The administrative frameworks and resources supporting trials may be as important to their success as scientific rigor and clinical endpoints (Wang et al., 2025). Research administrators handle a wide range of responsibilities, including regulatory compliance, budgeting and contracting, staff training, data management, and coordination across departments, sponsors, and external partners. The outcome of a clinical trial often depends on the efficiency and adaptability of these administrators in addressing these responsibilities (Getz et al., 2020; Buntin et al., 2024; Wang et al., 2025).
Over the past decade, national efforts have sought to improve the efficiency and effectiveness of the institutional research infrastructure and policy environment for managing clinical trials. The National Institutes of Health created the Clinical and Translational Science Awards (CTSA) program (Gordon et al., 2017), in part, to streamline processes for translating knowledge from fundamental research into patient care (National Center for Advancing Translational Sciences [NCATS], 2021a, 2023). A national survey of physicians conducted by the Association of American Medical Colleges (AAMC) has examined the capacity and limitations of clinical research offices (Dandar et al., 2025).
Despite the efforts of these initiatives, recent studies and reports continue to identify systemic inefficiencies, significant variation in the adoption of best practices, and personnel issues, including shortages and frequent turnover (Dandar et al., 2025; Flynn et al., 2013; Sampson et al., 2022; Schneider et al., 2022).
This manuscript provides an introductory review and synthesis of the challenges and potential opportunities facing research administrators in academic medical centers (AMCs) and their collaborators in the current clinical trials landscape. It also establishes a context for the rest of this Special Issue and serves as a foundation for future development of the knowledge base on effective research administration for clinical trials. Specifically, it asks: (1) What structural and procedural barriers hinder efficient, ethical trial management at AMCs and related sites? (2) What organizational opportunities and policy reforms can enhance administrative practices?
In the sections below, we first outline our approach to the reviewed literature, then discuss some of the most common barriers to effective clinical trial management and their specific areas of impact. Next, we highlight lessons from recent research on opportunities and innovations that have shown promise for overcoming these challenges and improving overall clinical trial management. We then provide an integrated discussion of common issues and strategies for reform. Finally, we offer recommendations for administrators, institutional leaders, and national stakeholders seeking to improve the effectiveness, quality, and sustainability of clinical trial research infrastructures.
To answer these questions, as detailed further below, we synthesize findings from quantitative and qualitative empirical studies. We also consider the work of relevant national professional organizations, federal partnerships, and related national initiatives. To enrich the discussion, we draw on published interviews and works that provide "voices of the field" contributions, incorporating key insights they offer to help contextualize the overall findings (AAMC, 2023; Fifield et al., 2026; Flynn et al., 2013). The goal is not only to describe current challenges but also to highlight proven and emerging solutions, particularly those with potential for successful scaling. The manuscript concludes with a summary of the current state of the literature and recommendations for future research and action.
Methods
This manuscript is not intended to be a systematic review. Much of the work is guided by the principles outlined in the PRISMA-ScR checklist for scoping reviews (Tricco et al., 2018). However, our search for materials extended beyond what a typical scoping review would include. Because of the limited research specifically focused on the work of research administrators in clinical trials, we adopted a "big tent" approach. For example, in addition to materials identified through our database search parameters, we also reviewed reports, materials, and recommendations from key organizations that have provided national leadership in conducting and applying research related to clinical trials (AAMC, 2022a, 2022b, 2023; Dandar et al., 2025; NCATS, 2021a, 2023; Sonstein et al., 2024). While we believe those included are the most important, there may be others we are unaware of. Nonetheless, given the main goals of this work, which are to provide an overview of the current state and key issues and to guide more systematic reviews and empirical research on research administrators' roles in leading and managing clinical trials, our methods and processes are similar to, yet more comprehensive than, those of a scoping review (Arksey & O'Malley, 2005; Levac et al., 2010).
Search Strategy and Inclusion/Exclusion Criteria. The search strategy examined databases focused primarily on medical and organizational science. These included PubMed, Scopus, Web of Science, and ProQuest. Search terms used in Boolean search strings included: clinical trial administration, academic medical center, research management, barriers, opportunities, CTSA, and clinical research office (Zarin et al., 2011; Sehgal et al., 2025).
Additionally, we searched for materials on the websites of several highly specialized organizations that focus on or are central to issues related to clinical trials. The organizations included the Association of American Medical Colleges (AAMC), the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (Office of Extramural Research, n.d.), and the Clinical Trials Transformation Initiative (AAMC, 2022a, 2022b, 2023; NCATS, 2021a, 2021b, 2023; Sonstein et al., 2024).
For practical reasons and current relevance, the scope of the work focused on the most recent information, policies, and practices in the USA. Therefore, the search was limited to English-language publications from 2017 to 2025, except for those from clinical trial-related organizations, where the date range extended to approximately ten years. In the future, it may be helpful to compare and contrast concerns about clinical trials internationally or to examine them historically; such considerations were beyond the scope of this initial effort.
The focus was on examining administrative or operational aspects of clinical trials in AMCs or comparable institutions, including quantitative or qualitative data or papers that provided interview data on the experiences of research administrators involved in clinical trials (Buntin et al., 2024; Dombeck et al., 2024; McCausland et al., 2024). We excluded opinion pieces unless they included formal research elements or voices from the field, such as systematic interviews with research administration professionals. Forty-eight peer-reviewed studies and 14 policy or professional documents met the full set of inclusion criteria.
Barriers to Clinical Trial Administration
The administration of clinical trials in AMCs and affiliated settings requires research administrators to navigate a wide range of systemic, organizational, and technical obstacles. These barriers often stem from long-standing institutional processes that create misalignments between the essential work of clinical trials and administrative capacity. Drawing on national reports, empirical research, and the voices of research administrators, we have identified six key areas that have proven to be significant sources of challenge for research administrators. These include regulatory complexity, financial and contracting inefficiencies, and workforce instability. While we have grouped critical practices and policies under separate headings, it is important for research professionals working on clinical trials, or new to the field, to recognize that there is considerable overlap and interaction among these issues across the categories used in this manuscript.
Regulatory and IRB Complexity. One of the most widely noted and persistent barriers, identified by both researchers and administrators, is the complexity of regulatory oversight (NASEM, 2016; Green et al., 2023). Institutional Review Board (IRB) processes often cause frustration, especially in multisite studies where single IRB (sIRB) mandates have not led to consistent efficiency gains (Klitzman, 2015; Menikoff et al., 2017). The National Institutes of Health policy requiring the use of sIRBs for federally funded multisite research (Gordon et al., 2017) sought to simplify approval processes. Despite this, many institutions continue to face issues with duplicate local reviews, inconsistent reliance agreements, and risk-averse interpretations of policy (Daudelin et al., 2020).
Illustratively, Corneli et al. (2021), after interviewing 34 key stakeholders in the sIRB and local IRB process, note: “About half of the stakeholders, across all stakeholder groups, remarked on the variability of sIRB review processes among different IRBs, primarily the difficulty of managing inconsistencies and the lack of policies and procedures across institutions….” (p. 31).
Other regulatory aspects of clinical trial administration may impose additional burdens on administrators and investigators. These may stem from oversight requirements regarding data privacy, biosafety, human subjects’ protections, and mandatory registration and disclosure protocols (NASEM, 2016). Each of these domains imposes distinct, and at times conflicting, administrative requirements that must be reconciled during trial initiation and conduct (Corneli et al., 2021).
Issues of data privacy and security compliance, for example, require strict adherence to the Health Insurance Portability and Accountability Act (HIPAA) and to the technical requirements under the U.S. Food and Drug Administration’s 21 CFR Part 11 (1997) for the use of electronic records and signatures (Epic Systems, n.d.; U.S. Food and Drug Administration, 1997, 2016). Trial documentation must include appropriate validation, audit trails, and role-based access controls (U.S. Food and Drug Administration, 2025). Failure to comply can jeopardize subject confidentiality and trial integrity. Nonetheless, many sites struggle to implement centralized compliance frameworks because of decentralized IT infrastructure and inconsistent interpretations of how privacy laws intersect with research workflows.
Recently, the FDA’s Decentralized Clinical Trials Guidance (2024) emphasized evolving practices for remote monitoring and electronic documentation as off-site work increased. Similarly, the International Council for Harmonisation’s Good Clinical Practice E6(R3) guideline (ICH, 2025) aimed to reduce the administrative burdens of compliance by adopting what it called a quality-by-design approach. This recommendation seeks to decrease administrative load by moving away from a checklist culture and instead embedding quality and compliance into trial design from the start (U.S. Food and Drug Administration, 2025).
The Federal Demonstration Partnership (FDP) has worked over recent decades to reduce administrative burdens and provide research administrators with helpful tools, including standardized templates (Federal Demonstration Partnership, 2021). Each of these recent reforms aims to eliminate redundancy and accelerate processes. Unfortunately, implementing each element of these initiatives, separately and together, has also added new responsibilities for already overburdened administrative professionals. Additionally, many institutions resist adopting strategies that are not “home grown.” They may either reject the tools and recommendations from these initiatives or require significant adaptation before adopting the suggested solutions to address compliance concerns.
Adding another layer of complexity, often new to research administrators who have not worked in healthcare settings, are biosafety protocols. These protocols and associated protections must be in place for studies involving recombinant DNA, gene therapies, infectious agents, or hazardous biologics. Investigational protocols that involve even minimal-risk biological agents may require review by institutional biosafety committees (IBCs), environmental health offices, or infectious disease subcommittees, each with its own timelines and documentation requirements (Corneli et al., 2021; Gordon et al., 2017). These reviews often occur separately from IRB review, leading to misaligned activation timelines and redundant information requests that frustrate investigators and delay enrollment. For multisite trials, these challenges may amplify when, for example, biosafety standards are interpreted differently across institutions or when local biosafety infrastructure is underdeveloped.
Human subjects’ protections are also often shaped by local policies that may extend beyond federal baseline standards. Institutions may require additional protections for vulnerable groups (e.g., non-English speakers, minors, pregnant individuals), including tailored processes, translated materials, and other specialized features (Kraft et al., 2018). These local, at times idiosyncratic, requirements create significant variability in procedures. In turn, they lead to increased training burdens, especially for multisite trials, those enrolling diverse populations (Kraft et al., 2018), or those using stratified recruitment methods (Menikoff et al., 2017; Corneli et al., 2021). Additionally, the rise of digital recruitment platforms and remote consent methods has raised new ethical issues related to coercion, accessibility, and digital literacy (U.S. FDA, 2016). These concerns remain unresolved in federal guidance (AAMC, 2023).
One of the most critical sets of issues affecting the success of all clinical trials concerns trial registration and results disclosure obligations, which are mandated by the National Institutes of Health, other federal agencies, and the International Committee of Medical Journal Editors (ICMJE). These requirements introduce administrative tasks that can often lead to competing perspectives among central research offices and specialized therapeutic hubs or departmental offices. The work is multi-layered and complex. Requirements include prospective registration of trials on ClinicalTrials.gov (Zarin et al., 2011; National Library of Medicine, n.d.). In most cases, there is a 12-month window to post results after primary completion. Research teams must also align trial identifiers across multiple federal and private platforms to stay in compliance. Managing these steps may place significant administrative burdens on study teams already stretched thin by other responsibilities (Gribben et al., 2020; Green et al., 2023). Failures or mistakes in registration or delayed results reporting can result in public noncompliance notices, funder sanctions, or reputational damage for the institution.
Taken together, these overlapping layers of oversight often operate in silos. Each may have its own documentation, review timeline, and risk interpretation. The combination of these tasks can create a cumulative burden that delays trial activation and complicates compliance. Research administrators are often the first line of defense in managing these interdependencies. However, they frequently lack the institutional authority, staff capacity, and system-level visibility needed to resolve conflicts efficiently (Snyder et al., 2016; Mullen et al., 2023; Harris et al., 2025).
As the clinical trial enterprise continues to grow in complexity, there have been increasing calls for unified regulatory operations offices, complemented by department-based offices with the specialized expertise to move trial activation forward more quickly than generalists can. Harmonized review protocols, along with federal guidance that better aligns privacy, safety, ethics, and transparency requirements within a coherent administrative framework applied consistently by central and unit/departmental research administrators, can also increase effectiveness, reduce barriers to rapid trial initiation, and improve processes and patient care (Lauer & Gordon, 2016; Snyder et al., 2016; Harris et al., 2025).
Opportunities and Innovations in Clinical Trial Administration
Despite the complexity of the barriers facing clinical trials in academic medical centers (AMCs), this ecosystem also includes numerous examples of reform, innovation, and institutional learning. Clinical research offices across the United States have responded to longstanding inefficiencies with creative models that seek not merely to overcome operational bottlenecks but to reimagine the role of research administration as a driver of institutional excellence (Harris et al., 2025; Watters et al., 2018).
This section highlights four key opportunity areas, each supported by empirical evidence or case-based validation, and together representing a strategic, systems-level approach to reducing the administrative burdens of clinical trials. These areas are: (1) streamlining support structures, (2) improving digital infrastructure and workflow tools, (3) advancing workforce professionalization, and (4) establishing metrics and leadership for continuous improvement.
Centralized, Integrated, and Distributed Hub Trial Support Models. Over the past decade, a significant trend in research administration has been the merging of scattered clinical trial functions into integrated structures, often managed at the level of specialty hospitals or centers (Lee et al., 2021; Carson & Bennett, 2021; Harris et al., 2025; Wang et al., 2025). These specialty hospital research and disease-specific clinical trial hubs provide contracting, budgeting, regulatory support, and data coordination for clinical trials relevant to their disease focus, with support from designated staff in institution-wide units (e.g., HR, finance, compliance).
Distributed hubs, such as these specialized clinical trials offices (CTOs), represent a structural reengineering of the clinical trial enterprise to provide the appropriate expertise, oversight, and accountability for the specific disease areas they serve. They thereby reduce the complexity and delays that may arise during phases such as start-up and activation (Abu-Shaheen et al., 2020; Hillery et al., 2025) and when disparate, far-flung staff are involved (Hoyo et al., 2024; Green et al., 2023).
System-wide implementation of streamlining efforts (Mullen et al., 2023) and integrated digital platforms, such as Clinical Trial Management Systems (CTMS), centralized eBinders, participant recruitment dashboards, and shared calendaring systems, enable CTO teams to monitor status, trigger reminders, and pre-populate compliance documents across trials (Sampson et al., 2022).
Institutions such as the University of Michigan, the University of Pennsylvania, and Oregon Health & Science University have embedded such platforms within their CTOs, aligning them with IRB platforms and electronic health records to create seamless workflows (Epic Systems, n.d.; Sampson et al., 2022). A study of CTSA-funded institutions found that streamlined organizational structures reduced activation times by a third or more (Watters et al., 2018).
Reflecting these refinements, leading institutions have adopted hybrid models that balance centralized functions such as HR and digital tools with specialized CTO hubs to manage complex disease-specific trials (CTSA Coordination, Communication, and Operations Support Center [CCOS], 2022; Lee et al., 2021; Carson & Bennett, 2021; Harris et al., 2025; Wang et al., 2025).
Digital Tools and Infrastructure Upgrades. Digital transformation efforts increasingly guide and support modern clinical trial management, reducing administrative burdens, improving data quality, and enabling more flexible trial operations (AAMC, 2022a, 2022b; Menikoff et al., 2017). Clinical Trial Management Systems (CTMS), such as OnCore and Forte (n.d.), serve as the operational backbone, supporting centralized tracking of trial milestones, budgets, IRB status, invoicing, and enrollment (Sampson et al., 2022).
When integrated with REDCap (Harris et al., 2009), eRegulatory systems, and EHRs, a CTMS creates an interoperable ecosystem that improves efficiency throughout the trial lifecycle (RealTime eClinical Solutions, n.d.; Sehgal et al., 2025). An AAMC (2022c) benchmarking report found that institutions with EHR-CTMS interoperability reduced recruitment lag by 25–40%, particularly in large multisite trials.
eConsent platforms such as REDCap (Harris et al., 2009) eConsent, DocuSign (n.d.), and Medidata Rave (Medidata Solutions, Inc., 2023) have improved participant onboarding through real-time version control, multilingual access, and remote consent (AAMC, 2022a, 2022b; U.S. FDA, 2016). Remote source data verification (rSDV) and risk-based monitoring (RBM) tools such as Medrio (n.d.), Veeva Vault (Veeva Systems, n.d.), and Florence eBinders (Florence Healthcare, n.d.) provide audit trails and regulatory-compliant document sharing (Ohmann et al., 2017; Watters et al., 2018).
However, fragmented IT governance across AMCs hampers platform adoption. In a CTSA IT core survey, 43% of institutions reported poor coordination between research and IT leadership (Mayo-Wilson et al., 2018).
In summary, digital infrastructure, when supported by coordinated implementation and governance, is critical to enabling the efficient, compliant, and equitable conduct of trials.
Workforce Development, Career Pathways, and Professional Recognition. As discussed, staff turnover, workplace dissatisfaction, and perceived limits on advancement and recognition are significant barriers to conducting a high-quality clinical trials program. Studies have shown that the clinical trial workforce, especially research coordinators, regulatory specialists, and data managers, has historically been undervalued, underfunded, and overlooked in strategic planning (Knapke, Snyder, et al., 2022; Knapke, Jenkerson, et al., 2022; Mullen et al., 2023). Trials may be delayed or under-enrolled due to coordinator attrition; compliance risks increase with untrained or overburdened staff; and sponsor satisfaction drops when site operations underperform.
To address these concerns, national organizations and others have begun developing initiatives to professionalize the clinical trial workforce (Musshafen et al., 2021; Office of Extramural Research, n.d.). For example, the Joint Task Force for Clinical Trial Competency (JTF-CTC) has proposed a standardized, competency-based training framework. It identifies eight core clinical research skill domains that are essential for research coordinators and other clinical trial staff. These domains include: (1) scientific concepts and research design, (2) ethical and participant safety considerations, (3) investigational product development and regulation, (4) clinical trial operations (GCPs), (5) study and site management, (6) data management and informatics, (7) leadership and professionalism, and (8) communication and teamwork (Sonstein et al., 2014, 2024; NCATS, 2021a, 2021b, 2023).
Some AMCs have moved beyond the recommendation stage to develop staffing pipelines that address the challenge of recruiting qualified personnel (Musshafen et al., 2021). They have established academies for clinical research staff, offering certification programs, modular training, peer mentorship, and pathways to national certification (e.g., CCRC, CCRP). In some cases, these specialized training efforts are integrated into larger, more established programs, such as nursing, where clinical research nurses are trained. Others have established centralized research workforce offices or developed "coordinator pools" that enable staff to be shared across studies, helping to fill staffing gaps (Tsevat & Smyth, 2020). However, such initiatives are often the exception rather than the norm. In many settings, these efforts remain grant-funded pilots without long-term institutional budget commitments or cross-departmental coordination.
The task force recommends aligning training programs, job descriptions, and performance evaluations with this model. Such alignment can clarify expectations, foster professional identity, and improve cross-role coordination (Sonstein et al., 2014, 2024). Many CTSA-affiliated institutions have begun offering onboarding curricula and continuing education aligned with these domains.
Beyond training, forward-looking AMCs are implementing career ladder opportunities, such as classifying research staff into tiers (e.g., Clinical Research Coordinator I, II, III). Proposed advancement criteria include competencies, tenure, and leadership contributions. Importantly, to have the desired impact on retention and staff satisfaction, institutions must pair these ladders with competitive salary bands, internal recognition programs, and access to leadership development.
At a major CTSA hub, implementing a tiered coordinator ladder, along with mentorship and recognition programs, led to a 40% decrease in turnover over two years, improved trial continuity, and higher satisfaction scores among principal investigators (Stroo et al., 2020). The hub also created a "research academy" offering internal credentialing and structured professional development modules, combined with recognition through annual awards linked to performance and innovation.
National-level recommendations have called for shared job taxonomies, standardized core competencies, and coordinated funding models to support a sustainable clinical research workforce (NCATS, 2021a, 2023; AAMC, 2022a, 2022b, 2022c; Sonstein et al., 2024). These recommendations are especially urgent given the increasing complexity of trial protocols, the demands of decentralized and hybrid trial designs, and the need for greater inclusivity. Studies of these efforts can help us better define and shape the most effective approaches in this area.
Institutional-Level Funding and Sustainability. Professionalization efforts are unlikely to succeed if they rely solely on soft-money positions funded by individual grants. Soft-money positions can create job insecurity, disrupt trial continuity, and make long-term workforce planning nearly impossible. To address these issues, several promising approaches are being explored.
Some AMCs are piloting institutionally pooled staffing models in which core coordinators and regulatory FTEs receive funding from central infrastructure funds, supplemented by effort allocations from grants and contracts. Others are using NCATS or NIH administrative supplements to provide bridge funding for coordinator roles during trial transitions (NCATS, 2021a; Knapke, Snyder, et al., 2022; Knapke, Jenkerson, et al., 2022).
Sustainable models also require institutional workforce planning and forecasting tools that enable institutions to monitor coordinator workloads, balance staffing across departments, and anticipate hiring needs tied to projected trial volume. Knapke, Snyder, et al. (2022) note, “As a benchmark example of proactively addressing CRP staffing issues, Duke University invested resources into establishing multiple CTR job descriptions and clearly defining job titles and levels of progression through a competency-based, tiered approach…. Such innovations decreased the overall turnover rate among CRPs locally” (p. 2).
Additionally, building a workforce with a broad array of backgrounds and language proficiencies is becoming central to strategies for recruiting trial participants with similar characteristics.
Metrics, Continuous Improvement, and Leadership Models. While advances in infrastructure, workforce development, and digital tools are vital to modernizing clinical trial management, the long-term success of these initiatives also depends on institutional capacity for measurement, accountability, and strategic leadership. Without systems to assess performance and adapt in real time, even well-funded innovations can struggle. Therefore, the fourth area of administrative reform underscores the need to develop robust metrics, ongoing improvement frameworks, and forward-looking leadership models that can guide and support continuous improvement efforts across the research enterprise (Daudelin et al., 2019; Rubio, 2013).
From Output Metrics to Operational Intelligence. Traditionally, clinical trial performance at AMCs has been tracked through a narrow lens, most commonly focusing on enrollment targets, the number of trials activated, or total research funding. While these metrics are important, they overlook key aspects of operational health, including time from protocol submission to activation, regulatory turnaround time, coordinator/staff workload, participant diversity (Fisher & Kalbaugh, 2011; George et al., 2014; National Cancer Institute [NCI], 2020), protocol deviations, and the percentage of trials meeting enrollment benchmarks, among others. Failure to track these and similarly important markers of the efficacy, efficiency, and resource adequacy of the process, as well as progress through key trial milestones, limits strategic decision-making. A lack of such data obscures barriers that affect trial quality and efficiency (Daudelin et al., 2019, 2020; Mayo-Wilson et al., 2018; Rubio, 2013).
To address this gap, an increasing number of institutions are developing real-time data dashboards and performance monitoring systems. These tools integrate data from Clinical Trial Management Systems (CTMS), institutional review boards (IRBs), contracting offices, and human resources to provide multidimensional views of trial performance. These metrics and dashboards empower clinical research leaders to diagnose systemic bottlenecks, target improvement efforts, and benchmark progress over time (Daudelin et al., 2019, 2020; Krumholz, 2014). Critically, once research administrators have clear data and metrics, they are far better positioned to identify and address barriers.
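As a hypothetical illustration (the field names are assumptions, not any specific CTMS schema), a dashboard's headline submission-to-activation metric reduces to a simple calculation over trial records:

```python
# Hypothetical sketch of a dashboard metric: median days from protocol
# submission to activation. Records and field names are illustrative only.
from datetime import date
from statistics import median

trials = [
    {"submitted": date(2024, 1, 10), "activated": date(2024, 6, 3)},
    {"submitted": date(2024, 2, 1),  "activated": date(2024, 5, 20)},
    {"submitted": date(2024, 3, 15), "activated": date(2024, 9, 1)},
]

days_to_activation = [(t["activated"] - t["submitted"]).days for t in trials]
print(median(days_to_activation))  # → 145
```

The analytic step is trivial; the hard institutional work, as the text notes, is integrating the underlying data streams from CTMS, IRB, and contracting systems so that such numbers are available in real time.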
Notably, several CTSA hubs have moved beyond internal dashboards to participate in national metric-sharing collaboratives, enabling cross-institutional benchmarking. This level of transparency fosters shared learning, contextualizes performance, and supports resource-allocation arguments during internal planning or NIH review.
Continuous Improvement and Feedback Loops. In parallel with data collection, leading institutions are embedding quality improvement (QI) cycles into research operations, often borrowing techniques from other areas of business and industry as well as healthcare delivery. These include widely used approaches such as root cause analyses, particularly of delayed activations or high-deviation trials; Lean/Six Sigma principles for workflow redesign and implementation fidelity; and rapid-cycle process improvement (RCPI) methods using Plan-Do-Study-Act (PDSA) models. Others have implemented cross-functional "tiger teams" to address urgent pain points in contracting, budgeting, or onboarding (Gordon et al., 2017; Sampson et al., 2022). Each of these structured approaches has the potential to help move clinical trial administration from a reactive, compliance-driven model to a proactive, learning-oriented system. For instance, one CTSA-affiliated AMC implemented a PDSA cycle to address trial activation delays and reduced the median activation time from 172 to 96 days within one year (Frontiers Clinical and Translational Science Institute, 2018).
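A minimal sketch of the "Study" step in such a PDSA cycle, using entirely illustrative numbers, shows how a cycle's median activation time might be compared against baseline and target:

```python
# Hypothetical sketch of the "Study" step of a PDSA cycle: compare a
# cycle's median activation time against baseline and target values.
# All input values below are illustrative, not institutional data.
from statistics import median

def pdsa_study(baseline_days, cycle_days, target_days):
    """Summarize whether the intervention moved the metric toward its target."""
    before, after = median(baseline_days), median(cycle_days)
    return {
        "baseline_median": before,
        "cycle_median": after,
        "improved": after < before,
        "target_met": after <= target_days,
    }

result = pdsa_study(baseline_days=[150, 172, 190],
                    cycle_days=[88, 96, 110],
                    target_days=100)
print(result["cycle_median"], result["target_met"])  # → 96 True
```

In practice the "Act" step would then decide whether to standardize, adjust, or abandon the change before the next cycle; the value of the structure is that each decision is tied to a measured outcome.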
Importantly, these efforts required multidisciplinary participation, including legal, IRB, clinical departments, and research administration.
Professionalized Leadership Structures. Finally, the evolution of leadership roles in clinical research administration is a key component of systemic reform. In the past, many responsibilities for trial oversight, compliance, and performance management fell to individual principal investigators (PIs), who often lacked the bandwidth or institutional perspective to address enterprise-wide challenges. Recognizing this, a growing number of institutions have created dedicated leadership roles, such as "Directors of Clinical Trial Operations," "Chief Research Administrative Officers," and "Associate Deans for Clinical Research Infrastructure" (AAMC, 2022c).
Individuals in these roles can leverage systems-level thinking, administrative expertise, and authority to drive reforms and operational improvements, often more effectively than diffuse, centrally managed efforts can. They often oversee metrics and data systems, lead quality improvement initiatives, liaise with central administration, and represent the institution in national collaboratives. Importantly, when employing the organizational improvement processes noted above, professional leadership is not limited to senior executive roles. Empowering mid-level administrators, such as study startup leads or regulatory unit heads, with ownership of performance targets and tools is equally essential for distributed accountability.
The Role of the CTSA Consortium. We acknowledge that, given the limited availability of peer-reviewed research on administrative issues and improvements, reports from the CTSA program and related initiatives have contributed more than planned to the main conclusions and findings discussed in this manuscript. It was somewhat discouraging that there were not more primary sources. These circumstances also underscored, at least for us, the need for a paper like this to encourage our field to focus more on building a systematic knowledge base for the issues addressed here. The CTSA program, managed by the NIH's National Center for Advancing Translational Sciences (NCATS), has played a vital role in promoting data transparency and continuous improvement. Using tools like the Common Metrics Initiative, CTSA hubs measure performance in areas such as trial start-up time, IRB turnaround, and trial enrollment diversity (George et al., 2014; NCI, 2020; NCATS, 2021b). These data support evidence-based reforms and help standardize best practices across institutions. Additionally, shared governance structures—such as research advisory boards composed of investigators, administrators, and community representatives—can expand expertise, enhance the credibility of metrics, and ensure effective performance oversight.
Directions for Future Research
This review re-centers research administrators as key contributors to translational success. Such success includes more rapid trial activation; reduced complexity in tasks, such as coverage analysis, that have stubbornly resisted becoming simple steps on the way to contracting and activation; and trials implemented with fidelity while maintaining high standards of research integrity.
Despite recent progress toward these goals and understandings, research on clinical trial administration remains limited. Most findings and recommendations come from case studies, internal reports, reports from national research organizations—often where clinical trials are just one part of their broader focus (e.g., CTSA/NCATS reports)—and institutional evaluations. Clearly, many aspects of clinical trial management still require further study, and systematic research could significantly reduce the challenges faced by research administrators in this area. A strong evidence base can also guide efforts to overcome common barriers, saving time and effort. Without this knowledge, administrators may have to rely on trial and error or on fragmented insights that might not be appropriate for their specific organizational contexts.
As we develop frameworks for conceptualizing and implementing studies to improve the contexts and management of clinical trials while reducing unnecessary burdens and barriers, it may also be helpful to examine organizational change efforts outside healthcare. Neighboring fields face similar challenges in achieving systemic change and improving effectiveness and, importantly, outcomes; lessons from their initiatives can inform our own efforts to systematically improve administrative practices for clinical trials. For example, Sarason (1982) argued that to achieve lasting and effective reform of organizational systems, researchers must start by analyzing professionals' core assumptions about what those systems look like and how they function.
Using public education as an example, Sarason (1982) highlights that reforms seldom start by questioning why, in K–12 schools, we accept certain regularities: which responsibilities are associated with and assigned to particular roles, how the day is organized, how core processes (as an outside observer would view them) change or remain similar across grade levels, or even why the physical structures of schools are typically so similar.
Using the example of how the secondary school day is divided into a set number of classes of specific lengths, rather than, for instance, dedicating a full school day to each subject or to an overarching theme, Sarason (1982) and others (Felner, 2000; Jackson & Andrews, 2000) discuss promising educational reforms and observe that these structural norms can hinder the implementation of other reforms essential for improving academic results. For example, teaching lessons as longer, integrated thematic units that span multiple days, rather than separating them by subject, allows teachers from different disciplines to collaborate in multidisciplinary teams. Lessons structured this way are more effective at engaging students and fostering deeper understanding. However, because most secondary schools stick to the “traditional” segmentation of the day into relatively short classes (i.e., 50–60 minutes or less), each taught by a different teacher, the advantages of these more effective teaching models are lost.
Because some of the barriers mentioned above stem, at least in part, from how AMCs organize administrative functions, starting our improvement efforts with fundamental questions such as "why do we do it/structure it this way?" and "how could we restructure our routines or organizational units to enable better coordination?" could be as beneficial as it has been in education. Such efforts bring together what might otherwise be fragmented, poorly coordinated organizational elements and offer them as integrated services delivered by a multidisciplinary, multifunctional clinical trials team. Viewing our improvement efforts through this lens could help us identify and address what Sarason (1982) has called “hidden regularities” that impede successful change and reform.
Many examples, both within and outside healthcare, can inform the studies and assessments of change and reform that clinical research administrators design, even when those examples are not explicitly focused on clinical trial concerns. Additionally, the insights research administrators gain through their work can provide a valuable understanding of the causes of, and potential solutions to, issues that impede the conduct of more efficient and effective clinical trials at AMCs. We hope this paper will motivate and encourage those in research administration to explore these and other lines of research that can help address the challenges of conducting clinical trials.
Jennifer E. Taylor, Ph.D., MBA
Rush MD Anderson Cancer Center
Joan and Paul Rubschlager Building
1520 W Harrison St, Chicago, IL 60607
Rush University Medical Center
Chicago, Illinois
(312) 563-5231
jennifer_e_taylor@rush.edu
References
Abu-Shaheen, A., Al Badr, A., Al Fayyad, I., Al Qutub, A., Faqeih, E. A., & Al-Tannir, M. (2020). Streamlining and cycle time reduction of the startup phase of clinical trials. Trials, 21(1), 115. https://doi.org/10.1186/s13063-020-4079-8
Arksey, H., & O’Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32. https://doi.org/10.1080/1364557032000119616
Association of American Medical Colleges (AAMC). (2022a). Advancing clinical trials through digital innovation. Washington, DC.
Association of American Medical Colleges (AAMC). (2022b). The evolving role of digital technologies in clinical trials. https://www.aamc.org/data-reports
Association of American Medical Colleges (AAMC). (2022c). Benchmarking clinical research operations at academic medical centers. https://www.aamc.org/data-reports
Association of American Medical Colleges (AAMC). (2023). Principles for enhancing clinical trial diversity. https://www.aamc.org/media/61336/download
Buntin, M. B., Brawley, O. W., & Sharfstein, J. M. (2024). Advancing a national infrastructure for clinical trials. JAMA Health Forum, 5(12), e244383. https://doi.org/10.1001/jamahealthforum.2024.4383
Carson, K. R., & Bennett, C. L. (2021). Contemporary clinical trials office: Doing more with less. JCO Oncology Practice, 17(1), 1–2. https://doi.org/10.1200/OP.20.00871
Corneli, A., Dombeck, C. B., McKenna, K., & Calvert, S. B. (2021). Stakeholder experiences with the single IRB review process and recommendations for Food and Drug Administration guidance. Ethics & Human Research, 43(3), 26–36. https://doi.org/10.1002/eahr.500092
CTSA Coordination, Communication, and Operations Support (CCOS) Center. (2022). Guidance for CTSA program groups: Hybrid organizational models for Clinical Trial Office (CTO) hubs. National Center for Advancing Translational Sciences (NCATS). https://ccos-cc.ctsa.io/resources/governance
Dandar, V., Dutterer, J., Blood, A., Bush, V., Farmakidis, A., Griffith, A., Kendrick, E., McKenzie, S., McOwen, K., & Singhet, A. (2025). 2023 AAMC Regional Medical Campus survey report. https://www.aamc.org/media/81671/download?attachment
Daudelin, D. H., Peterson, L. E., & Selker, H. P. (2020). Pilot test of an accrual Common Metric for the NIH Clinical and Translational Science Awards (CTSA) Consortium: Metric feasibility and data quality. Journal of Clinical and Translational Science, 5(1), e44. https://doi.org/10.1017/cts.2020.537
Daudelin, D. H., Peterson, L. E., Welch, L. C., Chandler, R., Pandey, M., Noubary, F., Lee, P. L., & Selker, H. P. (2019). Implementing common metrics across the NIH Clinical and Translational Science Awards (CTSA) consortium. Journal of Clinical and Translational Science, 4(1), 16–21. https://doi.org/10.1017/cts.2019.425
DocuSign. (n.d.). eSignature for clinical trials. https://www.docusign.com/solutions/industries/life-sciences
Dombeck, C., Swezey, T., Kehoe, L., Kinchen, K., Roe, M., Stewart, M., & Corneli, A. (2024). Embedding clinical trial elements into clinical practice: Experiences from trial designers and implementers. Journal of Clinical and Translational Science, 8(1), 202. https://doi.org/10.1017/cts.2024.647
Dorsey, E. R., & Topol, E. J. (2020). Telemedicine 2020 and the next decade. The Lancet, 395(10227), 859. https://doi.org/10.1016/S0140-6736(20)30424-4
Epic Systems Corporation. (n.d.). MyChart Research Integration. https://www.epic.com/software#PatientEngagement
Federal Demonstration Partnership. (2021). FDP subcontract guidance document. https://thefdp.org/wp-content/uploads/FDPSUB1-1.pdf
Felner, R. D. (2000). Educational reform as ecologically-based prevention and promotion: The project on high performance learning communities. In D. Cicchetti, J. Rappaport, I. Sandler, & R. P. Weissberg (Eds.), The promotion of wellness in children and adolescents (pp. 271–307). Child Welfare League of America. https://psycnet.apa.org/record/2001-00169-009
Fifield, B. A., Baker, A., Elkhidir, O., Philbin, N., Hinch, I., McVinnie, N., Llancari, A., Pecoraro, C., Rahimi, M., Visconti, T., Shoust, A., McMurphy, S., Soucie, K., Hamm, C., & Porter, L. A. (2026). Patient perspective of hesitancies and strategies to increase cancer clinical trial participation. Journal of Clinical and Translational Science, 10(1), e39. https://doi.org/10.1017/cts.2026.10695
Fisher, J. A., & Kalbaugh, C. A. (2011). Challenging assumptions about minority participation in US clinical research. American Journal of Public Health, 101(12), 2217–2222. https://doi.org/10.2105/AJPH.2011.300279
Florence Healthcare. (n.d.). eBinders for clinical research sites. https://florencehc.com/ebinders/
Flynn, K. E., Hahn, C. L., Kramer, J. M., Check, D. K., Dombeck, C. B., Bang, S., Perlmutter, J., Khin-Maung-Gyi, F. A., & Weinfurt, K. P. (2013). Using central IRBs for multicenter clinical trials in the United States. PLoS One, 8(1), e54999. https://doi.org/10.1371/journal.pone.0054999
Frontiers Clinical and Translational Science Institute. (2018). Streamlining clinical trial startup: A PDSA approach to operational efficiency. University of Kansas Medical Center.
George, S., Duran, N., & Norris, K. (2014). A systematic review of barriers and facilitators to minority research participation among African Americans, Latinos, Asian Americans, and Pacific Islanders. American Journal of Public Health, 104(2), e16–e31. https://doi.org/10.2105/AJPH.2013.301706
Getz, K. A., Sethuraman, V., Rine, J., Peña, Y., Ramanathan, S., & Stergiopoulos, S. (2020). Assessing patient participation burden based on protocol design characteristics. Therapeutic Innovation & Regulatory Science, 54(5), 1176–1184. https://doi.org/10.1007/s43441-019-00092-4
Gordon, V. M., Culp, M. A., & Wolinetz, C. D. (2017). Final NIH policy on the use of a single Institutional Review Board for multisite research. Clinical & Translational Science, 10(3), 130–132. https://doi.org/10.1111/cts.12447
Green, J. M., Goodman, P., Kirby, A., Cobb, N., & Bierer, B. E. (2023). Implementation of single IRB review for multisite human subjects research: Persistent challenges and possible solutions. Journal of Clinical and Translational Science, 7(1), e99. https://doi.org/10.1017/cts.2023.517
Gribben, J., Macintyre, E., Sonneveld, P., Doorduijn, J., Gisselbrecht, C., Jäger, U., Le Gouill, S., Rule, S., & Dreyling, M. (2020). Reducing bureaucracy in clinical research: A call for action. Hemasphere, 4(2), e352. https://doi.org/10.1097/HS9.0000000000000352
Harris, P. A., Kennedy, N., Wilkins, C. H., Lane, K., Bernard, G. R., Casey, J. D., Ford, D. E., Waddy, S. P., Wiley, K. L. Jr., Edwards, T. L., McBee, N., Thompson, D. D., Stroud, M., Serdoz, E. S., Nelson, S. J., Jones, M., Eyzaguirre, L. M., Boone, L. R., Baird, J., . . . Hanley, D. F. (2025). Insights from the Trial Innovation Network's initial consultation process. Journal of Clinical and Translational Science, 9(1), e149. https://doi.org/10.1017/cts.2025.10084
Harris, P. A., Taylor, R., Thielke, R., Payne, J., Gonzalez, N., & Conde, J. G. (2009). Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42(2), 377–381. https://doi.org/10.1016/j.jbi.2008.08.010
Hillery, S., Majkowski, R., Wang, Y., Barney, B., Eyzaguirre, L., Mould, A., McBee, N., Woo, E., Holthouse, E., Wiley, K., Waddy, S. P., Ford, D., Hanley, D. F., & Lane, K. (2025). Accelerating start-up cycles in investigator-initiated multicenter clinical trials. Journal of Clinical and Translational Science, 9(1), e249. https://doi.org/10.1017/cts.2025.10180
Hoyo, V., Nehl, E., Dozier, A., Harvey, J., Kane, C., Perry, A., Samuels, E., Schmidt, S., & Hunt, J. (2024). A landscape assessment of CTSA evaluators and their work in the CTSA consortium, 2021 survey findings. Journal of Clinical and Translational Science, 8(1), e79. https://doi.org/10.1017/cts.2024.526
Institute of Medicine (IOM) (US) Forum on Drug Discovery, Development, and Translation. (2010). Transforming clinical research in the United States: Challenges and opportunities: Workshop summary. National Academies Press.
International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. (2025). ICH harmonized guidelines for good clinical practice E6(R3). https://database.ich.org/sites/default/files/ICH_E6%28R3%29_Step4_FinalGuideline_2025_0106.pdf
Jackson, A. W., & Andrews, G. A. (2000). Turning points 2000: Educating adolescents in the 21st century. Teachers College Press.
Klitzman, R. (2015). The ethics police?: The struggle to make human research safe. Oxford University Press.
Knapke, J. M., Jenkerson, M., Tsao, P., Freel, S., Fritter, J., Helm, S. L., Jester, P., Kolb, H. R., Mendell, A., Petty, M., & Jones, C. T. (2022). Academic medical center clinical research professional workforce: Part 2 - Issues in staff onboarding and professional development. Journal of Clinical and Translational Science, 6(1), e81, 1–10. https://doi.org/10.1017/cts.2022.412
Knapke, J. M., Snyder, D. C., Carter, K., Fitz-Gerald, M. B., Fritter, J., Kolb, H. R., Marchant, M., Mendell, A., Petty, M., Pullum, C., & Jones, C. T. (2022). Issues for recruitment and retention of clinical research professionals at academic medical centers: Part 1 - Collaborative conversations Un-Meeting findings. Journal of Clinical and Translational Science, 6(1), e80, 1–9. https://doi.org/10.1017/cts.2022.411
Kraft, S. A., Cho, M. K., Gillespie, K., Halley, M., Varsava, N., Ormond, K. E., Luft, H. S., Wilfond, B. S., & Lee, S. S. J. (2018). Beyond consent: Building trusting relationships with diverse populations in precision medicine research. American Journal of Bioethics, 18(4), 3–20. https://doi.org/10.1080/15265161.2018.1431322
Krumholz, H. M. (2014). Big data and new knowledge in medicine: The thinking, training, and tools needed for a learning health system. Health Affairs, 33(7), 1163–1170. https://doi.org/10.1377/hlthaff.2014.0053
Lauer, M. S., & Gordon, D. (2016). NIH policy on dissemination of NIH-funded clinical trial information. Federal Register, 81(183), 64922–64928. https://www.govinfo.gov/content/pkg/FR-2016-09-21/pdf/2016-22379.pdf
Lee, C., Werner, T. L., Deal, A. M., Krise-Confair, C. J., Bentz, T. A., Cummings, T. M., Grant, S. C., Lee, A. B., Moehle, J., Moffett, K., Peck, H., Williamson, S., Zafirovski, A., Shaw, K., & Hofacker, J. K. (2021). Clinical trial metrics: The complexity of conducting clinical trials in North American cancer centers. JCO Oncology Practice, 17(1), e77–e93. https://doi.org/10.1200/OP.20.00501
Levac, D., Colquhoun, H., & O'Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5, 69. https://doi.org/10.1186/1748-5908-5-69
Mayo-Wilson, E., Heyward, J., Keyes, A., Reynolds, J., White, S., Atri, N., Alexander, G. C., Omar, A., & Ford, D. E. on behalf of the National Clinical Trials Registration and Results Reporting Taskforce Survey Subcommittee. (2018). Clinical trial registration and reporting: A survey of academic organizations in the United States. BMC Medicine, 16(1), 60. https://doi.org/10.1186/s12916-018-1042-6
McCausland, D., Haigh, M., McCallion, P., & McCarron, M. (2024). IRB challenges in multisite studies: A case report and commentary from the Intellectual Disability Supplement to the Irish Longitudinal Study on Ageing (IDS-TILDA). HRB Open Research, 7, 3. https://doi.org/10.12688/hrbopenres.13854.1
Medidata Solutions, Inc. (2023). Medidata Rave EDC (Version 2023.1) [Computer software]. https://www.medidata.com/en/clinical-trial-products/clinical-data-management/edc-system
Medrio. (n.d.). Decentralized clinical trials platform. https://medrio.com/
Menikoff, J., Kaneshiro, J., & Pritchard, I. (2017). The Common Rule, updated. New England Journal of Medicine, 376(7), 613–615. https://doi.org/10.1056/nejmp1700736
Mullen, C. G., Houlihan, J. Y., Stroo, M., Deeter, C. E., Freel, S. A., Padget, A. M., & Snyder, D. C. (2023). Leveraging retooled clinical research infrastructure for Clinical Research Management System implementation at a large Academic Medical Center. Journal of Clinical and Translational Science, 7(1), e127. https://doi.org/10.1017/cts.2023.550
Musshafen, L. A., Poger, J. M., Simmons, W. R., Hoke, A. M., Hanson, L. N., Bondurant, W. W., McCullough, J. R., & Kraschnewski, J. L. (2021). Strengthening the clinical research workforce through a competency-based orientation program: Process outcomes and lessons learned across three academic health institutions. Journal of Clinical and Translational Science, 5(1), e178. https://doi.org/10.1017/cts.2021.852
National Academies of Sciences, Engineering, and Medicine (NASEM). (2016). Optimizing the nation’s investment in academic research: A new regulatory framework for the 21st century. The National Academies Press. https://doi.org/10.17226/21824
National Academies of Sciences, Engineering, and Medicine (NASEM). (2021). Improving the evidence base for treatment decision making for older Americans with cancer: Proceedings of a workshop—in brief. The National Academies Press. https://doi.org/10.17226/26157
National Cancer Institute. (2020). Enhancing diversity in clinical trials: A report from the Cancer Moonshot Initiative. U.S. Department of Health and Human Services. https://www.cancer.gov/research/key-initiatives/moonshot-cancer-initiative
National Center for Advancing Translational Sciences (NCATS). (2021a). Clinical and Translational Science Awards (CTSA) Program annual progress report. National Institutes of Health. https://ncats.nih.gov/research/research-activities/ctsa
National Center for Advancing Translational Sciences (NCATS). (2021b). CTSA Program Common Metrics Initiative: Aggregate performance report. National Institutes of Health. https://ncats.nih.gov/sites/default/files/sunsetting-of-the-common-metrics-initiative-in-summer-2022.pdf
National Center for Advancing Translational Sciences (NCATS). (2023). Clinical and Translational Science Awards (CTSA) Program. National Institutes of Health. https://ncats.nih.gov/ctsa
National Library of Medicine. (n.d.). ClinicalTrials.gov. https://clinicaltrials.gov/ [as referenced for digital trial registration systems]
Office of Extramural Research. (n.d.). Research training and career development. National Institutes of Health. https://researchtraining.nih.gov
Ohmann, C., Banzi, R., Canham, S., Battaglia, S., Matei, M., Ariyo, C., Becnel, L., Bierer, B., Bowers, S., Clivio, L., & Dias, M. (2017). Sharing and reuse of individual participant data from clinical trials: Principles and recommendations. BMJ Open, 7(12), e018647. https://doi.org/10.1136/bmjopen-2017-018647
OnCore (CTMS) Forte. (n.d.). OnCore Clinical Trial Management System (CTMS). Advarra. https://www.advarra.com/solutions/sites/ctms/oncore/
RealTime eClinical Solutions. (n.d.). eRegulatory / Complion eISF. https://realtime-eclinical.com/solutions/complion/
Rubio, D. M. (2013). Common metrics to assess the efficiency of clinical research. Evaluation & the Health Professions, 36(4), 432–446. https://doi.org/10.1177/0163278713499586
Sampson, R., Shapiro, S., He, W., Denmark, S., Kirchoff, K., Hutson, K., Paranal, R., Forney, L., McGhee, K., & Harvey, J. (2022). An integrated approach to improve clinical trial efficiency: Linking a clinical trial management system into the Research Integrated Network of Systems. Journal of Clinical and Translational Science, 6(1), e63. https://doi.org/10.1017/cts.2022.382
Sarason, S. B. (1982). The culture of the school and the problem of change (2nd ed.). Allyn & Bacon.
Schneider, P. J., Pedersen, C. A., Ganio, M. C., & Scheckelhoff, D. J. (2022). ASHP National Survey of Pharmacy Practice in Hospital Settings: Clinical services and workforce-2021. American Journal of Health-System Pharmacy, 79(18), 1531–1550. https://doi.org/10.1093/ajhp/zxac147
Sehgal, S., Pua, E. C., Rojevsky, S., Becich, M. J., Fehrmann, J., Knosp, B. M., Wilcox, A., Talbert, J. C., Craven, C. K., & Starren, J. (2025). A maturity model for Clinical Trials Management Ecosystem. Journal of Clinical and Translational Science, 9(1), e28. https://doi.org/10.1017/cts.2024.1168
Sherman, R. E., Anderson, S. A., Dal Pan, G. J., Gray, G. W., Gross, T., Hunter, N. L., LaVange, L., Marinac-Dabic, D., Marks, P. W., Robb, M. A., & Shuren, J. (2016). Real-world evidence — What is it and what can it tell us? New England Journal of Medicine, 375(23), 2293–2297. https://doi.org/10.1056/NEJMsb1609216
Snyder, D. C., Brouwer, R. N., Ennis, C. L., Spangler, L. L., Ainsworth, C. L., Budinger, S., Mullen, C., Hawley, J., Uhlenbracu, G., & Stacy, M. (2016). Retooling institutional support for infrastructure for clinical research. Contemporary Clinical Trials, 48, 139–145. https://doi.org/10.1016/j.cct.2016.04.010
Sonstein, S. A., Seltzer, J., Li, R., Silva, H., Jones, C. T., & Daemen, E. (2014). Moving from compliance to competency: A harmonized core competency framework for the clinical research professional. Clinical Researcher, 28(3), 17–23. https://mrctcenter.org/wp-content/uploads/2023/04/2014_6_harvard_mrct_clinical_researcher_publication_competency_framework.pdf
Sonstein, S. A., Silva, H., Jones, C. T., & Bierer, B. E. (2024). Education and training of clinical research professionals and the evolution of the Joint Task Force for Clinical Trial Competency. Frontiers in Pharmacology, 15, 1291675. https://doi.org/10.3389/fphar.2024.1291675
Stroo, M., Asfaw, K., Deeter, C., Freel, S. A., Brouwer, R. J. N., Hames, B., & Snyder, D. C. (2020). Impact of implementing a competency-based job framework for clinical research professionals on employee turnover. Journal of Clinical and Translational Science, 4(4), 331–335. https://doi.org/10.1017/cts.2020.22
Tannock, I. F., Amir, E., Booth, C. M., Niraula, S., Ocana, A., Seruga, B., Templeton, A. J., & Vera-Badillo, F. (2016). Relevance of randomized controlled trials in oncology. Lancet Oncology, 17(12), e560–e567. https://doi.org/10.1016/S1470-2045(16)30572-1
Tricco, A. C., Lillie, E., Zarin, W., O'Brien, K. K., Colquhoun, H., Levac, D., Moher, D., Peters, M. D. J., Horsley, T., Weeks, L., Hempel, S., Akl, E. A., Chang, C., McGowan, J., Stewart, L., Hartling, L., Aldcroft, A., Wilson, M. G., Garritty, C., . . . Straus, S. E. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467–473. https://doi.org/10.7326/M18-0850
Tsevat, J., & Smyth, S. S. (2020). Training the translational workforce: Expanding beyond translational research to include translational science. Journal of Clinical and Translational Science, 4(4), 360–362. https://doi.org/10.1017/cts.2020.31
U.S. Food and Drug Administration. (2016, December). Use of electronic informed consent: Questions and answers: Guidance for institutional review boards, investigators, and sponsors. https://www.fda.gov/media/116850/download
U.S. Food and Drug Administration. (2025). E6(R3) Good clinical practice guidance for industry. https://www.fda.gov/drugs/guidance-compliance-regulatory-information/guidances-drugs
U.S. Food and Drug Administration’s 21 CFR Part 11. (1997, March 20). Part 11—Electronic records; electronic signatures, 21 U.S.C. 321-393; 42 U.S.C. 262. https://www.federalregister.gov/documents/1997/03/20/97-6833/electronic-records-electronic-signatures#page-13464
Veeva Systems. (n.d.). Vault Clinical Suite. https://www.veeva.com/products/clinical/
Wang, F., Wang, X., Liu, E., Song, F., & Chen, Y. (2025). Impacts of clinical research units on clinical research - A systematic review of empirical studies. Systematic Reviews, 14(1), 94. https://doi.org/10.1186/s13643-025-02813-3
Watters, J. T., Pitzen, J. H., Sanders, L. J., Bruce, V. M., Cornell, A. R., Cseko, G. C., Grace, J. S., Kwon, P. S., Kukla, A. K., Lee, M. S., Monosmith, M. D., Myren, J. D., Kottschade, R. S., Shaft, M. N., Weis, J. A., Welter, J. C., & Bharucha, A. E. (2018). Transforming the activation of clinical trials. Clinical Pharmacology & Therapeutics, 103(1), 43–46. https://doi.org/10.1002/cpt.898
Zarin, D. A., Tse, T., Williams, R. J., Califf, R. M., & Ide, N. C. (2011). The ClinicalTrials.gov results database—Update and key issues. The New England Journal of Medicine, 364(9), 852–860. https://doi.org/10.1056/NEJMsa1012065
1 Several of the references relating to electronic systems include additional information in brackets [ ] following the reference. This information is provided where the type or name of the system referenced in the text might otherwise not be clear to the reader.