
Using Competencies to Transform Clinical Research Job Classifications

By SRAI JRA posted 09-15-2017 12:00 AM

  

Volume XLVIII, Number 2

Authors

Rebecca Namenek Brouwer
Duke University

Christine Deeter
Duke University

Deborah Hannah
Duke University

Terry Ainsworth
Duke University

Catherine Mullen
Duke University

Betsy Hames
Duke University

Heather Gaudaur
Duke University

Tara McKellar
Duke University

Denise C. Snyder
Duke University

Acknowledgments

The authors would like to acknowledge funding for this initiative from Duke’s CTSA grant UL1TR001117, as well as contributions and leadership from the following individuals: Mark Stacy, Billy Newton, David Smithwick, Andrea Doughty, Mary Smith, Angie Cain, Leigh Burgess, Rebbecca Moen, and the countless research professionals who contributed to this work.

Problem Statement

The role of the clinical research professional has evolved substantially in the past 20 years, due in large part to the ever-changing field of clinical research. Regulatory requirements have grown and responsibilities have multiplied to keep pace with the shifting research landscape (Johnson, 2013), resulting in additional burden for investigators and study teams. Several changes over the past two decades have created the new environment in which clinical research is conducted. The emergence of the electronic medical record has required research staff to learn new documentation practices and policies. The executive order signed by President Bill Clinton in 2000 allowed for payment of standard-of-care costs in research; however, the resulting National Coverage Determination added layers of complexity around payments. The proliferation of international trials introduced additional complexity around site and study management, required an understanding of applicable laws, and made communication more tedious. The Health Insurance Portability and Accountability Act (HIPAA) requires staff to have a deeper understanding of privacy concerns and ethics in research. From FY 2003 to FY 2013, the National Institutes of Health (NIH) effectively lost 22% of its capacity to fund research, creating greater pressure for clinician scientists to secure external support to pay for their research (National Academies of Sciences, Engineering & Medicine, 2016).

As the work of clinical research has come to embody far more than the pursuit of improved patient care, the staff supporting that research are asked to do far more than simply recruit participants into studies and ensure that study visits and interventions occur as intended. While job responsibilities and roles have changed rapidly, the job descriptions and classifications held by the individuals performing the tasks have not (Stevens & Daemen, 2015). In addition, training demands and resources for development have soared (Speicher et al., 2012). To address the increased responsibilities and complexities of the work, Duke University School of Medicine leadership agreed that an overhaul of job descriptions for clinical research professionals was needed. Job descriptions related to clinical research at the institution had not been updated in more than fifty years. However, a simple “update” of the jobs was not enough; instead, the institution pursued a deep dive: an intensive review by a focused workgroup representing many stakeholder groups, with engagement across departments, centers, and institutes and among the faculty, staff, and administrators who might be affected. The workgroup reviewed the literature, researched options and existing models developed by clinical research professional organizations, and participated in Clinical and Translational Science Award (CTSA) activities before choosing a competency-based model (Sonstein et al., 2014). This is consistent with other fields, where competency models are adopted to ensure the readiness of the workforce and to assess the skills needed for the appropriate conduct of a specific job (O’Neil, 2014).

There is a need to professionalize the research professional workforce in order to continue producing high-quality clinical research. This means that those involved in clinical research must ensure that the roles we ask staff to fill are 1) well articulated, 2) competency based, 3) appropriately matched to experience and educational level, and 4) supported by descriptions that are updated frequently enough to keep pace with the shifting landscape. In an effort to do just that, Duke University undertook a large initiative to revise job classifications for research professionals. Below, we describe the steps taken toward professionalizing the workforce in this large academic medical center.

Observations and Methods

Formation of a working group

Human Resources departments within institutions cannot take on a transformation like this on their own, as they may not have the subject matter expertise. Therefore, for the purposes of refining job classifications at Duke, the multidisciplinary Clinical Research Professionals Working Group (CRPWG) was formed. The group included individuals who had grown up in the field of clinical research over more than twenty years, representatives from Human Resources (at the corporate, school, and department levels), and administrators key to the clinical research enterprise. The group was relatively stable, with approximately 10 participants who remained highly engaged in the process over a three-year period. The group convened every 2-3 weeks and tackled assignments between meetings. Process and implementation were driven by a core group in the institution’s central clinical research support office, the Duke Office of Clinical Research (Snyder et al., 2016).

Importantly, the workgroup was committed to transparency and engagement within the research community. This was facilitated by the creation of a wiki page, visible to the School of Medicine, that was updated roughly once a week with draft documents available for review. Workgroup members frequently presented to the research community in order to address concerns (see Stakeholder engagement below). We requested feedback in a variety of venues and incorporated suggested edits to the plan when appropriate. The group ensured that, throughout the process, the research community had a voice and an opportunity to participate in shaping the long-term vision.

Finally, our group connected frequently with contacts at other institutions working on similar initiatives. Members of our committee took part in meetings with other Clinical and Translational Science Award consortium members about the supplement “Enhancing Clinical Research Professionals’ Training and Qualifications,” which aims to improve the safety, efficiency, and quality of clinical trials by establishing standardized educational competencies and training across the CTSA Consortium. Our committee members who worked with this group assisted with the refinement of competencies in several domain areas.

Stakeholder engagement

Key to the success of the initiative was frequent and genuine engagement with a variety of stakeholders. The CRPWG engaged groups in three general realms: administrative leaders, faculty, and the research community. The message threaded through conversations with each group was that a long-term investment in this workforce matters on many levels: 1) at the employee level, the initiative encourages equity and fairness and allows employees to better understand career progression, as it is tied to competencies; 2) at the institutional level, the initiative reduces risk by identifying the competencies associated with job responsibilities and ensuring quality training; it also “raises the bar” for our workforce, allowing us to attract and retain higher-quality research professionals through the development of a research career ladder; and 3) at the workforce level, by hiring and training higher-performing individuals, we can expect better performance and higher-quality research.

With administrative leaders, the focus of the CRPWG was on messaging and frequent updates. Careful consideration was given to the potential impact on our institution, including financial impact, and how best to handle employee and departmental concerns about the change. Engagement was ongoing via brief email communications, but also via regularly scheduled face-to-face meetings.

The CRPWG engaged a faculty advisory committee, a group of clinician-scientists from a variety of disciplines who are actively involved in clinical research activities. These faculty advisors were selected as they are invested in the future of the field and the institution, and have teams that are considered high-performing. The CRPWG aimed to get these faculty members’ perspectives on how this initiative would affect their study teams and research projects. Faculty advisors allowed their teams to participate in a mock mapping exercise. At the completion of the mock mapping, faculty and their senior managers gave critical, candid feedback that was incorporated into our final processes.

Perhaps the most robust and important group of stakeholders engaged in the process were the research professionals themselves: the very individuals who would be affected by the changes. This community was invited to take part in the process in a variety of ways. First, the Research Professionals Network (RPN), which is open to all research professionals, offers networking and professional growth opportunities and, at the time of the mapping, included approximately 400 individuals. These employees engaged in the initial roll-out and evaluation of the proposed model (see below) and in the development of competencies for the various competency domains, and they received summary updates in the network’s e-communication. Second, a central listserv of research professionals (approximately 7,000 people) was used to communicate updates on the process and invite participation in RPN events. Third, we leveraged Duke’s clinical research structure to engage leaders within each of our eighteen Clinical Research Units (CRUs), the clinical research scientific and administrative units for departments, centers, and institutes. These leaders were updated about the initiative at biweekly meetings and were encouraged to invite CRPWG members to speak to their research professional staff about the initiative.

All feedback from stakeholder meetings was brought back to the regular CRPWG meetings. In short, engagement at multiple levels was a crucial, yet time-consuming, component of this important initiative.

Deriving a model

At Duke, the number of titles held by personnel in jobs with clinical research responsibilities had become unmanageable. This caused equity issues across the organization and made it difficult to understand the composition of our research professional workforce. For these reasons, the CRPWG aimed to reduce the number of job classifications/descriptions from approximately 80 to 12. Job classifications were grouped into a few broad categories based on general job functions. Clinical Research Specialist (and Sr.) positions focus on supporting clinical research teams or performing the less complex tasks involved in executing research studies. Regulatory Coordinator (and Sr.) positions specialize in the area of research regulation. Clinical Research Coordinator (and Sr.) and Clinical Research Nurse Coordinator (and Sr.) positions are responsible for participating in or leading the day-to-day operations of clinical research studies (nurses also participate in clinical activities associated with research). Research Practice Manager (and Assistant) positions have broader responsibilities associated with oversight of clinical research activities within specific therapeutically aligned units. Finally, Research Program Leader (and Sr.) positions manage the day-to-day operations of clinical research activities but have additional leadership responsibilities in the areas of program and portfolio management. Figure 1 depicts the model.

Figure 1. Clinical Research Professionals Working Group competency-based framework for job classifications.

Notable in our model is the concept of tiered positions. Positions marked with a gradient band under a single title are treated differently from the others: within these positions there are multiple tiers, each associated with different levels of competency across the various domains (see the discussion of competencies below). To move between tiers, an employee must exhibit the competencies associated with the next tier. Employees have two opportunities per year to move to the next tier, and manager endorsement is required (see the implementation section below for more details). A major benefit of the tiered system is that it allows for employee growth (with associated compensation) without the need to reclassify positions, a historically burdensome HR process.

This proposed model was presented to various stakeholders, and a stoplight evaluation was conducted with 175 staff. Community members were asked to select either a green light (no major concerns) or a red light (major concerns, with the reasoning behind the concern) for 10 items, covering the appropriateness of:

  1. The name of each position,
  2. The minimum education requirements for each position,
  3. The minimum experience requirements for each position,
  4. The inclusion of tiers for some positions, and
  5. The number of titles.

The feedback we received indicated that all proposed elements of the plan exceeded the green-light threshold of 80%; however, the final names of some titles were adjusted based on comments.

Competencies drive the framework

Competencies guide the work of professionals within many health-related fields (Melnyk, Gallagher-Ford, Long, & Fineout-Overholt, 2014), but the concept is much newer as applied to clinical research staff. In 2014, the notion of employing competencies for research professionals began taking hold. Stephen Sonstein, in his presentation “Moving from Compliance to Competency: A Harmonized Core Competency Framework for the Clinical Research Professional,” articulated the concept, which was being driven by the Joint Task Force for Clinical Trials Competency (JTFCTC; Sonstein et al., 2014). The idea is to use competencies, which can be grouped into high-level conceptual domains, to build a framework that can be applied in a variety of situations. In workforce training, for example, competencies can better define performance criteria and aid designers in matching training opportunities to measurable outcomes (Jones et al., 2012). Similarly, these competencies can form the foundation of standardized job descriptions for professionals engaged in the field of clinical research.

The JTFCTC developed draft competencies that were used as a starting point. Our CRPWG took the JTFCTC competencies as a base and then worked in pairs to edit the competencies under each of the eight domains:

  1. Scientific Concepts and Research Design,
  2. Ethical Considerations, Patient Care and Safety,
  3. Medicines Development and Regulation,
  4. Clinical Trials Operations (GCPs),
  5. Study and Site Management,
  6. Data Management and Informatics,
  7. Leadership and Professionalism, and
  8. Teamwork and Communication.

Edits were made to make the competencies more specific to our academic medical center (AMC) and to more broadly encompass social/behavioral studies. Ultimately, the third domain (Medicines Development and Regulation) had fewer competencies relevant to staff and was merged with Domain 4, yielding seven domains for clinical research staff.

Once the revised competencies were established, the CRPWG worked to create a tool that described levels for each competency. Each level was assigned a number (1, 2, 3, 3.5, 4, or 5) that corresponded to specific job descriptions. Competencies assigned a level of 1 represented responsibilities included in the Clinical Research Specialist (CRS) job description; those with a 2 were found in the CRS Senior description; those with a 3 or 3.5 were found in our tiered Clinical Research Coordinator (CRC), Clinical Research Nurse Coordinator (CRNC), and Regulatory Coordinator positions (3 representing responsibilities associated with the lower part of the tier, 3.5 the higher); a 4 was associated with the CRNC Senior, CRC Senior, and Research Program Leader (RPL) positions; and a 5 was typically associated with our Assistant Research Practice Manager (ARPM), Research Practice Manager (RPM), and RPL Senior positions. Versions of Bloom’s taxonomy (Anderson & Sosniak, 1994) were used to guide the workgroup as it worded the competencies. This allowed the group to underscore that movement through the competency levels involves more than accumulating experience; it requires the employee to develop deeper critical-thinking skills and to move along the continuum from simple to complex tasks.

This tool would eventually be used to map our incumbent staff and serve as a basis for additional functions such as developing job descriptions, conducting performance evaluations, and training.
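As a rough illustration only (not the workgroup’s actual tool), the level-to-title assignments described above could be expressed as a simple lookup. The Python names and structure below are hypothetical; the titles and level values are taken from the text.

```python
# Hypothetical sketch: map each competency level to the job descriptions
# that include responsibilities at that level (per the article's text).
LEVEL_TO_TITLES = {
    1:   ["Clinical Research Specialist (CRS)"],
    2:   ["CRS, Senior"],
    3:   ["CRC (lower tier)", "CRNC (lower tier)", "Regulatory Coordinator (lower tier)"],
    3.5: ["CRC (upper tier)", "CRNC (upper tier)", "Regulatory Coordinator (upper tier)"],
    4:   ["CRC, Senior", "CRNC, Senior", "Research Program Leader (RPL)"],
    5:   ["ARPM", "RPM", "RPL, Senior"],
}

def titles_for_level(level):
    """Return the job titles whose descriptions include competencies at this level."""
    return LEVEL_TO_TITLES.get(level, [])

# Example: titles_for_level(3.5) returns the upper tier of the tiered coordinator positions.
```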

Table 1. Sample of leveled competencies.

Figure 2. Mapping tool for employees.

Development of job descriptions

Job descriptions for clinical research professionals at Duke had remained relatively constant for roughly fifty years. Comments from research staff and managers suggested that it was difficult to tell positions apart; for example, the junior-level clinical research coordinator description looked very much like the senior-level description. The descriptions employed somewhat vague terminology and therefore left much room for interpretation by managers and staff. As a result, progress through the levels was driven primarily by the number of years in a position, not by increasing competency.

The revised descriptions were designed to bring greater clarity to the job responsibilities for each position and to provide a clear path for career progression. The job descriptions were structured under the seven competency domains described above, with each competency level described clearly underneath each domain. The differences between job descriptions became much clearer. For example, the Clinical Research Coordinator (CRC) job description may describe the responsibility for screening as “Screen participants for all studies independently.” The next-level position, CRC Senior, clearly denotes that the level of responsibility related to screening is greater, and does so via distinguishing fonts: “Screen participants for all studies independently and provide oversight and training to study team members who screen participants.” Here, the differentiation between levels of responsibility with regard to screening is definitive.

The working group developed job descriptions in collaboration with staff currently in equivalent positions as well as with those who were recently reclassified in the pilot mapping process. Feedback from the community was overwhelmingly positive about how the job descriptions were structured.

Implementation

Pilot process

Once the working group had completed the first version of the model and draft job descriptions with associated competency levels, they worked to test the process by mapping a pilot group of staff. Sixty staff members were identified in the pilot units.

The mock process, which evolved into the foundation for our final mapping process, included the following elements:

Request packet of information from research unit head: 

We asked the unit leaders to provide a) an organizational chart including all research staff, b) CVs from each staff member to be mapped (with months/years of each position articulated), and c) job descriptions from each staff member.

Employee completes questionnaire about their job responsibilities:

Once the packet was received, each research staff member was sent a link to a REDCap™ survey tool to complete. All data for this initiative were collected and managed using REDCap electronic data capture tools hosted at Duke University. REDCap (Research Electronic Data Capture) is a secure, web-based application designed to support data capture for research studies, providing:

  1. An intuitive interface for validated data entry;
  2. Audit trails for tracking data manipulation and export procedures;
  3. Automated export procedures for seamless data downloads to common statistical packages; and
  4. Procedures for importing data from external sources (Harris et al., 2009).

This specific use of the REDCap™ tool asked each employee to provide basic information about their employment (name, unit, current position) and then asked them to select the level of responsibility associated with each competency (Figure 3). Again, the competencies were arranged by domain. Employees could also indicate that a competency was not part of their job and add comments to describe other duties not included in the questionnaire. The employee then recorded the name and email address of their manager, and the form automatically routed to the manager. The form took approximately thirty minutes to complete.
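For readers considering a similar instrument, the following is a minimal, hypothetical sketch (in Python) of what one employee’s exported response might look like. The class and field names are assumptions for illustration and do not reflect the actual REDCap instrument or its data dictionary.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class CompetencyResponse:
    """One employee's questionnaire response (hypothetical field names)."""
    employee_name: str
    unit: str
    current_position: str
    manager_email: str                      # the form is routed here for manager review
    # competency identifier -> self-selected level (0 = not part of my job)
    levels: Dict[str, float] = field(default_factory=dict)
    comments: Optional[str] = None          # duties not covered by the questionnaire
```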

Figure 3. Process used for mapping incumbent staff.

Manager reviews employee questionnaire:

The manager received a link via email with a subject line of “Please review job classification tool for your employee, [first name] [last name].” The manager reviewed the employee’s response to each item and if s/he felt that it did not appropriately represent the level of competency, the manager had the option to edit the field. The manager had the opportunity to add comments, and once the survey was complete, the working group was notified via automated email. A copy of this tool’s data dictionary is available upon request.

A summary report is created:

The information from the REDCap™ tool was exported and then mail-merged into a summary report. The report provided basic information about the employee and displayed the manager’s rating of the employee’s responsibilities (each responsibility rated 1, 2, 3, 3.5, 4, or 5, or 0 if not part of the job), as well as an indicator of the magnitude of the discrepancy between the employee’s and the manager’s reports; positive values indicated that the employee rated themselves higher than the manager did, and negative values the reverse. For example, a magnitude of 1 indicated that the employee rated themselves one level higher on one competency than the manager reported, while a magnitude of -8 indicated that the manager believed the employee was one level higher in 8 competencies (or 2 levels higher in 4, and so on). All raw data from the employee and the manager were available if further review was needed.
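A minimal sketch of the discrepancy score described above (not the actual report-generation code) is shown below: the signed sum, across competencies, of the number of level steps by which the employee’s self-rating differs from the manager’s rating. Treating the half-step from 3 to 3.5 as one level step is our assumption for illustration.

```python
# Illustrative only: compute the employee-vs-manager discrepancy magnitude.
LEVELS = [0, 1, 2, 3, 3.5, 4, 5]              # 0 = competency not part of the job
RANK = {lvl: i for i, lvl in enumerate(LEVELS)}

def discrepancy_magnitude(employee_levels, manager_levels):
    """Signed sum of level-step differences (employee minus manager) across competencies.

    Positive: the employee rated themselves higher overall.
    Negative: the manager rated the employee higher overall.
    """
    return sum(RANK[e] - RANK[m] for e, m in zip(employee_levels, manager_levels))

# Employee one step above the manager on a single competency -> +1
print(discrepancy_magnitude([3.5, 2, 1], [3, 2, 1]))   # prints 1
```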

Map to new classification:

The committee reviewed the reports, CVs, and job descriptions to map employees into the new classifications. The summary report, derived from the job responsibility tool, drove the discussion. In general, a report that displayed many of the same numbers suggested a good match with the job description associated with that number (for example, many 3’s likely suggest a CRC title). The review of these documents was conducted by at least two members of the working group. The process was not intended to provide employees with a promotion or to be used as corrective action, but rather to reflect the competencies they display in their daily jobs. Any employee who could not easily be mapped into a new description was marked for further discussion with the unit manager and medical director.
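The mapping itself was a human judgment made by committee members; the sketch below only illustrates the “many of the same numbers” heuristic mentioned above. The function name is hypothetical, and it reuses the illustrative LEVEL_TO_TITLES mapping from the earlier sketch.

```python
from collections import Counter

def suggest_titles(manager_levels, level_to_titles):
    """Return candidate titles for the most common non-zero competency level."""
    rated = [lvl for lvl in manager_levels if lvl != 0]   # ignore "not part of job"
    if not rated:
        return []
    most_common_level, _count = Counter(rated).most_common(1)[0]
    return level_to_titles.get(most_common_level, [])

# Example: a report dominated by 3's returns the tiered coordinator titles (e.g., CRC).
```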

Meet with unit leaders to review results:

The results of the mapping process were shared with the faculty leaders and their staff leads. During a one-hour meeting, the committee members reviewed the process utilized to determine the classification for each employee and reviewed the results.

Final mapping

The pilot process described above worked well and was used, with only small changes, with our full cohort of approximately seven hundred clinical research staff. The committee opted to make a few edits to the REDCap™ tool so that the summary report would be more useful. We eliminated the employee’s report of the percentage of time spent in each activity, since employees had a very difficult time reporting this information accurately and it was of little use to our committee members; we instead asked employees to list their top 5 responsibilities. We also eliminated the request for the employee’s job description, since it overlapped considerably with the information provided in the tool and the CV.

The committee members reviewed, in pairs, each unit’s clinical research staff using the process depicted in Figure 3.

Figure 4. Breakdown of mapping by classification.

In the final mapping process, the committee reviewed REDCap tool information for 589 employees, of whom 32 were determined to require classifications that fell outside of the clinical research structure. Figure 4 depicts how the 557 remaining employees mapped into the various new titles. Of those who were mapped, 15.6% mapped into a higher-level position than their previous classification, 0.5% mapped into a lower position, and 83.9% moved into an equivalent position.

Figure 5. Timeline of Clinical Research Professionals Working Group activities.

Reflection

The work of the initiative thus far has focused on improving the job classifications for research professionals by using a competency-based framework; however, the initiative has always been intended to extend beyond job descriptions alone.

With the competencies well established, the committee will use them for several related initiatives, with information available for other institutions at https://medschool.duke.edu/research/clinical-and-translational-research/duke-office-clinical-research/about-clinical-research-and-navigating-research-duke/staffing-clinical-research:

  1. A tool to match responsibilities to title and derive a job description. A survey, based on the REDCap tool created for the mapping process, has been developed so hiring managers can select the competencies expected of their upcoming new hire. The results of the hiring manager’s responses are manually reviewed by committee members to derive a title, which is provided to the hiring manager and central HR recruiter, and a job description is generated. This ensures that competencies are used consistently when seeking new staff. The team has developed a pilot process, which has been used to select over 100 new positions, and is working to automate this process in the coming year.
  2. Assessment of competencies. As employees work to move through the tiers, their competency level in each domain will need to be assessed. The committee has derived objective assessments of key competencies that can be managed centrally or by trained managers.
  3. Performance evaluation tool. Now that competencies are well-established, managers have asked to use them in the context of performance appraisals. The CRPWG will be working with HR in the coming year to incorporate competencies into the formal performance evaluation process.

While those in other professions have been considering the use of competencies for career structure in a broad sense (Furtado et al., 2015), when the Clinical Research Professionals Working Group first convened, we were likely the first to apply these competencies to staff within an academic medical center. The initiative required a strong collaboration among a centralized research support office (Snyder et al., 2016), institutional leadership, and a variety of professionals in human resources, and it required a significant amount of time and effort.

This work follows that of others who have successfully used competency frameworks to improve their workforce (e.g., Glover & Frounfelker, 2013; Hoge, Tondora, & Marrelli, 2005). Duke anticipates that this competency-based framework will allow it to enhance the professionalism and competency of its workforce. A strong workforce will enable higher-quality research and, ultimately, better patient care and health outcomes.

References

Anderson, L. W., & Sosniak, L. A. (1994). Bloom’s taxonomy. Chicago: University of Chicago Press.

Furtado, T., Julé, A., Boggs, L., van Loggerenberg, F., Ewing, V., & Lang, T. (2015). Developing a global core competency framework for clinical research. Trials, 16(2), 1. doi:10.1136/bmjgh-2016-000260

Glover, C. M., & Frounfelker, R. L. (2013). Competencies of more and less successful employment specialists. Community Mental Health Journal, 49(3), 311-316. doi:10.1007/s10597-011-9471-0

Harris, P. A., Taylor, R., Thielke, R., Payne, J., Gonzalez, N., & Conde, J. G. (2009). Research electronic data capture (REDCap): A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42(2), 377-381. doi:10.1016/j.jbi.2008.08.010

Hoge, M. A., Tondora, J., & Marrelli, A. F. (2005). The fundamentals of workforce competency: Implications for behavioral health. Administration and Policy in Mental Health and Mental Health Services Research, 32(5-6), 509-531. doi:10.1007/s10488-005-3263-1

Johnson, J. A. (2013, December 23). Brief history of NIH funding: Fact sheet. Congressional Research Service Report, 43341.

Jones, C., Browning, S., Gladson, B., Hornung, C., Lubejko, B., Parmentier, J., . . . Sonstein, S. (2012). Defining competencies in clinical research: Issues in clinical research education and training. Research Practitioner, 13(2), 99-107.

Melnyk, B. M., Gallagher-Ford, L., Long, L. E., & Fineout-Overholt, E. (2014). The establishment of evidence-based practice competencies for practicing registered nurses and advanced practice nurses in real-world clinical settings: Proficiencies to improve healthcare quality, reliability, patient outcomes, and costs. Worldviews on Evidence-Based Nursing, 11(1), 5-15. doi:10.1111/wvn.12021

National Academies of Sciences, Engineering, & Medicine. (2016). Optimizing the nation’s investment in academic research: A new regulatory framework for the 21st century. Washington, DC: The National Academies Press.

O’Neil, H. F. (ed.). (2014). Workforce readiness: Competencies and assessment. New York: Psychology Press.

Snyder, D. C., Brouwer, R. N., Ennis, C. L., Spangler, L. L., Ainsworth, T. L., Budinger, S.,. . . Stacy, M. (2016). Retooling institutional support infrastructure for clinical research. Contemporary Clinical Trials, 48, 139-145. doi:10.1016/j.cct.2016.04.010

Sonstein, S. A., Seltzer, J., Li, R., Silva, H., Jones, C. T., & Daemen, E. (2014). Moving from compliance to competency: A harmonized core competency framework for the clinical research professional. Clinical Researcher, 28(3), 17-23. doi:10.14524/CR-14-00002R1.

Speicher, L. A., Fromell, G., Avery, S., Brassil, D., Carlson, L., Stevens, E., & Toms, M. (2012). The critical need for academic health centers to assess the training, support, and career development requirements of clinical research coordinators: Recommendations from the Clinical and Translational Science Award Research Coordinator Taskforce. Clinical and Translational Science, 5(6), 470-475. doi:10.1111/j.1752-8062.2012.00423.x

Stevens, E., & Daemen, E. (2015). The professionalization of Research Coordinators. Clinical Researcher, 29(6), 26-31.

Keywords

Competency, clinical research, workforce

