Volume LI, Number 1

Creation of a Structured Performance-Based Assessment Tool in a Clinical Research Center Setting

Authors

Marcus R. Johnson
Cooperative Studies Program Epidemiology Center-Durham, Durham Veterans Affairs Health Care System
Department of Public Health, Brody School of Medicine, East Carolina University
Gillings School of Global Public Health, The University of North Carolina at Chapel Hill

A. Jasmine Bullard
Cooperative Studies Program Epidemiology Center-Durham, Durham Veterans Affairs Health Care System

Grant Support
The project reported here was supported by the Department of Veterans Affairs, Cooperative Studies Program (CSP).

Abbreviations

CSP Cooperative Studies Program
ORD Office of Research and Development
VA Department of Veterans Affairs
VAMCs VA Medical Centers
SMART Specific, Measurable, Attainable, Realistic, Time-Bound
PEP Performance Evaluation Process
PRP Performance Review Period
PEG Performance Element Category
ELT Executive Leadership and Administration Team

Abstract

Employee performance is a critical factor in the success, or failure, of any organization. Therefore, it is paramount that the leadership and/or management team in an organization establishes and implements an approach that can effectively assess and evaluate the performance of its employees in an objective manner. Research administrators are often involved with the performance evaluation process at their respective institutions. However, there is a limited amount of publicly available information on the use of work performance and assessment methods in research settings. The primary aim of this pilot project was to establish a structured performance-based assessment tool that would allow for an objective and clearly articulated evaluation of staff performance at our clinical research center. The secondary aim was to determine if a structured performance-based assessment tool would improve staff satisfaction with the Center’s overall performance evaluation process (PEP). A baseline survey was conducted to examine employee perspectives of, and satisfaction with, the existing performance evaluation process. A follow-up survey was conducted after the mid-year performance review period and implementation of the new PEP, including goals templates and performance evaluation guidance documents. The baseline survey showed that staff had mixed reviews of the overall performance evaluation process (somewhat satisfied: 33%; very dissatisfied, dissatisfied, neutral, and satisfied: 16% each) and that all respondents thought the evaluation criteria could be improved (100%). The follow-up survey showed that staff reviews of the overall mid-year performance evaluation process had improved (satisfied: 63%; very satisfied: 12%; somewhat satisfied: 25%) and that 50% of respondents were satisfied with the ease of use and clarity of the templates used to record their progress toward achieving their goals. Staff shared additional suggestions for strengthening the templates and better aligning them with Center-specific roles and activities. Overall, the leadership/management team at our research Center was successful in creating a performance-based assessment approach that facilitated a more objective and clearly articulated evaluation of staff performance. There are numerous challenges to effectively evaluating staff performance in both research and non-research organizations. As a result, the strategies outlined here may be transferable to other types of work settings.

Keywords: Management; Performance; Clinical Research; VA; CSP.

Background

Employee performance is a critical factor in the success, or failure, of any organization, and productivity has been shown to be the single most important determinant of a country’s standard of living (Economic Policy Institute, 2000; Fauth et al., 2009; Nielsen & Randall, 2012). Therefore, it is paramount that the senior leadership and/or management team in an organization establishes and implements an approach that can effectively assess and evaluate the productivity and performance of its employees in an objective manner. Preferably, an organization’s employee performance assessment plan should involve its staff as key stakeholders during the process. Their participation should be encouraged by senior leadership, since doing so provides an opportunity for them to become more engaged in decisions related to determining their overall value to the organization. Research administrators are often involved in hiring, management, and the performance evaluation process at their respective institutions (Kaplan, 1959; Tauginiene, 2009). Furthermore, many research positions have varying levels of complexity due to a variety of considerations, e.g., navigating intricate study protocols, required knowledge of compliance and regulatory considerations, nuances between human subjects and basic science research, and varying levels of leadership and/or management responsibility (Merry et al., 2010; Mentz & Peterson, 2017; Antes et al., 2016; Baer et al., 2011a). These and other factors legitimize the need for a structured, objective performance evaluation tool that research administrators can use to adequately assess their staff’s performance. Employee engagement benefits organizations and has been shown to have a positive impact on employee health and wellness, productivity, and retention (Burton et al., 2017; Harter et al., 2010; Tullar et al., 2016). There is a significant amount of literature on work performance and assessment methods (Amerine et al., 2017; Byrne et al., 2016; Shanafelt & Swensen, 2017; Wu et al., 2016), but there is a limited amount of publicly available information on their use in research settings.

The Department of Veterans Affairs (VA) is the United States’ largest integrated healthcare system and provides comprehensive care to more than 8.9 million Veterans each year (U.S. Department of Veterans Affairs, 2017). The Cooperative Studies Program (CSP), a division of the VA Office of Research and Development (ORD), was established as a clinical research infrastructure to provide coordination and enable cooperation on multi-site clinical trials and epidemiological studies that fall within the purview of VA (U.S. Department of Veterans Affairs, 2018a). The Cooperative Studies Program Epidemiology Center – Durham (CSPEC-Durham) is one of several epidemiology centers established by CSP that serve as national resources for epidemiologic research and training in VA (U.S. Department of Veterans Affairs, 2014, 2018b). The Center comprises three functional areas (Core groups): the Project Management Core, the Computational Sciences Core, and the Executive Leadership and Administration Core (ELT). Its workforce consists of research investigators, project managers, statisticians, programmers, research assistants, data managers, medical residents/fellows, and graduate student trainees. The CSPEC-Durham’s current study portfolio consists of 17 active studies, and its primary areas of focus are cancer outcomes and Gulf War research.

The primary aim of this pilot project was to establish a structured performance-based assessment tool that would allow for an objective and clearly articulated evaluation of staff performance at our clinical research center. The secondary aim was to determine if a structured performance-based assessment tool would improve staff satisfaction with the Center’s overall performance evaluation process. The findings may inform individuals or groups in research administration and leadership roles seeking to improve their current staff performance evaluation process.

Methods

Identification of Areas for Improvement in Employee Performance Evaluation Process

Over the course of several months prior to the start of the VA Fiscal Year 2018 (FY18) performance review period (10/1/2017-9/30/2018), the Center’s Executive Leadership and Administration Core (ELT) met periodically to review and assess the Center’s performance evaluation process. This review was initially prompted by informal feedback from Center staff that they were not satisfied with the performance evaluation process (PEP) as it was performed at that time. As part of the Center’s effort to create a culture of continuous process improvement, the ELT worked to identify the weaknesses in, and potential areas of improvement for, the Center’s PEP. The review identified a major weakness in the Center’s PEP: its format led to a subjective determination of staff performance, rather than an evaluation based on clear, agreed-upon expectations between the ELT and each staff member regarding what their level of work performance should look like. For example, one of the Center’s positions had Performance Element Categories (PEGs) such as “Supports CSPEC and CSP Programs” and “Collaborates, Mentors, and Supports Center Mission.” Both criteria are vague and ambiguous, and neither contains enough substantive information for a management team to objectively assess an employee’s performance in that particular position.

Performance Evaluation Guide and Supplemental Document Development

Based on the findings of the Center’s PEP review, the ELT initiated a pilot project to develop a performance evaluation guide that could be employed to assess staff performance in a structured and more objective manner. Of note, this project was constructed as an operational quality improvement initiative and not a research project. Development of the performance evaluation guide (Appendix A) occurred over several months, and the guide was designed to assess the performance of Center staff based on their achievement of pre-defined performance goals. Center employees were asked by the ELT to deliberate on what they wanted to accomplish over the course of the performance review period (PRP) and to create goals that were specific, measurable, attainable, realistic, and time-bound (SMART) and aligned with those expectations (Bjerke & Renger, 2017; Bovend’Eerdt et al., 2009; Tichelaar et al., 2016). To facilitate their efforts, Center management provided staff with two supplemental templates (one used to capture their goals for the upcoming PRP and the other used to track their progress toward, and achievement of, those goals for review during the mid-year performance assessment) and examples of well-constructed SMART goals identified from various websites. Some staff members developed their goals after their initial review of the supplemental templates and goal examples, while others requested additional information and guidance on how best to develop their SMART goals. Additional clarification was provided to this subset of staff members either via email or in one-on-one, in-person meetings with a member of the ELT.

The performance evaluation guide was distributed to Center staff prior to a scheduled staff meeting, at which the ELT discussed the evaluation guide’s purpose and its use for the upcoming FY18 PRP. During the staff meeting, employees had the opportunity to ask preliminary questions about the evaluation guide and to give initial feedback on the tool. Staff members provided several suggested revisions to the tool after their review, and the ELT then incorporated this feedback into a subsequent version of the document prior to using it for the upcoming PRP. Staff were also informed that Center management would work with each employee individually to ensure that their goals were aligned with the needs of the Center and to come to a consensus on what the staff member’s goals would be for the upcoming PRP.

Implementation, Evaluation, and Feedback

An anonymous baseline survey (Figure 1) was conducted to examine employee perspectives of, and satisfaction with, the Center’s existing performance evaluation process. After the mid-year performance review and use of the guidance documents, an anonymous follow-up survey (Figure 2) was used to evaluate whether employee perspectives and satisfaction had changed from baseline. The surveys were administered through REDCap, an online data capture application for research studies and operations (Harris et al., 2009). Surveys were designed to be quick and convenient for staff to complete and included both multiple-choice and open-ended question/comment fields.
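The satisfaction figures reported in the Results section are simple frequency tabulations of these multiple-choice responses. As a rough illustration only (not the Center’s actual analysis; the file and field names below are hypothetical), a Likert-style item exported from REDCap as a CSV file could be tabulated as follows:

```python
# Illustrative sketch only: tabulates Likert-style responses from a
# hypothetical REDCap CSV export into percentages. The file name and
# field name are assumptions, not the Center's actual data dictionary.
import csv
from collections import Counter

LIKERT_OPTIONS = [
    "Very dissatisfied", "Dissatisfied", "Somewhat dissatisfied",
    "Neutral", "Somewhat satisfied", "Satisfied", "Very satisfied",
]

def summarize(csv_path, field):
    """Return the percentage of respondents selecting each Likert option."""
    with open(csv_path, newline="") as f:
        responses = [row[field] for row in csv.DictReader(f) if row.get(field)]
    if not responses:
        return {}
    counts = Counter(responses)
    return {
        option: round(100 * counts[option] / len(responses))
        for option in LIKERT_OPTIONS
        if counts[option]
    }

# Example (hypothetical field name):
# print(summarize("baseline_survey.csv", "overall_pep_satisfaction"))
```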

Figure 1. Baseline Survey Questions.


A total of six baseline surveys were completed and returned (n = 6/11), a 55% response rate, and a total of eight follow-up surveys were completed and returned (n = 8/8), a 100% response rate. This constituted an overall survey response rate of 74% (n = 14/19). New employees within their 90-day probation/trial period, supervisors/performance evaluators (ELT), volunteers, and contract employees did not participate in the surveys.

Figure 2. Follow-Up Survey Questions.



Results

Baseline Survey

The response rate for the baseline survey was 55% (n = 6/11). The results showed that staff had mixed reviews of the overall performance evaluation process (somewhat satisfied: 33%; very dissatisfied, dissatisfied, neutral, and satisfied: 16% each) (Table 1). Most were either very dissatisfied (33%) or somewhat dissatisfied (33%) with the information received about the evaluation process before their review. Staff also had mixed reviews about the evaluation criteria, or lack thereof, used to rate their performance (dissatisfied and somewhat satisfied: 33% each) and about performance feedback from their supervisor/evaluator (somewhat satisfied: 50%). Most respondents agreed with their last performance evaluation rating (83%), and all thought the evaluation criteria could be improved (100%). The use of SMART goals had been encouraged by the ELT prior to this pilot project but had not been mandated, and respondents described the evaluation process as opaque, with no concrete examples of Center-specific SMART goals. Staff also expressed frustration that no dedicated training had been provided on how to write SMART goals, nor a standard reference to learn about them. Furthermore, the survey results showed that staff wanted suggestions from the ELT on how to earn a higher performance rating, and they also revealed staff members’ desire for additional one-on-one assistance with crafting their SMART goals.

Table 1. Baseline and Follow-Up Survey Results


Follow-Up Survey

The response rate for the follow-up survey was 100% (n = 8/8). The results showed that staff reviews of the overall mid-year performance evaluation process had improved (satisfied: 63%; very satisfied: 12%; somewhat satisfied: 25%). Most staff were either satisfied (50%) or very satisfied (38%) with the information received about the evaluation process before their review. Staff still had mixed reviews about the new evaluation criteria, but none were dissatisfied (neutral, somewhat satisfied, satisfied, and very satisfied: 25% each). All respondents were either very satisfied (63%) or satisfied (37%) with performance feedback from their supervisor/evaluator. The survey also revealed that the two templates developed by the ELT could still benefit from additional revisions, but half (50%) of respondents were satisfied with their ease of use and clarity. Staff shared that the templates could be better aligned with Center-specific roles and activities.

Overall, the Center was successful in developing and implementing a structured performance evaluation guide that outlined what percentage of goals had to be achieved to earn one of three levels of achievement (Exceptional, Fully Successful, or Unacceptable) for each of an employee’s PEGs. For context, each employee has 4-5 PEGs in their performance appraisal plan that encompass a broader theme of service (e.g., Supports CSPEC and CSP Programs, Customer Service, Program Planning and Management) and are weighted as either “Critical” or “non-Critical.” Each Center employee created SMART goals relevant to each of the PEGs listed in their performance appraisal plan. These levels of achievement were then used to assign a final performance rating (Outstanding, Excellent, Fully Successful, Minimally Satisfactory, or Unacceptable) based on the collective levels of achievement across their PEGs (Table 2).

Table 2. Final Performance Rating Table
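For illustration only, the roll-up from goal achievement to a final rating described above can be sketched in code. The percentage thresholds and combination rules below are placeholders, not the Center’s actual criteria; the real cut-offs are defined in the performance evaluation guide (Appendix A) and Table 2.

```python
# Illustrative sketch only: thresholds and roll-up rules are placeholders,
# not the Center's actual criteria from Appendix A or Table 2.
from dataclasses import dataclass

@dataclass
class PEG:
    name: str
    critical: bool      # PEGs are weighted as "Critical" or "non-Critical"
    goals_met: int      # SMART goals achieved during the review period
    goals_total: int    # SMART goals agreed upon for this PEG

def peg_level(peg, exceptional_pct=90, successful_pct=70):
    """Map the percentage of goals achieved to one of three achievement levels."""
    pct = 100 * peg.goals_met / peg.goals_total
    if pct >= exceptional_pct:
        return "Exceptional"
    if pct >= successful_pct:
        return "Fully Successful"
    return "Unacceptable"

def final_rating(pegs):
    """Roll per-PEG achievement levels up into a final performance rating."""
    levels = [peg_level(p) for p in pegs]
    if "Unacceptable" in [peg_level(p) for p in pegs if p.critical]:
        return "Unacceptable"
    if "Unacceptable" in levels:
        return "Minimally Satisfactory"
    exceptional = levels.count("Exceptional")
    if exceptional == len(levels):
        return "Outstanding"
    if exceptional >= len(levels) / 2:
        return "Excellent"
    return "Fully Successful"

# Example with hypothetical PEGs:
# print(final_rating([PEG("Customer Service", True, 3, 3),
#                     PEG("Program Planning and Management", False, 2, 3)]))
```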


These performance ratings could then be clearly aligned with rating-based performance award recommendations. This approach yielded a more objective employee rating than the Center’s previous PEP format because the evaluation was based on clear, agreed-upon expectations between the ELT and each staff member regarding what their level of work performance should look like. To rate an employee’s performance, the ELT only had to measure the employee’s achievement (or non-achievement) of clearly outlined goals, as opposed to subjectively rating their performance on position responsibilities that may not have been clearly described to the employee and/or were not specific to the position due to the generalized and ambiguous nature of the previous performance evaluation criteria.

Discussion

Organizations are only as successful as their employees, and employees’ contributions to an institution’s missions, goals, and objectives, as measured through their performance and productivity, are critical for leadership and management teams to be able to assess (Mankins, 2017; Vali et al., 2015; Loeppke et al., 2009). Research administrators are often tasked with evaluating staff performance, in conjunction with other management duties (Kaplan, 1959; Tauginiene, 2009), and an approach that facilitates an objective, unbiased performance appraisal process would most likely be advantageous to them. Considering that many research positions have varying levels of complexity due to a variety of factors, e.g., navigating intricate study protocols, required knowledge of compliance and regulatory considerations, situations in which research staff work across multiple studies due to limited or delayed research funding, and varying levels of leadership and/or management responsibility (Purdom et al., 2017; Baer et al., 2011b; Larkin et al., 2012), research administrators would also likely benefit from a structured approach that alleviates some of the challenges associated with evaluating the performance of staff in complex roles. Our efforts demonstrated that the creation of a structured performance-based assessment tool that allowed for an objective and clearly articulated evaluation of staff performance was feasible in a clinical research center setting. The use of this strategy was also effective in improving staff satisfaction with the overall performance evaluation process in this setting.

Performance evaluation tools have been developed to assess the performance of research institutions (Rajan et al., 2012; Schapper et al., 2012), but the amount of publicly available literature on their use to assess individual research staff performance is limited (Ekeroma et al., 2016). Ekeroma, Shulruf, McCowan, Hill, and Kenealy (2016) described their efforts to “develop a research performance-appropriate tool for clinicians working in low-resource settings such as those in the Pacific Islands” (p. 2). Their work differed significantly from ours in that their performance tool was targeted specifically at assessing the research productivity of clinicians (physicians, midwives/nurses) in low-resource countries. Furthermore, their development process included “a modified Delphi technique that established a consensus among identified research experts for the most appropriate research indicators for the Pacific Islands” (Ekeroma et al., 2016, p. 2). Our performance-based assessment tool is not limited to a specific type of research position, nor is it intended for use in a specific type of research setting, e.g., clinical, biomedical, or epidemiologic. One of its primary strengths is that the tool is founded on pre-defined SMART goals that both the individual employee and Center management agreed on prior to the start of the performance evaluation process. Therefore, each staff member’s goals are inherently tailored to their specific role, which allows the approach to be seamlessly utilized across any type of position in a research setting. Additionally, since this work was conducted in a clinical research setting, the SMART goals that were created were generally research-specific, but the approach should be adaptable to other settings. Lastly, the stakeholders involved in the development of our tool were our Center’s ELT and staff members, as opposed to the panel of research experts that would be used in a Delphi method approach (Humphrey-Murto et al., 2017; Diamond et al., 2014).

We believe that the primary reason for the success of this pilot project, in terms of both the development of the performance-based assessment tool and the improvement of staff satisfaction with the Center’s overall performance evaluation process, is the involvement of Center staff in the tool’s development. A stakeholder can be defined as a person, group, or organization involved in or affected by a course of action, while stakeholder engagement refers to the process by which an organization involves people who may be affected by the decisions it makes or who can influence the implementation of those decisions (Lemke & Harris-Wai, 2015). Substantial evidence indicates that stakeholder involvement is essential for management effectiveness in clinical research, and feedback from stakeholders has critical value for research managers inasmuch as it alerts them to the social, environmental, and ethical implications of research activities (Pandi-Perumal et al., 2015). The Center’s staff served as both stakeholders and active participants during the development of the performance evaluation guide, as well as during the development of their respective SMART goals outlining what they wanted to accomplish over the course of the PRP. Furthermore, the ELT initially decided to review the Center’s performance evaluation process to identify its weaknesses and potential areas of improvement based on informal feedback from Center staff that they were not satisfied with the PEP as it had been performed previously. Therefore, our staff’s participation in this undertaking was critical to its initial success and will also be important for sustaining our efforts to continuously improve the Center’s performance evaluation process.

There are two significant limitations of our work that should be discussed further due to their potential impact on our findings and the possibility that they may present challenges to implementation in other settings. The first is related to the sample size of staff that participated in the survey component of our evaluation of this initiative. New employees within their 90-day probation/trial period, supervisors/performance evaluators (ELT), volunteers, and contract employees did not participate in the surveys, and these exclusion criteria decreased the number of employees eligible to take them. At the time the baseline survey was distributed, there were 23 total employees at the CSPEC-Durham, and after excluding the aforementioned employee types, only 11 employees were eligible to take the baseline survey. There were 21 total employees working for the Center at the time the follow-up survey was disseminated, and after applying the same exclusions, only 8 employees were eligible to take the follow-up survey. In other words, 48% of Center staff were eligible for the baseline survey and 38% for the follow-up survey, a decrease of 10 percentage points. The change in the composition of staff between the baseline and follow-up surveys was also notable. Although the number of ELT members serving as supervisors/performance evaluators remained the same between the two surveys (n = 2), there were slightly fewer new, contract, and volunteer employees when the baseline survey was administered (n = 10) than when the follow-up survey was administered (n = 11). The differences in the composition of Center staff between the two surveys may have had an impact on the survey results.

Furthermore, it is possible that excluding new employees undergoing a 90-day performance evaluation, supervisors/performance evaluators (ELT), volunteers, and contract employees yielded results that might have been different had these types of employees been eligible to participate. The rationale for excluding new employees from the baseline and follow-up surveys was that their performance would not be evaluated to the same extent as that of more established employees, given that they were within their 90-day probation/trial period, still learning the nuances of their position, and gaining familiarity with the Center, CSP, and the larger VA. Supervisors/performance evaluators (ELT) receive their performance evaluations from CSP leadership and were not included as survey participants since the individuals who perform their evaluations were not initially included as stakeholders in this initiative. Contract employees at our Center often receive salary funding from multiple departments, perform work across various areas, and have multiple supervisors; we decided not to include them in this pilot given the complexity of their roles and reporting structures. Lastly, volunteers receive a different type of performance evaluation than full-time, paid staff and were excluded given their unique roles as unpaid staff who contribute to the Center out of an interest in improving the overall health and well-being of our nation’s Veterans.

Secondly, the setting in which this pilot project was conducted may have influenced our results. From an organizational perspective, the CSPEC-Durham is housed in a clinical research program within a large, integrated healthcare system that is managed by the United States federal government. Therefore, neither Center staff nor members of the leadership team were unduly influenced by financial considerations in their decision-making. This point is noteworthy because of its potential impact on the transferability of this strategy to other settings, such as for-profit clinical research organizations or healthcare systems. In those settings, a supervisor or leadership team could place greater emphasis on employee goals in the context of their potential to increase revenue for the organization. For example, a supervisor might request that an employee either increase their number of targeted goals or take on specific goals that would generate additional revenue for the organization. The additional stress of having to develop and agree upon goals in the context of revenue or other financial implications could alter the collaborative process that should exist between the supervisor and employee as they work together to develop the employee’s goals. The likelihood of developing goals that are important to both the organization and the employee may decrease if the organization’s “bottom line” becomes a constant theme during this process, and as a result, a higher number of goals that are of no interest to the employee may be selected by the employer. Stakeholder buy-in and employee involvement in decision-making related to their positions and work areas have been demonstrated to be key factors in employee engagement and were critical aspects of our approach (Amerine et al., 2017; Hung et al., 2006). Having buy-in from both parties (employer and employee) is paramount not only to the success of this type of effort, but also to its potential to be sustained over time.

Lastly, the survey results may have differed had the timepoints used to distribute the baseline and follow-up surveys been different. The baseline survey was conducted in December 2017, and the follow-up survey was distributed to staff after their mid-year performance reviews were held (June 2018). It is possible that conducting the follow-up survey after completion of the fiscal year, i.e., post-September 2018 as opposed to mid-year, may have yielded different responses. Furthermore, the follow-up survey was not distributed until late June 2018, while the mid-year performance evaluations were held in April 2018. It is possible that the survey results were subject to recall bias due to the two-month gap between the mid-year evaluations and distribution of the follow-up survey.

Our project demonstrated several notable strengths despite the aforementioned limitations. To date, the amount of publicly available literature on the use of performance evaluation tools to assess individual research staff performance is limited. This approach was novel in that regard, and our work establishes that a structured, performance-based assessment tool can be developed through a collaborative process involving both the employer and employees in a clinical research center setting. It also provides evidence that this type of tool is conducive to increasing staff satisfaction with the overall performance evaluation process in this setting. The collaborative nature of the development process for the performance evaluation guide, and of the evaluation process itself, was also a notable strength. It is imperative that staff feel involved in the decision-making process for determining the metrics that will be used to assess their performance, and the increase in staff satisfaction with the overall evaluation process served as a reminder of the benefit of this strategy. The diversity of perspectives and experiences of all parties involved undoubtedly strengthened the performance evaluation guide and the overall evaluation process.

In conclusion, the utilization of a performance-based assessment tool was an effective approach to objectively assessing staff performance in a clinical research center setting. The tool was also successful in improving staff satisfaction with the overall performance evaluation process in this setting. Additional work is needed to determine the effectiveness of this strategy in other research institutions, and in other organizations in general. Future iterations of this approach at our Center will likely include the employee types that were excluded from this initial pilot, as their perspectives and experience would benefit the overall process. The implementation of a “balanced scorecard” approach within the performance-based assessment tool will also likely be explored, given its potential to strengthen the alignment between our organization’s strategy and mission statements and Center employees’ goals and the overall PEP (Kaplan & Norton, 1992; Inamdar et al., 2002). Assessing staff performance in a clinical research setting is complex due to a myriad of factors associated with the nature of research positions; as a result, the identification of strategies that can reduce the burden and challenges associated with the performance evaluation process is valuable to research administrators who are involved with this process at their respective organizations.

Disclaimer

The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs or the government of the United States.

Authors’ Note

This initiative was supported by the VA Cooperative Studies Program. We would like to thank the members of CSPEC-Durham (both current and past) that participated on this project. We would also like to thank Grant D. Huang, MPH, PhD and David Burnaska, MPA of the VA Cooperative Studies Program Central Office.

Marcus R. Johnson, MPH, MBA, MHA
CSP NODES National Program Manager
Durham VA Health Care System
508 Fulton Street (152)
Durham, NC, 27705, United States of America
Telephone: (919) 452-1464
marcus.johnson4@va.gov

A. Jasmine Bullard, MHA
Quality Research Specialist/Project Manager, CSP Epidemiology Center - Durham
Durham VA Health Care System

Correspondence concerning this article should be addressed to Marcus R. Johnson, MPH, MBA, MHA, CSP NODES National Program Manager, CSP Epidemiology Center-Durham, Durham VA Health Care System, 508 Fulton Street (152), Durham, NC, 27705, United States of America, marcus.johnson4@va.gov.

References

Amerine, L. B., Eckel, S. F., Granko, R. P., Hatfield, C., Savage, S., Forshay, E., Crisp, B., Waldron, K., Burgess, H. C., & Daniels, R. (2017). Improving employee engagement within a department of pharmacy. American Journal of Health-System Pharmacy, 74(17), 1316-1319. https://doi.org/10.2146/ajhp160740

Antes, A. L., Mart, A., & DuBois, J. M. (2016). Are leadership and management essential for good research? An interview study of genetic researchers. JERHRE: Journal of Empirical Research on Human Research Ethics, 11(5), 408-423. https://doi.org/10.1177/1556264616668775

Baer, A. R., Zon, R., Devine, S., & Lyss, A. P. (2011a). The clinical research team. Journal of Oncology Practice, 7(3), 188-92. https://doi.org/10.1200/JOP.2011.000276

Baer, A. R., Devine, S., Beardmore, C. D., & Catalano, R. (2011b). Clinical investigator responsibilities. Journal of Oncology Practice, 7(2), 124-8. https://doi.org/10.1200/JOP.2010.000216

Bjerke, M. B., & Renger, R. (2017). Being smart about writing SMART objectives. Evaluation and Program Planning, 61, 125-127. https://doi.org/10.1016/j.evalprogplan.2016.12.009

Bovend’Eerdt, T. J., Botell, R. E., & Wade, D. T. (2009). Writing SMART rehabilitation goals and achieving goal attainment scaling: A practical guide. Clinical Rehabilitation, 23(4), 352-361. https://doi.org/10.1177/0269215508101741

Burton, W. N., Chen, C. Y., Li, X., & Schultz, A. B. (2017). The association of employee engagement at work with health risks and presenteeism. Journal of Occupational and Environmental Medicine, 59(10), 988-992. https://doi.org/10.1097/JOM.0000000000001108

Byrne, Z. S., Peters, J. M., & Weston, J. W. (2016). The struggle with employee engagement: Measures and construct clarification using five samples. The Journal of Applied Psychology, 101(9), 1201-1227. https://doi.org/10.1037/apl0000124

Diamond, I. R., Grant, R. C., Feldman, B. M., Pencharz, P. B., Ling, S. C., Moore, A. M., & Wales, P. W. (2014). Defining consensus: A systematic review recommends methodologic criteria for reporting of Delphi studies. Journal of Clinical Epidemiology, 67(4), 401-9. https://doi.org/10.1016/j.jclinepi.2013.12.002

Economic Policy Institute. (2000). The link between productivity growth and living standards. Retrieved December 11, 2018, from http://www.epi.org/publication/webfeatures_snapshots_archive_03222000/

Ekeroma, A. J., Shulruf, B., McCowan, L., Hill, A. G., & Kenealy, T. (2016). Development and use of a research productivity assessment tool for clinicians in low-resource settings in the Pacific Islands: A Delphi study. Health Research Policy and Systems, 14, 9. https://doi.org/10.1186/s12961-016-0077-4

Fauth, R., Bevan, S., & Mills, P. (2009). Employee performance in the knowledge economy: Capturing keys to success. Psychology Research and Behavior Management, 2, 1-12. https://doi.org/10.2147/PRBM.S4216

Harris, P. A., Taylor, R., Thielke, R., Payne, J., Gonzalez, N., & Conde, J. G. (2009). Research electronic data capture (REDCap)–A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42(2), 377-81. https://doi.org/10.1016/j.jbi.2008.08.010

Harter, J. K., Schmidt, F. L., Asplund, J. W., Killham, E. A., & Agrawal, S. (2010). Causal impact of employee work perceptions on the bottom line of organizations. Perspectives on Psychological Science, 5(4), 378-389. https://doi.org/10.1177/1745691610374589

Humphrey-Murto, S., Varpio, L., Wood, T. J., Gonsalves, C., Ufholz, L. A., Mascioli, K., Wang, C., & Foth, T. (2017). The use of the Delphi and other consensus group methods in medical education research: A review. Academic Medicine, 92(10), 1491-1498. https://doi.org/10.1097/ACM.0000000000001812

Hung, D. Y., Rundall, T. G., Cohen, D. J., Tallia, A. F., & Crabtree, B. F. (2006). Productivity and turnover in PCPs: The role of staff participation in decision-making. Medical Care, 44(10), 946-951. https://doi.org/10.1097/01.mlr.0000220828.43049.32

Inamdar, N., Kaplan, R. S., & Bower, M. (2002). Applying the balanced scorecard in healthcare provider organizations. Journal of Healthcare Management, 47(3), 179-96. https://doi.org/10.1097/00115514-200205000-00008

Kaplan, N. (1959). The role of the Research Administrator. Administrative Science Quarterly, 4(1), 20-42. Retrieved November 25, 2018 from https://www.jstor.org/stable/2390647

Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard-measures that drive performance. Harvard Business Review, 70(1), 71-79.

Larkin, M. E., Lorenzi, G. M., Bayless, M., Cleary, P. A., Barnie, A., Golden, E., Hitt, S., & Genuth, S. (2012). Evolution of the study coordinator role: The 28-year experience in Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC). Clinical Trials, 9(4), 418-25. https://doi.org/10.1177/1740774512449532

Lemke, A. A., & Harris-Wai, J. N. (2015). Stakeholder engagement in policy development: Challenges and opportunities for human genomics. Genetics in Medicine, 17(12), 949-957. https://doi.org/10.1038/gim.2015.8

Loeppke, R., Taitel, M., Haufle, V., Parry, T., Kessler, R. C., & Jinnett, K. (2009). Health and productivity as a business strategy: A multiemployer study. Journal of Occupational and Environmental Medicine, 51(4), 411-428. https://doi.org/10.1097/JOM.0b013e3181a39180

Mankins, M. (2017). Great companies obsess over productivity, not efficiency. Retrieved December 11, 2018 from https://hbr.org/2017/03/great-companies-obsess-over-productivity-not-efficiency?autocomplete=true

Mentz, R. J., & Peterson, E. D. (2017). Site Principal Investigators in multicenter clinical trials: Appropriately recognizing key contributors. Circulation, 135(13), 1185-1187. https://doi.org/10.1161/CIRCULATIONAHA.116.026650

Merry, L., Gagnon, A. J., & Thomas, J. (2010). The research program coordinator: An example of effective management. Journal of Professional Nursing, 26(4), 223-231. https://doi.org/10.1016/j.profnurs.2009.12.002

Nielsen, K., & Randall, R. (2012). The importance of employee participation and perceptions of changes in procedures in a teamworking intervention. Work & Stress, 26(2), 91-111. https://doi.org/10.1080/02678373.2012.682721

Pandi-Perumal, S. R., Akhter, S., Zizi, F., Jean-Louis, G., Ramasubramanian, C., Freeman, R. E., & Narasimhan, M. (2015). Project stakeholder management in the clinical research environment: How to do it right. Frontiers in Psychiatry, 6, 71. https://doi.org/10.3389/fpsyt.2015.00071

Purdom, M. A., Petersen, S., & Haas, B. K. (2017). Results of an oncology clinical trial nurse role delineation study. Oncology Nursing Forum, 44(5), 589-595. https://doi.org/10.1188/17.ONF.589-595

Rajan, A., Sullivan, R., Bakker, S., & van Harten, W. H. (2012). Critical appraisal of translational research models for suitability in performance assessment of cancer centers. Oncologist, 17(12), e48-e57. https://doi.org/10.1634/theoncologist.2012-0216

Schapper, C. C., Dwyer, T., Tregear, G. W., Aitken, M., & Clay, M. A. (2012). Research performance evaluation: The experience of an independent medical research institute. Australian Health Review, 36(2), 218-223. https://doi.org/10.1071/AH11057

Shanafelt, T., & Swensen, S. (2017). Leadership and physician burnout: Using the annual review to reduce burnout and promote engagement. American Journal of Medical Quality, 32(5), 563-565. https://doi.org/10.1177/1062860617691605

Tauginiene, L. (2009). The roles of a Research Administrator at a university. Public Policy and Administration, 30, 45-56.

Tichelaar, J., Uil den, S. H., Antonini, N. F., van Agtmael, M. A., de Vries, T. P., & Richir, M. C. (2016). A ‘SMART’ way to determine treatment goals in pharmacotherapy education. British Journal of Clinical Pharmacology, 82(1), 280-284. https://doi.org/10.1111/bcp.12919

Tullar, J. M., Amick, B. C., III, Brewer, S., Diamond, P. M., Kelder, S. H., & Mikhail, O. (2016). Improve employee engagement to retain your workforce. Healthcare Management Review, 41(4), 316-324. https://doi.org/10.1097/HMR.0000000000000079

U.S. Department of Veterans Affairs. (2014, April 28). VHA Directive 1205: VHA Cooperative Studies Program (CSP). Veterans Health Administration.

U.S. Department of Veterans Affairs. (2017). Veterans Health Administration: About VHA. Retrieved May 3, 2017 from https://www.va.gov/health/aboutvha.asp

U.S. Department of Veterans Affairs. (2018a). VHA Cooperative Studies Program. VHA Directive 1205. Retrieved July 1, 2018, from https://www.va.gov/vhapublications/publications.cfm?pub=2

U.S. Department of Veterans Affairs. (2018b). Veterans Health Administration, Office of Research & Development. Cooperative Studies Program Epidemiology Center – Durham, NC. Retrieved November 7, 2018, from https://www.research.va.gov/programs/csp/centers.cfm

Vali, L., Tabatabaee, S. S., Kalhor, R., Amini, S., & Kiaei, M. Z. (2015). Analysis of productivity improvement act for clinical staff working in the health system: A qualitative study. Global Journal of Health Science, 8(2), 106-116. https://doi.org/10.5539/gjhs.v8n2p106

Wu, H., Sears, L. E., Coberley, C. R., & Pope, J. E. (2016). Overall well-being and supervisor ratings of employee performance, accountability, customer service, innovation, prosocial behavior, and self-development. Journal of Occupational and Environmental Medicine, 58(1), 35-40. https://doi.org/10.1097/JOM.0000000000000612

