Do We Measure Up? How Research Administration Offices Evaluate Their Services | Pulse

by Zoya Davis-Hamilton on Thursday, December 21, 2017

In this issue of the Catalyst, we revisit a topic of lasting value.

Republished from Pulse, July 2014.


Service evaluations help users communicate their needs and provide research administration offices with data about where they are meeting their goals and where there is room for improvement. We surveyed offices of research administration to take the current “pulse” of how they evaluate their services.


How do you evaluate whether the services of your research administration office are meeting your goals? Wanting to hear the answer to this question from colleagues, we posed five questions to research administrators on the RESADM-L and to subscribers of the weekly email briefs of the Report on Research Compliance. The questions were:

  1. How does your office of research administration evaluate its services?
  2. What specific indicators do you look at?
  3. How do you collect the data?
  4. How do you ensure that the results make sense?
  5. What do you do with the findings?

A total of 175 research administrators responded to our survey between June 18th and July 2nd of 2014. The results provided several interesting insights into how we as a community evaluate the services our offices provide. Nearly a quarter of respondents (22%) do not evaluate their services at all. Of those who do engage in evaluations, 79% use informal feedback from customers, 39% administer satisfaction surveys, and 28% utilize formal metrics; the total exceeds 100% because survey participants were able to report the use of more than one method. We further learned that the majority of offices solicit feedback from customers (65%) or examine data from existing reports (62%). It is less common (33%) for offices to make a special effort to collect data specifically for evaluation purposes.

Quantitative Metrics

In terms of metrics used, research administration offices most commonly employed the number of processed proposals/awards/accounts (75%) to evaluate their services. We call these indicators quantitative measures, as they track something quantifiable. Survey comments revealed that, in addition, institutions use a variety of specialized metrics, including:

  1. noticeable changes from year to year in the number of proposals and awards;
  2. the number of first time faculty proposals;
  3. turnaround time (for account set up, for contracts, etc.);
  4. on-time submissions of progress reports; and
  5. number of complaints, which arguably may belong with customer satisfaction.

Other quantitative metrics of note included the amount of sponsored funding per evaluation cycle (47%), the complexity of projects (19%), and other formal metrics (6%).

Customer Satisfaction

Almost as common as the use of quantitative metrics, many research administration offices (68%) use customer satisfaction as a key measure in evaluating their services. Upon reflection this is not surprising, as our offices exist to provide support to investigators (though many object to being described as “customers”). Somewhat unexpectedly, 35% of survey participants look at funding success rates when evaluating whether their offices are meeting their goals. We hope that respondents’ institutions use this information wisely, because research administrators often have limited ability to influence funding rates.

Making Sense of Results

Evaluating your office is a great idea, but it can be hard to interpret the results in a vacuum. The majority of respondents to our survey (89%) ensured that their evaluation results made sense by comparing them to their own previous results. Respondents also reported comparing results with peer institutions (20%), developing metrics in consultation with and with feedback from members of their organizations (12%), and using existing outside evaluation tools with proven validity (6%). Sadly, despite these methods, 15% of survey participants still reported having doubts about the validity of their evaluations.

Using Your Findings

Finally, we learned that for those who conducted some sort of evaluation of their research administration office, the findings had no practical implications 21% of the time. While this is a rather significant share, we are nevertheless heartened that 70% of findings triggered adjustments in systems or processes, with 30% affecting performance evaluations and 24% affecting the number of FTEs in the office. Other reported uses of evaluation findings included motivating staff, correcting “misimpressions,” developing training and outreach initiatives, securing seed funding for new projects, and, more generally, “evaluating the process” and “making improvements.”

Summary
Our survey revealed that most offices of research administration conduct some kind of evaluation of their services, most commonly a combination of informal feedback and existing data, which is then compared to previous internal evaluations. Satisfaction surveys and formal metrics, though less widespread, appear to be gaining prominence, with a significant share of the community employing these tools. While 15% of those who conduct evaluations have doubts about their validity, the vast majority have enough confidence in their findings to use them to adjust systems and processes.

Grateful recognition goes to Sarah Marina from the Office of Proposal Development at Tufts University for her input into the survey development, collaboration on the writing, and copyediting of this column. SRA will continue to check the “pulse” of research administrators on various topics throughout the year, and the results will be published in the SRA Catalyst. Look for the next column in October of 2014. If you have any topics or questions that you would like to see addressed in Pulse in the future, please let us know. Send feedback, ideas, questions, and inquiries to Zoya Davis-Hamilton at zoya.hamilton@tufts.edu.

Note: Because the survey was an open-access, anonymous poll, the results may be slightly skewed by multiple responses from within a single institution. Given the size of our sample, we believe this impact is likely slight; however, it is something to consider for anyone planning future research in this area. In addition, several respondents who filled out the survey just after it was released were unable to select multiple answers to the questions. We greatly appreciated the alert from the community about this flaw in the survey design, and we fixed it quickly enough to be reasonably confident that it did not sway the results.

Credits:
Pamela Miller, University of California, Berkeley – topic of the survey
Theresa Defino, Editor, Report on Research Compliance – help with survey distribution


Authored by:
Zoya Davis-Hamilton
Associate Vice Provost, Research Administration & Development
Tufts University

Sarah Marina
Assistant Director, Research Development
Tufts University

