
Enhancing Institutional Research Capacity: Results and Lessons from a Pilot Project Program

By SRAI JRA posted 09-15-2018 12:00 AM


Volume XLIX, Number 2

Authors

  • Leslie Bienen, Oregon Health and Science University-Portland State University Joint School of Public Health
  • Carlos J. Crespo, Oregon Health and Science University-Portland State University Joint School of Public Health
  • Thomas E. Keller, Portland State University School of Social Work; Center for Interdisciplinary Mentoring Research at Portland State University
  • Alexandra R. Weinstein, Oregon Health and Science University-Portland State University Joint School of Public Health; Portland State University School of Social Work

Background on BUILD Initiative

The National Institutes of Health (NIH) considers increasing diversity of the U.S. biomedical workforce to be of such paramount importance that, in 2013, NIH leadership allocated 240 million dollars from the Common Fund to establish the BUILD (Building University Infrastructure Leading to Diversity) Initiative (“Building University Infrastructure Leading to Diversity”, n.d.; “RFA-RM-13-016”, n.d.). The major aim of the BUILD initiative, which is ongoing as of October 2018, is to encourage development and evaluation of innovative approaches for effectively engaging and retaining undergraduate students from diverse backgrounds in biomedical research (Valantine & Collins, 2015). In 2013, universities could apply for the first round of BUILD funding if they met two criteria: 1) they received less than 7.5 million dollars of NIH funding annually, averaged over the previous three years; and 2) they enrolled a high percentage of low-income students. BUILD had the goal of identifying institutions that educate traditionally underrepresented student populations and substantially enhancing research and training capacity at those institutions (“RFA-RM-13-016”, n.d.). Ten BUILD applicants were ultimately awarded five-year grants, at varying levels of funding, with the possibility of renewal for another five years. (For a full description of BUILD and to learn about the ten successful round one BUILD sites, see https://www.nigms.nih.gov/training/dpc/pages/build.aspx.)

Portland State University (PSU), in Portland, OR, received a BUILD award and named our initiative BUILD EXITO (Enhancing Cross-Disciplinary Infrastructure and Training at Oregon). PSU is the primary institution. We have ten partner institutions: Oregon Health & Science University (OHSU), a research-intensive academic health center in Portland; four community colleges in Oregon and Washington that contribute a large number of transfer students to PSU; and six other partners, both two-year and four-year institutions, that span the Pacific Rim, with locations in Alaska, Hawaii, Guam, American Samoa, and the Northern Mariana Islands (see Table 1). Renewal applications were due in June 2018, and we applied to renew our funding.

Table 1: Partner Institutions and Research Learning Communities

BUILD EXITO’s model and the importance of the pilot project program

To increase capacity for externally funded research, BUILD sites could include funding for pilot-project awards in their budgets. The primary goal of EXITO’s pilot project program was to stimulate faculty research and to provide additional opportunities for research faculty to mentor students in intensive research placements. The aim of this paper is to share our insights and methods for implementing a rigorous pilot project program at a non-research-intensive university that serves a high number of underserved and diverse undergraduate students. As Richardson et al. (2017) explain, the EXITO model “is guided by socio-ecological theory [and offers] a three-year research training pathway for scholars in the biomedical, behavioral, social, clinical and bioengineering disciplines” (p. 133). BUILD EXITO’s model has at least seven fundamental components: student outreach and engagement; integrated curricular enhancements; intensive research experiences; multifaceted developmental mentoring; supportive community services; rigorous evaluation and quality improvement; and faculty and institutional development (Richardson et al., 2017). Research and faculty development occur through multiple mechanisms, including curriculum development conferences; research learning communities; the pilot project mechanism; ongoing mentor training and support; and development of campus infrastructure and services to support scholars from diverse backgrounds. (EXITO’s website has more details about our partners, evaluation, mentoring activities, and how scholars are recruited, trained, and retained in the program, at https://www.pdx.edu/exito/program-model.)

Our paper presents a detailed description of how we implemented a comprehensive pilot project program that featured a competitive proposal process. We incorporated strategies described and recommended by others who have implemented similar pilot project programs elsewhere, including developing a cohort of researchers who participated in pre-award workshops on writing, revising, and submitting grants (Banta et al., 2004; Godreau et al., 2015; Rust et al., 2006); offering individualized and group coaching and consultation (Brutkiewicz, 2012; Feldman & Acord, 2002; Huenneke, Stearns, Martinez, & Laurila, 2017; Rice et al., 2014); and furnishing PI applicants with two rounds of proposal reviews conducted by external experts, so that PIs had an opportunity to do an extended revision of their proposals based on expert feedback (Gordin, 2004). Post-award, we also supported grantees in writing papers to disseminate their results (this work is ongoing as of this writing) and in writing proposals to other funders, including through a workshop specifically on applying for mentored career awards. PIs’ dissemination efforts and proposal submissions are outcomes of the main EXITO intervention and, as such, are not discussed in detail here. The workshops were key program elements that supported faculty in achieving EXITO outcomes. Faculty applying for a pilot project grant, as well as students working with them, could participate in all or any of the workshops.

Pilot project PIs were also encouraged to join or establish an EXITO Research Learning Community (RLC). An RLC is a research team typically headed by an established investigator who already has external funding. EXITO Scholars are embedded within these mentor-rich communities in supported research placements that allow them to spend concentrated time working during the summer and academic year to learn about, and contribute to, real-life research projects. BUILD EXITO now supports 116 RLCs, mostly at PSU and OHSU, with an additional 14 at the University of Alaska Anchorage, 10 at the University of Hawaii, and 3 at the University of Guam. The pilot project program, therefore, creates additional placements in which students work on actual research projects and learn about conceptualizing a study, investigating a hypothesis, presenting a poster, writing a manuscript, attending a conference, and participating in other research activities. The pilot project program also provides a pathway for junior faculty to learn from a team of more senior researchers, through RLCs and in other fora, as we explain in detail in the Methods section (see also the full RFA in Appendix I).

We provide extensive detail about our methods here because, when we first devised the model for our pilot project program, which involved recruiting large numbers of external expert reviewers, we heard repeatedly that “it couldn’t be done.” We report here that it can be done and explain in detail our methods for finding a large number of qualified external reviewers and for supporting PIs through the proposal process. For others to replicate, improve upon, or build on EXITO’s pilot project program, they need to know precisely what we did and the rationale for our decisions. In the Discussion we explain elements of the pilot project program that differ significantly from other programs seeking to place students into research settings and why those elements are important. We also discuss lessons learned from our program, both from successes and mistakes, as well as challenges to success.

Methods

Overview of the pilot project RFA

BUILD EXITO has released and funded two pilot project Requests for Applications (RFA), RFA1 and RFA2. The RFAs for these two rounds of funding were nearly identical and, for the sake of brevity, wherever possible we treat them here as a single RFA. One notable difference between RFA1 and RFA2 involved the timing of deadlines. All deadlines for RFA2 were more spread out, so that PIs had more time to revise their proposals between the first and second submission, and we had more time to secure outside expert reviewers (see Figures 1 and 2).

Applications had to adhere to all requirements, and include all materials, pertaining to submission of an R03, with one major difference: we required inclusion of a mentoring plan. The mentoring plan was a scorable section, giving it weight similar to that of an innovation or environment section. The mentoring plan had to include information about how the PI would mentor an undergraduate student working on the project, and it could include a proposal for the PI him/herself to be mentored by one or more senior researchers. In RFA1 we did not specify a page limit for the mentoring plan; in RFA2, we limited the mentoring plan to a single page and also provided sample mentoring plans to give the plans more structural consistency across applications.

After release of the RFA, applicants submitted a one-page Letter of Intent (LOI), with a hard deadline. Because one of the goals of the pilot projects is to put PIs through a mentored dress rehearsal for applying to NIH, we treated all deadlines as firm. First submissions of proposals were due about six weeks after LOIs.

Figure 1: Timeline of RFA 1. Thinnest bars signify a one-day event

Figure 2: Timeline of RFA 2. Thinnest bars signify a one-day event

PI eligibility and outreach to faculty

Full-time faculty at any BUILD EXITO partner institution were eligible to apply for up to $50,000 of funding for pilot-project research that had to be completed within a one-year funding cycle. We recruited faculty at PSU and our partner institutions using five outreach methods. First, at PSU, we sent targeted e-mails to faculty through department chairs in all relevant biomedical departments, including social work, arts and sciences, public health, engineering, and others. Second, we used social media available at PSU. Third, we made announcements in an institution-wide “Funding Opportunities” list that is maintained and shared weekly at PSU. Fourth, we featured the announcement in the weekly faculty newsletter (we used similar funding newsletters at OHSU). Fifth, at our partner institutions for RFA1, we relied on EXITO newsletters and emails to EXITO faculty leads.

For RFA2, we made a concerted effort to reach faculty at our partner institutions and encourage them to apply. To increase applications from our partner institutions, some of which do not have established infrastructure for publicizing funding opportunities, we asked key contacts at those institutions how each institution communicates with faculty regularly (social media, faculty newsletter, internal bulletin board, etc.). We then reached out to the Communications Office at each college or university and made a detailed request to have the Pilot Project Funding opportunity included in the most appropriate communication platform. We created targeted communications based on the platform, unique institutional factors (such as location or school focus), and space available. Additionally, we asked our EXITO faculty and leadership at partner institutions to share targeted emails with colleagues. To encourage applications from partner institution faculty, we also hosted a brown bag information-sharing lunch during the EXITO summer curriculum conference and in advance of the RFA2 deadline, when we knew many EXITO faculty from our partner institutions would be at PSU. At the lunch, the pilot project coordinator gave a presentation on the RFA and we answered questions from participants about the application process, the mechanics of the program, and how to get paired with EXITO scholars. At the lunch we also gathered email addresses from potential PIs and later reached out to them individually to encourage them to apply and to share upcoming deadlines.

Support for pilot project applicants

Workshops

To support PIs in preparing their proposals, we held four workshops. Workshops were conducted in person at PSU and were simultaneously live-streamed to partner institutions. We also made videos of the workshops available to watch at any time through a link on our website, using the capture software Echo360. We scheduled workshops to account for time differences at partner institutions and posted a Frequently Asked Questions page on our website for PIs.

The first workshop was a technical workshop, led by the EXITO PI and the pilot project coordinator. This workshop described the purpose of the RFA and important details such as how applications should be submitted, various deadlines, the review process, requirements around inter-institutional collaboration, and eligibility criteria. The second workshop, led by the pilot project coordinator, was on proposal writing and grantsmanship, focusing on the NIH R03 mechanism. The third workshop, also led by the pilot project coordinator, was held after the PIs received their first set of reviews and focused on techniques for revising proposals and writing resubmission letters. The fourth workshop, led by one of the OHSU EXITO PIs, was on K Awards and other mentored career awards.

Other support

Throughout the application and review period the pilot project coordinator answered questions via email and met individually in person or by telephone with PIs who needed support in writing or had technical questions (e.g., about budgeting, biosketches, or research plan strategies). Applicants could also receive help as needed from research administrators at their home institutions, or via PSU if they were at an institution that did not have research support staff. In addition, one of the EXITO PIs attended an institutional Departmental Research Administrators meeting to explain the RFA and to reaffirm the need to adhere to NIH standard protocol even though this was an internal pilot project program. The presentation highlighted that one of the ultimate goals is to increase institutional capacity to submit grants to NIH at PSU and at our four-year partner institutions.

Compliance with NIH requirements and protocols

Because the funding came from NIH, PIs had to comply with all NIH training requirements and protocols. At PSU, this meant completing CITI training and working with the PSU IRB and IACUC offices as necessary. PIs at non-PSU institutions worked with their respective offices and completed whatever trainings their universities required. Pilot project staff and/or administrative staff supported PIs who needed help with the IACUC or IRB process, when necessary. If staff at other partner institutions were not available, PSU staff were available to help. NIH/NIGMS BUILD officers and program directors reviewed the pilot projects carefully to make sure all ethics, human subjects, and animal use requirements were properly met.

Evaluation

After each workshop, we asked participants to fill out a brief survey on whether the workshops were helpful and to suggest information that might be useful in future workshops. Attendance was not mandatory but we tracked attendance for each one. BUILD EXITO Scholars were welcome to attend any workshop with or without their mentors.

Structure of review process

Our primary objective was to follow closely the NIH R03 submission and scoring process. We asked applicants to format and compile all documentation in accordance with NIH requirements. Applicants submitted proposals to research administrators at their own institutions, or at PSU if their institution did not have such staff, who reviewed them for compliance with NIH standard submission requirements and informed applicants of necessary corrections.

All submitted proposals were reviewed and scored by at least three external reviewers. Four proposals in each RFA had four reviewers; this occurred because we always contacted more reviewers than we needed in order to guarantee at least three per proposal. Reviewers knew the PIs’ names and were asked to identify potential conflicts of interest with PIs, but PIs did not know the names of their specific reviewers. We shared the names of all of the reviewers on the EXITO website after the ranking process was completed so PIs could see the entire list and identify any reviewers with whom they may have had a conflict.

Distributing and recovering reviews

We emailed each proposal, along with a scoring worksheet (a modifiable Word document) and guidelines for review, to the appropriate reviewers. We entered all reviewers and PIs into color-coded spreadsheets so we could track who had returned reviews and who had not, a complex process for hundreds of reviewers. We sent timed reminders to reviewers throughout the course of the review process to urge them to return their feedback before the due date. The first reminder emails were general in tone. The next set of reminders was more targeted and addressed the reviewer by name. Finally, reviewers who had not returned their reviews by the deadline received up to three phone calls, using the phone number published on their departmental page, requesting that they submit their materials (see Discussion for lessons learned about recovering reviews). We uploaded all returned reviews to a password-protected Google Drive folder. We did not return reviews to PIs piecemeal, as we wanted all applicants to get their reviews back at the same time to prevent some from having a competitive advantage over others. Instead, we waited until every PI had three reviews returned, and then we sent all reviews to the applicants within a 24-hour period.
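
To make the escalation logic concrete, here is a minimal sketch in Python. It is an illustration only: we ran this process from color-coded spreadsheets, so the data structure and function names below are hypothetical, though the steps (a general email reminder, then personalized reminders, then up to three phone calls after the deadline) and the three-reviews-before-release rule come straight from the process just described.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: we tracked this in spreadsheets, not code.
@dataclass
class ReviewAssignment:
    reviewer: str
    proposal_id: str
    due: date
    returned: bool = False
    emails_sent: int = 0
    calls_made: int = 0

def next_action(a: ReviewAssignment, today: date) -> str:
    """Choose the next outreach step for one outstanding review."""
    if a.returned:
        return "none"
    if today <= a.due:
        # Before the due date: a general reminder first, then reminders
        # addressed to the reviewer by name.
        return "general_email" if a.emails_sent == 0 else "personalized_email"
    # Past the due date: up to three phone calls to the number published
    # on the reviewer's departmental page.
    return "phone_call" if a.calls_made < 3 else "flag_for_staff"

def ready_to_release(assignments: list[ReviewAssignment], proposal_id: str) -> bool:
    """Reviews went back to PIs only once a proposal had three in hand."""
    return sum(a.returned for a in assignments if a.proposal_id == proposal_id) >= 3
```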

Quality control and consistency of reviews

We checked each review within two days of receipt for adherence to NIH’s scoring guidelines and for quality. If reviewers made errors, such as providing lengthy comments but no numeric score for a particular section or forgetting that the mentoring plan needed a separate score, we sent the review back and requested the error be fixed. In RFA2, we changed our scoring template to add a dropdown menu for numeric scores.
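
The manual checks just described can be summarized as a small validation routine. The sketch below is hypothetical rather than our actual tooling (we worked from Word score sheets); it assumes the standard NIH scored criteria plus our added Mentoring section, and NIH's 1-to-9 scoring scale.

```python
# Hypothetical sketch of the manual quality checks described above.
# Section names assume standard NIH criteria plus our Mentoring section.
SCORED_SECTIONS = [
    "Significance", "Investigator", "Innovation", "Approach",
    "Environment", "Mentoring", "Overall Impact",
]

def validate_review(scores: dict) -> list[str]:
    """Return the problems found in one returned score sheet."""
    problems = []
    for section in SCORED_SECTIONS:
        value = scores.get(section)
        if value is None:
            # e.g., lengthy comments but no numeric score for a section,
            # or a forgotten Mentoring score.
            problems.append(f"{section}: missing numeric score")
        elif value not in range(1, 10):
            problems.append(f"{section}: score must be an integer from 1 to 9")
    return problems
```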

Mechanics of revising and resubmitting

After PIs received their first round of reviews, they had about six weeks to revise and resubmit (see Figures 1 and 2). We encouraged all PIs, regardless of scores received, to revise and resubmit. To provide support at this stage, the pilot project coordinator held a proposal revision workshop which was live-streamed to individuals at partner institutions so they could participate remotely via chat. PIs then revised and resubmitted to their research administrators, using the same process as for their initial submission. Proposals were sent back to the same reviewers for a second round of scoring, with an accompanying one-page resubmission letter from the applicants responding to reviewer comments and detailing how the proposal had been strengthened.

Opportunity for reviewers to change scores in RFA2

For RFA2 only, we gave reviewers the opportunity to change their scores based on the other reviewers’ written scores and comments, so that our review process would more closely mimic an NIH Study Section, where reviewers can alter their scores based on others’ feedback. Once we had received all three reviews for a given proposal, we sent them to each of its reviewers and communicated that if we did not hear from them within a week, we would assume they wished their scores to be left unaltered.

Reviewer identification, recruitment, and demographics

Our goal was to recruit at least three experts to review each proposal, and for the same reviewers to review the initial and revised proposals. Reviewers had to be from a non-EXITO institution and could not have published or worked with pilot project PIs previously.

Reviewer identification

We used five primary methods to identify reviewers.

  1. NIH RePORTER: We used NIH RePORTER, an online database of all research funded by NIH, to find names of people who had received NIH funding in fields closely related to the proposed projects, using keywords derived from applicants’ LOIs. We preferentially contacted researchers whose work had been funded recently and closely resembled the proposed research of the applicants. We selected reviewers who were more established in their fields over junior faculty or post-doctoral fellows, using credentials and information furnished through individuals’ academic and/or departmental webpages.
  2. PubMed: We searched PubMed for key terms related to each proposal to find experts in specific research areas (a minimal sketch of such a search appears after this list).
  3. NIH Study Sections: We asked PIs to list at least two NIH Study Sections appropriate for their pilot project were they to submit the same project to NIH for review. We then used Study Section rosters to identify reviewers. We also investigated whether Study Section participants had expertise that overlapped with the particular proposal in question. We primarily assessed expertise in the relevant area by looking at potential reviewers’ own funded research, and by searching PubMed for their publications.
  4. Project Scientist: Every BUILD site is assigned an NIH Project Scientist, separate from the Project Officer. Our Project Scientist assisted us in the implementation of EXITO activities and was instrumental in identifying a small number of NIH-funded external reviewers for projects where we encountered barriers securing reviewers.
  5. Colleague recommendations: We recruited a small portion of reviewers through recommendations from other reviewers. We used this method as a last resort and always verified through PubMed and/or NIH RePORTER that the suggested person was an expert in the topic area.
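
For readers who want to automate the search step in methods 1 and 2, the sketch below runs a topic query against PubMed through NCBI's public E-utilities API. The example query is hypothetical; in practice our keywords came from each applicant's LOI, and we always vetted candidates' credentials by hand before contacting them.

```python
import requests  # third-party HTTP library

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term: str, retmax: int = 20) -> list[str]:
    """Return PubMed IDs (PMIDs) of articles matching a topic query."""
    resp = requests.get(
        EUTILS,
        params={"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# Hypothetical example query; real keywords came from an applicant's LOI.
# pmids = pubmed_search("exercise intervention cardiovascular risk")
```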

Before we used any reviewer, we verified with him or her that the PI and the reviewer did not know each other.

Reviewer recruitment

We sent individual emails to every potential reviewer outlining the overall mission of EXITO and the pilot project mechanism and asking them to participate in EXITO as an external reviewer. This email also included the title of the project and its PI, the timeline of the requested review, and a description of the honorarium (one hundred dollars per review). We also asked about conflicts of interest. Since we were aiming for three reviewers per proposal, we initially contacted between four and six reviewers per proposal, depending on the topic, before we had any acceptances or refusals. Potential reviewers who declined our request would sometimes refer us to a colleague they thought better suited to the project; we subsequently contacted the suggested colleague if he or she was a good fit based on the same criteria we used to identify our initial reviewers. If an email received no answer, we sent the same email one more time, limiting any nonresponder to two emails for the initial recruitment. In no case did we send bulk emails.

In RFA2 only, we recruited from NIH Study Sections. We did not do this in RFA1, but it is likely that many reviewers from RFA1 had also participated in NIH Study Sections. For RFA2, we also re-recruited some reviewers who had done an outstanding job in RFA1, if we had RFA2 proposals in areas similar to those of proposals from RFA1.

Reviewer demographics (RFA2 only)

We did not have a scientific method of balancing the gender and ethnicity of reviewers, nor would it have been possible to do so, as simply securing three qualified reviewers per proposal was a significant challenge. However, when compiling our initial list of names, we made a concerted effort to include at least half female-associated names, and to solicit reviewers when indicators of potential racial/ethnic diversity were apparent.

Proposal ranking and selection for funding

Once proposals had received a final review and a final score, all proposals and reviews were read by and discussed with the NIH Project Scientist assigned to EXITO. BUILD awards are cooperative agreements with NIH that entail active involvement by NIH program officers and project scientists. The Project Scientist is different from the Project Officer and has a role similar to a Co-Investigator. We ranked all proposals numerically by averaging all reviewers’ Overall Impact scores for each proposal; we also tracked the average Approach and Mentoring scores. For RFA1, we used the Mentoring score to help differentiate proposals whose average Impact scores were identical or nearly identical. Mentoring scores were assigned by reviewers as a separate score, similar to an innovation or environment score, based on the PI’s plan to mentor an undergraduate embedded in the project, and potentially a plan for the PI him/herself to receive mentoring from a senior scientist in the field. For RFA2, nearly all the mentoring plans received very high scores and thus were not useful for differentiating projects, likely because we provided sample mentoring plans to applicants. Therefore, for RFA2 we used the Approach score, which offered more variability, a change with which our NIH Project Scientist agreed. When proposal scores were too closely clustered to differentiate on numeric scores alone, we took into consideration the faculty status of the PI (junior faculty received more weight), whether the PI was at a partner institution with less research capacity (such PIs received more weight), and the diversity of the funding portfolio. Our NIH Project Scientist provided input throughout the ranking process described above.
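
The numeric portion of this ranking can be stated precisely. The following sketch is a hypothetical illustration, not our actual tooling: it sorts proposals by mean Overall Impact score (lower is better on NIH's 1-to-9 scale) and breaks ties with the mean tie-breaker score, Mentoring in RFA1 and Approach in RFA2. The qualitative adjustments described above were applied afterward, by discussion.

```python
from statistics import mean

# Hypothetical sketch of the numeric ranking step. Each proposal ID maps
# to a list of per-reviewer score dicts; lower scores are better under
# NIH's 1 (best) to 9 (worst) scale.
def rank_proposals(reviews: dict[str, list[dict]], tiebreaker: str) -> list[str]:
    """Rank proposal IDs by mean Overall Impact, then by the tie-breaker."""
    def sort_key(pid: str) -> tuple[float, float]:
        scores = reviews[pid]
        return (
            mean(r["Overall Impact"] for r in scores),
            mean(r[tiebreaker] for r in scores),
        )
    return sorted(reviews, key=sort_key)

# RFA1 broke ties on Mentoring; RFA2 on Approach:
# ranking = rank_proposals(all_reviews, tiebreaker="Approach")
```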

After all the proposals were ranked, a six-person committee (the pilot project coordinator, EXITO supporting staff, key EXITO investigators with external grant review experience, and our assigned NIH Project Scientist) met to discuss the ranking criteria and to adjust proposal rankings if necessary. This committee then submitted the top fifteen proposals for discussion and final selection to the EXITO Steering Committee. The EXITO Steering Committee comprises Presidents from several EXITO institutions, Provosts, Vice Presidents for Research, Chief Diversity Officers, the NIH Project Officer, and two external community representatives. The EXITO Steering Committee and our NIH program officers had access to the secure Google Drive where the proposals, abstracts, and scores were stored. The Steering Committee approved and forwarded the top ten proposals from RFA1 and the top eleven from RFA2 to NIH for final approval. NIH BUILD staff made the final funding decision after a thorough review.

Post-award management

One of the EXITO project coordinators conducted the majority of post-award management, in coordination with other EXITO staff. For RFA1, we held quarterly meetings to check in with PIs. At the last meeting, the PIs presented their findings, made suggestions for improving the pilot project process, and discussed the types of support they might need after the funding period ended. EXITO Scholars were welcome to attend all meetings. We also asked PIs to submit quarterly reports that tracked progress on their scientific aims, career development (whether they had submitted other proposals for funding), project dissemination (whether they had submitted posters or papers and attended conferences), scheduled meetings with EXITO students, whether they were spending the money on schedule, and any barriers to completing their work. Appendix II is the quarterly report form. RFA2 PIs received their funds in March 2018, so their post-award management began in April 2018.

Results

Number of faculty researchers participating in pilot projects

Table 2 shows the number of applicants that went through the pilot project process for RFA1 and RFA2. Sixty-six faculty members applied across the two RFAs. Attrition rates were similar for RFA1 and RFA2. As noted above, we had a higher number of applicants from our partner institutions in RFA2: in RFA1 we had one applicant not from OHSU or PSU, whereas in RFA2 we had five. This difference is not reflected in the table because we combined data for the two RFAs. The higher number of applicants from partners resulted in two funded projects from non-Portland partners among the final ten funded projects for RFA2.

Table 2: Partner institution participation across two Pilot Project RFAs

Participation in workshops

Because Echo360 does not show how many people are logged on remotely, we may have incomplete records of how many people attended the workshops. However, via the chat mechanism of Echo360 we could roughly ascertain the number of people attending remotely. On average, 30-45 people attended each workshop, including both online and in-person attendees.

Attrition rate of PIs

Despite the different challenges posed by the longer and shorter timelines, attrition of PIs between the first and second proposal submission was the same in both RFAs: 6%, or two PIs from each RFA. In all but one of these cases, extenuating circumstances applied, such as a health issue or the timing of field work making it impossible to complete the second submission. In the remaining case, no reason was given.

Outcome of reviewer recruitment process

We had asked PIs to list three suggested reviewers in their LOIs in case we could not find three reviewers through the above methods. However, we did not use any of the PIs’ suggested reviewers, except in one case where we noticed after the fact that one of the reviewers we recruited was also listed by the PI as a suggested reviewer. Therefore, all of the reviewers were recruited using one or more of the methods outlined in the methods section. We disqualified approximately three reviewers for each RFA who knew the PI of the project we were asking them to review. For RFA1 we did not track acceptance rate of reviewers as we had a short timeline for finding reviewers. In addition, since in RFA1 we had a greater number of reviewers who reviewed more than one proposal, the recruitment percentages are not strictly comparable. We also relied more heavily on the program staff’s professional networks to facilitate recruitment in RFA1. While we did not track the sources from which we drew reviewers for RFA1, we collected this information for RFA2, presented in Figure 3.

For RFA1, we recruited 89 reviewers to review 33 applications. Although every proposal had three or more reviewers, several reviewers reviewed more than one proposal in RFA1. For RFA2, with more time to recruit reviewers, we had only two reviewers who reviewed more than one proposal.

For RFA2, we recruited 103 reviewers to review 33 applications. Overall, 32% of the individuals we contacted agreed to serve as reviewers. Nine of the reviewers for RFA2 had reviewed for us in RFA1. The stipend was very small, only one hundred dollars, and was likely not a major incentive to participate. Several reviewers turned down the stipend, either because they could not accept the money as employees of NIH, the Veterans Administration, or other federal departments, or because the paperwork was not worth the small fee.

Among those who agreed to review, retention from first submission to final submission was very high for both RFAs; we had to replace only one reviewer due to dropout.

Figure 3: Sources of External Reviewers for RFA2

Quality control of reviews

Consistency and adherence to NIH scoring metrics improved from RFA1 to RFA2, likely because of the addition of a drop-down menu for score entries. In RFA1, fifteen reviewers returned reviews with minor errors, compared to only three reviewers in RFA2. The rushed timeline for RFA1 may also have left reviewers less time to error-check their work. On approximately five occasions across both RFAs, we also asked reviewers to align their comments and numerical scores more closely, in accordance with NIH’s scoring guidelines, which they did.

Opportunity to change scores in RFA2

As explained previously, we gave RFA2 reviewers the opportunity to change their scores after seeing other reviewers’ scores. Only two reviewers out of 103 elected to adjust their scoring after seeing their colleagues’ feedback, and no one changed a section score by more than one point (e.g., from a three to a four on Innovation). This may have been because reviewers had only one week to notify us if they wished to change a score. However, many reviewers (at least ten) told us that they were pleased to see that other reviewers agreed with their own scores.

Reviewer demographics

We did not collect racial, ethnic, or gender data from reviewers in RFA1. For RFA2, after all reviews had been returned, we asked reviewers to self-report their race and/or ethnicity using NIH’s categories. Reviewers were mostly white, and Asian was the largest non-white category. We oversampled individuals with female-associated names in reviewer recruitment, trying to populate half our reviewer list with women, and we were relatively successful in this regard, with 39% of reviewers being women. Nevertheless, our reviewer demographics reflect the non-diverse demographics of biomedical researchers in general. This skew was likely exacerbated because we aimed for more accomplished researchers with strong records of funding to review the proposals (Ginther et al., 2011). Last, we do not necessarily have a completely accurate picture of our reviewer demographics, as 54% (56/103) of reviewers did not answer our email query about their race/ethnicity.

Number and diversity of students participating in pilot projects

As of this writing, students are still being matched with RFA2 projects. For RFA1, 17 students participated: 9 EXITO students, 7 non-EXITO undergraduates, and 1 graduate student. Although we collect survey data from EXITO students about their self-reported demographics (first-generation college goers, race/ethnicity, experience in foster care, disability, and other metrics that would qualify them as underrepresented in biomedical research), we did not collect these data from pilot project student participants specifically. However, the EXITO program evaluators report that, out of 285 EXITO students who returned surveys, only 26 students did not answer “yes” for at least one category from the above list (M. Honore, personal communication, May 29, 2018). If pilot project students are reflective of EXITO students generally, then ~90% of them would fall into the category of underrepresented in biomedical research fields. In addition, the 7 non-EXITO students were recruited from programs at PSU that focus on first-generation college goers; from the Louis Stokes Alliance for Minority Participation (LSAMP), a program at PSU dedicated to supporting the success of students underrepresented in STEM majors; or from programs run by PSU faculty dedicated to underrepresented groups, such as advancing women of color in science or supporting students from disadvantaged backgrounds. We do not have final outcome data yet, such as enrollment in graduate biomedical research programs, as the first full cohort of EXITO students graduated in June 2018.

Table 3: Gender and Self-reported Race/ethnicity of Reviewers, RFA2

Evaluation of program success

Defining success in conducting the pilot project program is complex and is different from defining success of the pilot project program outcomes, which are related to achieving the overall aims of BUILD EXITO. Therefore, we focus here on four elements of the pilot project program that we evaluated as best we could, given variations across RFA1 and RFA2. Some of these variations arose as we evaluated RFA1 and tried to improve the experience for PIs in RFA2; for example, we actively tried to increase the number of PIs from our partners, which necessitated changes to our outreach. The four questions that guided our evaluation were:

  1. Were all proposals matched with appropriate reviewers and did all PIs receive an initial and a final set of reviews from the same three reviewers?
  2. Were reviews constructive and relevant?
  3. Did PIs attend the workshops and did they find them helpful?
  4. Was feedback from our NIH program officers positive, and what did they want to see changed if they had criticisms?
We present evidence pertaining to each of these questions in Table 4.

Table 4: Benchmarks of Success and Assessment of Whether Benchmark was met for BUILD EXITO Pilot Project Program

Discussion

Although literature exists on engaging traditionally underrepresented students in research as a way to increase diversity in the biomedical workforce, less has been published about using such programs to enhance faculty research capacity at the educational institutions that serve those students. We describe our pilot project program here in detail because we designed and tested its feasibility as part of a large long-term project, BUILD EXITO, that will also collect significant amounts of data on its success in at least two dimensions. First, we will examine the program’s ability to enhance capacity for research by faculty at low-resource universities. Second, we will determine whether the program is successful as a vehicle for increasing the number of underserved students entering and staying in biomedical fields by exposing them to mentored research projects. We do not yet have enough data on the PIs or students who participated in the pilot project process to report on these outcomes, as the first full cohort of EXITO students graduated in June 2018. Therefore, we report here on the lessons, challenges, and successes of our program model in creating a framework to support these important goals of building research support at low-resource institutions with diverse faculty and students. This framework will ultimately both increase the diversity of faculty who engage in research and provide more opportunities for underrepresented minorities at those institutions to undergo mentored research experiences, potentially with faculty who are also from underrepresented groups (Fakayode et al., 2014).

The biomedical workforce is still overwhelmingly non-Hispanic white: the National Science Foundation and the NIH reported that, as of 2012, only 5% of PIs on funded projects were from underrepresented minorities (Coalition for Urban Serving Universities, Association of Public and Land-Grant Universities, Association of American Medical Colleges, 2016). Universities with high ethnic and racial diversity and high numbers of first-generation college attenders, such as PSU and our EXITO partners, can serve as important pools from which to draw students into research (Allen-Ramdial & Campbell, 2014; Auchincloss et al., 2014). Additionally, our pilot project program specifically sought to increase capacity at our partner institutions, which were originally selected because of their diverse student and faculty populations and their locations around the Pacific Rim. One of our goals for the second round of BUILD funding, if EXITO is refunded, is to engage a higher number of diverse faculty in the pilot project process and thus create a true cohort of faculty who can support one another.

The pilot project program built institutional capacity for faculty through the following ten mechanisms:

  1. Providing seed money for future research projects
  2. Providing experience in grant writing and grantsmanship through workshops
  3. Strengthening future research proposal submissions by supplying research faculty with two sets of NIH-like reviews and giving PIs support and methods for responding to them
  4. Providing opportunities for faculty who may not be experienced in proposal submission to put together an entire proposal, including budgets, biosketches, IRB and other supporting documents, and to generate these documents in NIH format
  5. Providing opportunities for faculty to work with research staff at their home institutions, so that when they submit future proposals they will already know the research support team and what to expect from working with them
  6. Providing an opportunity for faculty to produce first-authored publications and posters to support future funding proposals
  7. Helping faculty find collaborators, at their home or other institutions, who work in their fields
  8. Teaching faculty about other funding mechanisms, such as K awards
  9. Giving faculty experience conducting research, and adjusting to unplanned data-gathering hitches, on a one-year timeline
  10. Enabling faculty to gather pilot data to support future, larger research efforts

Through the above mechanisms, the pilot project program helped build institutional capacity for externally funded research at PSU and EXITO partner institutions and, in turn, created more research training opportunities for undergraduate scholars. Although the pilot project program provided a mechanism for students to be embedded in research projects, it was not the primary means of placing students in research opportunities through BUILD EXITO. Our primary, and more cost-effective, mechanism for providing research placements is through RLCs, and pilot projects can help to foster the growth of these communities (see the BUILD EXITO model website).

The pilot project process, by closely mirroring an NIH R03 mechanism, helped demystify the NIH and other grant-writing processes (Porter, 2004), and provided a finished product that has gone through a rigorous review, thereby instilling confidence and allowing for practice of actionable skills and behaviors related specifically to grant submission (Rust et al., 2006). In addition, as Godreau and colleagues (2015) pointed out, connecting faculty with research administration for support with grant preparation, and for overcoming IRB and IACUC hurdles, lowers barriers for faculty in submitting future proposals.

Finally, a series of four workshops supported faculty in achieving or working toward the above goals. The pilot project program workshops provided detailed instructions for junior faculty on proposal writing, grantsmanship, methods to respond to reviewer comments, and applying for mentored career awards. Faculty applying for a pilot project grant, as well as students working with them, could participate in all or any of the workshops.

Successes versus outcomes and how we defined “success”

We specifically did not include in this paper evaluations that, for example, compare successful applications to external funders between junior faculty who went through the program and those who did not. This metric is more correctly an outcome of the entire EXITO intervention, and here we wanted to report on the methods and results of the pilot project program per se, not of EXITO as a larger intervention. We will conduct these analyses when RFA2 PIs have had time to gather their pilot data and apply for other funding, and as an explicit outcome of BUILD EXITO’s overall model. Our preliminary data on outcomes such as funding success point to a high application success rate for PIs funded under RFA1 (around 31% one year after completion of RFA1; unpublished data). However, data collection is ongoing and we are working on establishing analytical methods for these complex data.

Challenges and lessons learned

We learned a great deal from RFA1, and again from RFA2, and we summarize some important lessons below.

Challenges of reviewer recruitment

The biggest challenge of both RFA1 and RFA2 was recruiting all the reviewers we needed and retrieving both sets of reviews from them within specific timelines. Because we did not want some PIs to get their reviews before others, which would have given some PIs longer to work on their proposals, we had to send out and recover all the reviews within a very narrow window of time. This task was much harder for RFA1 because we had less time and had not yet developed a comprehensive system for recruiting reviewers, such as using NIH Study Sections. For RFA2, by contrast, we were able to find reviewers, even for very technical proposals, by using Study Section rosters. This technique enabled us to find reviewers whose expertise was an excellent fit for the proposal in question more easily and quickly. During RFA1, we sometimes emailed fifteen or twenty reviewers before finding three who were a good fit for a proposal, particularly for highly technical proposals. For RFA2, having refined our recruitment methods, we typically emailed between six and eight people to find three reviewers.

Lessons learned:

  1. Having multiple methods of recruiting reviewers was key.
  2. Recruiting reviewers from NIH study sections was a particularly efficient method of finding highly qualified reviewers who were well matched with topics, particularly for highly technical proposals which can be challenging to match with reviewers.

Challenges of timing

Timing the RFA events was challenging, and timing potentially affects the quality of the experience for reviewers, for PIs, and for the coordinator and project staff charged with running the program. During RFA1, the entire schedule was compressed because of the timing of the BUILD funds release, leaving the pilot project program team with little time to recruit reviewers (see Figure 1). This put stress on the program staff, who had to find a substantial number of reviewers (more than eighty) in only a few weeks, and on reviewers, who had to return their reviews very quickly; the short timeline in turn made reviewer recruitment more difficult because it was off-putting to potential reviewers. In response to the compressed timing of RFA1, we constructed a longer timeline for RFA2. Although RFA2 operated more smoothly overall than RFA1, the longer timeline led to the unintended consequence of difficulty recovering the reviews. The timeline extended into the summer, when academics are often on vacation, conducting field work, or traveling to conferences, which made getting the reviews back challenging because reviewers were away from their email. A happy medium between the short timeline of RFA1 (three months) and the longer timeline of RFA2 (eight months) would be ideal. In addition, with so much time to complete their second review in RFA2, several reviewers forgot about it completely, which created hurdles to communicating with reviewers and recovering the reviews in a timely manner.

Lesson learned:

A six-month timeline for the entire process would be ideal, as it would eliminate disadvantages of both the too-short and the too-long timeline.

Challenges around reviewer scoring

We quality-checked every review that came in for adherence to NIH’s scoring standards, to make sure that every section had a score, and to check that the numerical scores and comments matched. We returned more reviews in RFA1 to be corrected, likely because of the short timeline and the lack of a drop-down menu for numeric scores. The most common error was forgetting to score the mentoring section. For two reviews across both RFAs, we asked the reviewer to rescore a particular section because the comments indicated more minor concerns than the numerical score reflected.

We also found it challenging at times to get reviewers to understand the mentoring component of the mechanism, particularly for RFA1. This confusion was aggravated by the fact that the investigators themselves did not necessarily understand how to write a mentoring plan in RFA1. Some reviewers expected to see a mentoring plan in place for the applicant in addition to the EXITO scholar, whereas other reviewers scored such mentoring plans harshly, and said that the plan for mentorship of the faculty member was unnecessary. By the second round of submission in RFA1 this problem had largely been resolved, and for RFA2 it was not an issue as we had clarified the instructions for the mentoring plan in the RFA and provided the PIs with sample mentoring plans. Another struggle was to get reviewers not to score the investigators poorly for lacking publications and research experience, as the goal of the mechanism was to help PIs gain this experience. When this was reflected in reviews, the pilot project coordinator coached PIs in responding to the reviewers in their resubmission letters, and this became a non-issue for the final set of reviews.

Lessons learned:

  1. A longer total timeline reduced errors by reviewers.
  2. A sample mentoring plan given to PIs reduced confusion and inconsistency about what a mentoring plan was, and thus reduced reviewer confusion as well.
  3. Sending materials to reviewers about how to score the proposals did not eliminate common mistakes by reviewers, and reviewers had to be coached individually about unusual elements of our RFA such as the mentoring plan.
  4. A drop-down menu to select a score from 1 to 9 seemed to greatly reduce the number of reviewer errors from inadvertently leaving a section unscored.

Challenges of the ranking process

We found it challenging to rank projects from multiple disciplines against one another. In standard NIH ranking, projects are judged against other projects in the same Study Section within one institute. Here, all projects from various disciplines were ranked against one another. While we cannot make a generalization that applies to every project, we noticed that PIs from the bench sciences tended to have more training in proposal writing and to produce more traditionally organized, hypothesis-driven proposals, which in turn received more favorable scores. This may have placed social science proposals at a disadvantage when ranked against traditional biomedical proposals. Nonetheless, more RFA1 awards went to applicants in social work than to any other academic unit, and the first pilot project that became an independent NIH-funded grant was from an investigator in the School of Social Work at PSU. In addition, the Steering Committee recognized the need for a balanced portfolio of social and bench science grants when establishing the final recommendation. This balance is important because some studies of student persistence in STEM fields that have examined the importance of research experiences have concluded that opportunities to conduct and participate in social science research may be particularly relevant for retaining underrepresented minorities in research (Martin, Marcus, Curtis, Eichenbaum, & Drucker, 2016).

Lessons learned: 

  1. PIs from social science disciplines may need more support in proposal preparation and writing.
  2. It is important to weigh the overall portfolio and account for diversity of projects when ranking proposals, rather than relying on numeric scores alone, in order to provide a diverse set of projects for students to engage in.

Challenges around engaging PIs from partner institutions

We increased our engagement efforts with partner PIs because in RFA1 the vast majority of applications were from PSU and OHSU. With few applications from four-year partner-institution faculty (two-year faculty were not eligible to apply), no projects from our partner institutions in RFA1 received funding. During RFA2, we changed our methods for communicating about the pilot projects by engaging more staff on site at our partner institutions and figuring out what faculty newsletters were available at our partner institutions. Importantly, we also held a face-to-face lunch session with many junior faculty, who subsequently completed applications, during a time when we knew many of our partner faculty would be on site at PSU. In this way we were able to encourage PIs who might lack confidence to apply and give them a chance to sign up with their individual emails to receive information about upcoming deadlines. The pilot project coordinator then reached out to these PIs individually, rather than relying solely on faculty newsletters or campus-wide outlets. We also managed to retain PIs from our partners at high rates, as we did for all PIs. We attribute this high retention rate to the fact that all PIs were encouraged to revise and resubmit a final proposal, regardless of the strength of their score on the first round. We emphasized that the process was as important as the final funding, and that initially low scoring proposals could still be funded as only the final score counted in proposal ranking. Several proposals that received low scores on the first round of scoring received high scores for the final review and at least one such proposal was funded in each RFA.

Lessons learned:

  1. Using information outlets specific to our partner campuses seemed to increase applications from partner PIs.
  2. Holding an in-person information session to gather individual emails, as well as reassure PIs who might not feel confident in their ability to write a proposal, likely increased the pool of PIs from our partners considerably.
  3. More partner PI applications greatly increased likelihood of funded PIs at partner institutions.
  4. Individual outreach to PIs who scored low on the first submission likely increased retention of PIs, as did the process of only counting the final score in the ranking system.
  5. Future pilot project RFAs will likely limit PI eligibility to PSU and our four-year partners, excluding OHSU, in order to maximize funds for universities and faculty that can benefit more from the resources.

Future iterations of the pilot project program

As we consider what we will do if BUILD EXITO is renewed for five more years after the initial funding period ends in 2019, we must consider how best to assess and maintain the effects of our pilot project program on participating faculty. We are tracking proposal submissions, success rates of submissions, and numbers of publications and posters by PIs who were awarded funding related to their pilot projects. We will also include data from faculty who participated in the pilot project program but were not funded. We must figure out how to compare these two groups, however, as they are not directly comparable: unfunded participants were not able to gather data from a pilot project. Nevertheless, as recipients of most of the intervention, via receiving reviews, attending workshops, and gaining support to interact with university research administrators, these PIs should also have bolstered their capacity to engage in research at their institutions. This is particularly true for unfunded PIs who went through both RFA1 and RFA2, of whom there were at least four. One PI who was unfunded in RFA1 was subsequently funded in RFA2. We have begun collecting data on successful submissions from funded RFA1 PIs and are very pleased with our initial results. We plan to publish these analyses in a separate paper when we have collected data from both RFAs.

Conclusion

Our methods and lessons can help disseminate a model for enhancing institutional research capacity through a pilot project program. Our participants were predominantly junior faculty from a wide variety of biomedical, behavioral, social, clinical, and bioengineering disciplines. This diversity in disciplines created challenges for the program, particularly for recruiting reviewers. Diversity in PIs’ disciplines also creates some ranking challenges when weighing scores, but these challenges can be addressed by deliberately seeking to create a diverse portfolio of funded projects rather than necessarily funding only the top ten numerically scored projects. We sought to overcome these challenges with our targeted and multi-sourced recruitment methods; with individualized support for program participants; and through workshops that addressed both basic and sophisticated challenges of proposal writing. Our experience demonstrates that it is possible to use and adhere to a rigorous process to recruit external reviewers and to support faculty in the submission process. Universities planning to invest funds to support new faculty in research can benefit from our experiences and from the strategies we used to overcome hurdles we faced in this process. In future iterations of the pilot project program, we will aim to recruit a more diverse pool of reviewers; continue to enhance and expand our efforts to recruit PIs from our partner institutions; and continue to find ways to permanently institutionalize the support for junior faculty that the pilot program provided. We also will investigate the possibility of actively recruiting a more diverse pool of faculty researchers and providing support for them to engage in the process as a true cohort, rather than as individual researchers scattered across multiple institutions, both so they can support each other and so they can better support diverse students (Salto et al., 2014; Villarejo et al., 2008).

As we implement the second stage of RFA2, we are refining our methods to keep unfunded PIs engaged in the pilot project program and encouraging them to build on the resources the program has already invested in them. Many of the proposed projects were meritorious beyond the ten or eleven we could fund in each cycle. Armed with two sets of three reviews of their projects and with support for writing proposals, many of these PIs should be able to improve their success at gaining funding even though they were not funded through EXITO. If our BUILD EXITO funding is renewed, we will apply the lessons learned from our first two RFAs to RFA3 so that we can further refine and improve our pilot project program and permanently institutionalize it at PSU and at our partner institutions.

Authors’ Note

Acknowledgments: The authors thank our project scientist and program officers at NIH for their helpful input and guidance through the pilot project program's development and implementation. We also thank Portland State University's dedicated sponsored projects team, particularly Kathleen Choi, Alex Leeding, and Lauren Russell, for their patience and fortitude in working with us. Cynthia Morris, BUILD EXITO lead at OHSU, is, as always, a wise counselor. Other people who worked hard to make the program successful include, but are not limited to, Nelson Aguila, Drake Mitchell, Tom Becker, Caitlyn Beals, Jennifer Lindwall, Matt Honore, De'Sha Wolf, Adrienne Zell, and Cassandra Ramirez. Many thanks also to our RFA1 graduate assistant, Erin Coppola.

Disclaimers: This work was supported by grants from the National Institutes of Health: UL1GM118964; RL5GM118963; TL4GM118965. The authors have no conflicts of interest to disclose.

Leslie Bienen, DVM, MFA
Pilot Project Coordinator
Oregon Health and Science University-Portland State University Joint School of Public Health
Portland, OR 97239
Tel. (509) 951-1118
Email: lbienen@pdx.edu

Carlos J. Crespo, DrPH
Oregon Health and Science University-Portland State University Joint School of Public Health
Portland, OR 97239

Thomas E. Keller, PhD
Portland State University School of Social Work
Center for Interdisciplinary Mentoring Research
Portland State University
Portland, OR 97239

Alexandra R. Weinstein
Oregon Health and Science University-Portland State University Joint School of Public Health
Portland State University School of Social Work
Portland, OR 97239

Correspondence concerning this article should be addressed to Dr. Leslie Bienen, Pilot Project Coordinator, Oregon Health and Science University-Portland State University Joint School of Public Health, Portland, Oregon, 97239, United States of America, lbienen@pdx.edu.

References

Allen-Ramdial, S.-A. A., & Campbell, A. G. (2014). Reimagining the pipeline: Advancing STEM diversity, persistence, and success. BioScience, 64(7), 612–618. https://doi.org/10.1093/biosci/biu076

Auchincloss, L. C., Laursen, S. L., Branchaw, J. L., Eagan, K., Graham, M., Hanauer, D. I., & Dolan, E. L. (2014). Assessment of course-based undergraduate research experiences: A meeting report. CBE-Life Sciences Education, 13(1), 29–40. https://doi.org/10.1187/cbe.14-01-0004

Banta, M., Brewer, R., Hansen, A., Ku, H.-Y., Pacheco, K., Powers, R., … Tucker, G. (2004). An innovative program for cultivating grant writing skills in new faculty members. Journal of Research Administration, 35(1), 17.

Brutkiewicz, R. R. (2012). Research faculty development: An historical perspective and ideas for a successful future. Advances in Health Sciences Education, 17(2), 259–268. https://doi.org/10.1007/s10459-010-9261-4

Coalition for Urban Serving Universities, Association of Public and Land-Grant Universities, Association of American Medical Colleges. (2016). Increasing diversity in the biomedical research workforce: Actions for improving evidence [White paper]. Retrieved May 2, 2018, from http://www.aplu.org/library/increasing-diversity-in-the-biomedical-research-workforce

Fakayode, S. O., Yakubu, M., Adeyeye, O. M., Pollard, D. A., & Mohammed, A. K. (2014). Promoting undergraduate STEM education at a historically black college and university through research experience. Journal of Chemical Education, 91(5), 662–665. https://doi.org/10.1021/ed400482b

Feldman, H. R., & Acord, L. (2002). Strategies for building faculty research programs in institutions that are not research intensive. Journal of Professional Nursing, 18(3), 140–146. https://doi.org/10.1053/jpnu.2002.124486

Ginther, D. K., Schaffer, W. T., Schnell, J., Masimore, B., Liu, F., Haak, L. L., & Kington, R. (2011). Race, ethnicity, and NIH research awards. Science, 333(6045), 1015–1019. https://doi.org/10.1126/science.1196783

Godreau, I., Gavillán-Suárez, J., Franco-Ortiz, M., Calderón-Squiabro, J. M., Marti, V., & Gaspar-Concepción, J. (2015). Growing faculty research for students’ success: Best practices of a research institute at a minority-serving undergraduate institution. Journal of Research Administration, 46(2), 55–78.

Gordin, B. (2004). A development program for junior faculty submitting National Institutes of Health grant applications. Journal of Research Administration, 35(1), 12.

Huenneke, L., Stearns, D., Martinez, J., & Laurila, K. (2017). Key strategies for building research capacity of university faculty members. Innovative Higher Education, 42(5), 421–435. https://doi.org/10.1007/s10755-017-9394-y

Martin, Y. C., Marcus, A., Curtis, R., Eichenbaum, J., & Drucker, E. (2016). Strength in numbers: A model for undergraduate research training and education in the social and behavioral sciences. Journal of Criminal Justice Education, 27(4), 567–583. https://doi.org/10.1080/10511253.2016.1150499

National Institute of General Medical Sciences. (2016). Building University Infrastructure Leading to Diversity (BUILD) Initiative. Retrieved November 28, 2017, from https://www.nigms.nih.gov/training/dpc/pages/build.aspx

Porter, R. (2004). Off the launching pad: Stimulating proposal development by junior faculty. Journal of Research Administration, 35(1), 6.

RFA-RM-13-016: NIH Building Infrastructure Leading to Diversity (BUILD) Initiative (U54). (n.d.). Retrieved November 28, 2017, from https://grants.nih.gov/grants/guide/rfa-files/RFA-RM-13-016.html 

Rice, T., Liu, L., Jeffe, D., Jobe, J., Boutjdir, M., Pace, B., & Rao, D. (2014). Enhancing the careers of under-represented junior faculty in biomedical research: The Summer Institute Program to Increase Diversity (SIPID). Journal of the National Medical Association, 106(1), 50–57. https://doi.org/10.1016/S0027-9684(15)30070-5

Richardson, D. M., Keller, T. E., Wolf, D., Zell, A., Morris, C., & Crespo, C. (2017). BUILD EXITO: A multi-level intervention to support diversity in health focused research. BMC Proceedings, 11(S12), 19, 133–147. https://doi.org/10.1186/s12919-017-0080-y

Rust, G., Taylor, V., Herbert-Carter, J., Smith, Q. T., Earles, K., & Kondwani, K. (2006). The Morehouse Faculty Development Program: Evolving methods and 10-year outcomes. Family Medicine, 38(1), 43. Retrieved from http://www.stfm.org/fmhub/fm2006/January/George43.pdf

Salto, L. M., Riggs, M. L., Leon, D. D. D., Casiano, C. A., & Leon, M. D. (2014). Underrepresented minority high school and college students report STEM-pipeline sustaining gains after participating in the Loma Linda University Summer Health Disparities Research Program. PLoS ONE, 9(9), e108497. https://doi.org/10.1371/journal.pone.0108497

Valantine, H. A., & Collins, F. S. (2015). National Institutes of Health addresses the science of diversity. Proceedings of the National Academy of Sciences, 112(40), 12240–12242. https://doi.org/10.1073/pnas.1515612112

Villarejo, M., Barlow, A. E. L., Kogan, D., Veazey, B. D., & Sweeney, J. K. (2008). Encouraging minority undergraduates to choose science careers: Career paths survey results. CBE-Life Sciences Education, 7(4), 394–409. https://doi.org/10.1187/cbe.08-04-0018

Keywords

Pilot projects; stimulating faculty research; grant writing; faculty development; research infrastructure; research administration; faculty mentoring; BUILD initiative; diversity in research; student mentoring.
