
Volume LV, Number 2

Measuring Institutional Capacity for Grantsmanship: Constructing a Survey Tool for Institutions to Assess Institutional Support for Faculty and Administrators to Pursue Grant Funding

Lauren Gant, MA
Senior Professional Research Assistant 
School of Education and Human Development
University of Colorado Denver

Christine Velez, MA
Associate Director
School of Education and Human Development
University of Colorado Denver

Mónica Torres, Ph.D.
Chancellor
NMSU Community Colleges

Abstract

Measuring the level of institutional capacity for grantsmanship within higher education informs administrators about the needs of their organization and where resources and institutional supports can be implemented to assist faculty and staff. Receiving grant funding can lead to implementing cutting-edge programming and research support, which could improve the quality of education provided and, ultimately, student retention. While conducting an institutional capacity needs assessment is crucial for making data-informed decisions, there is a significant gap in institutional capacity research; specifically, there is no valid and reliable assessment tool designed to measure institutional capacity for grantsmanship. The present study aims to develop an assessment tool for higher education institutions to evaluate support systems and identify the needs of their faculty and administrators for grant writing efforts. The study used a mixed-method approach over three phases to understand the indicators behind measuring institutional capacity for grantsmanship. We developed six reliable scales: promoting grant proposal writing, proposal writing (for faculty), proposal writing (for administrators), submitting grant proposals, implementing grant activities, and managing awards. This study contributes to our understanding of institutional capacity and produced a reliable assessment tool to support grantsmanship.

Keywords: Institutional capacity, grant proposal writing, survey development, grantsmanship, needs assessment, equity

Introduction

Grant awards are a substantial source of income for higher education institutions that can fund cutting-edge programs and curricula, which enhance the institution's credibility and contribute to student retention (Stoop et al., 2023). Securing grant funding also supports research and evaluation endeavors that create opportunities for internal and external collaboration and partnerships and drive faculty career advancement (Krzyżek-Liburska, 2023). Given the significant potential for growth and innovation that accompanies acquiring grant funds, higher education institutions are increasingly interested in evaluating and expanding their organization’s capacity to support grantsmanship. However, grant awards are highly competitive, and faculty and administrators' experience and skill in grant writing and management can vary widely (Garton, 2012; Glowacki et al., 2020; Goff-Albritton et al., 2022; Porter, 2007). Applying for and managing grants is a multifaceted process that requires an understanding of different funding sources available, individual sponsor requirements, and how to create a compelling proposal, navigate the grant submission process, and maintain the award if the submission is accepted (Cunningham, 2020). Given the complexity of the grantsmanship process and the varying needs and interests among faculty and administrators to pursue funding, higher education institutions must implement institutional support systems to build capacity across their organizations (Krzyżek-Liburska, 2023).

Higher education institutions often place significant pressure on faculty and administrators to be the drivers of grantsmanship (Goff-Albritton et al., 2022; Scarpinato & Viviani, 2022). Universities and colleges often include applying for grant funding within faculty's job descriptions and predicate eligibility for promotion and tenure on successful grant acquisition (Goff-Albritton et al., 2022). However, while faculty and staff may be well-versed in their discipline's literature and research areas, this does not guarantee they have the skills and knowledge necessary to pursue grant opportunities (Glowacki et al., 2020; Porter, 2007). Faculty within a department represent differing career stages, levels of experience, and connections to networks of partners; therefore, institutions need to provide support that accommodates these differences. Research has shown that faculty with access to institutional support and mentorship are more likely to acquire funding successfully (Krzyżek-Liburska, 2023). Administrators, in turn, are tasked with implementing effective structural supports to equip faculty with the knowledge and skills to pursue grantsmanship. Administrators must ensure their staff have the training, skills, and availability necessary to support faculty, often while working with limited budgets (Scarpinato & Viviani, 2022). Therefore, institutions need to consider the responsibilities and needs of both faculty and administrators when deciding which institutional supports to implement so that staff can confidently navigate the grantsmanship process.

In support of building capacity for grantsmanship, colleges and universities often offer institutional supports such as grant writing workshops, internal review boards, and dedicated offices or point persons who provide personalized assistance and communicate the resources available within the university (Krzyżek-Liburska, 2023). Grant writing workshops are frequently available to faculty and staff, providing crucial information and advice regarding the multifaceted grantsmanship process, and have proven effective in improving the odds of receiving a grant award (Glowacki et al., 2020). Internal review boards are another resource, involving diverse experts within the organization who review and provide feedback on the design, sampling, and methodology of research and evaluation projects. Lastly, offices dedicated to supporting grant writing provide personalized support through all stages of the grantsmanship process, including identifying available funding opportunities while considering eligibility criteria, mission alignment, deadlines, regulations, proposal development, budgeting advice, and management. Such institutional supports and resources equip higher education staff with the knowledge and skills needed to pursue grantsmanship. The importance of institutionalizing support systems within higher education cannot be overstated; however, institutions must also consider the unique needs of diverse faculty members when determining where institutional capacity could be established or expanded. Specifically, organizations should gather feedback on faculty perspectives to effectively support individual faculty, thereby building capacity across the institution.

When considering which institutional supports need to be implemented or augmented to increase institutional capacity for grantsmanship, institutions should target the stages of grantsmanship their faculty and staff identify as needing support. Institutional support is necessary at every stage of the grantsmanship process, including identifying funding opportunities, proposal writing, grant submission, grant implementation, and award management. Given that institutional support bolsters skills and knowledge in specific areas of the grantsmanship process, it is crucial to identify staff needs based on their varying experience and skill sets. Identifying funding opportunities can be daunting, especially for early-career faculty who may not know the potential internal and external funding sources and their associated requirements and deadlines. Faculty must also understand how to manage external partnerships (e.g., Memoranda of Understanding, subcontracting) and interact with grant offices and granting agencies (Krzyżek-Liburska, 2023). Identifying funding opportunities is thus influenced mainly by being attuned to networks of funders, which places earlier-career faculty and faculty from smaller institutions at a disadvantage in successfully obtaining grant funding (Krzyżek-Liburska, 2023).

The other elements of successful grantsmanship are no easier for faculty members unfamiliar with the process. Proposal writing and submission is a multifaceted process that requires adequate institutional support. Grant writing differs greatly from academic writing in format and style, making it challenging for even esteemed faculty to know how to be competitive in obtaining grant awards (Garton, 2012; Glowacki et al., 2020; Goff-Albritton et al., 2022; Krzyżek-Liburska, 2023; Porter, 2007). The grant submission process is also tedious and complicated: it involves learning and navigating grant submission portals, interacting with an institutional review board, developing a project budget, and learning contractual and compliance procedures for accepting the award. Grant implementation and award management also involve complex processes: carrying out the grant activities, managing the budget, and managing contracts. Overall, the convoluted grantsmanship process requires institutional support at every stage to successfully build organizational capacity. Institutions need to consider the diverse needs of their faculty and staff and measure the perceived effectiveness and weaknesses of current institutional supports to make data-driven decisions about institutional needs.

Need for Instrument 

While much of the existing literature regarding institutional support for research focuses on effective institutional strategies to build capacity for grantsmanship, there is a gap in empirical research addressing the extent to which faculty feel supported by their institutions to pursue and manage grant funding, or giving them a way to report their preferences for needed research support services (Goff-Albritton et al., 2022). Every institution and department has diverse faculty and staff at different career stages, with varying experience, interests, and needs; therefore, a needs assessment allows an institution to make data-informed decisions that meet the unique needs of individuals, departments, and institutions instead of implementing uniform standards or programming. A survey instrument will allow institutions to evaluate support systems and identify the needs of their faculty and administrators who engage in grant writing efforts, providing clearer pathways for obtaining resources (Honadle, 1981).

Understanding the degree to which faculty feel supported to pursue funding allows administrators to make informed decisions and distribute the resources necessary to build or improve effective support systems (Honadle, 1981). While some existing literature examines faculty's perceptions of institutional support, that work relies on qualitative approaches such as focus groups and interviews; there remains a need for a standardized, reliable approach that measures attitudes quantitatively. Qualitative data collection can be time-consuming and resource-heavy for institutions to replicate within their organizations, especially if they want to collect longitudinal feedback. In addition, a validated instrument can ensure institutions are asking the right questions to capture the multifaceted steps needed to assess institutional capacity. Without understanding what it means to measure institutional capacity, institutions are left without clear guidance for implementing institutional support. Further, even fewer studies include both faculty and administrators' perspectives on institutional support; therefore, a needs assessment mechanism is needed to gather administrator and faculty perspectives to understand an organization's institutional capacity to support grantsmanship.

A standardized, open-access, free assessment tool also contributes to equity because it benefits all institutions that seek to understand how to build or improve support systems. The potential impact for smaller, underfunded universities and community colleges is amplified because these institutions may not have the capacity to thoroughly evaluate faculty and administrators' perspectives on institutional support. Small departments and colleges need data to drive internal decision-making and ensure their limited budgets are allocated to areas identified by their faculty and administration. Hispanic-Serving Institutions, in particular, are historically underfunded and underrepresented in grant applications, so a tool that bolsters institutional support creates more equitable opportunities to pursue grant funding. Diversity in grant applications and awards is crucial to supporting underrepresented institutions in implementing innovative programming, curricula, and research. In service of equity, the current study seeks to provide a tool that all institutions can use to identify gaps and distribute the resources necessary to compete for grant funding.

This multi-part study aims to construct a set of scales measuring institutional capacity for grantsmanship. The tool is intended to provide more equitable opportunities for institutions whose staff vary in grant writing knowledge and experience, as well as smaller and underrepresented institutions, to build infrastructure that makes them more competitive for grant funding. Creating a uniform, free survey instrument has the potential to equip institutions with the knowledge to make data-informed, institutional-level decisions that drive capacity building. The current study helps fill a gap in the literature regarding shared knowledge of the multifaceted approach to measuring institutional capacity while creating a practical tool to serve many institutions pursuing grant funding opportunities.

Context

In the fall of 2018, New Mexico State University (NMSU) and California State University, Northridge (CSUN) received funding through the National Science Foundation (NSF) to establish the first NSF Hispanic-Serving Institution (HSI) National STEM Resource Hub (the Hub). The Hub aims to support HSIs in building science, technology, engineering, and math (STEM) education capacity to increase STEM student retention and degree completion. Specifically, the Hub offers various services, workshops, and training to equip HSIs with the resources necessary to pursue NSF grant funding to support STEM education and pedagogy, especially among organizations with little or no experience applying for NSF funding. In pursuit of the Hub's mission, a team of external evaluators and representatives from NMSU and Doña Ana Community College (a branch campus of NMSU) collaborated to develop an institutional capacity for grantsmanship survey tool that would assess the extent to which faculty and administrators felt their organization provided grantsmanship resources and support.

Instrument Development

Developing the initial instrument for the present study was a collaborative effort between external evaluators and representatives from NMSU and Doña Ana Community College to effectively measure institutional capacity for grantsmanship. Additionally, this study's co-principal investigator (PI) is a member of the Hub leadership team and an experienced higher education administrator. The initial survey design drew on the PI’s years of experience attending grant workshops and conferences where higher education representatives discussed their lack of information concerning institutional capacity to support grantsmanship. Specifically, the Hub sponsored a series of free grantsmanship workshops for faculty, staff, and partners who were either affiliated with an HSI or wanted to collaborate with HSI partners, designed to bolster faculty skills in different areas of the grantsmanship process. Admission priority for the grantsmanship classes was granted to faculty within the first ten years of their academic tenure-track appointment and faculty representing diverse geographical locations and institutions. The grantsmanship workshops covered various topics, including examining the different stages of grantsmanship and the critical infrastructure needed to support and receive grants. The workshops were also structured to facilitate meaningful collaboration and networking opportunities during the sessions. Therefore, the HSI grant workshops created rich opportunities for higher education representatives from diverse backgrounds to share their experiences with the grantsmanship process, effective institutional supports, and the need for a more quantitative approach to examine the needs of faculty and administrators.

Attending the Hub grantsmanship workshop sessions allowed the PI to listen to the needs of faculty and administrators working in higher education. These discussions provided preliminary construct validity for the dimensions of grantsmanship used in the design for an initial survey. An evaluation team was then consulted to assist in refining the survey instrument. Evaluators recommended retaining 41 survey items and organizing the survey to include five constructs: 1) identifying funding opportunities, 2) proposal writing, 3) submitting grant proposals, 4) implementing grant activities, and 5) managing awards.

While there were five constructs in the survey, proposal writing was subdivided into three scales: proposal writing (faculty only), proposal writing (administrators only), and proposal writing (all respondents). The proposal writing scales were designed to gather and analyze insights on institutional capacity from administrators and faculty separately. Given their differing roles in grantsmanship, the Hub PI decided to include both administrators and faculty. Administrators were thought to have insight into what is needed to quickly obtain resources that strengthen institutional support, while faculty are more involved in implementing programming. Collecting their responses separately was intended to provide a comprehensive picture of institutional capacity and encourage discourse regarding the needs of the organization.

An initial set of 41 survey items was developed to explore the identified dimensions of institutional capacity for grantsmanship. Items were written in the forward direction (a high score represents high institutional capacity). Respondents were asked to record their answers on a 4-point Likert scale (the wording of the anchor points varied to suit each item). Following the construction of the initial survey instrument, a three-part study using a multi-method approach was used to test its utility.
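To make the scoring concrete, the brief sketch below shows how forward-coded responses on a 4-point scale might be converted to numeric scale scores. The anchor labels and item names are hypothetical illustrations, not the instrument's actual wording.

```python
# Scoring forward-coded 4-point Likert responses: a minimal sketch.
# ANCHORS is one hypothetical anchor set; higher codes represent
# higher perceived institutional capacity.
ANCHORS = {"Strongly disagree": 1, "Disagree": 2, "Agree": 3, "Strongly agree": 4}

def scale_score(responses: dict[str, str]) -> float:
    """Average the numeric codes of one respondent's answers on a scale."""
    codes = [ANCHORS[answer] for answer in responses.values()]
    return sum(codes) / len(codes)

# One respondent's answers to a hypothetical three-item scale.
respondent = {
    "item_1": "Agree",
    "item_2": "Strongly agree",
    "item_3": "Disagree",
}
print(scale_score(respondent))  # 3.0
```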

Methods

The present study aimed to develop a set of valid and reliable scales to measure institutional capacity for grantsmanship using a mixed-method approach over three phases. The first phase consisted of an item reliability analysis on pilot survey responses related to the different dimensions of institutional capacity for grantsmanship from a small sample. The second phase was designed to increase the instrument's validity by conducting interviews with survey participants to help refine the scales. The third phase involved administering the survey to a new and larger sample and using the results to explore the scales’ dimensionality and reliability. Overall, the study used triangulation by incorporating qualitative and quantitative methods across multiple data sources to confirm the accuracy of the findings in the study's final phase. Therefore, the current study used a thorough process to ensure the development of a comprehensive needs assessment tool.

Phase 1

A pilot study was conducted in January 2022 using the first draft of the survey, which included 41 items based on the five constructs.

Sample

The survey was sent to a small convenience sample of Hub members representing diverse institutions. The sample included Hub members from 14 HSIs across seven states and Puerto Rico. The sample sites were chosen because they represented a diverse range of funding sources (public or private), institutional types (community college, 4+ year college, or research university), institutional sizes based on student enrollment (small = fewer than 5,000; medium = 5,000-15,000; large = over 15,000), and geographic locations. A total of 90 representatives from these institutions were invited to complete the survey.

A Qualtrics survey was administered online and remained open for 21 days. Survey reminders were sent 5 and 13 days after the initial survey launch. Survey respondents were informed that their participation was voluntary, that their responses were confidential, and that responses were being used to test the reliability of the scales within the survey. Participants were not provided with incentives to participate in the survey. The invitation to complete the survey was written and signed by the Hub PI, with their email address included, to increase the likelihood of participation.

Table 1 shows the characteristics of individual respondents and their organizations. The 26 respondents included 18 faculty/staff (69%) and eight administrators (31%). Overall, most respondents represented public institutions (92%) that were considered Hispanic-Serving (96%) or Minority-Serving (4%). Approximately a third of the organizations represented (35%) were community colleges/associate degree-granting institutions. Many respondents (43%) had been at their institutions for over ten years.

Table 1 

Phase 1 Sample Characteristics (n=26)

Process

An analysis was conducted to test the internal consistency reliability (Cronbach’s α) for the initial 41 items within each of the seven scales. Given the small sample size (n=26), this was considered an exploratory analysis; however, the results provided insights into areas of improvement before launching the survey to a larger sample.
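As a minimal sketch of this kind of item analysis, the Python code below computes Cronbach's alpha for a scale and the alpha that would result from deleting each item in turn. The data layout (one row per respondent, one column per item) and the simulated responses are assumptions for illustration, not the study's data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def alpha_if_deleted(items: pd.DataFrame) -> pd.Series:
    """Recompute alpha with each item dropped in turn, flagging items whose
    removal would raise the scale's reliability."""
    return pd.Series(
        {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns}
    )

# Usage with simulated 4-point responses (26 respondents, 5 items);
# real survey responses would replace this random stand-in.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 5, size=(26, 5)),
                     columns=[f"item_{i}" for i in range(1, 6)])
print(round(cronbach_alpha(items), 3))
print(alpha_if_deleted(items).round(3))
```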


Findings

Table 2 shows the reliability scores for each item and the scale’s overall score.

Table 2 

Item Reliability Analysis for Initial Measures (n=26)

Four scales yielded an internal consistency reliability over 0.7, which is deemed acceptable (Aiken & Groth-Marnat, 2009). The scale reliability findings provide a measure of validity, as they indicate that the items in a scale are measuring the same attribute. The four scales that exhibited high reliability scores were: identifying funding opportunities (α = 0.760), proposal writing (administrators only) (α = 0.856), implementing grant activities (α = 0.871), and managing awards (α = 0.919). The item analysis also revealed that, while these four scales met the criterion for a reliable scale, removing some items would increase the scales' reliability in future survey versions.

Three scales did not meet the 0.7 threshold to be considered reliable: proposal writing (faculty only), proposal writing (all respondents), and submitting grant proposals. The proposal writing (faculty only) scale had low reliability (α = 0.497). One possible explanation for the low reliability was the scale's use of "I" statements, which was inconsistent with the other scales. While the other scales contained statements acknowledging institutional supports, the proposal writing (faculty only) scale used measures such as "I know who to go to for" or "I have." The team therefore speculated that this phrasing placed too much focus on the faculty member's personal responsibility to be knowledgeable of institutional proposal writing resources, whereas the other scales focused on institutional support systems.

The proposal writing (all respondents) scale exhibited the lowest reliability score (α = 0.056), with some items in the scale yielding a negative Cronbach's alpha value. An item reliability analysis can produce a negative value for several reasons, including an insufficient sample size, sampling error, or items that correlate negatively with one another. A negative value means the statements must be removed, modified, or retested on a new, more robust sample; therefore, the scale could be dropped from the study, extensively modified, or administered to a larger sample. Overall, this highlights the need for all the scales to be tested on a larger sample in future studies before they can be considered reliable and generalizable.
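A small numeric illustration of how alpha can turn negative, as described above: when items correlate negatively on average, the variance of the total score falls below the sum of the item variances, driving the alpha formula below zero. The two-item data here are hypothetical.

```python
import numpy as np

# Two hypothetical items that pull in opposite directions.
x = np.array([1, 2, 3, 4])
y = np.array([4, 3, 1, 2])  # negatively correlated with x

item_vars = x.var(ddof=1) + y.var(ddof=1)  # ~3.33
total_var = (x + y).var(ddof=1)            # ~0.67: the items cancel out
alpha = (2 / (2 - 1)) * (1 - item_vars / total_var)
print(round(alpha, 2))  # -8.0
```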

Lastly, the submitting grant proposals (all respondents) scale had a low reliability (α=0.699). Within this last scale, one item, “Provide access to institutional documents necessary for submission of grant proposals to granting agencies,” would increase the scale's reliability (α=0.746) if the item were modified or removed from the scale. Therefore, improvements could be made to that item to increase the scale's reliability.

Overall, the small sample size in Phase 1 precluded treating the findings as generalizable. However, Phase 1 yielded four scales with high internal consistency (α > 0.7) and insights into items within those scales that could be modified to improve the scales' overall reliability. Phase 1 also revealed that three scales had very low reliability and could either be dropped or modified to increase the score. In sum, while Phase 1 helped explore the internal consistency of scale items on a small convenience sample, further analyses were needed to continue exploring the survey's dimensionality and reliability.

Phase 2

The purpose of Phase 2 was to increase the study's validity by conducting interviews with pilot survey participants from Phase 1 to help refine the scales. Including qualitative information is imperative to help gather insights from survey respondents to draw meaningful interpretations of the quantitative data (Creswell & Creswell, 2022).

Process

In anticipation that follow-up discussions would help interpret any areas that might have low reliability, the final question in the Phase 1 survey asked respondents if they would be willing to participate in a brief informal interview regarding their experience with the survey. Interviewees were offered a $15 gift card for their participation. In March 2022, four interviews were conducted. Participants were asked to describe how they interpreted each survey question in the scales that exhibited low reliability.

Findings

Interview results revealed that participants had different interpretations of the phrasing of a few survey questions, which may have led to the low reliability among the scales in Phase 1. Specifically, the interviewees expressed that the phrasing of some of the pilot survey questions was vague and that the items should be modified to be more specific and provide examples. Interview results also suggested rephrasing the questions that focused on an individual's knowledge of various grant management processes. These items were modified to ask directly about institutional support systems.

Table 3 shows the changes made to the pilot survey questions, including updated phrasing of existing survey questions and additional questions recommended by the interview respondents.

Table 3

Overview of Survey Item Changes from Phase 1 to Phase 2

Overall, the interview results provided valuable insights to help improve the reliability and validity of the survey instrument. In response to the interview results, the low-reliability items were rewritten to explicitly focus on institutional support instead of personal knowledge and rephrased to be more specific to avoid different interpretations of the measures. Interviewees' first-hand experience with grant writing support allowed them to suggest improvements and additions to the survey content. Once the survey items were rephrased, the survey was prepared for a second administration.

Phase 3

The purpose of the third phase was to administer the refined survey to a new and larger sample to examine the scales’ reliability and dimensionality.

Sample

In October 2023, the updated survey was administered to a new, larger sample. While Phase 1 used a subset of Hub members, Phase 3 invited all Hub members, either faculty/staff or administrators, to participate in the survey (n=1,207).

A Qualtrics survey was administered and remained open for two weeks; survey reminders were sent three business days and one week after the initial email. Respondents were informed that their responses were confidential, their participation was voluntary, and they would not receive incentives. The Hub principal investigator wrote the survey invitation email to increase the study's credibility and the likelihood of participation.

Overall, 286 survey responses were received from the 1,207 respondents invited, resulting in a 24% response rate. The low response rate could be attributed to the survey being sent using a Hub member list, which may not account for personnel changes. Of the 286 responses, 230 surveys were complete (80%). Three complete surveys were excluded from the final sample because they were from graduate students. Therefore, the final sample consisted of 227 respondents, including representatives from 150 departments at over 100 institutions across the United States and Puerto Rico.

Table 4 describes the characteristics of the respondents in the sample and the institutions they represent. Respondents were primarily representatives of public institutions (81%) that were Hispanic-Serving (90%) and had enrollments greater than 10,000 (57%). Approximately one-quarter of the respondents (24%) represented a community college/associate degree-granting institution. Respondents were mostly faculty and staff (56%), and a plurality had more than ten years of experience at their institution (42%).

Table 4

Phase 3 Sample Characteristics (n=227)

Findings

The project team conducted internal consistency reliability and exploratory factor analyses on the more robust survey sample to determine the dimensionality of the items within the scales. Responses to the 33 questions asked of both administrators and faculty were submitted to a principal-axis factoring analysis with a varimax rotation. The items were standardized, and the analysis yielded six factors with eigenvalues greater than one, accounting for 67.48% of the variance; however, after reviewing the rotated factor matrix, several items loaded on multiple factors, indicating a need to examine item placement. Only three items loaded onto factor six above 0.3, and four items loaded onto factor five while also loading highly on other factors; therefore, only four factors were retained, accounting for 60.85% of the variance. Responses to all the items were then resubmitted to a principal-axis factoring analysis constrained to four factors. Overall, the factor analysis revealed that most items loaded onto the scales created in Phase 1, with some minor modifications.
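A minimal sketch of this factor analysis pipeline appears below, using the third-party Python package factor_analyzer; the simulated response matrix is a stand-in for the actual survey data, and the thresholds mirror those reported above. Because the analysis operates on the item correlation matrix, standardization is implicit in the fit.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulated stand-in for the 33 items asked of both groups (227 respondents).
rng = np.random.default_rng(1)
responses = pd.DataFrame(rng.integers(1, 5, size=(227, 33)),
                         columns=[f"item_{i}" for i in range(1, 34)])

# First pass: unrotated principal-axis factoring to count eigenvalues > 1.
fa = FactorAnalyzer(rotation=None, method="principal")
fa.fit(responses)
eigenvalues, _ = fa.get_eigenvalues()
print(f"Factors with eigenvalue > 1: {(eigenvalues > 1).sum()}")

# Second pass: constrain to four factors with a varimax rotation, then flag
# cross-loading items (absolute loading > 0.3 on more than one factor).
fa4 = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa4.fit(responses)
loadings = pd.DataFrame(fa4.loadings_, index=responses.columns)
cross_loaded = loadings[(loadings.abs() > 0.3).sum(axis=1) > 1]
print(cross_loaded.round(2))
```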

The factor analysis suggested combining the proposal writing (all respondents) and the identifying funding opportunities scales because all the items loaded very highly onto one factor. Therefore, those scales were combined and renamed promoting grant proposal writing. The authors felt that the two scales combined represented the initial stages of the grant writing process, so it made sense that they loaded onto one factor. Only three items were dropped from the overall analysis: 1) "supports research," 2) "provides boilerplate information on student enrollment, graduation rates, and demographic information that I need for my grant proposal," and 3) "employs dedicated grant writers." While these three items loaded highly on scales different from Phase 1, the authors felt they did not fit the categories thematically. Specifically, item 1 loaded highly onto both the promoting grant proposal writing scale and the submitting grant proposals scale. Because the item loaded onto several scales that did not fit thematically, the authors thought the item was possibly too vague, so it was dropped from the survey. Item 2 loaded on the managing awards scale and the implementing grant activities scale; however, the authors felt that this item would only align thematically with submitting grant proposals, so the item was dropped from the survey tool. Item 3 loaded onto the managing awards scale and the implementing grant activities scale; however, the authors felt this measure would only fit under the promoting grant proposal writing scale. Therefore, 30 total items remained and loaded onto four factors. While the three items were dropped from the overall scale loadings, researchers and evaluators could consider retaining them as standalone survey items. Lastly, the item "Provides or outsources an Institutional Review Board process for human subjects" loaded highly with the other items in the submitting grant proposals scale, which the authors thought made sense, so the item was moved to that scale. Overall, other than the aforementioned adjustments, the items loaded onto the original scales identified in Phases 1 and 2, supporting the validity of the scales.

The proposal writing (faculty only) and proposal writing (administrators only) scales were run separately in a principal-axis factoring analysis because they were administered to different subsamples. The proposal writing (faculty only) scale loaded onto one factor with an eigenvalue over 1, explained 51.79% of the variance, and was considered reliable (α > 0.7). No items were dropped, and all items loaded in the same direction. The proposal writing (administrators only) items also loaded onto one factor with an eigenvalue over 1, were considered reliable, and explained 57.90% of the variance. No items were dropped, and all the items loaded in the same direction.

Table 5 shows the final survey scales and items with the total reliability scores. Overall, the results from Phase 3 yielded six reliable and valid scales (α > 0.7) that measure the multifaceted aspects of institutional support for grantsmanship. Institutions can broadly use the resulting assessment tool to measure organizational capacity.

Table 5

Factor Analysis 

Discussion

Pursuing grantsmanship is critical within higher education institutions to fund research opportunities and maintain a high standard of programming and curricula that support the career trajectories of both students and faculty (Stoop et al., 2023). Grant awards comprise a significant portion of university income, making the grantsmanship process highly competitive (Krzyżek-Liburska, 2023). While the value of receiving a grant award cannot be overstated, faculty are more likely to successfully acquire funds when provided with adequate institutional support at every stage of the grantsmanship process. Faculty within a single department can represent varying career stages and connections to networks of partners and differ in their ability to skillfully write grant proposals (Garton, 2012; Glowacki et al., 2020; Goff-Albritton et al., 2022; Porter, 2007). Given the highly competitive and complex nature of grantsmanship and the varying needs of faculty, institutions must be able to adequately measure those needs and implement supports that build institutional capacity for grantsmanship. Specifically, institutions need to measure capacity around each stage of the grantsmanship process, including proposal writing, grant submission, grant implementation, and award management, to assess the organization's needs.

Currently, no validated survey tool exists to examine institutional capacity to support grantsmanship, making it challenging for administrators to implement support tailored to faculty needs. The purpose of the present study was to fill this gap in the literature by providing a reliable, valid, free, and accessible assessment tool that higher education institutions can use to measure capacity for grantsmanship. The initial survey was thoughtfully designed by a co-investigator who is an experienced higher education administrator. The authors recognized the importance of listening to the needs of faculty and administrators and sought to produce a valuable product that can be broadly disseminated to institutions interested in building capacity. The survey was designed to capture the different stages of grantsmanship to help institutions target critical areas for institutional support and meet the unique needs of their faculty. Challenges accompany each stage of the grantsmanship process, so it is imperative to capture the various activities that comprise each stage to build support.

The current survey tool achieves validity and reliability through quantitative and qualitative data collection over three phases. Specifically, the final study produced six scales with high reliability (α > 0.7) measuring various levels of institutional capacity for grantsmanship. The promoting grant proposal writing scale (α = 0.889) captures the beginning stage of pursuing grantsmanship, examining the extent to which faculty and administrators feel their institution provides support such as disseminating notifications about funding opportunities specific to their work, maintaining infrastructure to support internal and external partnerships, and offering incentives for grant writing. The two proposal writing scales for faculty (α = 0.761) and administrators (α = 0.849) are designed to collect feedback from both university positions on infrastructure to support writing, including having a designated point person for all inquiries, administrative support, and resources and training materials. These two scales provide insight into whether administrators and faculty are aligned in their perceptions of institutional support for proposal writing. The submitting grant proposals scale (α = 0.879) measures how supported administrators and faculty feel by their institutions to submit proposals, including support for navigating grant submission portals and remaining in compliance with contractual and financial/budgetary obligations. The implementing grant activities scale (α = 0.872) was designed to capture staff perceptions of institutional support during the implementation stage, including providing administrative support and facilities to begin grant activities. Lastly, the managing awards scale involves institutional support for budget management, contracts, program evaluation, and monitoring resources. Overall, the survey instrument was thoughtfully designed to capture faculty and administrators' views on the extent to which they feel supported by their institutions during the many stages of the grantsmanship process. This instrument will give higher education institutions the knowledge necessary to build institutional capacity to support grantsmanship.

Conclusion

This study resulted in a reliable and valid assessment tool containing six scales measuring activities associated with different stages of the complex grantsmanship process. While much of the grantsmanship literature details effective strategies to build capacity for grantsmanship, there is a lack of literature examining the extent to which faculty and administrators feel supported by their institutions to pursue grantsmanship or reporting their preferences for needed institutional supports (Goff-Albritton et al., 2022). Evaluating staff needs allows administrators to make data-driven decisions and allocate resources to build or enhance support systems (Honadle, 1981). The survey instrument was thoughtfully designed through a combined quantitative and qualitative approach so that faculty and administrators can communicate the needs their institutions should address to promote grantsmanship. The instrument will be freely available and open access as a resource for institutions to measure institutional capacity within their organizations. We hope this instrument will be used widely to implement program policies and initiatives that ensure faculty and staff feel empowered to pursue grantsmanship.

Acknowledgments

The project described in the following pages is supported by the U.S. National Science Foundation under grant awards to New Mexico State University (1832338) and California State University, Northridge (1832345). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Science Foundation. 


The authors would like to thank Elba Serrano, MariaElena Zavala, Susan Connors, and Jarod Peterman for careful review and insights in the preparation of this manuscript; John Juarez, Margie Vela, and David Leech for their contributions and ideas to the initial development of the tool; Martha Desmond and Sonia Cooper for their support of this initiative.

Lauren Gant, MA*
Senior Professional Research Assistant
School of Education and Human Development
University of Colorado Denver
1391 Speer Blvd., Suite 340,
Denver CO 80204
lauren.gant@ucdenver.edu
303-335-7642

Christine Velez, MA
Associate Director
School of Education and Human Development
University of Colorado Denver
1391 Speer Blvd., Suite 340,
Denver CO 80204

Mónica Torres, Ph.D.
Chancellor
NMSU Community Colleges
Las Cruces, New Mexico 88003

*Correspondence concerning this article should be addressed to Lauren Gant, MA, Senior Professional Research Assistant, School of Education and Human Development, University of Colorado Denver, 1391 Speer Blvd., Suite 340, Denver CO 80204, lauren.gant@ucdenver.edu.

Author Bios

Lauren Gant, M.A., received her master's degree in Applied Sociology from the University of Northern Colorado. Her research interests include organizational and institutional sociology.

Christine Velez, M.A., is the Associate Director of The Evaluation Center, University of Colorado Denver.

Mónica Torres, Ph.D., is the Chancellor of the NMSU System Community Colleges.

References

Aiken, L. R., & Groth-Marnat, G. (2009). Psychological testing and assessment. Pearson.

Creswell, J. W., & Creswell, J. D. (2022). Research design: Qualitative, quantitative, and mixed methods approaches (6th ed.). Sage.

Cunningham, K. (2020). Beyond boundaries: Developing grant writing skills across higher education institutions. Journal of Research Administration, 51(2), 41-57. https://eric.ed.gov/?id=EJ1293015

Garton, L. S. (2012, June 10-13). Grantsmanship and the proposal development process: Lessons learned from several years of programs for junior faculty. Paper presented at 2012 ASEE Annual Conference & Exposition, San Antonio, Texas. https://doi.org/10.18260/1-2--21439

Glowacki, S., Nims, J. K., & Liggit, P. (2020). Determining the impact of grant writing workshops on faculty learning. Journal of Research Administration, 51(2), 58-77. https://eric.ed.gov/?id=EJ1293016

Goff-Albritton, R. A., Cola, P. A., Walker, J., Pierre, J., Yerra, S. D., & Garcia, I. (2022). Faculty views on the barriers and facilitators to grant activities in the USA: A systematic literature review. Journal of Research Administration, 53(2), 14–39. https://eric.ed.gov/?id=EJ1362093

Honadle, B. W. (1981). A capacity-building framework: A search for concept and purpose. Public Administration Review, 41(5), 575. https://doi.org/10.2307/976270

Krzyżek-Liburska, S. (2023). Support systems for research proposals–Institutional approach. In J. Nesterak & B. Ziębicki (Eds.), Knowledge, economy, society: Increasing business performance in the digital era (pp. 173-181). Institute of Economics, Polish Academy of Sciences.

Porter, R. (2007). Why academics have a hard time writing good grant proposals. Journal of Research Administration, 38(2), 37-43. https://eric.ed.gov/?id=EJ902223

Scarpinato, K., & Viviani, J. (2022). Is it time to rethink how we support research: Teams, squads and mission? – An opinion. Journal of Research Administration, 54(1), 87–93. https://www.srainternational.org/blogs/srai-jra2/2023/03/14/is-it-time-to-rethink-how-we-support-research-team

Stoop, C., Belou, R., & Smith, J. L. (2023). Facilitating the success of women’s early career grants: A local solution to a national problem. Innovative Higher Education, 48(5), 907–924. https://doi.org/10.1007/s10755-023-09661-w
