The Use of Generative AI in Research Grant Applications: Balancing Innovation, Transparency and Integrity

By SRAI News posted 10-09-2025 04:18 PM


GenAI is influencing research grant applications by aiding drafting, data analysis, and knowledge mobilization, while Canadian and U.S. agencies emphasize adherence to established standards of ethics, transparency, and confidentiality. The article highlights the community’s responsibility to integrate GenAI ethically, balancing innovation with integrity and originality.

 


 

Generative Artificial Intelligence (GenAI) is profoundly transforming the research enterprise, affecting researchers, funders and research administrators. For post-secondary institutions, the use of GenAI is increasingly viewed as a strategic necessity rather than an option. AI tools hold great promise for revolutionizing research processes, from generating ideas and conducting literature reviews to data analysis and modelling, budget justification, and creating plain language summaries for knowledge mobilization. However, GenAI also presents risks. For example, AI systems are trained on data that can change over time, which can affect their reliability (NIST AI Risk Management Framework). Additionally, these systems can be biased, potentially amplifying false information and exacerbating inequities (Sharma & Harris, 2024). Understanding these risks and adopting human-centric, socially responsible, ethical, and sustainable approaches to AI development and use helps mitigate them. The Government of Canada’s guide on the use of generative artificial intelligence encourages users to follow the FASTER principles (Fair, Accountable, Secure, Transparent, Educated, and Relevant) to reduce risks and ensure responsible use of AI.

 

However, there is still hesitation within the research community about adopting GenAI for conducting research, developing grant applications, and administering research. For the research community, the key question remains: what is acceptable, and what is not, when using AI to develop, review, and manage grant applications?

 

Canadian Federal Funding Agencies' Guidelines

 

Canada's leading research funding agencies—CIHR, NSERC, SSHRC, and CFI—have clarified their guidance on the use of generative AI in the development and review of grant proposals. These guidelines are based on recommendations by a panel of external experts tasked by the three agencies and public consultations with the research community. The following two requirements, as outlined in the Tri-Agency Framework: Responsible Conduct of Research and the Conflict of Interest and Confidentiality Policy, guide the research community in the responsible use of AI tools:

  1. The named applicant is ultimately accountable for the complete contents of their application.
  2. Privacy, confidentiality, data security and the protection of intellectual property (IP) must be prioritized in the development and review of grant applications.

 

Development of grant applications: Researchers are permitted to use GenAI tools to draft, translate, and summarize parts of their proposals. However, the responsibility for thoroughly verifying the accuracy, completeness, and relevance of all GenAI-generated content rests with the researchers. The agencies require researchers to disclose AI use in their applications by citing and acknowledging all sources used in preparing their proposals. Ultimately, researchers are accountable for the integrity and quality of their final submissions. They should also be aware of the risks involved in using GenAI tools, including potential threats to the confidentiality and privacy of data entered into publicly accessible AI tools.

 

Review of applications: To maintain the integrity of the review process, reviewers must not enter grant applications into online AI platforms. Doing so could breach privacy and copyright protections and would violate the Conflict of Interest and Confidentiality Agreement for Review Committee Members, External Reviewers and Observers. The use of publicly accessible online tools for assessing grant applications is therefore strictly forbidden.

 

It is implied that similar privacy and confidentiality standards, along with the responsibility to protect research data and IP, also apply to research administrators when handling personal and sensitive information.

 

Major U.S. Funding Agencies' Policies

 

While Canadian funding agencies have provided guidance, major U.S. funders have already established policies governing AI use in grant development and peer review.

  1. To ensure fairness and originality in NIH research applications, NIH’s new policy on the use of AI takes effect on September 25, 2025. Under this policy, NIH will not consider applications that are significantly developed by AI or contain sections substantially created by AI. NIH also prohibits GenAI in peer review.
  2. The NSF prohibits reviewers from uploading any proposal content to public AI tools, viewing this as a breach of confidentiality and the integrity of the merit review process. However, the NSF encourages applicants to disclose AI use in their proposal development. Canadian teams co-applying to U.S. programs and Canadian reviewers should be aware of these restrictions.

 

These guidelines aim to balance innovation with transparency and accountability, and to streamline administrative effort, while upholding high research standards.

 

Using GenAI in Research Proposal Development: Applying the FASTER Principles

 

A wide range of free and commercial GenAI tools is available, with new functionalities emerging constantly. An increasing number of Canadian post-secondary institutions, including the University of Saskatchewan, Thompson Rivers University, the University of Victoria, McGill University, the University of Ottawa, the University of Toronto, and the University of Manitoba, have approved Microsoft Copilot for use within their internal systems in preference to other GenAI tools. These deployments prioritize data privacy, ensuring that user inputs and outputs remain securely within the institution’s infrastructure and are not used to train public models (Microsoft 365 Copilot). However, users must still avoid entering personal and sensitive information. Consult the Office of the Privacy Commissioner of Canada’s guidance to understand how your personal data is protected and what safeguards are in place to ensure responsible use. Examples of low-risk, high-value applications in research grant applications (when no confidential data is involved) include:

  1. Literature scan and synthesis: GenAI can analyze extensive literature and automate time-consuming research tasks, such as summarizing papers and extracting data (e.g., Elicit.com, Research Rabbit, Scholarcy, Typeset.io).
  2. Ideation and templating: GenAI tools can be effectively used for ideation, refining research questions, formatting bio sketches, standardizing references, templating knowledge‑mobilization and EDI sections, and converting reviewer feedback into revision plans.
  3. Plain language summaries for knowledge mobilization and impact: GenAI tools can assist in translating complex academic discoveries and outcomes into clear, accessible language to improve readability. This broadens accessibility and encourages greater public participation, thereby increasing the overall impact of the studies. However, it remains the responsibility of researchers to fact-check outputs and ensure cultural appropriateness. For example, when Indigenous data or knowledge are involved, researchers must adhere to OCAP (Ownership, Control, Access, Possession) and CARE (Collective Benefit, Authority to Control, Responsibility, Ethics) principles from the beginning. 
  4. Data analysis and budget templates: Copilot built into Excel can be used to create budget templates and assist with data analysis.
  5. English and formatting: Tools such as Grammarly can help writers improve sentence structure, flow, and the overall readability of their proposals.

The University of Saskatchewan has curated a set of resources to support ethical and effective use of AI in research. Readers are encouraged to visit the site.

 

Visualizing GenAI Use in Research: Transparency Through HMC Icons

 

As technology increasingly blurs the line between human and machine intelligence, it is essential to recognize the extent of GenAI involvement in research. The Dubai Future Foundation (DFF), through its whitepaper, has introduced a classification system to visually represent the WHAT and HOW of evolving human-machine collaboration (HMC) in research, its design, and publications. These HMC icons, ranging from “all human” to “all machine,” provide a simple visual representation of machine involvement in research, including ideation, literature reviews, design, data collection and analysis, translation and writing, and research outputs (academic papers, technical reports, videos, art, educational materials, and other multimedia content). While not mandatory, the use of these icons is encouraged to enhance transparency and clarity.

 

GenAI in Research Administration: Enhancing Efficiency Responsibly

 

Research administrators can leverage institutionally approved GenAI to streamline both pre- and post-award administrative processes by automating routine tasks. In the pre-award stage, they can use GenAI to analyze and summarize lengthy funding announcements, match them with researchers' expertise, and automate the sharing of this information with faculty and colleagues. GenAI tools can help draft and edit documents, create and refine presentations, generate images for presentations, summarize documents, email threads, and meeting notes, as well as translate information (Government of Canada’s Guide on the use of generative artificial intelligence). Various tools are being explored to better support researchers, such as verifying proposals for completeness and formatting compliance. GenAI tools can also automate post-award activities, including compliance checks, report drafting, risk monitoring, and project management (Mkabane & Kinigi, 2024). However, research administrators must be mindful of ethical safeguards, data privacy, and institutional policies when using these tools.

 

Cautions for the Research Community

  1. Confidentiality breaches: The research community must avoid uploading sensitive information to online AI tools, as this risks compromising data confidentiality and violating academic integrity.
  2. Concerns about plagiarism, ghostwriting, and originality: Funders expect proposals to highlight the applicant's own ideas. Over-reliance on AI tools can foster plagiarism and diminish innovation, novelty, and creativity.
  3. Hallucinations and subtle factual errors: GenAI tools may fabricate citations, exaggerate novelty, or obscure methodology, any of which can compromise proposal quality.
  4. Privacy legislation compliance: Drafting proposals with research participant data, even “just metadata,” can trigger obligations under PIPEDA and FIPPA. It is essential to de-identify data whenever possible and keep any identifiable information off public AI tools.
  5. Indigenous data sovereignty: Using open-data approaches in Indigenous contexts can be harmful. The OCAP and CARE principles require jointly developed governance, not just ticking consent boxes. This raises a bigger question about whether AI systems can be trained on community data at all.
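Point 4 above can be made concrete with a small sketch. The Python snippet below is a minimal, illustrative de-identification pass that masks e-mail addresses and phone-number-like strings before any text is shared with a public AI tool. The two regular expressions and the placeholder tokens are assumptions chosen for illustration, not a complete PIPEDA/FIPPA safeguard: names, postal addresses, and study IDs are not caught, and in practice institutions should rely on vetted PII-scrubbing tools and policy review.

```python
import re

# Two illustrative patterns for direct identifiers. These are assumptions
# for the sketch; they do NOT catch names, addresses, or study IDs.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b")

def deidentify(text: str) -> str:
    """Replace obvious direct identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

note = "Contact Dr. X at jane.doe@example.ca or 250-555-0199 re: participant 7."
print(deidentify(note))
# -> Contact Dr. X at [EMAIL] or [PHONE] re: participant 7.
```

A pass like this is a pre-filter, not a substitute for keeping identifiable participant data off public AI tools entirely.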

 

Takeaways: For the research community, AI is neither a shortcut to better science nor a threat to be avoided altogether. When used thoughtfully and in alignment with principles of disclosure, confidentiality, ethics, privacy law, and Indigenous data sovereignty, it can be a powerful tool for ideation, editing, and administrative efficiency. The research community should develop a foundational understanding of GenAI’s benefits, limitations, and responsible use.

 

Acknowledgement: The author used Microsoft 365 Copilot to summarize information from publicly available sources and to edit this document. The author does not endorse any AI tools mentioned in this write-up. Additionally, as AI technologies continue to evolve, the guidance of funding agencies may also change.

 

References:

1. Sharma, A., & Harris, R. (2024, September 10). Empowering diversity part II: Potential cautions of AI on diversity, engagement and inclusion in research administration. SRAI Catalyst Newsletter.

2. Mkabane, E., & Kinigi, R. (2024). Evaluating the role of AI in grants management: Integration and adoption of technology and innovation. International Journal of Financial Management and Research, 6(6), 1-11.

 

 

 

Authored by:

 

Anita Sharma, PhD
Director Research Services 
Thompson Rivers University
SRAI Catalyst Feature Editor

 

The Operations & Workflow Management Feature Editors want to hear from you! 
Submit now to SRAI's Catalyst: https://srainternational.wufoo.com/forms/srai-catalyst-article-submission/ 

 

#Catalyst
#October2025
#Operations&WorkflowManagement
#Spotlight 
#AI #ArtificialIntelligence #GenAI
