Designing AI Together: Lessons From Building with Research Administrators

By SRAI News

  

Grant Development & Strategy

 

AI in research administration works best when it's built with the people who do the work, not just for them. Open collaboration with Denver Health's research administration team allowed Atom Grants to develop an AI tool in a way no product roadmap alone ever could.

 


 

When people ask me what it’s like to build Artificial Intelligence (AI) for research administration, I tell them it’s a team sport. Not a metaphorical one, but a literal collaboration between our product team and the administrators who live inside these workflows every day.
When we began working with Denver Health and Hospital Authority (DHHA), I knew we’d learn a lot. I didn’t realize just how much the design partnership itself would shape our understanding of where AI truly belongs in research administration.
What follows are a few personal reflections from that journey.
 

Start With Reality, Not Possibility

AI can do extraordinary things, but in research administration the win is often much simpler: helping people reclaim time from dense, repetitive work. In our early discovery calls, DHHA’s team walked us through the mechanics of building a proposal, including attachments, checklists, internal deadlines, and the back-and-forth between drafts.
Hearing them talk through their screens, their inboxes, and the sheer choreography involved made something clear: AI’s value here isn’t about creativity or prediction. It’s about precision and the reduction of cognitive load.
This is why one of the earliest breakthroughs came from something deceptively small. Amanda, one of DHHA’s leaders, asked whether the system could distinguish “draft” from “final” documents the same way other tools do. On the surface, it’s a status tag. In practice, it’s a way of reducing ambiguity across dozens of people touching a proposal. That’s where AI shines, not as a replacement for expertise, but as a stabilizer for the messy, human parts of workflow.
 

Co-Design Works Because It Surfaces the Hidden Needs

When people talk about co-design, they often imagine structured workshops. What it actually looks like is Sarah, DHHA’s grants manager, saying something like, “Our analysts keep manually creating subtasks for every required attachment. Could the system do that instead?” Or Dan pointing out that marking something complete didn’t make sense if the task wasn’t applicable in the first place.
These comments weren’t feature requests. They were windows into how the work actually happens.
AI systems succeed or fail based on the quality of the assumptions they encode. Co-design forces those assumptions to be interrogated early. Internal deadlines are a great example. We had originally expected users to set deadlines manually. But watching DHHA walk through the process made it obvious that internal timelines, such as “drafts due five days out” and “finals due three days out,” form the real skeleton of a proposal. Once the team surfaced those requirements, we rebuilt the logic so the AI could apply those patterns automatically.
No amount of outside speculation could have revealed those needs. Only the people doing the work could.
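To make the pattern concrete, the "days out" logic could be sketched roughly as below. This is an illustration, not Atom Grants' actual implementation; the offsets and field names are assumptions drawn from the example above.

```python
from datetime import date, timedelta

# Hypothetical offsets, mirroring the "drafts due five days out,
# finals due three days out" pattern described above.
INTERNAL_OFFSETS = {
    "draft_due": 5,   # days before the sponsor deadline
    "final_due": 3,
}

def internal_deadlines(sponsor_deadline: date) -> dict[str, date]:
    """Derive internal milestones from the sponsor's deadline."""
    return {
        name: sponsor_deadline - timedelta(days=days)
        for name, days in INTERNAL_OFFSETS.items()
    }

# A proposal due March 20 gets drafts due March 15 and finals due March 17.
print(internal_deadlines(date(2026, 3, 20)))
```

The point of encoding the pattern once, rather than asking users to set each date by hand, is exactly what the co-design session surfaced: the offsets are institutional knowledge, not per-proposal decisions.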
 

AI Thrives When It Has Structure to Hold

When we show people the automatic task-generation engine, which pulls requirements from RFPs and turns them into structured tasks, their reaction is usually a mix of surprise and relief. But what makes it powerful isn’t the extraction itself. It’s what happens after the structure is in place.
Once tasks are structured, AI can help users:
  • See what’s missing
  • Understand what’s coming next
  • Surface deadlines before they become disasters
  • Summarize the state of a proposal instantly
Over time, the system creates a shared language among PIs, analysts, and administrators. In one conversation, Sarah reflected on how important it is for people to understand why a task is required or why it may not apply in a given case. At DHHA, many clinician-scientists are balancing research with demanding clinical responsibilities, often motivated by impact rather than obligation. Making expectations explicit and grounded in purpose helps reduce friction and supports their ability to engage meaningfully, and AI can help by making this institutional logic more transparent, consistent, and dependable.
This is why building AI with research administrators is so important. Their mental models become the backbone of the system.
 

Humans Drive Adoption, Not Algorithms

One of my favorite moments in the partnership came when we looked at user analytics. The most active users weren’t faculty. They were administrators, logging in daily, clicking around, testing features, and sending thoughtful notes on what worked and what didn’t.
It reminded me that AI doesn’t magically integrate into an organization. People champion it. People challenge it. People spread it. In DHHA’s case, administrators served as internal translators, helping technology feel trustworthy rather than intimidating.
And trust, I’ve learned, is built through responsiveness. When Sarah emails at 9 pm asking whether a pilot grant checklist can be integrated into the system, and we say, “Yes, send it,” that matters. When Amanda wonders whether department-level dashboards would help her chairs, and we start sketching the design that afternoon, that matters. This is how AI becomes a partner, not another system people work around.
 

Looking Ahead: AI Will Only Be as Good as the People Who Shape It

If there’s one lesson I come back to, it’s this: research administrators are not merely the end users of AI. They are its co-authors.
The most transformative ideas (department-level visibility, workload analytics, smarter task logic) didn’t come from our roadmap. They came from the lived experience of administrators trying to keep proposals moving amid tight deadlines, complex policies, and competing priorities.
The future of AI in research administration won’t be defined by generic automation. It will be defined by partnerships that treat administrators as experts in how work gets done.
And if the DHHA collaboration has shown me anything, it’s that when you design AI with people instead of delivering it to them, you don’t just build better tools. You build better systems, better understanding, and ultimately better outcomes for researchers and the teams who support them.
 
AI Use Statement: Claude AI was used as a writing assistant during the drafting process. Specifically, it was used to help structure the initial outline, refine paragraph flow, and suggest edits for clarity and concision. All of the ideas, arguments, data points, and subject matter expertise belong to the authors, drawn from their own experiences. The authors reviewed and revised every section of the article, and the final piece reflects their unique voices and perspectives throughout. 
 

 

Authored by:


Tomer du Sautoy
Co-Founder and CEO
AtomGrants.com

 


Amanda Breeden, MA, CRA
Associate Chief
Research Operations and Sponsored Programs
Denver Health

 


Sarah Haley, MBA, CRA
Grants Manager
Denver Health


 

The Grant Development & Strategy Feature Editors want to hear your perspective!
Submit now to SRAI's Catalyst: https://srainternational.wufoo.com/forms/srai-catalyst-article-submission-form/

 

#Catalyst
#March2026
#GrantDevelopment&Strategy
#ArtificialIntelligence #AI