When we began working with Denver Health and Hospital Authority (DHHA), I knew we’d learn a lot. I didn’t realize just how much the design partnership itself would shape our understanding of where AI truly belongs in research administration.
What follows are a few personal reflections from that journey.
AI can do extraordinary things, but in research administration, the win is often much simpler: help people reclaim time from dense, repetitive work. In our early discovery calls, DHHA’s team walked us through the mechanics of building a proposal, including attachments, checklists, internal deadlines, and the back-and-forth between drafts.
Hearing them talk through their screens, their inboxes, and the sheer choreography involved made something clear: AI’s value here isn’t about creativity or prediction. It’s about precision and the reduction of cognitive load.
This is why one of the earliest breakthroughs came from something deceptively small. Amanda, one of DHHA’s leaders, asked whether the system could distinguish “draft” from “final” documents the same way other tools do. On the surface, it’s a status tag. In practice, it’s a way of reducing ambiguity across dozens of people touching a proposal. That’s where AI shines, not as a replacement for expertise, but as a stabilizer for the messy, human parts of workflow.
When people talk about co-design, they often imagine structured workshops. What it actually looks like is Sarah, DHHA’s grants manager, saying something like, “Our analysts keep manually creating subtasks for every required attachment. Could the system do that instead?” Or Dan pointing out that marking something complete didn’t make sense if the task wasn’t applicable in the first place.
These comments weren’t feature requests. They were windows into how the work actually happens.
AI systems succeed or fail based on the quality of the assumptions they encode. Co-design forces those assumptions to be interrogated early. Internal deadlines are a great example. We had originally expected users to set deadlines manually. But watching DHHA walk through the process made it obvious that internal timelines, such as “drafts due five days out” and “finals due three days out,” form the real skeleton of a proposal. Once the team surfaced those requirements, we rebuilt the logic so AI could apply those patterns automatically.
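The deadline pattern DHHA described can be sketched in a few lines. This is an illustrative model only, not the actual system: the function name, the offsets, and the data structure are assumptions based on the “five days out” and “three days out” examples above.

```python
from datetime import date, timedelta

# Hypothetical internal-deadline offsets, modeled on the patterns DHHA
# described: drafts due five days before submission, finals due three.
DEADLINE_OFFSETS = {
    "draft": timedelta(days=5),
    "final": timedelta(days=3),
}

def internal_deadlines(submission_date: date) -> dict[str, date]:
    """Derive internal milestones by counting back from the sponsor deadline."""
    return {
        stage: submission_date - offset
        for stage, offset in DEADLINE_OFFSETS.items()
    }

# A sponsor deadline of June 15 yields a draft deadline of June 10
# and a final deadline of June 12.
deadlines = internal_deadlines(date(2025, 6, 15))
```

The point of encoding offsets as data rather than hard-coding dates is exactly what co-design surfaced: teams differ, so the skeleton has to be configurable.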
No amount of outside speculation could have revealed those needs. Only the people doing the work could.
When we show people the automatic task-generation engine, which pulls requirements from RFPs and turns them into structured tasks, their reaction is usually a mix of surprise and relief. But what makes it powerful isn’t the extraction itself. It’s what happens after the structure is in place.
Once tasks are structured, AI can help users sequence internal deadlines, track which required attachments are still outstanding, and flag tasks that don’t apply in the first place. This is why building AI with research administrators is so important. Their mental models become the backbone of the system.
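The task model above can be sketched minimally. Everything here is hypothetical, a rough illustration of two co-design ideas from this section: one subtask per required attachment (Sarah’s request) and a “not applicable” status distinct from “complete” (Dan’s observation).

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Illustrative task model; names and statuses are assumptions."""
    title: str
    status: str = "open"  # "open" | "complete" | "not_applicable"
    subtasks: list["Task"] = field(default_factory=list)

def tasks_from_requirements(attachments: list[str]) -> Task:
    """Turn a list of required attachments into a parent task with one
    subtask per attachment, instead of analysts creating them by hand."""
    parent = Task(title="Assemble proposal attachments")
    parent.subtasks = [Task(title=f"Prepare: {name}") for name in attachments]
    return parent

proposal = tasks_from_requirements(
    ["Biosketch", "Budget Justification", "Letters of Support"]
)
```

Keeping “not_applicable” as its own status, rather than forcing users to mark irrelevant tasks complete, is the kind of distinction only practitioners surface.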
One of my favorite moments in the partnership came when we looked at user analytics. The most active users weren’t faculty. They were administrators, logging in daily, clicking around, testing features, and sending thoughtful notes on what worked and what didn’t.
It reminded me that AI doesn’t magically integrate into an organization. People champion it. People challenge it. People spread it. In DHHA’s case, administrators served as internal translators, helping technology feel trustworthy rather than intimidating.
And trust, I’ve learned, is built through responsiveness. When Sarah emails at 9 pm asking whether a pilot grant checklist can be integrated into the system, and we say, “Yes, send it,” that matters. When Amanda wonders whether department-level dashboards would help her chairs, and we start sketching the design that afternoon, that matters. This is how AI becomes a partner, not another system people work around.
If there’s one lesson I come back to, it’s this: research administrators are not merely the end users of AI. They are its co-authors.
The most transformative ideas (department-level visibility, workload analytics, smarter task logic) didn’t come from our roadmap. They came from the lived experience of administrators trying to keep proposals moving amid tight deadlines, complex policies, and competing priorities.
The future of AI in research administration won’t be defined by generic automation. It will be defined by partnerships that treat administrators as experts in how work gets done.
And if the DHHA collaboration has shown me anything, it’s that when you design AI with people instead of delivering it to them, you don’t just build better tools. You build better systems, better understanding, and ultimately better outcomes for researchers and the teams who support them.
AI Use Statement: Claude AI was used as a writing assistant during the drafting process. Specifically, it was used to help structure the initial outline, refine paragraph flow, and suggest edits for clarity and concision. All of the ideas, arguments, data points, and subject matter expertise belong to the authors, drawn from their own experiences. The authors reviewed and revised every section of the article, and the final piece reflects their unique voices and perspectives throughout.