Volume LVI, Number 2
Reflections on AI Implementation in Research Administration: Emergent Approaches and Recommendations for Strategic and Sustainable Impact
Amber Hedquist
Arizona State University
Max Castillon
Arizona State University
Megan Cooper
Arizona State University
Valerie Keim
Arizona State University
Tasha Mohseni
Arizona State University
Kimberly Purcell
Arizona State University
Abstract
This reflective inquiry reports on the experiences of a working group at Arizona State University (ASU) that, over the course of four months, built, integrated, and iterated on artificial intelligence (AI) solutions in their daily work as research administrators. During this process, the group focused on creating AI solutions for the complex, repeatable, and time-consuming tasks across teams in Research Operations (ROps) at ASU. Through this collaborative reflection, featuring the insights of both the facilitators (n=2) and the research administrators (n=4) involved in the implementation process, this paper offers insights into early approaches and recommendations for strategically, effectively, and sustainably incorporating AI into the work of research administrators. Given the emergent concepts of usability, flexibility, and sustainability, this paper proposes three recommendations for AI implementation: develop educational materials, create space for iteration, and define roles and protocols.
Keywords: Artificial intelligence, Emerging technology, Technology integration, Collaboration, Leadership
Introduction
Generative artificial intelligence (AI) is increasingly accessible and applicable to research-related activities; however, early research has focused on improving the work processes of academic researchers, offering few best practices for research administrators. Overall, AI is heralded for its capacity to automate rote tasks (Mallette, 2024), assist in idea generation (Yu-Han & Chun-Ching, 2023), and quickly synthesize complex information into more accessible outputs (Lyu et al., 2023). Given these affordances, university researchers have experimented with AI integration for their more laborious research tasks, such as supporting the writing process (Babl & Babl, 2023; Chamurliyski, 2023) and analyzing data (DeJeu, 2024; Hedquist et al., 2024; Morgan, 2023). While these insights benefit academic researchers, research administrators have been left with minimal guidance. AI is a promising solution for administrative work processes given its potential to reduce human labor (Zhang, 2024), save costs (Rizvi et al., 2023), and improve workflows (Vapiwala & Pandita, 2024); however, despite early explorations of AI in research administration contexts (Komperla, 2021, 2022), there are few evidence-based strategies for implementing the emerging technology into research administration.
To address this gap, this reflective inquiry reports on the experiences of a multi-team working group at Arizona State University (ASU) that, over the course of four months, systematically integrated AI into their daily operations. The working group was assembled in response to ASU’s AI Innovation Challenge: a university-wide initiative that offered staff, students, and faculty ChatGPT Enterprise licenses through OpenAI (ASU, n.d.-a; OpenAI, n.d.). The licenses differ from the free, public access version of ChatGPT. Namely, the Enterprise license ensures that ASU data is not used to train the models (“OpenAI, Arizona State University collaborate to advance AI in academia,” 2024), which facilitates internal sharing of AI tools and ensures a higher level of data privacy and security.
Nineteen research administrators received a license to create and implement potential AI solutions across teams within Research Operations (ROps). At ASU, research administration is a hybrid of centralized and decentralized support. The central office manages organizational oversight and houses the authorized organization representatives, while individual colleges, schools, and departments have in-unit research administrators who provide more in-depth support for faculty and staff. The administrators involved in this project belong to the central office, with expertise spanning pre-award, post-award, and compliance. Given the diversity of the working group, we were able to create AI models that support central office processes, which could in the future be disseminated to department-level administrators to support their work as well.
Through a structured four-phase implementation process—Identify, Innovate, Integrate, and Iterate—the team developed twelve customized chatbots, known as “GPTs,” within the OpenAI interface (OpenAI, n.d.). In phase one, the Identify phase, the team surveyed nineteen colleagues to determine which processes across ROps could benefit from AI integration, focusing on repetitive and time-consuming tasks. During the Innovate phase, ten colleagues participated in a month-long series of workshops to collaboratively draft and refine the initial functionality of the bots. Then, the Integrate phase involved asking colleagues to incorporate the GPTs into their daily work and write about their experiences. Finally, in the Iterate phase, the team worked on incorporating the feedback to refine the bots for improved usability and utility for colleagues.
This reflection features the voices and experiences of two groups: the implementation facilitators (n=2) and the research administrators involved in the implementation process (n=4). The facilitators were responsible for educating the administrators about AI, leading informational and interactive workshops, and supporting the individual development of each bot. The administrators were responsible for participating in the workshops, building and iterating a bot, as well as incorporating the bots into their daily work. By featuring both voices, this report offers a holistic overview of the implementation process and experience. As such, this paper’s emergent findings and recommendations are grounded in multiple perspectives that afford new insights into AI integration.
Problem Statement
How can universities successfully and systematically integrate artificial intelligence (AI) into research administration?
Observations
This section is organized according to the four phases of an emerging heuristic for AI implementation. Proposed by Hedquist and Keim (Society for Research Administrators International, n.d.), the implementation process involves four phases: identify, innovate, integrate, and iterate. The working sessions were organized around these phases, and the observations below follow the same structure.
With the implementation heuristic as a guide, this section will detail the guiding questions of each phase, as well as a vignette from one employee and one facilitator about their experiences executing the objectives in that phase. A vignette is a brief and descriptive reflection about a moment in time; by using this style of reflection, our paper offers snapshots of critical moments in each phase from the perspectives of facilitators and employees.
Phase 1: Identify
The goal of phase one was to collaboratively identify processes that AI could improve so that human work could be dedicated to more complex decision-making. The guiding questions in this phase were:
- Which organizational processes are currently inefficient or resource-heavy?
- Which tasks require repetitive decision-making that AI could streamline?
- Which team members or departments would benefit most from AI support?
To achieve these objectives, we started by having conversations with various stakeholders. We also sent out a survey to those we could not schedule time with due to availability limitations. Through both the surveys and meetings, we were able to identify a list of processes that could benefit from AI implementation.
To determine the most impactful AI applications, we selected processes that were high-volume, repetitive, and complex. These criteria led us to prioritize AI solutions such as the NSF FOA bot: a bot that could complete NSF checklists and develop review templates based on funding opportunity announcements (FOAs) to save research administrators valuable time. Overall, our goal in this identification process was to find at least one AI opportunity in each of the ROps teams, ranging from pre-award and post-award to compliance and contracts. While AI offers broad applications in research administration—including grant application support and financial reporting—we prioritized areas that could immediately respond to the felt difficulties and opportunities designated by our colleagues.
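To make these triage criteria concrete, the following minimal sketch (not part of the team's actual tooling; all process names, ratings, and the equal weighting are hypothetical) shows how candidate processes could be scored against the high-volume, repetitive, and complex criteria:

```python
# Hypothetical triage sketch: score candidate processes against the
# selection criteria (high-volume, repetitive, complex). All names,
# ratings, and the unweighted sum are illustrative assumptions.

candidates = {
    # process name: (volume, repetitiveness, complexity), each rated 1-5
    "NSF FOA checklist review": (5, 5, 4),
    "IRB document routing":     (4, 5, 3),
    "Subaward status lookups":  (3, 4, 2),
}

def priority(ratings):
    """Simple unweighted sum; a real rubric might weight the criteria."""
    return sum(ratings)

for name, ratings in sorted(candidates.items(), key=lambda kv: -priority(kv[1])):
    print(f"{priority(ratings):>2}  {name}")
```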
The following vignettes highlight notable experiences from this phase of the process.
Employee Vignette
The first phase of this project focused on identifying processes that AI could manage to enhance human productivity. When speaking with Research Compliance's Institutional Review Board (IRB) team, I noticed an opportunity to take advantage of AI, and we decided to explore where a GPT could help us. A primary goal of implementing AI was to have it take on time-consuming tasks, enabling humans to focus on more creative and strategic responsibilities and ultimately improving timelines. We noticed that researchers often had inquiries regarding the tasks and documentation needed for their studies, and that gathering this information and getting in contact with researchers delayed the process, so we concluded that an IRB GPT could be a great solution. In this phase, the team collaborated to identify the specific tasks that AI could effectively handle. Every ASU team member and researcher stands to benefit from this AI support: the GPTs streamline workflows, enhance timing metrics, and free up time for innovation and complex problem-solving by redirecting users to the form or document they are looking for, the answer to a question they may have, or the right person to contact for additional information.
Facilitator Vignette
When starting to identify who should be included in the collaboration, we took into consideration individuals' level of interest in new technology and asked all teams within the department for interested individuals. Demonstrations were conducted for those participating in the working group so everyone could see what it looked like to interact with a GPT before identifying opportunities for a custom GPT. This let users who might be less adventurous with technology get familiar with the tool before trying to train their own GPT. Users were encouraged to work together at any step of the process, whether to talk through an idea, create a custom GPT, or test one of the bots. The group also talked through whether there were any concerns with the data used to train the custom GPTs; similarly, the group discussed what data custom GPTs should and should not request from users. For instance, we did not want to create a custom GPT that encouraged users to upload confidential, identifiable data. Another consideration when creating a custom GPT was how often the training materials would need to be updated, and how cumbersome those updates would be. In addition to looking at the processes individuals were looking to support, the working group also evaluated how complex creating and maintaining the bots might be before any custom GPTs were created.
Phase 2: Innovate
The goal of phase two was to design models that address identified opportunities, enhancing human-centered workflows and improving operations. The guiding questions in this phase were:
- Are there ethical considerations in how the AI will be used or developed?
- How do we ensure the AI model is adaptable to future needs?
- What human oversight is required for the AI to function effectively?
Phase two was arguably the most labor intensive, as we both built AI models and reflected on them from the standpoints of ethics, adaptability, and sustainability throughout the design process. This phase involved synchronous and asynchronous work to build the bots, as well as the circulation of several educational materials to support employee training.
The working group comprised research administrators with varying levels of AI familiarity. To bridge knowledge gaps, this phase was run by the implementation facilitators, who jointly have expertise in technical communication and process optimization. Additionally, ASU's IT department provided insights into data security and system integration on an ad hoc basis as informal consultants. As ASU staff members, we are grateful to have robust support from the Research Technology Office, IT specialists, and an expanding group of subject matter experts specializing in AI. This collaboration helped establish a foundation for AI literacy across the working group that prepared us for this phase.
The following vignettes highlight the building experience from an employee perspective as well as a reflection on the facilitator’s approach to training during this phase.
Employee Vignette
I developed the concept of a chatbot called the "Office of Research Integrity and Assurance (ORIA) Compass," which was meant solely to serve as a navigation tool for the research compliance website across all compliance areas. Upon initial training of the model for all compliance areas, I learned that not only is human oversight necessary, but expertise in each compliance area is essential. Further, each chatbot must have a highly specific function for a specific purpose, as there are character limits on a GPT's instructions. Though GPTs have built-in Knowledge centers where the bot creator can upload additional instructions in the form of Word files, the key here is that the proper expertise is required to vet the GPT's instructions and Knowledge center. Lastly, the bot should be periodically reviewed and monitored for changes in federal, state, local, and institutional policies, as well as changes in procedures specific to the compliance area being monitored.
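As a minimal illustration of the character-limit constraint described above, a drafted instruction set could be checked before it is pasted into the GPT builder; the 8,000-character cap used here is an assumption rather than a figure from our process, and should be verified against the current interface:

```python
# Hypothetical pre-flight check for drafted GPT instructions. The
# 8,000-character limit is an assumed cap on custom GPT instructions;
# verify the current limit in the builder before relying on it.

INSTRUCTION_CHAR_LIMIT = 8_000  # assumption, not a documented figure

def check_instructions(draft: str, limit: int = INSTRUCTION_CHAR_LIMIT) -> None:
    used = len(draft)
    if used <= limit:
        print(f"OK: {used}/{limit} characters used.")
    else:
        print(f"Over by {used - limit} characters: move detailed "
              "procedures into a Knowledge file and keep only the "
              "bot's role, scope, and response rules in the instructions.")

# Illustrative call with an oversized placeholder draft
check_instructions("You are the ORIA Compass, a navigation aid for ... " * 200)
```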
Facilitator Vignette
I developed an activity called the "P.I.T. C.R.E.W." exercise (Appendix A) to meet my colleagues at their level of expertise with AI and help them quickly design and execute impactful bots. The first part of the exercise—P.I.T.—encouraged everyone to think about the purpose, instructions, and topic for their bot. The second part of the exercise—C.R.E.W.—was a mechanism through which to think about the utility and usability of their bot. In sum, they had to describe whether the task was complex, repetitive, requiring expertise, and widespread enough to justify a customized bot. Combined, these two acronyms were accessible to all colleagues and supported an expedited process of bot creation.
Phase 3: Integrate
The goal of phase three was to incorporate AI into workflows while educating employees to ensure smooth adoption and enhanced performance without disrupting daily operations. The guiding questions in this phase were:
- How do we train employees to interact with and manage AI?
- How will AI adoption affect roles and responsibilities within teams?
- What communication plan will ensure employees feel supported and involved in the process, and fully informed of the intended use of the bot?
Phase three was an opportunity for us to test our bots in the "real world"—that is, in the context of a workday. This required us to ask our colleagues to incorporate the bots into their daily work and report on their experiences. We structured this phase by asking our colleagues to test at least one bot, once a week, by asking at least one question. Then, we created a collaborative document wherein they could report on their experiences and recommendations for each bot.
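One hypothetical way to structure such a collaborative feedback document is a consistent entry schema mirroring the weekly testing cadence; the sketch below is illustrative and is not the document we actually used, and all field names and values are invented:

```python
# Hypothetical schema for weekly bot-testing feedback entries,
# mirroring the cadence described above. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class FeedbackEntry:
    week: str            # e.g., "2024-W14"
    bot_name: str        # which custom GPT was tested
    question: str        # the prompt the tester asked
    response_accurate: bool
    recommendation: str  # suggested iteration, if any

entry = FeedbackEntry(
    week="2024-W14",
    bot_name="ERA User Help",
    question="Where do I find the proposal routing form?",
    response_accurate=True,
    recommendation="Add a direct link to the routing form in responses.",
)
print(entry)
```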
Employee Vignette
Before integrating the Enterprise Research Administration (ERA) User Help bot, I reached out to colleagues in different roles, inviting them to test the bot’s knowledge and provide feedback. Throughout the testing process, my colleagues and I were pleasantly surprised by the accuracy of the bot’s responses. In fact, any bot responses that lacked vital information stemmed from gaps in our existing guides and knowledge databases. This insight was invaluable, as it highlighted areas where we needed to create additional instructions and process guides to aid bots and humans alike.
Facilitator Vignette
During this phase, I was responsible for sending the drafted bots to nineteen colleagues who expressed interest in testing the bots for a month. Initially, I was apprehensive about how much information to share. On one hand, I could inform my colleagues about how each bot was trained and its intended purpose; on the other, I could invite them to test out the bots, with their only context being the bot's name and its brief description in ChatGPT. I opted to pursue the latter. Though a bit ambiguous, this was a method for truly testing the utility of the bots as there is no way to guarantee that every user has an opportunity to be primed, so to speak, on the bot's functionality, purpose, training, and so on. This approach presented affordances—such as a "real life" testing scenario—as well as limitations, as it led to several colleagues expressing confusion and lacking a clear pathway for how to consider integrating the bot. Overall, this approach highlighted ways that improved communication about the bot’s descriptions could support more seamless integration.
Phase 4: Iterate
The goal of phase four was to continuously refine AI through user feedback and performance evaluations to ensure ongoing optimization and relevance. The guiding questions in this phase were:
- What specific Key Performance Indicators (KPIs) will be used to evaluate the AI’s effectiveness?
- How do we identify when it’s time to upgrade or replace the AI model?
- How will we gather qualitative feedback from employees and users and meaningfully incorporate feedback in model upgrades?
For this final phase, we spent significant time reflecting on the input from our colleagues to inform design adaptations to the AI models. Our colleagues provided feedback about how the AI was performing, as well as the types of metrics we could use to evaluate the AI.
Employee Vignette
During the testing and evaluation of a drafted AI bot, I gained insight into additional information and materials that should be reflected in the next versioning process. This was an "aha" moment revealing how multiple perspectives and resources would need to be brought together to develop a fully fleshed-out AI bot. A typical user of the bot I reviewed would frequently reference the standard work instructions for research administrators in a particular system. My experiences within the same system often involve additional tools and resources not included in those work instructions—resources for the research administrators who use the system as well as for the faculty, staff, and student researchers who complete activities within it. These additional items include 1) the system's integrated "help text" tools (which provide on-demand information for specific fields in the system), 2) training materials (optional workshops and required online training), and 3) associated teams who act as experts and can assist faculty, staff, and student researchers with completing system activities or other requirements. The perspectives of these other potential users, and a better understanding of their needs, will be essential to the future iteration of this particular AI bot.
Facilitator Vignette
The biggest considerations during this phase were: What guidance would users need when interfacing with this bot? Are there any ethical or privacy concerns with the latest version of the bot? And is the bot producing accurate information for users? Careful discussion also centered on the sustainability of the custom GPTs. At the time of the working group, everyone had temporary Enterprise licenses, and the facilitators are evaluating options so the custom GPTs can be utilized long-term. For bots supporting processes that have a corresponding department KPI tracking processing time, KPIs before and after bot implementation will be compared to quantify the personnel time saved by the custom GPT. Bots without a corresponding established KPI will rely on user feedback about the experience of using the bot. Some custom GPTs will supplement training experiences for new team members or help users troubleshoot processes in the institution's grant management system. Users will also be able to provide feedback to the creator of a bot if an error occurs. Most of the bots have been trained with materials that update annually, but any custom GPTs officially launched will be included in the ongoing continuous improvement processes in place for the related department process.
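For the KPI comparison described above, the before-and-after calculation could be as simple as comparing average processing times across a window of tasks; the sketch below uses invented figures purely for illustration:

```python
# Hypothetical before/after KPI comparison to estimate personnel time
# saved by a custom GPT. All numbers are invented for illustration.
from statistics import mean

minutes_before = [42, 38, 51, 45, 40]  # task processing times pre-bot
minutes_after = [29, 31, 27, 33, 30]   # task processing times post-bot

saved_per_task = mean(minutes_before) - mean(minutes_after)
tasks_per_month = 120  # assumed monthly volume for the process

print(f"Avg. time saved per task: {saved_per_task:.1f} minutes")
print(f"Est. monthly savings: {saved_per_task * tasks_per_month / 60:.1f} hours")
```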
Emergent Concepts
This section proposes three emerging concepts arising from this project. To arrive at a set of concepts, the group met to discuss the objectives of each phase, share experiences, and read through the vignettes. Through a brainstorming session, the group finalized a list of emerging concepts, which are defined and exemplified in this section.
Usability: Prioritizing User Experience
Throughout the implementation process, we recognized the utility and importance of adopting user experience (UX) design principles, such as usability, to guide our work. At its core, UX is a discipline committed to moving "beyond ideals and principles held by designers to include evidence of what users really do, feel, and believe" (Mara, 2020, p. 1). Often, this evidence-based and user-driven approach to design manifests in concerns around usability, which is the user's ability to "navigate through a variety of tasks that an end product was designed to facilitate" (Lauer & Brumberger, 2016, p. 249). Thus, by prioritizing usability, our team continually reflected on the extent to which our AI models were meeting the needs of—and being successfully navigated by—our end users. In most cases, the end users of our bots were our colleagues in ROps—not principal investigators, as our project focused on improving internal processes. We achieved this reflection by building multiple points of feedback into the design process, starting with the initial design.
First, we worked to ensure that the bots we were building reflected genuine needs within ROps. Rather than predict what teams might need, phase one included extensive conversations with, and surveys of, employees across ROps to collect evidence on which processes could be supported by AI. Accomplishing this work involved significant time and effort, as it was occasionally difficult to converse with team members who were low on bandwidth. While this may have delayed the project timeline slightly, their input was invaluable. This exemplifies the importance of prioritizing usability over other project goals—such as a rigid timeline—to ensure that the products we created met a user need. We continued this commitment to usability throughout the design process, especially once we had working bots to report on.
Later in the design process, our team recognized the importance of continuous feedback to fuel positive user experiences. For instance, by surveying our colleagues on their early experiences with our AI models, we learned how we could better meet their expectations. Several users noted that one bot—which was designed to provide information about a specific website—needed additional instructions for scenarios wherein the user wanted contact information or assistance with specific links. While we did not anticipate these needs, our survey mechanism helped us quickly pivot and iterate the bot to meet user needs.
Though usability is an important consideration, it is a difficult one to pursue given the limited bandwidth that often persists across ROps as a discipline. As Schiller and LeMire (2023) highlight, research administrators are often burdened by "bureaucratic regulations, unwieldy processes, and burgeoning reporting standards," which results in long days and endless to-do lists (p. 9). As a result, it can be difficult to ask colleagues to volunteer their time to test and provide feedback on an emerging technological tool such as AI. Our team is continuing to work toward striking a balance between requesting usability support and not overburdening our colleagues. In the Recommendations section, we offer suggestions on how to strike this balance for usability-minded AI implementation.
Sustainability: Ensuring Long-Term Success
Early on, the working group identified the importance of planning for multiple facets of sustainability. From an operations standpoint, we define sustainability as the ability to maintain our processes and products at the same quality for a long duration of time. Namely, we were concerned with how we would sustain the AI models as well as the human capital required to maintain their iteration and success in the workplace.
From the standpoint of the AI models, a big question that arose was where the AI models would 'live' and how they would evolve over time. While we did not completely resolve this concern, we spent time during our weekly meetings discussing the implications of different approaches to sustainability. For instance, we discussed how including certain AI bots on our websites could help users navigate content across our website ecosystem; however, when websites changed, it was unclear how we would proceed. In the ever-changing landscape of research administration, change is inevitable. If AI is to become a fixture of our work, where AI solutions are placed and how they are maintained needs to be a proactive rather than reactive decision. In other words, sustainability must be a front-of-mind design consideration that is continually discussed and prioritized, a principle we worked to embody.
Secondly, human capital is critical for sustainability, referring to the importance of human skills, training, and attitudes (Shimazoe, 2021). In our case, our working group is composed of employees across ROps who are volunteering their time to experiment with AI. Currently, no team member is able to permanently shift their job responsibilities to provide long-term maintenance and oversight of AI implementation. As a result, there is a need to capture people's expertise when they can offer it, while also planning for the moments when that expertise cannot be called upon and incorporated.
Flexibility: Accommodating Evolving Needs
By following the implementation heuristic, we were able to follow a systematized plan for AI implementation that also prioritized procedural flexibility across the team. Rather than asking each group member to be flexible and adaptable, the collaborative heuristic helped us channel what Hannah and Lam (2023) term functional flexibility: "team members' ability to function effectively, efficiently, and economically within the subcultures of a group, unit, or team" (p. 144). A key component of functional flexibility is a deep understanding of subcultures, as well as their language and values. Our team was able to channel this depth of understanding because we are all from different teams. In practice, this flexibility often meant facilitating dialogue across differing professional and methodological orientations. Working across such boundaries is difficult, but thoughtful and systematic facilitation that seeks to preserve and empower difference proved essential to our process (Hedquist et al., 2025). By fostering shared understanding through reflective discussion, we were able to reposition what "effective collaboration" meant for our project: not consensus or uniformity, but a generative process that made visible the diverse epistemologies shaping our work.
The main way that we practiced flexibility was by consistently reporting on our experiences during weekly meetings to identify adaptation opportunities. These reporting opportunities included both our personal reflections as well as insights from our various teams. For instance, one colleague raised a concern about a public-facing help bot providing inaccurate information to a Principal Investigator (PI). This affected our initial plans to make the bot a resource for PIs via the website; however, by accounting for potential misuse, we were able to proceed with the project in a way that reduced misinformation and harm. As illustrated by this pivot, flexibility in our processes ensured that we accommodated diverse needs and drove impact that would be long-lasting.
Additionally, as we look to the future, we are cognizant of the importance of flexibility, as guidance regarding AI and institutionally supported tools may change. For instance, our institution's technology office vetted the tool we used—ChatGPT—but it was important for us to acknowledge that access to the tool, and the university's stance on it, could change. To accommodate these potential future pivots, we documented all of our processes and training materials. By maintaining clear training documentation, we can be flexible if we are asked to move platforms or change software.
Our concerns about usability resurfaced as a challenge to flexibility: we struggled to attract feedback from our colleagues about the bots. We were open to input throughout the design process to ensure flexibility and adaptability; however, many colleagues did not have the bandwidth to offer direction or feedback. While this was unfortunate, the moments when we were able to exercise flexibility were fruitful and worthwhile. Furthermore, we were able to maintain a stance of functional flexibility by leaning into our understanding of our individual teams and their subcultures. Overall, as a general habit of mind, flexibility proved beneficial.
Recommendations
Building off the emergent concepts, this section proposes three recommendations for universities that are looking to incorporate AI into ROps: develop educational materials, create space for iteration, and define roles and protocols. The recommendations were developed collaboratively through a series of meetings and reflections within the working group. Each emergent concept is woven into the recommendations to further emphasize the importance of usability, sustainability, and flexibility in AI implementation.
Develop Educational Materials
Leaders from business and academic sectors are building infrastructure to educate the workforce on AI, and these resources will be essential for AI implementation in research administration teams. In our case, OpenAI offered a suite of online tutorials and case studies to support students, staff, and faculty who were integrating AI into their work and classrooms (OpenAI Platform, n.d.). Similarly, ASU has developed best practices and courses to support AI education in the workforce (ASU, n.d.-b). We utilized this suite of educational materials—including video tutorials and office hours—to continually learn about AI as it has evolved. Given the evolving nature of AI, as well as varying comfort levels with the technology, we recommend that research administration teams prioritize education in two distinct ways: training employees on how to use AI to build bots, and training employees and/or end users on how to interact with the AI bots that are created.
First, ROps teams will need to educate employees on how to train, use, and disseminate AI bots. Albeit no easy feat, we recommend turning to early scholarship on AI education and literacy (Bearman & Ajjawi, 2023; Cardon et al., 2023; Gupta & Shivers-McNair, 2024), as well as publicly available best practices (Google AI, n.d.). Utilizing existing frameworks, such as the implementation heuristic we offer in this article, can give employees a starting point to stretch and grow their AI muscles, so to speak. One way to achieve these educational goals is through facilitated working sessions, or through the dissemination of tutorials and best practices. Whether synchronous or asynchronous, it is important that employees are not expected to sift through the deluge of AI materials alone; rather, by offering vetted and appropriate educational materials, organizations can help employees better understand the world of AI they are stepping into. Education cannot stop at employees, though—it must extend to the prospective end user as well.
Perhaps you are building an AI model for your ROps colleagues, or perhaps you are building one for PIs to use to better understand resources at the institution; either way, these end users will need to understand how to interact with the bots you present to them. In our experience, we benefited from writing clear descriptions of the bots, situating them among our preexisting websites, and distributing tutorials and best practices to end users. Without these educational resources in place, there is a risk that people may not know how to use the AI models effectively, leading to potential confusion or misuse. By prioritizing clear communication and providing accessible educational materials, research administrators can help ensure that users understand how to interact with the bots and feel confident using them. This proactive approach makes it easier to guide end users toward the support they need and reduces the likelihood of misunderstandings.
Create Space for Iteration
Given our experiences, we recommend allotting time and effort for iteration, despite potential disruptions to project goals. For instance, adopting a flexible and usability-centered approach to implementation may require a slower process. Though this may require a mindset shift—especially as research administrators are known for being systems-oriented and deadline-driven—adopting a stance of openness toward iteration can ensure a better product. Thus, the act of ‘creating space’ requires room in your project timeline, as well as mindfulness about how you will maintain a sustainable process for iteration.
Regarding timeline shifts, our project benefited from iteration as we asked our colleagues to continually provide feedback on our ideas in phase one and on our drafted AI models in phase four. Before the project began, we intentionally carved out these touchpoints to ensure ample time for iteration. Through their feedback, we opened ourselves to intentional, evidence-based iteration. Creating space for this work was uncomfortable and labor intensive at times; however, the end products benefited, as did our end users.
In addition to planning time for iteration, it is important for research administration teams to identify the means through which to maintain this space for iteration. By space, we refer to a continued commitment to adapting the AI models to best meet user needs. Sustaining this commitment can be tricky, as it requires dedicating time and personnel to continued feedback collection and iteration. For instance, an employee may be asked to check a feedback survey every week, or perhaps the working group agrees to meet once a month to discuss necessary iterations. Regardless of the setup, creating space for iteration accomplishes little if it is not paired with maintaining that space over a sustained period of time. By prioritizing long-term space for iteration, the team commits itself to prioritizing evolving user needs.
Define Roles and Protocols
To ensure sustainability, teams working with AI will need to clearly define the roles and protocols related to the bots. In our team, we were driven by the motivation to create proof-of-concept bots and experiment with what AI can do through the ASU AI Innovation Challenge. Though this experimentation is important and worthwhile, we repeatedly stopped throughout the process to ask ourselves how we might sustain this work if our proof of concept proved successful and useful to our colleagues. In these moments of reflection, we documented our training materials and protocols for the bots so that they could be maintained long-term or moved to another platform if needed. This continuous documentation was critical to our implementation protocol and ensured sustainability.
To offer an example, we built a bot that is an expert in Uniform Guidance, which has successfully supported the workflows of several colleagues. However, we had to ask several sustainability-focused questions, such as: When the Uniform Guidance policies are updated, who is responsible for updating the bot's training? If the bot starts to produce inaccurate responses, who is responsible for fixing it? How often will we solicit feedback from users to ensure that it remains usable? These questions, among others, are a focal point for our team as we consider which bots will continue to be prioritized in our work, who will be responsible for them, and what processes those individuals will need to follow. Albeit uncharted territory, we invite other scholars and administrators to share their experiences crafting AI-centered goals so that ROps teams can better plan for and execute AI implementation efforts.
Acknowledgements
This project was supported by Arizona State University’s AI Innovation Challenge, which afforded our team access to the capabilities of OpenAI’s Enterprise License.
We would also like to thank our colleagues who offered their time, expertise, and invaluable feedback as we worked to implement AI into our operations. Specifically, we would like to thank the following colleagues: Leslie Daniels, Jenny Dunaway, Deirdre Egan, Marilyn Gardner, Sarah Kern, Ashley Maasen, Megan Mitchell, Nakeisha Numrich, Sybil Nwulu, Jessica Robins, Lilian Sapiano, Lindsey Havranek Shapiro, and Ramya Turaga.
Authors' Note
Originality Note
This manuscript reflects the original work of the authors. Any references used in developing the manuscript are cited.
Amber Hedquist
Research Specialist
Rob and Melani Walton Center for Planetary Health
Arizona State University
Tempe, AZ, 85281
anhedqui@asu.edu
ORCID: 0009-0004-7637-9579
Max Castillon
Research Compliance Coordinator
Office of Research Compliance
Arizona State University | Knowledge Enterprise
Tempe, AZ 85287-7205
max.castillon@asu.edu
ORCID: 0009-0001-6547-6160
Megan R. Cooper
Research Advancement Administrator
Research Advancement Services
Research Operations – Pre-Award Services
Arizona State University
Tempe, AZ 85287-6011
MeganCooper@asu.edu
Valerie Keim
Research Operations Manager
Strategic Cross-Functional Support
Knowledge Enterprise
Arizona State University
Tempe, AZ 85281
val@asu.edu
ORCID: 0000-0003-2154-9767
Tasha Mohseni
Compliance Coordinator
Research Integrity & Assurance
Knowledge Enterprise
Arizona State University
Tempe, AZ 85281
tmohseni@asu.edu
Kimberly Purcell
Instructional Designer
Instruction and Outreach Team
Knowledge Enterprise
Arizona State University
Tempe, AZ 85281
Kimberly.Purcell@asu.edu
ORCID: 0000-0001-5663-0245
Corresponding Author
Correspondence concerning this article should be addressed to Amber Hedquist, Research Specialist, Rob and Melani Walton Center for Planetary Health, Arizona State University, 777 E University Dr, Tempe, AZ 85281, anhedqui@asu.edu.
References
Arizona State University. (n.d.-a). AI Innovation Challenge. Retrieved November 15, 2024, from https://ai.asu.edu/AI-Innovation-Challenge
Arizona State University. (n.d.-b). Artificial intelligence. Retrieved November 15, 2024, from https://ai.asu.edu/
Babl, F. E., & Babl, M. P. (2023). Generative artificial intelligence: Can ChatGPT write a quality abstract? Emergency Medicine Australasia, 35(5), 809–811. https://doi.org/10.1111/1742-6723.14233
Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160–1173. https://doi.org/10.1111/bjet.13337
Cardon, P., Fleischmann, C., Aritz, J., Logemann, M., & Heidewald, J. (2023). The challenges and opportunities of AI-assisted writing: Developing AI literacy for the AI age. Business and Professional Communication Quarterly, 86(3), 257–295. https://doi.org/10.1177/23294906231176517
Chamurliyski, P. (2023). Enhancing efficiency in scientific writing in the field of plant growing: Collaborating with AI assistant ChatGPT for enhanced productivity. Acta Scientifica Naturalis, 10(3), 73–81. https://doi.org/10.2478/asn-2023-0023
DeJeu, E. B. (2024). Using generative AI to facilitate data analysis and visualization: A case study of Olympic athletes. Journal of Business and Technical Communication, 38(3), 225–241. https://doi.org/10.1177/10506519241239923
Google AI. (n.d.). Google responsible AI practices. Retrieved November 15, 2024, from https://ai.google/responsibility/responsible-ai-practices/
Gupta, A., & Shivers-McNair, A. (2024). “Wayfinding” through the AI wilderness: Mapping rhetorics of ChatGPT prompt writing on X (formerly Twitter) to promote critical AI literacies. Computers and Composition, 74, 102882. https://doi.org/10.1016/j.compcom.2024.102882
Hannah, M. A., & Lam, C. (2023). Functional flexibility: Cultivating a culture of adaptability for the work of professional writing. In A. L. (Ed.), Rewriting work (pp. 141–158). WAC Clearinghouse.
Hedquist, A., Hannah, M. A., & Caputo, C. (2025). Facilitation in TPC research practice: Navigating the complexities of collaborative research. In Proceedings of the 43rd ACM International Conference on Design of Communication (pp. 234–235). https://doi.org/10.1145/3711670.3764650
Hedquist, A., Willers, H., & Hannah, M. A. (2024). Discipline-driven AI: Training a GPT to qualitatively code for technical and professional communication research. Proceedings of the 42nd ACM International Conference on Design of Communication, 266–268. https://doi.org/10.1145/3641237.3691686
Komperla, R. C. A. (2021). AI-enhanced claims processing: Streamlining insurance operations. Journal of Research Administration, 3(2), 95–106.
Komperla, R. C. A. (2022). Artificial intelligence and the future of auto health coverage. Journal of Research Administration, 4(2), 259–269.
Lauer, C., & Brumberger, E. (2016). Technical communication as user experience in a broadening industry landscape. Technical Communication, 63(3), 248–264.
Lyu, Q., Tan, J., Zapadka, M. E., Ponnatapura, J., Niu, C., Myers, K. J., Wang, G., & Whitlow, C. T. (2023). Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: Results, limitations, and potential. Visual Computing for Industry, Biomedicine, and Art, 6(1), 9. https://doi.org/10.1186/s42492-023-00136-5
Mallette, J. C. (2024). Preparing future technical editors for an artificial intelligence-enabled workplace. Journal of Business and Technical Communication, 38(3), 289–302. https://doi.org/10.1177/10506519241239950
Mara, A. (2020). UX on the go: A flexible guide to user experience design. Taylor & Francis Group. http://ebookcentral.proquest.com/lib/asulib-ebooks/detail.action?docID=6247179
Morgan, D. L. (2023). Exploring the use of artificial intelligence for qualitative data analysis: The case of ChatGPT. International Journal of Qualitative Methods, 22, 16094069231211248. https://doi.org/10.1177/16094069231211248
OpenAI. (n.d.). Introducing GPTs. Retrieved March 1, 2024, from https://openai.com/blog/introducing-gpts
OpenAI, Arizona State University collaborate to advance AI in academia. (2024, January). Communications Today. https://www.proquest.com/docview/2916401779/citation/D2C693BD03D0402DPQ/1
OpenAI Platform. (n.d.). Tutorials. Retrieved November 15, 2024, from https://platform.openai.com
Rizvi, A., Rizvi, F., Lalakia, P., Hyman, L., Frasso, R., Sztandera, L., & Das, A. V. (2023). Is artificial intelligence the cost-saving lens to diabetic retinopathy screening in low- and middle-income countries? Cureus, 15(9), e45539. https://doi.org/10.7759/cureus.45539
Schiller, J. L., & LeMire, S. D. (2023). A survey of research administrators: Identifying administrative burden in post-award federal research grant management. Journal of Research Administration, 55(3), 9–28. https://files.eric.ed.gov/fulltext/EJ1412033.pdf
Shimazoe, J. (2021). Research managers and administrators in conflicting organizational cultures: How does their human capital help professional survival in knowledge-intensive organizations? Journal of Research Administration, 52(1), 102–140. https://files.eric.ed.gov/fulltext/EJ1293150.pdf
Society for Research Administrators International. (n.d.). Past winners—SRA International. Retrieved November 15, 2024, from https://www.srainternational.org/membership-experience/get-involved393/symposium591/past-winners271
Vapiwala, F., & Pandita, D. (2024). Streamlining talent management for modern business through artificial intelligence. 2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS), 619–623. https://doi.org/10.1109/ICETSIS61505.2024.10459450
Yu-Han, C., & Chun-Ching, C. (2023). Investigating the impact of generative artificial intelligence on brainstorming: A preliminary study. 2023 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), 193–194. https://doi.org/10.1109/ICCE-Taiwan58799.2023.10226617
Zhang, T. (2024). Imagining a more artificially intelligent future: AI offers multiple solutions to save time and improve processes for stage and production managers. Theatre Design & Technology (TD&T), 60(2), 34–41. https://www.usitt.org/sites/default/files/2024-07/Free%20Article%20hickmanbrady_tdt_2024summer.pdf
Appendix A
PIT CREW Exercise
[Insert Name for AI Model]
Step 1: The P.I.T. Exercise
P - Purpose (What the AI is Helping Achieve): What will this bot accomplish?
I - Instructions (How to Create the Right Output): How will the bot respond?
T - Topic (What the AI Needs to Know): What knowledge does the bot need?
Step 2: The C.R.E.W. Test
C - Complex: Is the task complex or multi-step?
R - Repetitive: Is the task repetitive and performed frequently?
E - Expert: Does the task require expert-level knowledge or domain-specific expertise?
W - Widespread: Is the task needed by many people or across various teams?
Step 3: Documents
Reread your answers to the P.I.T. questions. What documents will you need, if any, to upload to the AI model so that it behaves as intended?
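To illustrate how the worksheet might be completed, the following hypothetical example (expressed as a simple data structure; every value is invented and does not describe any bot the team actually built) fills in the P.I.T. C.R.E.W. fields for an imagined Uniform Guidance helper bot:

```python
# Hypothetical completed P.I.T. C.R.E.W. worksheet for an invented
# "Uniform Guidance Helper" bot; every value is illustrative.
pit_crew = {
    "name": "Uniform Guidance Helper",
    "P_purpose": "Answer routine questions about Uniform Guidance "
                 "allowability and point users to the relevant section.",
    "I_instructions": "Cite the relevant 2 CFR 200 section in every "
                      "answer; refer ambiguous cases to a human expert.",
    "T_topic": "2 CFR 200 text plus institutional post-award guidance.",
    "C_complex": True,      # multi-step lookups across policy sections
    "R_repetitive": True,   # the same questions recur weekly
    "E_expert": True,       # requires post-award domain knowledge
    "W_widespread": True,   # useful to pre-award, post-award, and compliance
    "documents": ["2 CFR 200 excerpt (PDF)", "internal allowability guide"],
}
```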