
Empowering Diversity Part II: Potential Cautions of AI on Diversity, Equity and Inclusion (DEI) in Research Administration

By SRAI News posted 10-09-2024 01:48 PM


AI, with its potential to revolutionize how organizations approach DEI in research administration, offers significant benefits. Its ability to swiftly and accurately process large volumes of data using advanced algorithms can uncover patterns and trends, empowering institutions to gain deeper insight into their DEI metrics and make faster, better-informed decisions. For instance, AI can scrutinize internal grant distribution data to identify and help resolve gender and racial disparities in funding and resource allocation, thereby promoting a more equitable distribution of resources. AI-driven tools can standardize the evaluation of grant applications for DEI considerations, ensuring that all submissions are assessed against the same DEI criteria. AI tools can thus help organizations adjust their policies and refine their strategies to embed DEI considerations in research administration, including training programs, research staff hiring practices, performance management processes, resource allocation, and other DEI initiatives, based on identified gaps and needs.
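
As a concrete illustration of the disparity analysis described above, the sketch below (with purely hypothetical data, group labels, and function names) computes funding rates per applicant group from internal grant records and measures the largest gap between groups:

```python
from collections import defaultdict

def funding_rates_by_group(applications):
    """Share of funded applications per group.

    `applications` is a list of (group, funded) pairs; the labels
    and values here are illustrative, not real institutional data.
    """
    totals = defaultdict(int)
    funded = defaultdict(int)
    for group, was_funded in applications:
        totals[group] += 1
        if was_funded:
            funded[group] += 1
    return {g: funded[g] / totals[g] for g in totals}

def funding_gap(rates):
    """Largest difference in funding rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical internal grant data: (applicant group, funded?)
data = [("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False)]

rates = funding_rates_by_group(data)
print(rates)               # {'A': 0.75, 'B': 0.25}
print(funding_gap(rates))  # 0.5 — a gap worth investigating
```

A real system would of course use richer statistical tests and controls, but even this simple rate comparison shows the kind of pattern AI-assisted analysis can surface at scale.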

While AI holds significant promise for DEI compliance and initiatives, it is crucial to be acutely aware of the potential risks. Below are potential cautions regarding the use of AI for DEI in research administration, along with possible risk-mitigation strategies.

Enabling existing bias: One risk is that AI can exacerbate existing biases. AI systems are trained on historical data, which may contain biases related to race, gender, and other demographic factors. If these biases are not identified and corrected, AI can reproduce and amplify disparities, leading to unfair treatment of underrepresented groups in areas of research administration such as resource allocation and the training and recruitment of research trainees and staff.
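
A toy sketch (entirely hypothetical data) of how this happens: a naive model that simply memorizes historical approval rates per group will reproduce whatever disparity those rates already encode:

```python
def train_rate_model(history):
    """'Train' by memorizing the historical approval rate per group."""
    totals, approved = {}, {}
    for group, ok in history:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Recommend approval when the group's historical rate clears the bar."""
    return model[group] >= threshold

# Historical decisions already skewed against group "B"
history = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 2 + [("B", False)] * 8

model = train_rate_model(history)
print(predict(model, "A"))  # True  — group A is always recommended
print(predict(model, "B"))  # False — the historical skew is reproduced
```

Real AI systems are far more complex, but the mechanism is the same: the model has no notion of fairness beyond what the training data encodes, which is why biased inputs must be identified and corrected.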

Over-reliance on AI: While AI tools can enhance the efficiency of research administration, it is crucial to recognize the irreplaceable role of human judgment in ensuring fairness and equity. There is a potential risk of over-reliance on AI tools for DEI considerations and decision-making in research administration, including proposal development, proposal evaluation, and monitoring adherence to DEI principles while implementing funded projects. For example, if a reviewer uses an AI-based tool to scan and rank applications for DEI, they must feed it data on past successful applications so the tool can identify similar DEI discourse. However, if those successful applications do not take a comprehensive view of DEI, automated decision-making can miss the mark and seriously compromise DEI objectives. This underscores the importance of human judgment in the AI decision-making process.

Risk of non-compliance with laws and regulations: Integrating AI into DEI initiatives in research administration requires careful ethical consideration, institutional strategic alignment, and an understanding of regulatory and legal risks. For example, failing to ensure compliance with equal employment opportunity laws and other regulations during hiring can result in legal challenges and reputational damage. The Department of Labor's guidance emphasizes ensuring that AI tools do not violate laws such as the Fair Labor Standards Act and the Family and Medical Leave Act.

Security risk: Collecting and storing sensitive demographic data introduces unique security risks. Third parties may gain unauthorized access to this data, or it may be misused in the workplace. Appropriate physical, organizational, and technological safeguards must be in place to protect sensitive DEI-related information.

Mitigation strategies: Ensuring high-quality, representative data, maintaining human oversight, aligning AI use with ethical and strategic goals, and complying with regulatory requirements are critical steps to mitigate the risks associated with AI. Universities must ensure that their use of AI aligns with their broader DEI goals and does not simply serve as an efficiency tool. Organizations should develop an internal AI governance structure that reviews any DEI-related use of AI-enabled tools and establishes appropriate policies and procedures. This also includes regular audits of AI systems for fairness, updating training data to reflect current realities, and involving diverse teams in developing and implementing AI tools to ensure they address the needs of all stakeholders. By addressing these challenges proactively, universities can leverage AI to support their DEI objectives effectively and ethically.

Ongoing need for training and recalibration of AI models: It is critical to recognize that bias mitigation in AI systems is not a one-time process but a continuous effort. AI tools require regular updates and recalibration to ensure that they reflect current realities, legal frameworks, and social norms. As biases in AI models are often subtle and complex, organizations should implement ongoing training and testing protocols to minimize the risk of perpetuating these biases over time. Collaborating across disciplines—between AI developers, DEI experts, legal advisors, and researchers—can provide a more holistic approach to developing and refining AI systems used in DEI compliance. Ensuring continuous learning in AI models also involves regularly reviewing the quality and representativeness of training data, as AI systems can only be as fair as the data on which they are based.
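
The regular-audit idea above can be reduced to a simple automated check; the sketch below (with hypothetical data, tolerance, and function names) compares each group's recent selection rate against a baseline and flags groups whose rates have drifted beyond a tolerance, signaling that recalibration may be needed:

```python
def selection_rates(decisions):
    """Share of positive outcomes per group, from (group, selected) pairs."""
    totals, positives = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def audit(baseline, recent, drift_tolerance=0.1):
    """Return the groups whose selection rate drifted beyond tolerance."""
    base = selection_rates(baseline)
    now = selection_rates(recent)
    return {g for g in base if abs(now.get(g, 0.0) - base[g]) > drift_tolerance}

# Hypothetical decision logs from two audit windows
baseline = [("A", True)] * 5 + [("A", False)] * 5 + \
           [("B", True)] * 5 + [("B", False)] * 5
recent   = [("A", True)] * 5 + [("A", False)] * 5 + \
           [("B", True)] * 2 + [("B", False)] * 8

print(audit(baseline, recent))  # {'B'} — B's rate fell from 0.5 to 0.2
```

Such a check could run on a schedule, with flagged drift triggering human review, retraining, or data-quality investigation rather than automatic correction.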

References:

Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning. MIT Press.

Bastian, R. (2023, May 8). AI brings opportunities and risks to workplace DEI efforts. Forbes.

Birbal, C. (2024, May 3). New DOL guidelines caution against over-reliance on AI for compliance.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.

Pennington, K. (2024, June 25). AI and workplace DEI initiatives: Opportunities and challenges. McMillan LLP Privacy & Data Protection Bulletin.


Authored by Anita Sharma, PhD, CRA, Director, Research Services
Thompson Rivers University

Authored by Rashonda Harris, MBA, Ed.D., Adjunct Faculty Member
Johns Hopkins University


#Catalyst
#October2024
#ProfessionalDevelopment
#Featured
