5 Research Security Questions Every Study Team Should Be Able to Answer

By SRAI News

Regulatory & Compliance Oversight

Research security doesn’t start in an office—it starts with study teams. This article reframes research security as a set of everyday decisions and introduces five essential questions teams should be able to answer to protect data, participants, and institutional trust.

Research security is often framed as merely a technical or compliance-driven issue, something managed through policies, systems, or oversight offices. But the most consequential security decisions are made much closer to the ground. They happen within study teams, through everyday choices about funding, data, participants, collaborators, and outcomes (National Science Foundation [NSF], 2023; Office of Science and Technology Policy [OSTP], 2022).

At its core, research security is not just about protecting information; it is about protecting trust. Participants share sensitive data with the expectation that it will be handled responsibly, used only for its intended purpose, and safeguarded throughout the life of the study and beyond. Funders, institutions, and the public similarly rely on research teams to act as careful stewards of that trust (National Academies of Sciences, Engineering, and Medicine [NASEM], 2020).

Security gaps rarely arise from malicious intent. More often, they emerge from unclear expectations, unexamined assumptions, or questions that were never fully addressed. By stepping back to ask the right questions early—and revisiting them as a study evolves—teams can surface hidden risks, clarify responsibilities, and strengthen ethical decision-making.

The following five questions are not a checklist or a compliance exercise. They are a framework for thoughtful stewardship. When study teams can confidently answer them, research security becomes an integrated part of good research practice rather than an after-the-fact concern.

1. Who is your funder?

Understand who funds your research, and how that funder is connected to other entities. This visibility is foundational to research security: study teams should have full insight into funder affiliations, collaborations, and funding mechanisms, with all relationships properly disclosed and vetted.

Research security requires end-to-end transparency, and that includes the funder. Teams should be able to clearly identify all key personnel involved in the work, including unpaid contributors, informal collaborators, and visiting researchers. Foreign appointments, visiting positions, and outside research support must also be consistently reported (OSTP, 2022; NSF, 2023).

Incomplete or inconsistent disclosures remain one of the most significant sources of research security risk. These gaps can undermine institutional trust, delay projects, and expose teams to compliance and reputational consequences. Knowing exactly who is supporting the work—and under what conditions—is the first step toward managing those risks.

2. What do they want in return?

Every funding relationship carries expectations, whether explicit or implicit. Study teams must understand what is requested in exchange for support and how those expectations shape data access, sharing, and control.

Are you working with classified, sensitive, or proprietary data, materials, or technologies? Is any information export-restricted, controlled, or identifiable human-subject data? If so, are appropriate safeguards and access controls in place?

Equally important is understanding who can access what. Not every team member may be eligible or authorized to work with all data. Clear definitions of data environments, access hierarchies, storage locations, and monitoring practices are essential for protecting sensitive information.

Finally, teams should be clear about what material or intellectual outputs are expected to be shared. Ambiguity around ownership, publication rights, or downstream use can create significant security and ethical challenges later on (National Institutes of Health [NIH], 2023; MITRE Corporation, 2019).

3. Who is the study population?

Research security begins with knowing exactly whose data, samples, and experiences you are entrusted to protect. A clearly defined study population helps teams assess sensitivity, risk exposure, and potential downstream harm.

Just as important as defining who is included is understanding who is not. Vague or shifting population definitions can introduce ethical blind spots and complicate data governance decisions. Certain populations may face heightened risk if data are misused or improperly disclosed, making precision especially critical.

A thorough understanding of the study population supports ethical research conduct, regulatory compliance, and strong security planning. It anchors decisions about consent, access, retention, and dissemination in a clear understanding of whom the research ultimately affects.

4. How does the study end?

Study teams must have a shared and explicit understanding of when a study ends—and what happens next. A well-defined endpoint helps establish clear separation between active research and post-study activities.

This question becomes especially important in studies involving longitudinal data collection, follow-up, or secondary use that may continue beyond the original approval period. Without clarity, access controls can blur, data may be retained longer than necessary, and oversight responsibilities can become unclear.

Defining the endpoint supports appropriate decisions about data handling, access revocation, archiving, and ongoing monitoring. It also ensures that post-study activities remain aligned with original approvals, participant expectations, and security obligations (NIH, 2023).

5. What are we going to do with the results?

The intended use of study results shapes nearly every aspect of research security. How findings will be analyzed, shared, stored, or repurposed determines who needs access, how long data must be retained, and where security boundaries should be enforced.

When plans for results are undefined—or treated as an afterthought—the risk of inappropriate sharing, mission creep, or unintended secondary exposure increases significantly. Teams should be explicit about dissemination pathways, data retention timelines, and any future uses of results.

Clear decisions about the lifecycle of results help ensure that findings are protected as carefully as the data that produced them, preserving both scientific integrity and participant trust.

Strong research security is not just a matter of software, policies, or audits. When study teams take the time to ask and answer these five questions, they create a shared understanding of responsibility, risk, and stewardship.

References

MITRE Corporation. (2019). JASON report: Fundamental research security. https://www.mitre.org/publications

National Academies of Sciences, Engineering, and Medicine. (2020). Protecting the integrity of the research enterprise. The National Academies Press. https://nap.nationalacademies.org

National Institutes of Health. (2023). NIH data management and sharing policy. https://sharing.nih.gov/data-management-and-sharing-policy

National Science Foundation. (2023). NSF SECURE: Safeguarding the research enterprise. https://www.nsf.gov/secure4

Office of Science and Technology Policy. (2022). Protecting the integrity of U.S. science: A framework for research security. The White House. https://www.whitehouse.gov/ostp

Authored by:

Rani Muthukrishnan, PhD
Director of Research Compliance
Texas A&M University–San Antonio
SRAI Catalyst Feature Editor

Anita Trupiano, MS
Program Development Analyst
Rutgers Cancer Institute of New Jersey
SRAI Catalyst Feature Editor

Shipra Mittal, MS, MBA
Senior Grants Manager
New York University
SRAI Catalyst Feature Editor

The Regulatory & Compliance Oversight Feature Editors want your important insights! 
Submit now to SRAI's Catalyst: https://srainternational.wufoo.com/forms/srai-catalyst-article-submission-form/

#Catalyst
#March2026
#Regulatory&ComplianceOversight
#ResearchSecurity