This is research-specific guidance for the responsible use of AI tools in research and scholarly activity at WMed. This guidance is intended to support thoughtful adoption of AI while safeguarding research quality and trust. It applies to all members of the research community, including faculty, staff and research coordinators, students and trainees, postdoctoral scholars and visiting researchers, and collaborators working on WMed-led research. AI tools can support many stages of the research lifecycle, but they must be used in ways that uphold scholarly integrity, protect confidentiality, and align with disciplinary, institutional, and publishing expectations. Funding agencies (e.g., NIH, NSF, State Medicaid, CMS, foundations) and publishers may have specific rules on the use of AI; researchers should review these prior to preparing grant proposals, conducting research, and submitting work for publication. Researchers may also need to consult role-based guidance for teaching, clinical practice, or administrative work as applicable.
-
AI tools may be used to support research activities such as brainstorming research questions or study designs; summarizing literature for preliminary understanding; drafting outlines or early versions of text; organizing notes, ideas, or qualitative themes; and analyzing data. AI tools are intended to support research workflows, not replace scholarly judgment, methodological rigor, or critical analysis. All AI-generated content must be reviewed, edited, and validated by the researcher before use.
-
Researchers remain fully responsible for accuracy, interpretation, and conclusions; ethical conduct of research; and compliance with institutional and regulatory requirements. AI tools do not assume responsibility for research decisions or outputs. AI-generated content should be treated as a draft or aid, not as an authoritative source.
-
Research activities often involve confidential review materials, which must be protected. Researchers must not upload the following into public or unvetted third-party AI tools, as inputting content into such tools may be treated as public disclosure:
- Research protocols under IRB, IBC, or IACUC review.
- Manuscripts, grant proposals, or reviewer comments under confidential review.
- Unpublished data or analyses.
- Sensitive or protected information, including PII, PHI, or FERPA-protected information.
-
AI-generated research content must be carefully verified and validated. Researchers should confirm factual claims using authoritative sources; verify citations, references, and quotations; ensure summaries accurately reflect original studies; and evaluate limitations, bias, and applicability. Medical librarians are available to support literature searches, citation verification, and responsible use of AI in research and writing.
-
Transparency and disclosure of AI use are required when AI tools contribute meaningfully to research outputs. Disclosure expectations may vary depending on the discipline or field, journal or publisher requirements, and funding agency expectations. Researchers should follow the most specific applicable guidance and clearly communicate how AI tools were used in the research or writing process. AI tools must not be listed as authors. Human authors retain full responsibility for the work.
-
AI use in research should align with disciplinary best practices, professional and ethical standards, as well as journal, publisher, and conference policies. Because norms are evolving, researchers are encouraged to stay informed and consult guidance from publishers, professional organizations, and institutional resources.
-
Researchers are encouraged to consider the suitability and risks of AI tools before adoption. Tool evaluation may include data handling and privacy considerations; reliability and known limitations; bias and explainability; and alignment with research goals. Institutional resources, including Information Technology and the Medical Library, can assist with tool evaluation. Once a tool has been identified, researchers must submit the AI Tool Request prior to installing or using any new AI tool.
-
Open discussion of AI use supports responsible research. Collaborative dialogue helps prevent misunderstandings and supports ethical practice. Researchers are encouraged to:
- Discuss AI use with mentors, supervisors, and collaborators
- Establish shared expectations within research teams
- Communicate clearly about AI contributions to shared work
When You Are Unsure
If you are uncertain whether a particular AI use is appropriate in a research context, pause before proceeding. Follow the institutional guidance for situations of uncertainty and consider consulting appropriate leadership, such as supervisors, mentors, the Medical Library, or Information Technology. Requests for assistance can also be sent to Support+AI@wmed.edu.