The policy IT18 Artificial Intelligence (AI) Use applies to all members of the medical school community who use AI tools in connection with their institutional activities, systems, data, or operations. The policy establishes requirements for the responsible use of AI at WMed and is intended to protect information; ensure appropriate oversight and approval of AI tools; align AI use with existing technology, privacy, and security expectations; and provide a stable foundation supported by institutional guidance. A separate student policy, UME433/GSE433 Use of Artificial Intelligence Resources During Assessments and Assignments, governs the use of AI resources in those settings.
AI Guidance
This guidance applies to all members of the medical school community, including faculty, residents, fellows, staff, and students. It provides a shared foundation for responsible, ethical, and thoughtful use of AI across the medical school and supports informed decision-making, learning, and innovation while protecting individuals, patients, and the institution. Institutional guidance is not intended to penalize good-faith use of AI tools; rather, it clarifies expectations, risks, and appropriate boundaries.
Core Principles for Use of AI
The core principles for the use of AI at WMed are:
- AI tools support, but do not replace, human judgment or responsibility.
- Only vetted and approved AI tools may be used.
- Sensitive or protected information must be safeguarded.
- AI-generated content requires human review and accountability.
- Transparency and disclosure are required when AI contributes meaningfully to work products.
- Verification and validation are essential before use.
When You Are Unsure
If you are uncertain whether a particular use of AI is appropriate, pause before proceeding. Consult the relevant guidance and engage appropriate leadership, supervisors, Information Technology, Clinical Affairs, or the Medical Library before moving forward. Requests for assistance can also be sent to Support+AI@wmed.edu.
Approved AI Tools and Governance
The medical school defines how AI tools are reviewed, approved, and governed. Only vetted and approved AI tools may be used for institutional activities. AI tools that access WMed systems, integrate with platforms such as Teams, or process sensitive data require explicit approval. Approval of a tool for one context (e.g., administrative use) does not imply approval for use in other contexts, such as clinical care. Pilot testing and limited evaluation may be appropriate pathways for assessing new AI tools or use cases, when coordinated through established institutional governance processes. Third-party AI tools that are not vetted introduce additional privacy, security, and compliance risks and must not be used with sensitive or protected information.
Protecting Information
Information must be protected whenever AI tools are used. Sensitive or protected information must not be entered into unvetted or public AI tools. This includes Personally Identifiable Information (PII), Protected Health Information (PHI), FERPA-protected student information, confidential review materials (e.g., manuscripts, grant proposals, peer reviews), and non-public institutional information. Entering confidential, non-public, or regulated materials into unvetted AI tools may constitute public disclosure and introduce legal, ethical, or reputational risk. When appropriate, use de-identified, hypothetical, or institutionally approved data.
Human Review and Accountability
Humans are responsible when AI tools are used. AI-generated content is not authoritative. Humans remain fully accountable for accuracy, appropriateness, ethical use, and final decisions. AI output must be reviewed, edited, and validated before use. Responsibility cannot be delegated to an AI system. The level of review, validation, and oversight should be proportionate to the potential impact of the AI-supported activity.
Verification and Validation
Users are responsible for verifying AI-generated content. AI tools may generate fluent but inaccurate content. Users must verify facts using authoritative sources; confirm citations and references; and validate outputs in context (clinical, academic, administrative). Verification expectations may vary by context, but verification itself is always required. Institutional resources such as Information Technology, the Medical Library, Clinical Affairs, and the IRB are available to support responsible AI use, verification, and validation.
Transparency and Disclosure
Transparency and disclosure are required when AI tools contribute meaningfully to work shared beyond an individual’s personal use. Disclosure expectations may vary (e.g., publisher requirements, clinical context), but the institutional baseline is disclosure, with documented exceptions. AI tools must not be listed as authors.
Ethical Scholarship and Intellectual Property
The use of AI tools must support ethical scholarship and respect for intellectual property. AI tools must not be used to fabricate data or citations, misrepresent authorship or contribution, or circumvent learning objectives or assessments. Copyrighted curricular materials and proprietary content must not be entered into third-party AI tools. In research and scholarly contexts, disclosure may be required in manuscripts, acknowledgments, cover letters, or other publication materials, consistent with publisher or funder expectations.
Fairness and Accessibility
Users must address fairness and accessibility considerations when using AI tools. AI tools may reflect bias present in their training data. Users should review outputs for bias or exclusion; consider the accessibility of AI-generated materials; and avoid reinforcing inequities. Equitable and inclusive use of AI is a shared responsibility.
Scholarly Writing and Publication
For scholarly writing and publication, AI tools may assist with drafting, organization, and clarity. AI tools are not authors. Authors remain accountable for accuracy and interpretation. Disclosure of AI use is required where applicable. Citations and claims must be verified using authoritative sources. Publisher and disciplinary standards must be followed.
Role-Specific Guidance
All users should adhere to the general institutional guidelines; in addition, role-specific guidance provides further specificity and relevant detail. The role-specific AI use guidance applies to the following roles:
- Students: This applies to students engaged in coursework, research, and clinical experiences. It is relevant to academic integrity, course-specific expectations, appropriate disclosure, and learning-centered use of AI.
- Faculty Teaching: This applies to faculty designing curriculum, assessments, and instruction. It is relevant to transparency with learners, protecting curricular intellectual property, and ensuring that AI is not the sole determinant of student outcomes.
- Research: This applies to all research-related activities. It is relevant to the protection of confidential review materials, ethical research conduct, and alignment with funder and publisher expectations.
- Residents and Fellows: This applies to residents and fellows in clinical training. It is relevant to patient safety, supervised use, clinical judgment, and competency-based progression.
- Clinical Faculty and Clinician Educators: This applies to attending physicians, clinical faculty, and preceptors who are not learners. It is relevant to clinical judgment remaining human, the use of approved tools only in clinical contexts, protection of PHI, supervision of learner AI use, and alignment with Clinical Affairs guidance.