Academic Writing and Publishing

AI tools can support certain aspects of scholarly writing, but they do not replace human authorship, intellectual contribution, or accountability. This guidance is intended to help authors use AI thoughtfully while maintaining ethical scholarship, protecting sensitive information, and meeting publisher and institutional expectations.

This guidance applies to members of the WMed community engaged in scholarly writing, including faculty authors, residents and fellows, staff contributing to scholarly work, and students participating in research or publication. It covers writing for journals, conferences, academic reports, and other scholarly outputs.

  • AI tools may be used to support certain stages of scholarly writing, such as brainstorming ideas or outlining manuscripts; drafting early versions of text for later refinement; improving clarity, organization, or grammar; and summarizing material for the author’s own understanding. AI tools are intended to assist with writing processes, not to generate final scholarly content without meaningful human contribution. Final written work must reflect the author’s understanding, interpretation, and intent.

  • AI tools are not authors. Human authors retain full responsibility for accuracy of claims and interpretations; integrity of analysis and conclusions; ethical representation of contributions; and compliance with institutional and publisher requirements. AI-generated content must be reviewed, edited, and validated. Authors are accountable for the final work, regardless of the tools used during drafting.

  • Transparency and disclosure of AI use are required when AI tools contribute meaningfully to scholarly writing. Disclosure expectations may vary depending on journal or publisher requirements, discipline-specific norms, and the nature and extent of AI assistance. Disclosure may be required in manuscripts, acknowledgments, or cover letters. Authors should follow the most specific applicable guidance and clearly describe how AI tools were used. AI tools must not be listed as authors. 

  • AI-generated text may appear fluent while containing inaccuracies or unsupported claims. 

    Authors must verify factual statements using authoritative sources; confirm accuracy of citations, quotations, and references; and ensure summaries reflect the original literature accurately. AI tools should not be relied upon as authoritative sources. Human judgment and evidence-based verification are required for all scholarly content. 

  • AI tools may support writing mechanics and organization, but they must not be used to fabricate data, results, or citations; replace critical thinking or subject matter expertise; or generate clinical interpretations or conclusions without validation. Routine tools for spelling, grammar, or formatting are distinct from generative AI systems and may be used as appropriate.

  • Scholarly writing may involve sensitive materials that must be protected. Authors must not enter the following into unvetted third-party AI tools: protected health information (PHI), identifiable patient data, confidential review materials (e.g., unpublished manuscripts, grant proposals, peer reviews), and non-public institutional information. Authors should use de-identified or hypothetical examples when exploring AI-assisted writing tasks.

  • Publisher policies on AI use are evolving and may differ across journals and disciplines. 

    Authors are expected to review and follow publisher instructions regarding AI use and disclosure while adhering to institutional standards for academic integrity, authorship, privacy, and IT use. When requirements differ, the most restrictive applicable standard should be followed. 

  • WMed provides resources to support responsible academic writing, and authors are encouraged to use these resources when working with AI-assisted writing. Resources include: 

    • Medical librarians who can assist with literature searches and citation verification 
    • Plagiarism-checking tools to support originality and disclosure readiness 
    • Institutional AI guidance addressing data protection, disclosure, and integrity 

  • AI tools may reflect bias present in training data and may produce outputs that appear credible despite inaccuracies. Authors should critically review AI-assisted content for bias or unexamined assumptions, overgeneralization or missing context, and inappropriate tone or framing. Understanding AI limitations is part of responsible scholarly practice.

When You Are Unsure 

If you are uncertain whether a particular AI use is appropriate in academic writing or scholarly publishing, pause before proceeding. Follow institutional guidance and consider consulting appropriate leadership, the Medical Library, or the relevant journal's instructions for authors. Requests for assistance can also be sent to Support+AI@wmed.edu.