2025/027 | Responsible Use of AI in Academic Writing and Peer Review
Generative AI tools such as ChatGPT and Claude have rapidly entered academic workflows, offering assistance with writing, brainstorming, literature review, and data analysis. While these tools hold promise, their use raises critical questions about authorship, transparency, originality, and integrity. This post explores emerging guidelines, ethical considerations, and best practices for the responsible use of AI in research.
The Rise of Generative AI in Academia
What Generative AI Can Do:
- Drafting and editing: Generate text, improve grammar, suggest phrasing
- Literature review: Summarize papers, identify themes
- Data analysis: Generate code, interpret statistical output
- Brainstorming: Suggest research questions, hypotheses, or experimental designs
- Translation: Convert text between languages
Why It Matters for Research Integrity:
AI tools can enhance productivity, but they also introduce risks: fabricated citations, biased outputs, lack of transparency, and questions about intellectual contribution.
Key Concerns and Challenges
1. Authorship and Attribution
Can an AI be an author? Most major publishers and institutions say no. Authorship requires accountability, and AI cannot take responsibility for the accuracy or integrity of research.
Current Consensus:
- AI tools are not authors and should not be listed as co-authors
- Use of AI should be disclosed in methods, acknowledgments, or author statements
- Responsibility for content lies with human authors
2. Fabrication and Hallucination
AI can generate plausible-sounding but false information, including:
- Fake citations: Inventing non-existent papers with realistic-looking references
- Incorrect facts: Stating claims that are unsupported or wrong
- Biased outputs: Reflecting biases in training data
Best Practice: Always verify AI-generated content. Never cite a source you haven't read.
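One concrete way to apply this: when an AI tool supplies a reference with a DOI, confirm that the DOI resolves to a real record before it goes anywhere near your reference list. Below is a minimal Python sketch using Crossref's public REST API via the `requests` library (the endpoint and response shape shown are the documented ones, but check current Crossref documentation before relying on this):

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI.

    A True result only means the reference exists; it does not
    confirm that the paper actually supports the cited claim.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return False  # unknown DOI: treat the citation as suspect
    title = resp.json().get("message", {}).get("title", [])
    print(f"{doi} -> {title[0] if title else '(no title on record)'}")
    return True

# Check every DOI in an AI-drafted reference list, e.g.:
for doi in ["10.1038/171737a0"]:  # Watson & Crick (1953) should pass
    if not doi_exists(doi):
        print(f"WARNING: {doi} not found in Crossref -- verify manually")
```

A DOI that exists still proves nothing about whether the paper supports the claim it is cited for; reading the source remains non-negotiable.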
3. Originality and Plagiarism
AI outputs may inadvertently reproduce copyrighted or previously published material. Similarity detection tools may flag AI-generated text.
Questions to Consider:
- Does AI-generated text constitute plagiarism if it closely resembles existing work?
- Who owns the copyright to AI-generated text?
- How should institutions treat AI-assisted writing in assessments?
4. Transparency and Disclosure
Hiding AI use undermines trust. Readers, reviewers, and editors deserve to know when AI tools have been used and how.
Emerging Guidelines and Policies
Major publishers, funders, and institutions are developing AI policies:
1. Publisher Policies
- Nature, Science, Cell Press: AI cannot be listed as an author; AI use must be disclosed
- Elsevier, Springer Nature: Similar policies; require transparency statements
- Some journals: Prohibit AI-generated images in figures unless their use is explicitly disclosed
2. Funder Guidelines
Funding agencies are beginning to address AI:
- Some require disclosure of AI use in grant applications
- Others emphasize that AI outputs must be verified and validated
3. Institutional Policies
Universities are developing policies on AI in research and teaching:
- Acceptable use statements
- Disclosure requirements for dissertations and theses
- Training on responsible AI use
AI in Peer Review: Special Considerations
Using AI in peer review raises additional concerns about confidentiality and bias.
Concerns:
- Confidentiality breach: Uploading unpublished manuscripts to AI tools may expose confidential research
- Data retention: AI companies may retain and train on uploaded content
- Bias amplification: AI may reinforce existing biases in peer review (e.g., favoring certain writing styles, topics, or institutions)
Emerging Guidelines for Reviewers:
- Do not upload manuscripts to public AI tools (this violates confidentiality)
- If using AI for language assistance, use local/private models or tools with robust data-protection agreements (see the sketch after this list)
- Disclose AI use to editors if substantial assistance was provided
- Never let AI write your review; your expertise and judgment are what editors seek
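As a concrete illustration of the local-model option: the sketch below polishes a reviewer's own draft comments (never the manuscript under review) with a model running entirely on local hardware, so nothing leaves the machine. It assumes an Ollama server with a pulled model and the `ollama` Python client; the model name and client API are assumptions and may vary across versions:

```python
# Sketch: local language polishing with no data sent to third parties.
# Assumes a local Ollama server with a model pulled,
# e.g. `ollama pull llama3.1`, and `pip install ollama`.
import ollama

def polish_locally(text: str) -> str:
    """Ask a locally hosted model to improve grammar and clarity.

    The model runs on local hardware, so the text is never
    uploaded to an external service -- the key confidentiality point.
    """
    response = ollama.chat(
        model="llama3.1",  # assumed model name; use whatever you have pulled
        messages=[
            {"role": "system",
             "content": "Improve the grammar and clarity of the text. "
                        "Do not change its meaning."},
            {"role": "user", "content": text},
        ],
    )
    return response["message"]["content"]

draft = "The methods section are lacking detail about sample size."
print(polish_locally(draft))
```

The same logic applies to any tool a reviewer considers: the question to ask is where the text goes and who can retain it.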
Best Practices for Responsible AI Use
1. Understand Your Tool
Know the capabilities, limitations, and terms of service of the AI you're using. Is your data used for training? What are the privacy protections?
2. Disclose AI Use
Be transparent about:
- Which AI tool(s) you used
- What tasks the AI performed (e.g., editing for grammar, generating code, brainstorming)
- How you verified outputs
An acknowledgment might read, for example: "ChatGPT (OpenAI) was used to suggest grammatical and stylistic edits; all suggestions were reviewed by the authors, who take full responsibility for the final text."
3. Verify Everything
Treat AI outputs as suggestions, not facts. Check citations, verify data, and review generated text for accuracy and appropriateness.
4. Retain Human Judgment and Accountability
AI should assist, not replace, human expertise. You remain responsible for the content, conclusions, and integrity of your work.
5. Respect Privacy and Confidentiality
Do not upload sensitive, confidential, or proprietary data (participant data, unpublished manuscripts, proprietary code) to public AI tools.
6. Avoid Over-Reliance
Develop and maintain your own writing, analytical, and critical thinking skills. AI should complement your abilities, not let them atrophy.
7. Stay Informed
AI policies are evolving rapidly. Regularly check your institution's, funder's, and target journal's policies.
Looking Ahead
Generative AI is here to stay, and its role in research will continue to expand. The research community must balance innovation with integrity, developing norms that:
- Encourage responsible experimentation with AI tools
- Maintain transparency and accountability
- Protect the trustworthiness of the scholarly record
- Ensure equity (not everyone has equal access to advanced AI)
Conclusion
Generative AI offers powerful capabilities, but responsible use requires vigilance, transparency, and adherence to evolving guidelines. By disclosing AI use, verifying outputs, and retaining human accountability, researchers can harness AI's benefits while upholding the core principles of research integrity. As the technology and its regulation mature, the research community must remain engaged in shaping norms that protect both innovation and trust.
This post concludes our series on research integrity. From international guidelines and replication studies to institutional roles, citation ethics, human participant protections, and AI tools, maintaining integrity requires commitment at every level of the research ecosystem.