
Be transparent
This principle is part of the BE WISE framework of human oversight in AI-assisted research. Together, the six principles describe how researchers retain responsibility when using AI tools: through active judgement, documentation, transparency, and ethical care.
Here, we focus on what it means to be transparent in practice.

Transparency is the foundation of trust in research. Without clear disclosure, readers, editors, and reviewers cannot assess whether AI has influenced findings. Hidden or undisclosed AI use risks undermining credibility, reproducibility, and accountability.
- Be proactive — disclose even minor or experimental uses to foster a culture of openness and trust.
- Check policies — journals, funders, and institutions often publish AI disclosure requirements; review them before submission.
- Name the tools — include the model or version where possible.
- Describe their role — the specific task they supported (e.g. summarizing literature, polishing text, visualizing data).
- Specify timing — when AI was used in the research or publication process (e.g. during drafting, analysis, or review).
- Declare limitations — clarify what the AI did not do (e.g. no data generation or interpretation).
| Example disclosure statement |
|---|
| We used [Tool + model/version] to support [task] at the [stage]. The tool was provided with [types of input] and was not used for [what it did not do]. All outputs were [checked/verified] by [whom]. |
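If AI use is logged throughout a project, the same disclosure can be assembled consistently from recorded fields. A minimal Python sketch, purely illustrative — the function name, field names, and example values are hypothetical, not part of any standard or policy:

```python
# Illustrative sketch: builds a disclosure sentence from the elements
# discussed above (tool, task, stage, inputs, exclusions, checker).
# All names and values here are hypothetical examples.
def disclosure_statement(tool, task, stage, inputs, not_used_for, checked_by):
    return (
        f"We used {tool} to support {task} at the {stage} stage. "
        f"The tool was provided with {inputs} and was not used for "
        f"{not_used_for}. All outputs were verified by {checked_by}."
    )

print(disclosure_statement(
    tool="ToolX v1.2",              # hypothetical tool + version
    task="language polishing",
    stage="drafting",
    inputs="our own draft text",
    not_used_for="data analysis or interpretation",
    checked_by="all co-authors",
))
```

Keeping the fields explicit makes it harder to omit an element (such as the limitations clause) when writing the final statement by hand.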
| ⚠️ What NOT to do |
|---|
| Unsafe prompt: "Write a disclosure that guarantees we complied with GDPR and journal policy." Why: AI cannot truthfully guarantee compliance; that is a legal conclusion requiring human verification and governance checks. |