
Embed equity
This principle is part of the BE WISE framework of human oversight in AI-assisted research. Together, the six principles define how researchers retain responsibility when using AI tools through active judgement, documentation, transparency, and ethical care.
Here, we focus on what it means to embed equity in practice.

AI can unintentionally reinforce systemic inequities by favoring the dominant languages, regions, or perspectives represented in its training data. Without equitable use, AI risks marginalizing researchers from underrepresented communities and perpetuating bias in content and evaluation.
Choose inclusive inputs and tools: select multilingual, transparent tools and datasets that represent diverse populations (including low-resource contexts) and are accessible in low- and middle-income countries.
Check outputs for bias that may disadvantage or exclude any groups, regions, or perspectives.
Validate with people and non-AI sources: verify against trusted references and domain or local experts; test with reviewers across geographies and backgrounds.
Seek to mitigate biases: if issues arise, diversify inputs and examples, add limitations, adjust methods, and document what changed and why.
Disclose and advocate: where problems are systemic, advocate for improvements in tools, policies, and workflows, because tool choice alone cannot eliminate bias.
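One of the quick tests suggested above can be sketched in code: before trusting an evaluation, check how well each language or region is actually represented in the test set. The data, group labels, and the 10% threshold below are hypothetical placeholders for illustration, not values prescribed by the framework.

```python
# Minimal representation check for an evaluation set, assuming each example
# carries a "group" tag (e.g. language or region). Groups whose share falls
# below a chosen threshold are flagged for closer review.
from collections import Counter

def representation_report(examples, min_share=0.10):
    """Return each group's share of the examples and a sorted list of
    groups whose share falls below min_share."""
    counts = Counter(ex["group"] for ex in examples)
    total = sum(counts.values())
    shares = {g: n / total for g, n in counts.items()}
    flagged = sorted(g for g, s in shares.items() if s < min_share)
    return shares, flagged

# Hypothetical evaluation set: language tags on 100 test prompts.
examples = (
    [{"group": "en"}] * 80
    + [{"group": "es"}] * 12
    + [{"group": "sw"}] * 5
    + [{"group": "bn"}] * 3
)
shares, flagged = representation_report(examples)
print(flagged)  # groups below a 10% share of the evaluation set
```

A flagged group does not prove the output is biased, only that the evaluation cannot speak for that group; the remedy is to diversify the inputs and re-test, as described above.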
| 📑 Copy and paste prompt: equitable use |
|---|
| You are helping ensure this output is equitable and generalizable. Context: [briefly describe the task + intended users/setting]. List which languages, regions, and populations this output may work less well for, and why. Identify Western-/English-centric assumptions in the output (terms, examples, norms, datasets, cultural framing). Provide three concrete ways to adapt the output for underrepresented settings (language, infrastructure, resource constraints). Suggest two quick tests I can run to check for bias or exclusion (using alternative examples or edge cases). Finally, write a short limitations note I can include in my methods/disclosure. [NOTE: The AI tool may share some of the biases you are checking for. Use its output as a starting point for your own critical assessment and consultation with colleagues from diverse backgrounds, not as a definitive equity audit.] |
| ⚠️ Unsafe prompt example: |
|---|
| Confirm this output is unbiased, equitable, and GDPR/ethics compliant. Provide a statement that guarantees it works equally well for all populations. Why this is unsafe: AI can’t validate or guarantee equity across populations; this invites overclaiming and false assurance. |