Work with the right tools: human oversight in AI-assisted research

This principle is part of the BE WISE framework of human oversight in AI-assisted research. Together, the six principles define how researchers retain responsibility when using AI tools, by using active judgement, documentation, transparency, and ethical care.

Here, we focus on what it means to work with the right tools in practice.

Why it's needed:

AI tools vary widely in quality, safety, and reliability. Some generate biased or fabricated outputs; others mishandle sensitive data. Responsible use begins with responsible selection: knowing the risks a tool carries and how to mitigate them. Treat AI like any other research instrument: assess it before use, understand its limitations, limit what you share, and verify what it produces.

Work with the right tools: best practice

Before you begin working with an AI tool, ask the following essential questions about it. If the answers aren’t clear or the information isn’t available, treat the tool as high-risk and limit its use to low-impact tasks, or select an alternative tool.

1. What is this tool trained to do?

Look for a clear description of its purpose, function, and limitations on the provider’s website or documentation.

Avoid tools that make vague or overly broad claims (e.g. ‘improves any text’ or ‘detects all fraud’), and prioritize tools with a clearly stated purpose, function, and limitations.

2. Where does its training data come from?

Responsible providers disclose whether data were sourced from licensed, public, or proprietary collections. If the source is undisclosed, the tool may pose risks of bias, copyright infringement, or misinformation.

3. How does it protect my data?

Check privacy policies for statements that user inputs are not stored, reused, or used to retrain the model, and avoid using unpublished or sensitive data if these safeguards are unclear.
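If you must share text with a tool whose safeguards are unclear, sanitizing it first reduces exposure. Below is a minimal sketch of pattern-based redaction using Python’s standard `re` module; the patterns and placeholder labels are illustrative, not an exhaustive or reliable anonymization method:

```python
import re

# Illustrative patterns only; real sanitization needs review for your data types.
# ORCID comes before PHONE so hyphenated IDs aren't caught by the broader phone pattern.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ORCID": re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{3}[\dX]\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def sanitize(text: str) -> str:
    """Replace likely identifiers with placeholder tags before sharing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Even after sanitization, the safest default remains not to share unpublished or sensitive material at all.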

4. Can I trace and verify its outputs?

Safe tools allow you to see or reproduce how a result was generated by providing citations, links, and version numbers, so you can check that the claims and references are real and accurate. If outputs can’t be verified or traced, the tool should be considered unreliable.
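Part of this verification can be assisted with simple tooling: before checking references by hand, you can extract candidate identifiers from the output and flag answers that contain none. A hedged sketch that pulls DOI-like strings from a model’s answer so each can then be resolved and checked manually (the pattern follows the common `10.prefix/suffix` DOI shape; it does not prove a DOI is real):

```python
import re

# DOIs start with "10.", a 4-9 digit registrant prefix, "/", and a suffix.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")

def extract_dois(text: str) -> list[str]:
    """Return DOI-like strings found in AI output, stripped of trailing punctuation."""
    return [d.rstrip(".,;)") for d in DOI_RE.findall(text)]
```

Each extracted identifier still needs to be resolved and read; a well-formed DOI can still point to the wrong paper, or to none at all.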

5. Who is accountable if something goes wrong?

Reputable AI tools provide contact details, audit statements, or named developers responsible for oversight. If information remains unavailable, or concerns are raised, limit or discontinue use until governance measures are verified.

6. What can I share with this AI tool?

Before sharing data or information with any AI tool, consult the table below.

Would it be a problem if this content became public? → Action

Yes → Don’t upload the data; use local or on-premises tools only, or no AI.

Maybe / unpublished → Sanitize the data and use enterprise or private AI tools only.

No / already public → Use of general AI tools is acceptable.
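The decision table above can be expressed as a small routing helper, which is useful if your group wants a consistent, checkable rule. This is a sketch: the three sensitivity levels mirror the table’s rows, but the level names and action strings are illustrative:

```python
from enum import Enum

class Sensitivity(Enum):
    """Would it be a problem if this content became public?"""
    CONFIDENTIAL = "yes"
    UNPUBLISHED = "maybe / unpublished"
    PUBLIC = "no / already public"

# Map each sensitivity level to the action from the table.
ACTIONS = {
    Sensitivity.CONFIDENTIAL: "Don't upload; use local/on-premises tools or no AI",
    Sensitivity.UNPUBLISHED: "Sanitize data; use enterprise or private AI tools only",
    Sensitivity.PUBLIC: "General AI tools are acceptable",
}

def recommended_action(level: Sensitivity) -> str:
    """Return the recommended handling for a given sensitivity level."""
    return ACTIONS[level]
```

When in doubt about which row applies, treat the content as the more sensitive case.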

Assume that anything you type into a general-purpose, cloud-based AI tool (especially a free or consumer version) could be stored, reviewed, or used to improve the system unless the provider’s current data policy clearly says otherwise, and you have checked that policy yourself.

📑 Copy and paste prompt: identifying limits

Before we start, I need a quick governance check.

1) What are you designed/trained to do for this task, and what are your key limitations?

2) What model/tool/version am I using right now (exact name + version/date if available)?

3) How is my data handled: is this chat stored, retained, shared, or used for training? Provide the exact policy page link(s).

4) Can you provide citations/links for factual claims, and can I reproduce your steps (e.g., settings, parameters)?

5) Who is accountable for this tool (provider contact / support / audit info)? Provide links.

If you cannot verify any answer, write ‘Unknown’ and tell me what official link I should check.
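If you run this governance check against many tools, it helps to keep the questions in one place and assemble the prompt programmatically rather than pasting it by hand each time. A sketch (the question list simply restates the prompt above; nothing here is specific to any tool or API):

```python
# The five governance-check questions, kept as a single reusable list.
GOVERNANCE_QUESTIONS = [
    "What are you designed/trained to do for this task, and what are your key limitations?",
    "What model/tool/version am I using right now (exact name + version/date if available)?",
    "How is my data handled: is this chat stored, retained, shared, or used for training? "
    "Provide the exact policy page link(s).",
    "Can you provide citations/links for factual claims, and can I reproduce your steps "
    "(e.g., settings, parameters)?",
    "Who is accountable for this tool (provider contact / support / audit info)? Provide links.",
]

def governance_prompt() -> str:
    """Assemble the numbered governance-check prompt as one string."""
    lines = ["Before we start, I need a quick governance check."]
    lines += [f"{i}) {q}" for i, q in enumerate(GOVERNANCE_QUESTIONS, 1)]
    lines.append("If you cannot verify any answer, write 'Unknown' and tell me "
                 "what official link I should check.")
    return "\n".join(lines)
```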

Important: Treat the tool's self-assessment as a starting point, not a verification. You must confirm data handling and accountability details against the provider's official documentation before relying on AI-reported answers.

Explore the BE WISE framework