Artificial intelligence

At Frontiers we were born digital, built from the outset as a technology-enabled research publishing platform. Today, we steward one of the largest structured open access research datasets in the world. This wealth of high-quality data powers our AI-driven integrity system and the intelligent tools we use to strengthen research quality at scale.

About Frontiers’ AI research tools

At Frontiers, we take a unique three-pronged approach to peer review and research integrity by combining:

  • the expertise of our editorial boards

  • the rigor of our in-house review and integrity teams

  • advanced AI and technology

AIRA is Frontiers’ proprietary suite of artificial intelligence research tools, created to support peer review and the publishing process with innovative, trusted technology.

The initials AIRA originally came from the name Artificial Intelligence Review Assistant. Now, AIRA is more than a single tool: it’s a family of trusted AI assistants. From our original AI review to specialized integrity and workflow agents, every tool follows the same principles of transparency, ethics, and human oversight.

AIRA is built on Frontiers’ responsible AI governance framework, which commits us to transparency, human accountability, ethical guardrails, equitable access, and community-driven improvement. These principles ensure that AIRA strengthens scientific rigor and protects the integrity of research.

Learn more about our AI-powered assistant

How AIRA was created

Launched in 2018, AIRA’s development began with a simple question: how can artificial intelligence enhance academic publishing while protecting research integrity? As the volume of scientific research grew, and sophisticated, industry-wide fraud rose with it, publishers needed tools that were fast, accurate, and transparent. To answer this, Frontiers brought together experts in editorial workflows, data science, product design, and research ethics. The result was an industry-first assistant that is:

  • developed and maintained entirely in-house

  • trained and tuned around Frontiers’ publishing processes

  • continuously improved based on usage data, quality checks, and user feedback

From early prototypes to its current form, AIRA has been shaped by close collaboration between technology experts and publishing professionals.

How our AI tools support efficiency and integrity

AIRA creates efficiency by performing fast, consistent, and transparent quality checks that would be impossible at scale through human effort alone. It performs more than 40 quality checks in just a few minutes – from assessing language accuracy to detecting image manipulation. We use AIRA to provide decision support, but the final decision is always made by a human expert.
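The decision-support pattern described above — automated checks flag potential issues, while a human expert makes every final call — can be sketched as follows. This is an illustrative outline only, not AIRA's implementation; the check names and logic here are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    """Outcome of one automated quality check on a manuscript."""
    name: str
    passed: bool

def run_checks(manuscript: str, checks) -> list[CheckResult]:
    """Run every automated check and collect results for human review."""
    return [check(manuscript) for check in checks]

# Two toy checks standing in for the real suite (hypothetical logic).
def length_check(ms: str) -> CheckResult:
    return CheckResult("minimum_length", passed=len(ms.split()) >= 5)

def placeholder_text_check(ms: str) -> CheckResult:
    return CheckResult("no_placeholder_text", passed="lorem ipsum" not in ms.lower())

results = run_checks(
    "A concise study of reviewer workload in open access journals.",
    [length_check, placeholder_text_check],
)
# The system only surfaces flags; acceptance remains a human editor's decision.
flagged = [r.name for r in results if not r.passed]
print(flagged)
```

The key design point is that the pipeline returns evidence, not decisions: flagged items are routed to an editor rather than triggering any automatic rejection.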

And because AIRA is developed in-house, Frontiers can align its design and behavior with our rigorous quality policies and principles.

Pietro Ghezzi

Specialty Chief Editor, Frontiers in Immunology

“AIRA is a very useful tool, and I value its capacity to streamline the process and monitor editorial standards. Its ability to automatically identify plagiarized text and manipulated images, extremely difficult to do manually, is a great help for editors.”

AI supporting reviewers and editors

AIRA supports expert Editors, Reviewers and our in-house editorial teams with intelligent recommendations and guidance, allowing them to focus on what matters most: scientific quality.

AIRA Review Guide is an integrated AI assistant available to reviewers within our review forum. With suggested prompts to guide them, the chat interface offers reviewers a way to ask AIRA any question about the manuscript, helping them to summarize, extract claims, find gaps, and more.

It helps alleviate reviewer fatigue by offering guidance and recommendations to make the workflow more manageable while keeping human expertise at the center.

The tool is optional and never replaces human judgment: Editors and Reviewers remain responsible and publicly accountable for the accuracy, fairness, and scientific rigor of the process.

Developing the tool fully in-house means we can ensure that the intellectual property and privacy rights of our authors remain fully protected, a guarantee that external LLM-based tools cannot offer.

Continual learning and evolution

AIRA is continually refined through user feedback and close collaboration between technologists, editors, and research integrity specialists, ensuring it remains aligned with real publishing needs. We train AIRA on curated, policy-aligned datasets and evaluate updates regularly for precision, recall, and consistency. Bias, privacy protections, and reproducibility are monitored through ongoing audits.
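Evaluating an update for precision and recall, as mentioned above, amounts to comparing the tool's flags against a labelled ground-truth sample. A minimal sketch of that computation, with invented toy labels (not Frontiers data):

```python
def precision_recall(flags: list[bool], truth: list[bool]) -> tuple[float, float]:
    """Precision and recall of an integrity check against labelled ground truth."""
    tp = sum(f and t for f, t in zip(flags, truth))        # correctly flagged
    fp = sum(f and not t for f, t in zip(flags, truth))    # false alarms
    fn = sum(t and not f for f, t in zip(flags, truth))    # missed issues
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy labelled sample: did the check flag each case, vs. was there really an issue?
flags = [True, True, False, True, False]
truth = [True, False, False, True, True]
p, r = precision_recall(flags, truth)
print(round(p, 2), round(r, 2))  # → 0.67 0.67
```

Tracking both metrics matters here: high recall catches more fraud, while high precision avoids burdening editors with false alarms.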

As Frontiers expands, AIRA’s underlying models, safeguards, and interfaces are adapted to maintain quality at scale. Its capabilities also flex and develop to meet the escalating needs of combating research fraud – a growing challenge across the whole publishing industry.

This ongoing development cycle allows Frontiers to respond quickly to feedback, address risks early, and build AI features closely aligned with the future of open science publishing.

Our framework for responsible AI governance

All of our work is guided by our framework for responsible AI governance. The framework is shaped by our hands-on experience developing and deploying AI tools and aligned with leading international frameworks, including the OECD, UNESCO, NIST, ISO, and EU AI standards. It is formed of six pillars:

  • Transparency and accountability

  • AI literacy and capacity building in research

  • Ethics and integrity guardrails

  • Equity and access governance

  • Community engagement and researcher feedback

  • Advocacy and policy leadership

By translating these broad global standards into a research and publishing context, the pillars make them more practical and usable for stakeholders across the research ecosystem. They also acknowledge the vital role of all stakeholders, including editors, reviewers, and publishers, whose standards and practices ensure that responsible AI use is embedded throughout research publishing.

Read the full framework in our AI whitepaper
