AUTHOR=Álvarez-Martínez Francisco Javier, Esteban Luis, Frungillo Lucas, Butassi Estefanía, Zambon Alessandro, Herranz-López María, Aranda Mario, Pollastro Federica, Tixier Anne Sylvie, Garcia-Perez Jose V., Arráez-Román David, Ross Andrew, Mena Pedro, Edrada-Ebel Ru Angelie, Lyng James, Micol Vicente, Borrás-Rocher Fernando, Barrajón-Catalán Enrique
TITLE=There are significant differences among artificial intelligence large language models when answering scientific questions
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=8
YEAR=2025
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1664303
DOI=10.3389/frai.2025.1664303
ISSN=2624-8212
ABSTRACT=Introduction: This study investigates the efficacy of large language models (LLMs) for generating accurate scientific responses through a comparative evaluation of five prominent free models: Claude 3.5 Sonnet, Gemini, ChatGPT 4o, Mistral Large 2, and Llama 3.1 70B. Methods: Sixteen expert scientific reviewers assessed these models in terms of depth, accuracy, relevance, and clarity. Results: Claude 3.5 Sonnet emerged as the highest-scoring model, followed by Gemini, with notable variability among the remaining models. Additionally, retrieval-augmented generation (RAG) techniques were applied to improve LLM performance, and prompts were refined to elicit better answers. The results indicate that although LLMs such as Claude 3.5 Sonnet have potential for scientific tasks, other models may require further development or additional prompt engineering to reach comparable accuracy. Reviewers' perceptions of artificial intelligence (AI) utility and trustworthiness shifted positively after the evaluation, whereas ethical concerns, particularly regarding transparency and disclosure, remained unchanged. Discussion: The study highlights the need for structured frameworks for evaluating LLMs and for the ethical considerations essential to responsible AI integration in scientific research. These findings should be interpreted with caution, as the limited sample size and domain-specific focus of the exam questions restrict the generalizability of the results.
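
As a rough illustration of the evaluation design described in the abstract (sixteen reviewers rating each model on depth, accuracy, relevance, and clarity), the sketch below shows one way such ratings could be aggregated into a per-model mean score. The rating scale, data layout, and scores are hypothetical assumptions for illustration only, not the study's actual data or analysis code.

# Minimal sketch: aggregate hypothetical reviewer ratings per model.
# Criteria follow the abstract (depth, accuracy, relevance, clarity);
# the 1-10 scale and the scores below are illustrative assumptions only.
from statistics import mean

CRITERIA = ("depth", "accuracy", "relevance", "clarity")

# Each entry: one reviewer's ratings for one model (hypothetical values).
ratings = [
    {"model": "Claude 3.5 Sonnet", "depth": 9, "accuracy": 9, "relevance": 8, "clarity": 9},
    {"model": "Claude 3.5 Sonnet", "depth": 8, "accuracy": 9, "relevance": 9, "clarity": 8},
    {"model": "Gemini", "depth": 8, "accuracy": 8, "relevance": 8, "clarity": 8},
    {"model": "Gemini", "depth": 7, "accuracy": 8, "relevance": 8, "clarity": 7},
]

def overall_scores(rows):
    """Return {model: mean rating across all criteria and reviewers}."""
    per_model = {}
    for row in rows:
        per_model.setdefault(row["model"], []).extend(row[c] for c in CRITERIA)
    return {model: mean(values) for model, values in per_model.items()}

if __name__ == "__main__":
    # Print models ranked from highest to lowest mean rating.
    for model, score in sorted(overall_scores(ratings).items(), key=lambda kv: -kv[1]):
        print(f"{model}: {score:.2f}")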