AUTHOR=Turner Matthew D., Appaji Abhishek, Ar Rakib Nibras, Golnari Pedram, Rajasekar Arcot K., K V Anitha Rathnam, Sahoo Satya S., Wang Yue, Wang Lei, Turner Jessica A.
TITLE=Large language models can extract metadata for annotation of human neuroimaging publications
JOURNAL=Frontiers in Neuroinformatics
VOLUME=19
YEAR=2025
URL=https://www.frontiersin.org/journals/neuroinformatics/articles/10.3389/fninf.2025.1609077
DOI=10.3389/fninf.2025.1609077
ISSN=1662-5196
ABSTRACT=We show that recent (mid-to-late 2024) commercial large language models (LLMs) are capable of good-quality metadata extraction and annotation, with very little work on the part of investigators, for several exemplar real-world annotation tasks in the neuroimaging literature. We investigated OpenAI's GPT-4o LLM, which performed comparably with several groups of specially trained and supervised human annotators. The LLM achieved performance similar to the humans, between 0.91 and 0.97, using zero-shot prompts with no feedback to the LLM. Reviewing the disagreements between the LLM and the gold-standard human annotations, we note that actual LLM errors are comparable to human errors in most cases, and in many cases these disagreements are not errors at all. Based on the specific types of annotations we tested, with thoroughly reviewed gold-standard correct values, the LLM's performance is usable for metadata annotation at scale. We encourage other research groups to develop and make available more specialized “micro-benchmarks,” like the ones we provide here, for testing the annotation performance of both LLMs and more complex agent systems on real-world metadata annotation tasks.
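
The workflow the abstract describes (zero-shot prompting of GPT-4o for one metadata value at a time, then scoring the outputs against gold-standard human annotations) can be sketched as below. This is a hedged illustration only: the prompt wording, the example metadata field (MRI scanner field strength), and the exact-match agreement score are assumptions for demonstration, not the authors' actual prompts, annotation schema, or evaluation code.

```python
# Minimal sketch of zero-shot metadata annotation with the OpenAI Python SDK,
# plus a simple agreement score against gold-standard labels. The field being
# extracted and the prompt text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def annotate(paper_text: str) -> str:
    """Ask the model, zero-shot, for one metadata value from a paper's text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": (
                    "From the neuroimaging paper text below, report the MRI "
                    "scanner field strength in tesla (e.g. '3T'), or 'unknown' "
                    "if it is not stated. Reply with the value only.\n\n"
                    + paper_text
                ),
            }
        ],
        temperature=0,  # deterministic-leaning output suits annotation tasks
    )
    return response.choices[0].message.content.strip()

def agreement(llm_labels: list[str], gold_labels: list[str]) -> float:
    """Exact-match agreement between LLM outputs and gold-standard values."""
    matches = sum(a == b for a, b in zip(llm_labels, gold_labels))
    return matches / len(gold_labels)
```

In this sketch, one prompt extracts one metadata field with no examples or feedback, mirroring the zero-shot setting the abstract reports; a per-paper loop over `annotate` followed by `agreement` would yield a single score comparable in spirit, though not necessarily in metric, to the 0.91 to 0.97 range quoted above.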