TECHNOLOGY AND CODE article
Front. Public Health
Sec. Digital Public Health
This article is part of the Research Topic: Advancing Healthcare AI: Evaluating Accuracy and Future Directions.
Guardians of the Data: NER and LLMs for Effective Medical Record Anonymization in Brazilian Portuguese
Provisionally accepted
- 1 Venturus Centro de Inovacao Tecnologica, Campinas, Brazil
- 2 Universidade Estadual de Campinas, Faculdade de Ciencias Medicas, Campinas, Brazil
- 3 Universidade Estadual de Campinas, Instituto de Computacao, Campinas, Brazil
The anonymization of medical records is essential to protect patient privacy while enabling the use of clinical data for research and Natural Language Processing (NLP) applications. However, for Brazilian Portuguese, the lack of publicly available, high-quality anonymized datasets limits progress in this area. In this study, we present AnonyMed-BR, a novel dataset of Brazilian medical records that includes both real and synthetic samples, manually annotated to identify personally identifiable information (PII) such as names, dates, locations, and healthcare identifiers. To benchmark our dataset and assess anonymization performance, we evaluate two anonymization strategies: (i) an extractive strategy based on Named Entity Recognition (NER) using BERT-based models, and (ii) a generative strategy using T5-based and GPT-based models to rewrite texts while masking sensitive entities. We conduct a comprehensive series of experiments to evaluate and compare these strategies. Specifically, we assess the impact of incorporating synthetically generated records on model performance by contrasting models fine-tuned solely on real data with those fine-tuned on synthetic samples. We also investigate whether pretraining on biomedical corpora or task-specific fine-tuning more effectively improves performance on the anonymization task. Finally, to support robust evaluation, we introduce an LLM-as-a-Judge framework that leverages a reasoning Large Language Model (LLM) to score anonymization quality, estimate information loss, and assess reidentification risk. Model performance was primarily evaluated using the F1 score on a held-out test set. All evaluated models performed well on the anonymization task, with the best models reaching F1 scores above 0.90. Both extractive and generative approaches were effective in identifying and masking sensitive entities while preserving the clinical meaning of the texts.
Experiments also revealed that including synthetic data improved model generalization, and that task-specific fine-tuning yielded greater performance gains than pretraining on biomedical-domain corpora. To the best of our knowledge, AnonyMed-BR is the first manually annotated anonymization dataset for Brazilian Portuguese medical texts, enabling systematic evaluation of both extractive and generative models.
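To illustrate the extractive strategy described above, the sketch below shows the masking step that follows NER: replacing detected PII spans with placeholder tags. The example text, entity labels, and character offsets are illustrative assumptions, not data from AnonyMed-BR; in the actual pipeline, spans would come from a BERT-based NER model rather than being hard-coded.

```python
def mask_entities(text, spans):
    """Replace each detected PII span with a placeholder tag.

    `spans` is a list of (start, end, label) character offsets, such as
    those a token-classification (NER) model might predict upstream
    (hypothetical here). Spans are applied right-to-left so that earlier
    offsets remain valid as the text shrinks or grows.
    """
    for start, end, label in sorted(spans, key=lambda s: s[0], reverse=True):
        text = text[:start] + f"[{label}]" + text[end:]
    return text


# Illustrative Brazilian Portuguese record with hand-made offsets:
record = "Paciente Maria Silva, internada em 12/03/2023 em Campinas."
spans = [(9, 20, "NAME"), (35, 45, "DATE"), (49, 57, "LOC")]
print(mask_entities(record, spans))
# → Paciente [NAME], internada em [DATE] em [LOC].
```

Applying spans in reverse offset order is the standard trick that avoids recomputing positions after each substitution; the generative strategy would instead produce the masked text directly as model output.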
Keywords: Generative models, Large language models, Medical data anonymization, Named entity recognition, Seq2seq, Transformer models
Received: 01 Oct 2025; Accepted: 28 Nov 2025.
Copyright: © 2025 Schiezaro, Rosa, Campos and Pedrini. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Guilherme Rosa
Helio Pedrini
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
