AUTHOR=Paul Tanmoy, Rana Md Kamruz Zaman, Tautam Preethi Aishwarya, Kotapati Teja Venkat Pavan, Jampani Yaswitha, Singh Nitesh, Islam Humayera, Mandhadi Vasanthi, Sharma Vishakha, Barnes Michael, Hammer Richard D., Mosa Abu Saleh Mohammad
TITLE=Investigation of the Utility of Features in a Clinical De-identification Model: A Demonstration Using EHR Pathology Reports for Advanced NSCLC Patients
JOURNAL=Frontiers in Digital Health
VOLUME=4
YEAR=2022
URL=https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2022.728922
DOI=10.3389/fdgth.2022.728922
ISSN=2673-253X
ABSTRACT=Background: Electronic health record (EHR) systems contain a large volume of text, including visit notes, discharge summaries, and various reports. To protect patient confidentiality, these records often need to be fully de-identified before circulating for secondary use. Machine learning (ML)-based named entity recognition (NER) models have emerged as a popular technique for automatic de-identification. Objective: The performance of a machine learning model depends heavily on the selection of appropriate features. The objective of this study was to investigate the utility of multiple features in building a conditional random field (CRF)-based clinical de-identification NER model. Methods: Using open-source natural language processing (NLP) toolkits, we annotated protected health information (PHI) in 1,500 pathology reports and built supervised NER models using multiple features and their combinations. We further investigated the dependency of model performance on the size of the training data. Results: Among the 10 feature extractors explored in this study, n-gram, prefix-suffix, word embedding, and word shape performed the best.
A model using a combination of these four feature sets yielded the following precision, recall, and F1-score for each PHI category: NAME (0.80, 0.79, 0.80), LOCATION (0.85, 0.83, 0.84), DATE (0.86, 0.79, 0.82), HOSPITAL (0.96, 0.93, 0.95), ID (0.99, 0.82, 0.90), INITIALS (0.97, 0.49, 0.65). We also found that the model's performance saturates once the training data size grows beyond 200.
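The best-performing feature sets named in the abstract (prefix-suffix, word shape, and context/n-gram features) are the kind of per-token feature functions typically fed to a CRF tagger. A minimal sketch of such an extractor is below; the function and feature names are hypothetical illustrations, not the authors' actual extractors or toolkit.

```python
def word_shape(token: str) -> str:
    """Map a token to a coarse shape: uppercase -> X, lowercase -> x, digit -> d."""
    out = []
    for ch in token:
        if ch.isupper():
            out.append("X")
        elif ch.islower():
            out.append("x")
        elif ch.isdigit():
            out.append("d")
        else:
            out.append(ch)  # keep punctuation as-is
    return "".join(out)


def token_features(tokens: list[str], i: int) -> dict:
    """Feature dict for token i: prefix/suffix, shape, and neighboring words."""
    tok = tokens[i]
    feats = {
        "word.lower": tok.lower(),
        "prefix3": tok[:3],
        "suffix3": tok[-3:],
        "shape": word_shape(tok),
    }
    # Context features approximate the n-gram feature set.
    if i > 0:
        feats["prev.lower"] = tokens[i - 1].lower()
    else:
        feats["BOS"] = True  # beginning of sequence
    if i < len(tokens) - 1:
        feats["next.lower"] = tokens[i + 1].lower()
    else:
        feats["EOS"] = True  # end of sequence
    return feats
```

Feature dicts in this form can be passed, one list per sentence, to a CRF implementation such as sklearn-crfsuite for supervised NER training.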