
ORIGINAL RESEARCH article

Front. Big Data

Sec. Machine Learning and Artificial Intelligence

Volume 8 - 2025 | doi: 10.3389/fdata.2025.1682984

Enhancing Bangla Handwritten Character Recognition Using Vision Transformers, VGG-16, and ResNet-50: A Performance Analysis

Provisionally accepted
A H M Shahariar Parvez1, Md Samiul Islam2, Fahmid Al Farid3, Tashida Yeasmin4, Md. Monirul Islam5, Md. Shafiul Azam5, Hezerul Abdul Karim3, Jia Uddin6*
  • 1Daffodil International University, Dhaka, Bangladesh
  • 2State University of Bangladesh, Dhaka, Bangladesh
  • 3Multimedia University, Malacca, Malaysia
  • 4Atish Dipankar University of Science and Technology, Dhaka, Bangladesh
  • 5Pabna University of Science and Technology, Pabna, Bangladesh
  • 6Woosong University, Daejeon, Republic of Korea

The final, formatted version of the article will be published soon.

Bangla Handwritten Character Recognition (BHCR) remains challenging due to the complexity of the Bangla alphabet and the wide variation in individual handwriting. In this study, we present a comparative evaluation of three deep learning architectures—Vision Transformer (ViT), VGG-16, and ResNet-50—on the CMATERdb 3.1.2 dataset, which comprises 24,000 images of 50 basic Bangla characters. Our work highlights the effectiveness of ViT in capturing global context and long-range dependencies, leading to improved generalization. Experimental results show that ViT achieves a state-of-the-art accuracy of 98.26%, outperforming VGG-16 (94.54%) and ResNet-50 (93.12%). We also analyze model behavior, discuss overfitting in the CNN-based models, and provide insights into character-level misclassifications. This study demonstrates the potential of transformer-based architectures for robust BHCR and offers a benchmark for future research.

Keywords: Deep learning, Bangla handwritten character recognition, Optical character recognition, Convolutional neural network, Vision Transformer (ViT), VGG-16, ResNet-50

Received: 10 Aug 2025; Accepted: 23 Oct 2025.

Copyright: © 2025 Shahariar Parvez, Samiul Islam, Al Farid, Yeasmin, Islam, Azam, Abdul Karim and Uddin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Jia Uddin, jia.uddin@wsu.ac.kr

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.