
MINI REVIEW article

Front. Artif. Intell.

Sec. Natural Language Processing

Positive Sentiments in Early Academic Literature on DeepSeek: A Cross-Disciplinary Mini Review

Provisionally accepted
Yuxing He1, Angie Giangan2, Nam Vu3*, Casey Watters1
  • 1Bond University, City of Gold Coast, Australia
  • 2No Affiliation, Lausanne, Switzerland
  • 3Cranfield University, Cranfield, United Kingdom

The final, formatted version of the article will be published soon.

DeepSeek is a free, self-hostable large language model (LLM) that recently became the most downloaded app in 156 countries. Because early academic literature on ChatGPT was predominantly critical of that model, this mini review examines how DeepSeek is being evaluated across academic disciplines. The review analyzes available articles with DeepSeek in the title, abstract, or keywords, using the VADER sentiment analysis library. Due to the limitations of comparing sentiment across languages, we excluded Chinese-language literature from our selection. We found that Computer Science, Engineering, and Medicine are the most prominent fields studying DeepSeek and that the overall sentiment is positive; notably, Computer Science had the highest mean sentiment and the most positive articles. Other fields of interest included Mathematics, Business, and Environmental Science. While there is substantial academic interest in DeepSeek's practicality and performance, discussion of its political and ethical implications remains limited in the academic literature. In contrast to early ChatGPT literature, which uniformly carried a negative sentiment, the DeepSeek literature is mainly positive. This study enhances our understanding of DeepSeek's reception in the scientific community and suggests that further research could explore regional perspectives.

Keywords: artificial intelligence, censorship, Chinese AI, deep learning, DeepSeek, large language models (LLM), natural language processing (NLP), neural networks

Received: 17 Oct 2025; Accepted: 10 Dec 2025.

Copyright: © 2025 He, Giangan, Vu and Watters. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Nam Vu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.