AUTHOR=Ivanov Vladimir Vladimirovich TITLE=Sentence-level complexity in Russian: An evaluation of BERT and graph neural networks JOURNAL=Frontiers in Artificial Intelligence VOLUME=5 YEAR=2022 URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2022.1008411 DOI=10.3389/frai.2022.1008411 ISSN=2624-8212 ABSTRACT=Sentence-level complexity evaluation (SCE) can be formulated as assigning a given sentence a complexity score, either as a category or a single value. The SCE task can be treated as an intermediate step for text complexity prediction, text simplification, lexical complexity prediction, etc. Moreover, robust prediction of the complexity of a single sentence requires much shorter text fragments than those typically needed to robustly evaluate text complexity. Recent research on absolute and relative sentence-level complexity models has typically focused on feature analysis. Morphosyntactic and lexical features have proved to play a vital role as predictors in state-of-the-art deep neural models for sentence categorization. However, a common issue with deep neural networks is the interpretability of their results. This paper tests and compares several approaches to predicting both absolute and relative sentence complexity in Russian. Such a comparison is carried out for the first time for the Russian language. We show that pre-trained language models outperform graph neural networks that incorporate the syntactic dependency tree of a sentence. Although the graph neural networks perform worse, their predictions are much easier to explain.