AUTHOR=Hao Gao, Hijazi Haytham, Durães João, Medeiros Júlio, Couceiro Ricardo, Lam Chan Tong, Teixeira César, Castelhano João, Castelo Branco Miguel, Carvalho Paulo, Madeira Henrique
TITLE=On the accuracy of code complexity metrics: A neuroscience-based guideline for improvement
JOURNAL=Frontiers in Neuroscience
VOLUME=16
YEAR=2023
URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2022.1065366
DOI=10.3389/fnins.2022.1065366
ISSN=1662-453X
ABSTRACT=This paper investigates the problem of measuring code complexity and discusses the results of a controlled experiment comparing different methods to measure code complexity. Participants (27 programmers) were asked to try to understand a set of programs, while the complexity of those programs was assessed through different methods: a) classic code complexity metrics such as McCabe and Halstead metrics, b) cognitive complexity metrics based on scored code constructs, c) cognitive complexity metrics from tools such as SonarQube, d) direct assessment of programmers’ behavioral features (e.g., reading time, revisits) using eye tracking, and e) cognitive load/mental effort assessed using electroencephalography (EEG). The programmers’ cognitive load measured using EEG was used as a reference to evaluate how well the different metrics express the (human) difficulty of comprehending the code. Extensive experimental results show that popular metrics such as V(g) and the complexity metric from SonarSource tools deviate considerably from the programmers’ perception of code complexity and often do not show the expected monotonic behavior. The paper summarizes the findings in a set of guidelines for improving existing code complexity metrics, particularly state-of-the-art metrics such as the cognitive complexity metric from SonarSource tools.
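
The abstract uses McCabe's cyclomatic complexity V(g) as its main example of a classic syntactic metric. As a point of reference only (this sketch is not taken from the paper), the snippet below approximates V(g) for Python source by counting branching constructs, using only the standard-library ast module; the set of decision-node types is an illustrative simplification of McCabe's full control-flow-graph definition (V(g) = E − N + 2P).

```python
import ast

# Branching constructs that each add one decision point (simplified).
DECISION_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's V(g) as: number of decision points + 1."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, DECISION_NODES):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds len(values) - 1 short-circuit branches.
            decisions += len(node.values) - 1
    return decisions + 1

EXAMPLE = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""

print("V(g) =", cyclomatic_complexity(EXAMPLE))  # two ifs -> V(g) = 3
```

Counts like this are purely syntactic, which is exactly the limitation the paper probes: two programs with the same V(g) can impose very different cognitive loads as measured by EEG, and the metric need not grow monotonically with the (human) difficulty of comprehension.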