ORIGINAL RESEARCH article

Front. Comput. Sci., 03 February 2026

Sec. Digital Education

Volume 8 - 2026 | https://doi.org/10.3389/fcomp.2026.1729059

Artificial intelligence in educational assignments: issues of academic integrity

  • Department of Infocognitive Technologies, Moscow Polytechnic University, Moscow, Russia


Abstract

Background:

This article examines the challenges associated with students’ use of artificial intelligence (AI)-based software tools in the educational process. Advances in information technology enable the automatic generation of new content of various types (text, graphics, and audio) without direct human input. While offering considerable opportunities, such technologies also pose potential risks for maintaining academic integrity in the course of mastering educational programs. The aim of this study was to assess the influence of AI technologies on students’ responses when completing assignments related to theoretical knowledge acquisition.

Materials and methods:

The research was conducted between 2023 and 2025 among second-year students enrolled in the “Applied Informatics” program. Stylistic, morphological, semantic, and syntactic analysis methods were applied to identify key errors in responses to different types of tasks. An anonymous survey conducted among students after submission of their completed work enabled us to identify the principles, methods, and tools that were used to obtain the results.

Results:

The key feature of the study sample was the lack of prior work experience and limited professional background among these students, which made it possible to determine the extent to which AI-generated information influenced the final content of their responses. The study identified the most typical structural and semantic patterns characteristic of AI-assisted student answers. On this basis, a methodology was developed to support instructors in assessing the degree of AI involvement in student work.

Conclusion:

The findings may be applied to the modernization of the educational process and to the design of personalized educational trajectories.

1 Introduction

One of the main driving forces behind the development of modern society is information and knowledge (Vassilakopoulou and Hustad, 2023; Shaginov, 2020). The active advancement and use of software and hardware in industrial, economic, and domestic spheres have created an environment in which the significance of intangible assets is increasing, particularly within the information space associated with information and communication technologies (Engelbutzeder et al., 2023; Weber et al., 2024). A modern individual must not only be able to search for information to meet personal needs but also analyze large volumes of data and make constructive decisions based on it in both professional and everyday activities.

To achieve these goals effectively, a policy of digital transformation across all types of human activity is being actively pursued. For the fulfillment of professional responsibilities in any sector of the economy and national industry, specialized software products are employed. These tools make it possible to monitor the state of objects or processes, analyze large arrays of heterogeneous data, identify patterns, develop effective solutions, and forecast events in real time (Omol, 2024; Kraus et al., 2022). Digital technologies are also widely applied in everyday life, for example, to enable online communication between individuals, government authorities, and various services; to control household appliances; and to monitor processes, including biological ones (Van Der Schaft et al., 2024; Alaimo, 2022).

All of these processes require a high level of educational development, which not only ensures a high degree of digital transformation in knowledge-intensive industries, management, and other spheres, but also equips the population to consciously and effectively use digital products.

The rapid development of artificial intelligence (AI) and big data has led to the widespread availability of software products that employ these technologies to perform a variety of routine tasks. At present, individuals can use electronic resources to create or edit text, graphics, video, or audio; to transform one type of information into another; to analyze data; and so forth (Korneev, 2024). Such resources provide free access to limited AI functionality or full access with a subscription.

An analysis of contemporary research and media publications showed that approximately 50% of companies worldwide employ AI technologies in their business processes (Artificial Intelligence Conquers Business, 2025). Large Russian enterprises are gradually moving from the situational use of isolated machine learning tools to comprehensive strategies for integrating AI within the organizational structures (Valeev and Kuznetsova, 2025).

The field of education is no exception, despite its inherent conservatism. As noted by international researchers, the use of AI in education should be considered from two perspectives: from the standpoint of managing the educational process (including its content) and from the standpoint of mastering educational programs (Kamalov et al., 2023; Chiu et al., 2023).

The main problem in mastering the educational program lies in the fact that students now have the opportunity to use AI tools to perform various types of tasks, including preparing answers, writing essays or term papers, and conducting laboratory work. The issue becomes particularly relevant not only because of the accessibility of AI services but also due to their continuous improvement through ongoing training and the expansion of functional capabilities. For several years, active discussions within the professional community and media have focused on the potential use of neural networks in educational processes. In Russian education, this is primarily associated with the case of a student writing a thesis with the assistance of ChatGPT and subsequently defending it successfully (Chiu et al., 2023). Therefore, the purpose of this study was to determine the extent to which the use of AI in mastering the educational program influences the quality of students’ knowledge.

The research hypothesizes that the use of AI tools by students completing academic assignments in their first professional education may reduce the quality of their knowledge and contribute to the formation of false didactic units.

The theoretical significance of the study lies in identifying the characteristics that influence pedagogical technologies and educational models within the context of potential use of AI technologies. The modern concept of education should be based on the integration of AI technologies rather than excluding them from the learning model.

The practical significance of the study lies in the development of a methodology that enables monitoring of the mastery of an educational program while identifying the extent of a student’s use of artificial intelligence technologies. The results obtained may serve as recommendations for the development of educational programs and methodologies for diverse forms of knowledge assessment.

2 Literature review

The scientific community views academic integrity as a system of values that implies trust, quality, and fairness in the conduct of intellectual activities (Chiu et al., 2023; Zinchenko et al., 2021). The basic principles, whose formulation varies across educational institutions, may transform over time under the influence of external factors such as ongoing changes in technologies, societal values, and educational concepts.

Researchers found that the principles of academic integrity have adapted to processes associated with the development of communication tools and free access to Internet resources (Engelbutzeder et al., 2023; Bermus and Sizova, 2025; Cojocariu and Mareş, 2022; Dawson, 2020), mass commercialization of education (Mamedov and Bayramova, 2020; Saravanakumar and Padmini Devi, 2020), active use of distance learning forms (Parapi et al., 2020; Evseev and Yablonskaya, 2025; Akimova and Shataeva, 2020), and globalization of education (Sabitova and Bairkenova, 2019; Bamberger and Morris, 2024). For example, powerful plagiarism detection systems (Turnitin, Antiplagiat) were developed and widely implemented to prevent passing off others’ work as one’s own or improper borrowing of materials. Traditional assessment methods have been transformed into more objective evaluation approaches by introducing alternative forms such as portfolios with access to completed assignments, and project-based assessments. Video surveillance has become common for objectively monitoring task completion during face-to-face exams. To adapt students to university environments, institutions have introduced intensive programs focused on academic integrity, authorship, collaborative work dynamics, and institutional hierarchies.

Modern challenges for academic integrity are posed by the rapid advancement of AI technologies (Weber et al., 2024; Kamalov et al., 2023; Gavrilina, 2023). Generative models can produce coherent and original texts, solve mathematical problems, generate software code, and perform numerous other tasks, thereby blurring the line between independent work and external assistance. Some researchers argue that anti-plagiarism systems cannot accurately detect text generated by AI (Fatima et al., 2022; Bezuglyy and Ershova, 2023; Sokolov, 2022). Despite this, these systems continue to evolve and at least allow identifying suspicious text fragments. In their studies, scientists emphasize that AI influences the transformation of traditional educational assignments and necessitates the development of AI literacy (Liua et al., 2021; Zhamborov et al., 2023; Trusova, 2024). Traditional educational tasks involve working with text: searching for information, analyzing content, structuring or organizing data, writing essays or reports, preparing oral presentations, and solving typical problems. According to some researchers, all of these tasks represent a monotonous and lengthy process requiring high concentration over extended periods. Contemporary students often find this challenging due to their fragmented thinking styles, which make it difficult for them to process large amounts of data (Kurashinova and Burayeva, 2021; Kubantseva, 2022; Bakhmetieva et al., 2020). To simplify their workload, many students resort to AI capabilities to complete such assignments, leading to violations of academic integrity principles. This problem calls for a comprehensive solution.

Educational program developers actively adopt alternative forms of theoretical material control, including implementing creative or project-based assignments, brainstorming sessions, simulator-based exercises, and other tools enabling real-time decision-making (Makaryev, 2021; Polyakova, 2024; Kholopova, 2020). Researchers conclude that completely excluding AI from the educational process is impossible (Bermus and Sizova, 2025; Akimova and Shataeva, 2020; Trusova, 2024). Regardless of measures taken within the classroom setting, students will continue to have access to software based on AI technologies outside the classroom environment, utilizing them for editing texts, searching for information, and performing other tasks. As a solution to this issue, integrating AI technologies into the educational process alongside an ethical usage training program is proposed (Cojocariu and Mareş, 2022; Trusova, 2024; Makaryev, 2021). The foundation of such training should emphasize critical evaluation of AI-generated content and awareness of the limitations imposed on generative models.

3 Materials and methods

The object of this study was the monitoring of the quality of independent assignments completed by students in the course of mastering the core higher education program.

The subject of the study was the extended written responses of students to different types of questions.

The study was conducted from 2023 to 2025 and was based on students’ responses to assignments in the academic discipline “Structural Design” within the core educational program in the field of “Applied Informatics,” specialization “Corporate Information Systems.” The discipline included 72 academic hours of in-class workload (54 h of laboratory and practical classes and 18 h of lectures) and 54 academic hours of independent work for second-year students (third semester of study).

The content of the learning material, methods for assessing results, and the pedagogical technologies employed remained consistent and were implemented uniformly:

  • In 2023, with two groups totaling 43 individuals,

  • In 2024, with two groups totaling 31 individuals,

  • In 2025, with two groups totaling 46 individuals.

The students’ ages ranged from 18 to 20 years, and the selection of the discipline was determined by the period of instruction (according to the curriculum, from September to December inclusive each year) and by the degree of interdisciplinarity established between the current didactic units and previously mastered topics. Prior to the commencement of the discipline, an entrance test was conducted, which demonstrated a normal distribution of initial knowledge levels.

The didactic units and professional competencies of the studied discipline form the foundation for developing software products of varying complexity across diverse domains. Consequently, by the start of this discipline, students possess general programming concepts, techniques for database creation, and communication strategies applicable to project management. Thus, students develop generic competencies enabling them to perform data retrieval and analysis, validate research outcomes, and identify appropriate tools for problem-solving. However, they lack specialized competencies associated with business process analysis and reengineering. This implies that first-time learners, who lack real-world experience, primarily rely on existing foundational knowledge and current curricular content rather than practical expertise acquired in actual conditions.

The investigation comprised three stages, each employing distinct methodological approaches:

  • 1) Preparatory Stage: This involved designing assessment questions aligned with the covered didactic units and constructing a questionnaire to record the conditions surrounding the students’ completion of tasks. The methods employed included:

  • Uniform distribution: The content and volume of questions proportionally reflected the amount of learned material and the predetermined objectives.

  • Targeted selection: Questions were grouped according to the competencies being assessed.

  • Differentiated difficulty levels: Each set of tasks encompassed questions ranging from easy (determining the validity of a statement) through intermediate (selecting a single answer) and medium (choosing multiple answers) to advanced (formulating brief responses) and complex (elaborating detailed answers).

  • Content analysis: This focused on incorporating essential terms frequently encountered throughout the coursework, educational programs, and future professional settings.

While performing the tasks, students faced no restrictions regarding source materials or necessary tools; they were free to employ whatever they deemed suitable for addressing the challenges. Consequently, it was essential to gather insights into how students organized their workflow, and these details were collected through a survey.

  • 2) Analytical Stage: This was conducted after receiving the students’ answers and survey responses. The response evaluation involved stylistic, morphological, semantic, and syntactic analyses aimed at identifying the primary errors arising from the students’ use of AI tools. In addition, mathematical and statistical methods were applied to process and interpret the gathered data, thereby revealing trends and regularities in knowledge quality as well as the influence of various factors.

Graphical representations facilitated the visualization of statistical analysis outcomes:

  • Histograms illustrated the quantitative distribution of object characteristics.

  • Radar charts enabled the visualization of multidimensional datasets, allowing comparison of multiple quantitative variables across different subjects.

  • Stacked bar charts emphasized part-to-whole relationships and cumulative trends across categories over time.

  • 3) Developmental Stage: Employing systems analysis methodologies, an algorithm was devised to assess students’ academic integrity in the completion of assignments. Systems analysis enabled the representation of complex processes as integrated systems composed of interconnected components that interact with their external environment. The verification of completed assignments was treated as a multifaceted procedure, requiring not only the comparison of evidence but also the identification of latent patterns.

4 Results

4.1 Statistical data analysis

To test the hypothesis, each year from 2023 to 2025 third-semester students enrolled in the “Structural Design” course within the “Corporate Information Systems” educational program were given a self-assessment task for independent completion. The assignment was available on the university’s digital platform at any time during a one-week period. Restrictions were imposed on the number of attempts (only one attempt was allowed) and on the duration of the attempt (up to 40 min).

The content of the assignment for each student was randomly generated by the digital system at the start of the task, while the structure of the assignment was identical across all versions (Figure 1).

Figure 1

The base score for correct answers was proportional to the number of questions per block. All responses, except for extended answers, were automatically checked by the digital platform. An example of a self-assessment task is provided in Appendix A.

The educational program stipulated that final assignments could be graded as follows: “Excellent,” “Good,” “Satisfactory,” or “Unsatisfactory.” A course was considered incomplete if a student received an unsatisfactory grade (“Unsatisfactory”); in that case, the student could not proceed with further studies until the grade was improved. The other grades affected academic scholarship levels.

Scores obtained for completing the final assignment were converted into the following evaluations:

  • 0.0–59.9 points: Unsatisfactory;

  • 60.0–74.9 points: Satisfactory;

  • 75.0–89.9 points: Good;

  • 90.0–100.0 points: Excellent.
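For reference, the score-to-grade conversion above can be expressed as a short function. This is a minimal sketch; the function name and structure are illustrative and not part of the study’s digital platform:

```python
def grade(score: float) -> str:
    """Map a 0-100 point score to the grade scale used for final assignments."""
    if score < 60.0:
        return "Unsatisfactory"
    if score < 75.0:
        return "Satisfactory"
    if score < 90.0:
        return "Good"
    return "Excellent"
```

For instance, the best 2023 result of 93.2 points maps to “Excellent,” while the worst 2025 result of 18.1 points maps to “Unsatisfactory.”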

Based on an annual analysis of the completed work, a diagram illustrating the performance results was created (Figure 2).

Figure 2

Analysis of quantitative outcomes over time revealed:

  • A decrease in the best result: 93.2 → 76.8 → 69.0.

  • A decrease in the worst result: 44.4 → 35.7 → 18.1.

  • An increase in the density of unsatisfactory and satisfactory results (Figure 3).

Figure 3

An analysis of answer quality showed changes in the density of error classes. Table 1 presents the classification of errors identified in the submitted responses.

Table 1

1. Error in assertions (ERR-1): Applicable to true-false questions.

2. Selected incorrect answer option (ERR-2): Applicable to single-correct-choice questions.

3. Incorrect number of correct answers selected (ERR-3): Applies to multi-select questions where students choose more than one correct answer from multiple options. Accounts for cases with an insufficient or excessive number of answers provided by the student; among the chosen answers there may be both correct and incorrect ones.

4. Short answer recorded incorrectly (ERR-4): Applies to short-answer questions. The answer contains spelling or grammatical mistakes or typos while fundamentally matching the essence of the question, e.g., ‘campiler’ instead of ‘compiler’. Includes unnecessary words not fitting the form of the question, such as providing ‘waterfall model’ instead of just ‘waterfall’. All noted errors count as one instance.

5. Short answer does not match the question (ERR-5): Applies to short-answer questions. The answer does not correspond to the subject area of the question.

6. Generalized extended response (ERR-6): Applies to extended answers. The student provides a generalized statement combining various facts or examples into a unified whole but misses key aspects relevant to the specific question; for example, characterizing elements so broadly that they could apply to numerous objects only partially related to the question’s subject matter. Specific terminology is absent.

7. Extended response substantially mismatches the question (ERR-7): Applies to extended answers. The assertions and terminology presented do not fully align with the subject domain specified in the question.

8. Extended response format mismatch (ERR-8): Applies to extended answers. The answer includes statements, examples, or conclusions outside the scope of the posed question, mixing different levels of detail; for instance, instead of stating the number of graphic elements, their characteristics are described.

9. Extended response contains factual errors (ERR-9): Applies to extended answers. The answer essentially matches the subject domain of the question, but individual conclusions or examples contradict established facts. Each noted mistake counts as one instance.

10. No answer provided (ERR-10): Applies to any type of question. The student deliberately omits an answer due to lack of knowledge, accidentally skips the question due to carelessness or haste, or runs out of time before finishing the entire assignment.

Classification of errors in responses.
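The error classes above lend themselves to simple frequency tallies of the kind visualized in the yearly charts. A minimal sketch, assuming each submission has been manually coded into a list of error IDs (the function names and data layout are illustrative, not the study’s actual tooling):

```python
from collections import Counter

def error_frequencies(submissions: list[list[str]]) -> Counter:
    """Absolute frequency of each error class (ERR-1 ... ERR-10) across submissions."""
    return Counter(err for errors in submissions for err in errors)

def relative_frequencies(freq: Counter) -> dict:
    """Relative frequency (%) of each error class, rounded to one decimal."""
    total = sum(freq.values())
    return {err: round(100.0 * n / total, 1) for err, n in freq.items()}
```

Running both functions per cohort year yields the absolute and relative distributions needed for year-over-year comparison.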

Radar charts of the absolute and relative frequencies of each error class by year are shown in Figure 4.

Figure 4

Analyzing the distribution of values among error classes and evaluation results revealed shifts in the nature of errors. During the early research period, all error classes occurred. Later, errors predominantly appeared in tasks requiring the formulation of personal answers. Figure 5 illustrates the annual frequency of occurrence for each error class in student submissions.

Figure 5

Figure 6

After completing the assignment, every student was anonymously asked to fill out a questionnaire aimed at identifying factors influencing their performance. The questions focused on determining student activity patterns—tools used for preparation, difficulties encountered, attitudes toward the questions and received results, and insights gained.

The survey consisted of 27 statements answered with either “yes” (scored 1 point) or “no” (scored 0 points). Despite anonymity, some students may have exaggerated their work ethic by omitting violations of academic integrity principles. Table 2 summarizes the survey content and the absolute data derived from the analyses.
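The percentage values in the survey table follow directly from the point totals and the cohort sizes reported in Section 3 (43, 31, and 46 students). A minimal sketch of the conversion (names are illustrative):

```python
# Group totals per year, as reported in Section 3 of the study.
COHORT_SIZE = {2023: 43, 2024: 31, 2025: 46}

def yes_share(points: int, year: int) -> float:
    """Percentage of a year's cohort answering 'yes' (1 point per student)."""
    return round(100.0 * points / COHORT_SIZE[year], 2)
```

For example, 12 “yes” answers in 2023 correspond to 27.91% of that cohort.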

Figure 7

Table 2

N | Statement | 2023 (points / %) | 2024 (points / %) | 2025 (points / %)
1 | I answered all questions independently without any assistance | 0 / 0.00 | 0 / 0.00 | 0 / 0.00
2 | During the task, I had help from another person offline or online | 12 / 27.91 | 5 / 16.13 | 26 / 56.52
3 | I worked on the assignment simultaneously with one or more students from my group | 12 / 27.91 | 3 / 9.68 | 26 / 56.52
4 | Another person did the assignment for me. If yes, try to describe this person’s activities when answering questions 4–21 | 0 / 0.00 | 0 / 0.00 | 0 / 0.00
5 | For some (or all) questions, I used lecture notes, reference materials, or textbooks | 36 / 83.72 | 13 / 41.94 | 17 / 36.96
6 | For some (or all) questions, I used electronic resources (search engines, forums, e-textbooks) | 32 / 74.42 | 15 / 48.39 | 3 / 6.52
7 | I used artificial intelligence to search for information regarding some (or all) questions | 17 / 39.53 | 31 / 100.00 | 46 / 100.00
8 | I referred to reference materials in any form to verify the accuracy of my answers | 27 / 62.79 | 14 / 45.16 | 21 / 45.65
9 | I used artificial intelligence tools to confirm the accuracy of my answers | 11 / 25.58 | 5 / 16.13 | 2 / 4.35
10 | I used artificial intelligence tools to edit my answers (including those obtained through any reference material) | 0 / 0.00 | 0 / 0.00 | 0 / 0.00
11 | I used an artificial intelligence-generated answer unchanged as my own | 6 / 13.95 | 28 / 90.32 | 40 / 86.96
12 | I double-checked the artificial intelligence-provided answer using search engines or other reference materials | 1 / 2.33 | 0 / 0.00 | 2 / 4.35
13 | I wrote additional prompts to clarify answers obtained via artificial intelligence | 0 / 0.00 | 0 / 0.00 | 6 / 13.04
14 | I prioritized answers generated by artificial intelligence | 17 / 39.53 | 31 / 100.00 | 39 / 84.78
15 | I rewrote answers produced by artificial intelligence | 0 / 0.00 | 0 / 0.00 | 14 / 30.43
16 | I compiled the final answer based on both mine and artificial intelligence outputs | 0 / 0.00 | 0 / 0.00 | 0 / 0.00
17 | I guessed some answers | 8 / 18.60 | 0 / 0.00 | 0 / 0.00
18 | I ran out of time to finish all questions | 0 / 0.00 | 5 / 16.13 | 12 / 26.09
19 | I selectively used artificial intelligence tools to save more time for checking my answers | 3 / 6.98 | 0 / 0.00 | 19 / 41.30
20 | Using artificial intelligence took longer than planned | 0 / 0.00 | 0 / 0.00 | 0 / 0.00
21 | Final results matched my expectations after submitting answers | 3 / 6.98 | 5 / 16.13 | 11 / 23.91
22 | My doubts about certain answers turned out to be justified | 19 / 44.19 | 0 / 0.00 | 12 / 26.09
23 | Artificial intelligence-generated answers scored high marks | 0 / 0.00 | 0 / 0.00 | 0 / 0.00
24 | I believe that non-artificial-intelligence means did not negatively impact my outcome | 23 / 53.49 | 3 / 9.68 | 14 / 30.43
25 | I believe that artificial intelligence tools did not negatively impact my outcome | 2 / 4.65 | 22 / 70.97 | 16 / 34.78
26 | Most likely, I would have managed the assignment without external aids | 0 / 0.00 | 0 / 0.00 | 0 / 0.00
27 | I needed a specific positive grade | 28 / 65.12 | 24 / 77.42 | 41 / 89.13

Survey results analyzing student adherence to academic integrity principles.

From these quantitative findings, several conclusions were drawn:

  • Students increasingly relied on AI tools when acquiring information, often substituting their own knowledge with retrieved results. This trend correlated with increased correctness rates in multiple-choice question formats.

  • Critical thinking toward acquired information was reduced without additional verification. Survey results indicated that most students utilized information directly, both without performing extra validation steps and without transforming it.

  • Planning skills were lacking in the use of supplementary tools. The collected data suggested that the active use of AI tools left many students unable to complete all assigned tasks effectively.

4.2 Content analysis of extended answers

The results of the questionnaire analysis revealed that, when completing the assignments, students did not use their own notes, course materials, or electronic educational resources (such as electronic textbooks, methodological guides, or other printed editions). To complete the tasks, students employed search engines such as Google or Yandex by directly inserting the text of the assignment into the search bar without modification (typical for 2023). Similar actions were carried out with queries to resources using neural networks, such as ChatGPT or GigaCHAT (typical for 2024–2025). It should be noted that in 2023, students compiled final answers by combining responses generated by neural networks with information obtained from search engines. In addition, during this period, the final answers were edited by students to align them with the form of the question.

It is sometimes argued that the result produced by a neural network constitutes new knowledge (Fatima et al., 2022). However, upon detailed analysis of such texts, it can be established that they are “secondary,” since neural network models are trained on existing texts and tend to generate generalized concepts without specific examples or details. When reading such texts, one may perceive them as something already encountered elsewhere (while no exact word-for-word match with the original exists) or as uninformative. It is essential to consider that a text may have been written or revised by a copywriter (or modified by a person with the aim of expanding or reducing its length, merging small fragments from different sources, etc.). In such cases, it is important to evaluate the logical consistency of the material, the presence of repetitions or contradictions, and the adherence to syntactic rules.

All of these analytical operations could be performed only if the evaluator possessed knowledge in the relevant field or had developed sufficient professional experience. In most cases, a student was not a specialist with professional competencies in the domain in which they were being educated. When using AI to complete academic tasks (essays, extended responses to questions, laboratory or practical work, and test assignments), a student is unable to evaluate whether the generated response corresponds to the subject area, which could result in the formation of misconceptions about the object of study. To confirm this hypothesis, responses of students from different years of study to identical questions were analyzed. The following fragments of results obtained for different types of questions are presented:

  • Comparison of several objects to identify their common or unique characteristics (Table 3).

  • Extended response related to determining object characteristics that correspond to specific conditions (Table 4).

  • Determination of the truth of a statement with the provision of arguments justifying the choice made (Table 5).

  • Quantitative and qualitative analysis of an image (Table 6).

4.2.1 Answers to the question regarding the identification of distinctive characteristics of objects

All answers, including those presented as examples in Table 3, show similarities in their structure. The answers from 2024–2025 are characterized by the presence of definitions for each concept from the question and by a conclusion that contains words derived from these definitions. The form of the 2023 responses suggests that the survey participants attempted to formulate a distinction between two concepts or to define one concept using the characteristics of the other. This approach takes into account the form of the question and its context. It can be assumed that this allowed the 2023 respondents to arrive closer to the correct answer, which is that one concept is part of the other (this constitutes the key feature of the answer). In the 2024–2025 responses, such a feature is absent, and both concepts are presented as independent of one another. It is possible that the 2023 responses were revised before submission in order to align them with the form of the question. The 2024–2025 responses may also have undergone revision, but this appears to have consisted mainly of text removal (a vivid example of this approach is Answer 1).

Table 3

Question: What is the difference between a problem domain and a subject one?

Expected answer: By the aspects (boundary) of the research domain (if the answer is brief).

Characteristic of a correct answer: The problem domain is considered a part of the subject domain and includes objects, processes, and attributes (or characteristics) that are key to the conducted research (or problem solving).

Answers of 2024–2025:

  • Answer 1: The problem domain is the problem itself, while the subject domain is everything related to it.

  • Answer 2: The subject domain is the area of activity considered in the specific context of the given task set before us. The subject domain contains system elements, subprocesses, shows how they interact with each other, and how event chains are implemented. The problem domain, in turn, is a set of tasks and issues addressed by researchers. It concerns a specific problem, something that can be improved. The problem itself arises within the subject domain.

  • Answer 3: The subject domain includes all important objects and relationships between them in a specific field that need to be studied and described, while the problem domain includes the tasks, issues, and problems that must be solved with the help of an information system. In summary, the subject domain describes what exists in the system, while the problem domain shows what needs to be done.

Answers of 2023:

  • Answer 1: These concepts differ in that the problem domain is only a part of the subject domain. The problem domain focuses on and provides a detailed representation of the part of the subject domain in which problems, tasks, or questions arise that must be studied, whereas the subject domain provides a description of the entire domain.

  • Answer 2: The difference between the subject and problem domains is that the subject one defines a set of objects and their properties for study, whereas the problem one focuses on a specific area that requires investigation to solve a problem.

  • Answer 3: The problem domain is based on the subject one and is associated with solving specific tasks and problems of the subject domain.

Answers to the question regarding the identification of distinctive characteristics of objects.

4.2.2 Answers to the open-ended question

All of the answers were detailed (Table 4), as expected from the form of the question. An analysis of the collected answers reveals several common structural features:

Table 4. Answers to the open-ended question.

Question: Formulate the significance of the design stage in the software life cycle model in the form of an extended response.

Expected answer: An extended response that defines the place of the design stage, the methods used, the results, participants, resources, and risks.

Characteristic of a correct answer: Specific software life cycle models should be identified, based on which the characteristics of the design stage are provided.

Answers of 2024–2025:

Answer 1. When we design, we create a clear plan. We decide what parts the product will consist of, how they will interact with each other, and what is better to choose for a more user-friendly interface, etc. Accordingly, if this stage is skipped, the speed of work will be low because there will be no clear structure. If we find an error, it is easier to fix it during the design stage than when the full code has already been written.

Answer 2. The design stage in the software life cycle is important. When the requirements have already been collected and analyzed, a simple list of customer requests is no longer sufficient. At this point, it is necessary to create a clear plan showing how the system is organized and how it will actually function. During the design stage, the program architecture is developed, its main parts are defined, how they are interconnected, and what each part must do. In other words, this stage translates “what is needed” into “how it will be done.” If the design stage is skipped or poorly executed, many errors will emerge during coding and testing, which are difficult to correct. Conversely, if everything is carefully planned in advance, the work proceeds faster and more smoothly. Essentially, design is the link between requirements analysis and the development itself.

Answer 3. The significance of the design stage is primarily determined by the problems it allows us to solve. First, design makes it possible to calculate the resources required for the development and implementation of software, which helps to avoid unforeseen expenses. Second, software design usually also includes the development of an implementation algorithm, which significantly reduces development time. Third, preliminary design allows the creation of a system that takes into account all customer requirements. Since all three of these aspects are very important in software development, this life cycle stage is significant.

Answers of 2023:

Answer 1. At the design stage in the software life cycle model, the system architecture, data structure, and operating algorithms are defined. The task of design is to create a foundation for further development, testing, maintenance, and the long-term existence of the program. Incorrect decisions at the design stage can lead to difficulties and high costs for correction at later stages of development. Proper design minimizes risks and ensures the quality of the software product. For these reasons, the design stage can be considered as playing a key role in the software life cycle.

Answer 2. The design stage is significant in the software life cycle model for the following reasons:
At this stage, the architecture and structure of the system are defined, which makes it possible to understand how the components will interact and how task execution will be organized.
Design makes it possible to identify potential problems or weaknesses in the system, thereby preventing errors and deficiencies in the future.
The design stage also defines the requirements for resources, time, and budget for system implementation, which enables planning and management of the development process.

Answer 3. The design stage in the software life cycle model is of great importance, as it is a key stage that determines the architecture and structure of the software product being developed. The main aspects of the significance of the design stage are as follows:
(1) architectural solution: at this stage, the overall architecture of the software product is defined, including the choice of technologies, platform, architectural patterns, and components. A sound architectural solution ensures efficiency, scalability, and ease of maintenance of the software product;
(2) division of responsibility: design makes it possible to define the modules and components of the program, as well as to distribute responsibility among developers. Clear division of responsibility improves collaboration and simplifies the development process;
(3) resource optimization: design allows the optimization of resource usage such as memory, processor time, and network resources. This is important to ensure efficient program operation and to meet user requirements;
(4) maintainability and expandability: well-designed software is easy to maintain and expand. Design helps create a modular structure that facilitates changes and the addition of new functions to the software product;
(5) risk management: design helps identify potential risks and problems at early stages of development. This makes it possible to take appropriate measures to eliminate risks and prevent problems in the future.
Overall, the design stage is fundamental for successful software development, as it defines the main aspects of functionality, quality, and efficiency of the software product.

  • Each answer begins with a statement and ends with a conclusion that reiterates this statement in nearly identical wording.

  • All answers contain enumerations: either design characteristics (in 2023 – answers 2 and 3; in 2024–2025 – answer 3) or sequences of actions within the process (in 2023 – answer 1; in 2024–2025 – answers 1 and 2).

  • The enumerations are accompanied by either characteristics or factual definitions followed by explanatory details, most clearly evident in answer 3 of 2023.

A comparison across years demonstrates semantic differences. The 2024–2025 answers are expressed in simple and accessible language, resembling promotional text, whereas the 2023 answers contain professional terminology that corresponds more closely to the expected correct answer.
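The terminology gap described above can be approximated with a simple lexical measure. The sketch below is illustrative only: the mini-vocabulary of domain terms and the metric itself are assumptions for demonstration, not instruments used in the study, which relied on expert stylistic and semantic analysis.

```python
import re

# Hypothetical mini-vocabulary of professional terms (an assumption for this
# sketch; the study itself did not use a fixed word list).
DOMAIN_TERMS = {
    "architecture", "life cycle", "decomposition", "pseudocode",
    "scalability", "maintainability", "requirements", "algorithm",
}

def term_density(answer: str) -> float:
    """Domain terms per 100 words, a rough proxy for professional register."""
    words = re.findall(r"[a-zA-Z][a-zA-Z-]*", answer.lower())
    if not words:
        return 0.0
    text = " ".join(words)
    hits = sum(text.count(term) for term in DOMAIN_TERMS)
    return 100.0 * hits / len(words)

# Contrast in the spirit of the 2024-2025 vs. 2023 answers (invented samples):
plain = "When we design, we create a clear plan and decide what parts the product will have."
technical = ("At the design stage the system architecture and decomposition are defined, "
             "which supports scalability and maintainability of the requirements.")
assert term_density(plain) < term_density(technical)
```

A higher density for the 2023-style answer mirrors the observation that those responses used vocabulary closer to the expected correct answer.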

4.2.3 Answers to the question on determining the truth of the statement and explanation of the corresponding choice

The structure of all answers is uniform (Table 5). Each answer begins with a statement, followed by supporting arguments. The analysis demonstrated that all statements are correct and consistent with the expected answers. At the same time, the semantics of the arguments only partially align with the anticipated rationale. The predominant form of presenting arguments is through the enumeration of possible algorithmic forms. However, the choice of algorithmic form is a consequence rather than the underlying reason for the selection. In each response, the actual reason is not explicitly articulated; instead, the explanations are framed in terms of convenience of representation, available tools, and similar considerations. These, however, are also consequences rather than true reasons. Consequently, none of the responses provide a rationale that would allow for assessing the degree of mastery of the educational material. It should also be emphasized that the responses from 2024–2025 once again lack professional terminology and fail to meet the standards of technical writing (for instance, they include personal pronouns and colloquial expressions).

Table 5. Answers to the question on determining the truth of the statement and explanation of the corresponding choice.

Question: Can the form of the same algorithm be represented in different ways? Please explain your answer.

Expected answer: The same algorithm can be represented in different forms depending on the specification of requirements.

Characteristic of a correct answer: The answer should be given in the affirmative form. The representation of an algorithm (graphical, textual, in the form of pseudocode, or as program code) depends on the requirements defined by the stakeholders (for example, business analysts), the context of the subject area, and the policies of the company engaged in the formalization process.

Answers of 2024–2025:

Answer 1. Yes, of course. For example, if we want to present our algorithm to analysts, it is more convenient for them to work with diagrams, whereas if we present the algorithm to developers, it is more convenient for them to work with code. Different forms of an algorithm may also highlight different aspects of it. Having representations of the same algorithm in various forms makes it easier for us to analyze it.

Answer 2. The form of the same algorithm can be represented differently depending on the chosen method of visualization or the style of the flowchart. For example, different symbols and notations can be used for blocks, different types of arrows for connections between blocks, as well as various methods of representing conditions and loops. However, regardless of the visual representation, the logic and structure of the algorithm must remain the same.

Answer 3. A flowchart is needed to make the algorithm easier to understand and explain. It serves several purposes. First, it provides a visual representation. When looking at a diagram, the workflow is immediately clear, and it is not necessary to carefully read the text. Second, it helps to identify errors. In a diagram, it is easier to notice if something is missing or redundant. It is also a means of communication, as a diagram is understandable to programmers, analysts, and even customers. Furthermore, a flowchart can be stored as a document to be revisited later in order to recall how the algorithm was originally designed. It is also useful in education, since it is easier for beginners to understand a diagram than to start directly with code.

Answers of 2023:

Answer 1. Yes, the form of the same algorithm can be represented in different ways depending on the environment or the tool used for representation. For example, an algorithm can be presented as text, as a flowchart, as an activity diagram, as program code in different programming languages, and so on. Each representation may be most suitable for a particular purpose or audience.

Answer 2. Yes, the form of the same algorithm can be represented in different ways. Algorithms can be expressed in natural language, as a flowchart, in pseudocode, or in program code. Each of these methods of representation has its advantages and is suitable for different purposes.
Natural language is a simple and intuitive way to describe an algorithm that can be understood by people who do not have programming skills. However, due to its ambiguity and lack of strictness, natural language can be difficult for a computer to interpret and execute.
A flowchart is a graphical representation of an algorithm using various symbols and arrows to indicate the sequence of actions. Flowcharts facilitate the visual understanding of an algorithm and can be useful for learning and debugging code.
Pseudocode is a mixture of ordinary language and program code that allows an algorithm to be described in a simple and understandable form. Pseudocode is a convenient tool for planning and designing algorithms before writing actual code.
Program code is the concrete implementation of an algorithm in a specific programming language, such as Python or Java. Code ensures accuracy and high performance in the execution of the algorithm by a computer.

Answer 3. The form of the same algorithm can be represented differently depending on the chosen method of visualization or the style of the flowchart. For example, different symbols and notations can be used for blocks, different types of arrows for connections between blocks, as well as various methods of representing conditions and loops. However, regardless of the visual representation, the logic and structure of the algorithm must remain the same.

4.2.4 Answers to the questions connected with the image analysis

All answers to questions related to image analysis, regardless of the year the survey was conducted, are consistent and comparable to those presented in Table 6.

Table 6. Answers to the questions connected with the image analysis.

Question: Determine the names of the graphical objects and their quantity in the presented model. Identify all elements that do not comply with the requirements of the corresponding modeling methodology.

Image 1: Functional model of the process for identifying errors in the application of the modeling methodology (Figure 6).

Expected answer to Image 1: Functions: 3; boundary input arrows: 2; boundary control arrows: 3; boundary output arrows: 2; boundary mechanism arrows: 2; output–input connections: 4.
The absence of numbering in the function blocks and the close placement of labels near several arrows make it impossible to accurately identify the corresponding flow.

Answer to Image 1: Function blocks: 3; input arrows: 2; control arrows: 3; output arrows: 2; mechanism arrows: 2.
Incorrect use of control arrows: For example, the elements “Tax Code of the Russian Federation” and “Consumer Protection Law” are designated as control. In reality, these are external reference documents that do not directly control the process. Control arrows in IDEF0 describe what regulates or constrains the activity of a function, rather than merely transmitting information.
Violation of flow direction rules: According to IDEF0, control, input, and output arrows must be oriented strictly in accordance with their function. The arrow “Product reservation” rather belongs to the output of the process “Selection and reservation of goods,” but here it appears as a control element.
Error with context boundaries: The diagram mixes internal data (for example, “Product reservation”) and external data (for example, “Product catalog”). It is unclear which elements belong to the inputs and outputs of the system and which to the internal processes.
Insufficiently detailed decomposition: Some functions appear too general, for example, “Order formation.” According to the IDEF0 methodology, functions should be described in detail at the lower level.
Incorrect use of documents as arrows: “Delivery documentation” is indicated as the output of the process “Order formation.” This creates confusion, as documents are usually the result of actions, but here they are mixed with other data.
Undefined interactions between functions: Arrows between functions are not labeled, which makes it difficult to interpret the transmitted data. For example, the connections between “Order formation” and “Receipt and verification of order” are unclear.

Expert commentary on Answer 1: The classification of arrows is missing (it should be noted that the number of boundary arrows was determined correctly).
All identified errors go beyond the context of the question, i.e., the evaluation of the presented image for compliance with the standard requirements was not carried out. All listed errors are based on the general context of each label.
Some statements contradict the image (for example, all arrows are labeled).

Image 2: Flowchart of the algorithm for determining the number of graphical blocks used (Figure 7).

Expected answer to Image 2: The flowchart includes: terminators (2), processes (4), a loop (1), a decision (1), and lines (10).
The elements of the flowchart must be arranged vertically in a straight line from top to bottom. An error is also the designation Imax:=1 (a parallelogram should be used for data, not a rectangle for a process). It is acceptable to indicate different block sizes and the absence of directions in some lines.

Answer to Image 2: Oval (symbol for the beginning and end of the algorithm): the flowchart contains two such elements, “Start” and “End.”
Rectangle (operation): denotes steps of execution or assignment of values to variables. The diagram contains the following blocks: Imax: = 1; Max: = A[1]; Max: = A[I]; Imax: =; Output Max, Imax. Total: 5 rectangles.
Diamond (condition): used to represent a logical condition or branching. The diagram contains two diamonds: “Check: I = 2, 10, 1” and “Check: Max < A[I].” Total: 2 diamonds.
Arrows (connectors): represent the flow of the algorithm. The diagram contains several arrows connecting the blocks.
Thus, the flowchart includes: 2 ovals, 5 rectangles, 2 diamonds, and arrows connecting the blocks.
The flowchart is intended for programmatic data processing. Data processing requires mandatory input and output of data. Input and output of data are designated by a parallelogram. In the flowchart, a trapezoid is used for data output. Therefore, this is an error.

Expert commentary on Answer 2: The answer is not related to the context of the question. The terminology used (oval, rectangle, condition, operation, etc.) is not applied in the methodology (start or end of an algorithm is a terminator; operation is a process; condition is a decision, etc.). Image recognition was performed incorrectly: the parallelogram was replaced with a rectangle, and the hexagon with a diamond; the number of lines was not identified.
The methodological errors are based on incorrect analysis of the image. Indeed, input and output of data are represented by a parallelogram, but it is present in the image (and does not resemble a trapezoid).

Analysis of the results using a neural network revealed the following patterns:

  • 1) Answers to questions requiring image analysis often contain facts that contradict either one another or established standards. It should be noted that, as the neural network is trained, the number of such errors decreases. When, following the initial analysis of a block diagram, a query is formulated that requires clarification of the names of identified graphical elements, the generated response aligns these names with textual designations in accordance with industry standards. Furthermore, if the query includes an explicit rule (e.g., a statement that a trapezoid is absent from the image, as in the response to image 2), the answer is correct in most cases.

  • 2) Answers to questions requiring the identification of errors in a graphical representation require additional manual verification (and possibly further training of the neural network for each model), as well as supplementary text processing.

  • 3) For text-based answers, the neural network produces correct but generalized answers. These answers do not incorporate key characteristics of the subject domain or specialized terminology. The answers to different questions are structurally repetitive and do not conform to the standards of scientific or technical writing. The absence of domain-specific features linked to the context of a particular didactic unit renders the answers imprecise.

4.3 Methodology for checking student assignments in the context of using artificial intelligence tools

On the basis of these findings, the following methodology for evaluating assignments involving text-based answers is proposed:

  • 1) Evaluate the answer using software designed to detect the extent of AI involvement in text generation (e.g., GigaCHECK, GPTZero, ReTextAI). The evaluated text will be assigned to one of three possible statuses: “the text was most likely written by a human,” “the text was generated by a neural network,” or “the text was presumably written collaboratively by a human and a generative artificial intelligence model.” The exact wording may vary depending on the software used. It is important to recognize that the outcome may include both false positives and false negatives.

  • 2) Evaluate the text using plagiarism detection tools that also include AI-generated text identification modules (e.g., Antiplagiat, Percent). These tools identify all external sources from which the answer text may have been borrowed. The presence of such overlaps suggests that the student consulted multiple information sources.

The combined outcomes of steps 1 and 2 provide a preliminary assessment, which determines the level of rigor required for step 3.
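The way the outcomes of steps 1 and 2 determine the rigor of step 3 can be sketched as a simple decision rule. This is an illustrative reconstruction, not the proposed software: the status labels mirror the three outcomes listed in step 1, while the overlap threshold and rigor-level names are assumptions.

```python
from enum import Enum

class AIStatus(Enum):
    """Three statuses reported by AI-detection tools (step 1 of the methodology)."""
    HUMAN = "the text was most likely written by a human"
    AI = "the text was generated by a neural network"
    MIXED = "the text was presumably written collaboratively"

def review_rigor(ai_status: AIStatus, plagiarism_overlap: float) -> str:
    """Preliminary assessment deciding how strict the manual step 3 must be.

    plagiarism_overlap: share of text matched to external sources (step 2).
    The 0.30 threshold and the returned labels are illustrative assumptions.
    """
    if ai_status is AIStatus.AI:
        return "full structural and semantic review"
    if ai_status is AIStatus.MIXED or plagiarism_overlap > 0.30:
        return "targeted review of suspicious fragments"
    return "routine check"

# Example: a detector verdict of "human" with low source overlap needs only
# a routine check, while an "AI" verdict always triggers a full review.
assert review_rigor(AIStatus.HUMAN, 0.05) == "routine check"
assert review_rigor(AIStatus.AI, 0.0) == "full structural and semantic review"
```

Because step 1 can produce false positives and false negatives, the rule only grades the depth of human review; it never replaces it.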

If suspicious fragments are identified in the answer text, both the structure and semantic accuracy of the text should be analyzed. Depending on the neural network model employed, the generated text may exhibit oversimplified phrasing (sentences lacking participial or adverbial constructions, clarifications, or other syntactic structures that demonstrate linguistic variety), a lack of specialized terminology, and a monotonous style of presentation (e.g., enumerations of facts followed by conclusions, with partial repetition of the original facts).
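The stylistic markers named above (uniformly simple sentences and a conclusion that restates the opening) lend themselves to lightweight heuristics. The sketch below is a minimal illustration of that idea, not the study's instrument; the dispersion and overlap thresholds are assumed values chosen only for demonstration.

```python
import re
from statistics import mean, pstdev

def _sentences(text: str) -> list[str]:
    """Split text into non-empty sentences on terminal punctuation."""
    return [s for s in re.split(r"[.!?]+", text) if s.strip()]

def opening_closing_overlap(text: str) -> float:
    """Jaccard overlap of word sets in the first and last sentences."""
    sents = _sentences(text)
    if len(sents) < 2:
        return 0.0
    first, last = (set(s.lower().split()) for s in (sents[0], sents[-1]))
    return len(first & last) / len(first | last)

def looks_monotonous(text: str) -> bool:
    """Flag texts with near-uniform sentence lengths and a restated opening.

    Thresholds (0.25 of mean length, 0.4 overlap) are illustrative assumptions.
    """
    lengths = [len(s.split()) for s in _sentences(text)]
    if len(lengths) < 3:
        return False
    uniform = pstdev(lengths) < 0.25 * mean(lengths)  # similar sentence lengths
    repetitive = opening_closing_overlap(text) > 0.4  # conclusion echoes opening
    return uniform and repetitive
```

Such a flag would only mark fragments for closer manual reading; it cannot by itself establish AI involvement.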

5 Discussion

Researchers in the field of educational technologies argue that the integration of modern AI-based tools enhances the digital competence of all participants in the educational process and fosters the development of a unified digital learning environment (Bezuglyy and Ershova, 2023; Zhamborov et al., 2023). At the same time, the adoption of such tools raises serious ethical concerns, including algorithmic and data bias in model training, issues of academic integrity in assignment completion, and the risk of diminished critical thinking among students.

Advances in functionality and the reduction of errors in AI-generated responses are expected to further increase the proportion of student work incorporating AI technologies (Znatdinov et al., 2024; Kostikova et al., 2025). Several studies conclude that such technologies complicate the study of theoretical material (Kamalov et al., 2023; Sokolov, 2022), as this stage of learning requires the development of competencies in information retrieval, systematization, and critical analysis. These competencies enable students to apply methods and tools meaningfully for effective problem-solving.

Students at the beginning of their studies may develop the assumption that memorizing theory is unnecessary, since answers to almost all questions can be instantly obtained with the help of AI. Such systems not only select relevant sources but also analyze their content and generate reports in the required format. Consequently, a risk emerges that AI may be used not as a supportive assistant but as a complete substitute for the student's own work. The conducted study confirms this concern. In the early stages of the experiment, responses to assignments were compiled from several sources and supplemented with outputs from the neural network. At later stages, however, students reproduced the neural network's responses almost verbatim in their final reports.

Developers of AI-based chatbots highlight that the datasets on which neural networks are trained reflect information relevant only to a specific point in time (Omol, 2024; Korneev, 2024; Fatima et al., 2022). As a result, generated responses often do not correspond to current knowledge. Pedagogical studies consistently highlight that the primary principle of the educational process is that learning must be practice-oriented and aligned with the contemporary state of the discipline under study (Chiu et al., 2023; Zhamborov et al., 2023). This discrepancy should reduce students’ reliance on chatbots as sources of information and tools for critical analysis. Findings of the present study demonstrate that students lacking professional knowledge or practical experience in the relevant field are unable to perform adequate fact-checking of AI-generated answers. Effective factual verification requires mastery of theoretical concepts and continuous monitoring of developments in the professional domain.

The proposed methodology for detecting the use of AI in student responses assists not only instructors in assessing the degree of knowledge acquisition, but also students in evaluating the quality of the AI-generated content. According to researchers, such practices may contribute to cultivating a culture of digital hygiene (Bezuglyy and Ershova, 2023; Sukhodaeva, 2022).

The research hypothesis has been confirmed. Students actively employed AI technologies for text generation and image analysis in preparing their reports. The resulting responses exhibited structural similarity, contained generic formulations, made limited use of professional terminology, included semantically repetitive statements phrased in different ways, and displayed syntactic, logical, and lexical inconsistencies. These features indicate that AI technologies were used not as supportive tools for data processing but as direct substitutes for student work. This reliance negatively affected the overall quality of responses and, consequently, the quality of knowledge acquired.

The study, however, has certain limitations. The analysis focused exclusively on textual responses from students enrolled in technical programs related to the design and development of digital systems. These programs inherently involve frequent use of software tools, familiarity with their functionalities, and the ability to formulate precise queries. Furthermore, working with texts in this domain requires the accurate and consistent presentation of facts, as well as the identification of cause-and-effect relationships. Additional factors influencing the results included the participants’ age range and the absence of restrictions on the use of software tools.

6 Conclusion

Academic dishonesty among students is a widely recognized international phenomenon that manifests in multiple forms, including cheating, collaborative completion of individual assignments, the use of cheat sheets, and the exploitation of communication tools to obtain answers. Advances in modern science and technology have further facilitated such practices, transforming the very framework by which academic integrity is defined. The growing accessibility and continuous enhancement of software tools based on AI technologies allow the execution of a wide range of tasks, effectively substituting for the user's own activity. In the past, an instructor's primary challenge was to verify the correctness of an answer, identify instances of improper borrowing, or detect authorship substitution. Today, however, text generated by AI constitutes new knowledge in its own right. The inability to distinguish such artificially created text may give instructors the illusion of high student achievement, while for students it inevitably leads to grade devaluation and a false impression of genuine knowledge acquisition.

The contemporary educational system must neither ignore nor isolate itself from these software tools and technologies. It is evident that the advancement of pedagogical practices must account for the principles of academic integrity as well as the digital resources available to students, which may be employed not only during formal instruction but also in independent study. The establishment of a unified educational environment that incorporates the principles of digital hygiene, mutual accountability, the inevitability of sanctions for violations of academic integrity, and, simultaneously, trust and fairness toward students who conscientiously complete assignments, represents a critical factor in shaping effective educational trajectories.

Statements

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants was not required to participate in this study in accordance with the national legislation and the institutional requirements.

Author contributions

ML: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcomp.2026.1729059/full#supplementary-material

References

  • 1

    AkimovaE. N.ShataevaO. V. (2020). Academic capitalism: paradigm shift in higher education development. Bulletin Moscow Region State University. Ser.3, 817. doi: 10.18384/2310-6646-2020-3-8-17

  • 2

    AlaimoC. (2022). From people to objects: the digital transformation of fields. Organ. Stud.43, 10911114. doi: 10.1177/01708406211030654

  • 3

    Artificial Intelligence Conquers Business. Available online at: https://cipr.ru/izdanie-2025/iskusstvennyj-intellekt-zavoevyvaet-biznes (Accessed Jule 14, 2025).

  • 4

    BakhmetievaI.GorbunovV.BurovM.MargalitadzeO.VostroilovaE. (2020). Students’ clip thinking: a problem approach. IJASOS6, 123128. doi: 10.18769/ijasos.616017

  • 5

    BambergerA.MorrisP. (2024). Critical perspectives on internationalization in higher education: commercialization, global citizenship, or postcolonial imperialism?Crit. Stud. Educ.65, 128146. doi: 10.1080/17508487.2023.2233572

  • 6

    BermusA. G.SizovaE. V. (2025). Ethical aspects of artificial intelligence technology implementation in classical universities: analysis of student audience attitudes. Continuous Educ.2, 16. doi: 10.15393/j5.art.2025.10585

  • 7

    BezuglyyT. A.ErshovaM. E. (2023). Using text-based neural networks and artificial intelligence in students' academic work. Problems Modern Education5, 206216. doi: 10.31862/2218-8711-2023-5-206-216

  • 8

    Chiu, T. K., Xia, Q., Zhou, X., Chai, C. S., and Cheng, M. (2023). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Comput. Educ. Artif. Intell. 4:100118. doi: 10.1016/j.caeai.2022.100118

  • 9

    Cojocariu, V. M., and Mareş, G. (2022). "Academic integrity in the technology-driven education era" in Ethical Use of Information Technology in Higher Education, EAI/Springer Innovations in Communication and Computing. ed. L. Mâță (Singapore: Springer), 1–16.

  • 10

    Dawson, P. (2020). Defending assessment security in a digital world: Preventing e-cheating and supporting academic integrity in higher education. London: Routledge.

  • 11

    Engelbutzeder, P., Randell, D., Landwehr, M., Aal, K., Stevens, G., and Wulf, V. (2023). From surplus and scarcity toward abundance: understanding the use of ICT in food resource sharing practices. ACM Trans. Comput.-Hum. Interact. 30, 1–31. doi: 10.1145/3589957

  • 12

    Evseev, L. V., and Yablonskaya, L. V. (2025). Commercial education in market economy conditions as a product of capitalist utilitarianism. Youth Herald of Novorossiysk Branch of Belgorod State Technological University named after V.G. Shukhov 1, 64–70. doi: 10.51639/2713-0576_2025_5_1_64

  • 13

    Fatima, N., Imran, A. S., Kastrati, Z., Daudpota, S. M., and Soomro, A. (2022). A systematic literature review on text generation using deep neural network models. IEEE Access 10, 53490–53503. doi: 10.1109/access.2022.3174108

  • 14

    Gavrilina, E. A. (2023). Maintaining academic integrity at the university in the age of technology. Applied Ethics Statements 2, 96–103.

  • 15

    Kamalov, F., Santandreu Calonge, D., and Gurrib, I. (2023). New era of artificial intelligence in education: towards a sustainable multifaceted revolution. Sustainability 15:12451. doi: 10.3390/su151612451

  • 16

    Kholopova, L. A. (2020). What awaits traditional education in the context of digitalization. National Priorities of Russia 2, 87–95.

  • 17

    Korneev, Y. A. (2024). Evolution of neural networks. Real Estate Cadastre 2, 47–58.

  • 18

    Kostikova, L. P., Esenina, N. E., and Olkov, A. S. (2025). Artificial intelligence in the educational process of a modern university: results of a student survey. Concept 2, 93–109. doi: 10.24412/2304-120X-2025-11022

  • 19

    Kraus, S., Durst, S., Ferreira, J. J., Veiga, P., Kailer, N., and Weinmann, A. (2022). Digital transformation in business and management research: an overview of the current status quo. Int. J. Inf. Manag. 63:102466. doi: 10.1016/j.ijinfomgt.2021.102466

  • 20

    Kubantseva, D. I. (2022). Clip way of thinking in the context of learning process. Problems Contemp. Educ. 6, 70–79. doi: 10.31862/2218-8711-2022-6-70-79

  • 21

    Kurashinova, A. K., and Burayeva, L. A. (2021). Clip thinking and its influence on the quality of cognitive activity of listeners in the conditions of professional training. Probl. Mod. Pedagog. Educ. 72, 200–202.

  • 22

    Liu, Y., Saleh, S., and Huang, J. (2021). Artificial intelligence in promoting teaching and learning transformation in schools. Artif. Intell. 15, 1–12. doi: 10.53333/IJICC2013/15369

  • 23

    Makaryev, I. S. (2021). Key ideas of knowledge management concept in the digital educational environment. Academic Bulletin. Bulletin of St. Petersburg Academy of Postgraduate Pedagogical Education 1, 8–15.

  • 24

    Mamedov, Z. F., and Bayramova, K. (2020). "University development strategies: commercialization and responses to new challenges" in Economic and Social Development: Book of Proceedings (Varazdin, Croatia: Varazdin Development and Entrepreneurship Agency), 101–108.

  • 25

    Omol, E. J. (2024). Organizational digital transformation: from evolution to future trends. Digit. Transf. Soc. 3, 240–256. doi: 10.1108/DTS-08-2023-0061

  • 26

    Parapi, J. M. O., Maesaroh, L. I., Basuki, B., and Masykuri, E. S. (2020). Virtual education: a brief overview of its role in the current educational system. Scripta Engl. Dep. J. 7, 8–11. doi: 10.37729/scripta.v7i1.632

  • 27

    Polyakova, O. (2024). Paradox of decreasing value of knowledge amid expansion of educational opportunities in the digital age. Scholars Council 11, 713–716. doi: 10.33920/nik-02-2411-06

  • 28

    Sabitova, A. A., and Bairkenova, G. T. (2019). On the issue of academic honesty. Scientific Almanac of Association France-Kazakhstan 4, 415–422.

  • 29

    Saravanakumar, A. R., and Padmini Devi, K. R. (2020). Indian higher education: issues and opportunities. J. Crit. Rev. 7, 542–545.

  • 30

    Shaginov, B. A. (2020). Approaches to the definition of the information society in the national doctrine, the legal essence of the information society. Law and State: The Theory and Practice 9, 105–107.

  • 31

    Sokolov, N. V. (2022). Problems and risks of the application of modern technologies of artificial intelligence in education of the Russian Federation. Actual Problems of Pedagogy and Psychology 3, 10–14.

  • 32

    Sukhodaeva, T. S. (2022). Development of digital hygiene skills as an essential element of mentoring in higher education institutions. The Siberian Transport University Bulletin: Humanitarian Research 2, 100–105. doi: 10.52170/2618-7949_2022_13_100

  • 33

    Trusova, E. V. (2024). Integration of artificial intelligence into the educational process. Scholarly Notes. Electron. Sci. J. Kursk State Univ. 2, 131–136.

  • 34

    Valeev, A. S., and Kuznetsova, E. V. (2025). The impact of artificial intelligence on business processes: opportunities and risks. Econ. Bus. Theory Pract. 4, 76–81. doi: 10.24412/2411-0450-2025-4-76-81

  • 35

    Van Der Schaft, A. H., Lub, X. D., Van Der Heijden, B., and Solinger, O. N. (2024). How employees experience digital transformation: a dynamic and multi-layered sensemaking perspective. J. Hosp. Tour. Res. 48, 803–820. doi: 10.1177/10963480221123098

  • 36

    Vassilakopoulou, P., and Hustad, E. (2023). Bridging digital divides: a literature review and research agenda for information systems research. Inf. Syst. Front. 25, 955–969. doi: 10.1007/s10796-020-10096-3

  • 37

    Weber, P., Carl, K. V., and Hinz, O. (2024). Applications of explainable artificial intelligence in finance – a systematic review of finance, information systems, and computer science literature. Manag. Rev. Q. 74, 867–907. doi: 10.1007/s11301-023-00320-0

  • 38

    Zhamborov, A. A., Kokurkhaeva, R. M.-B., and Gazdieva, E. K. (2023). Digitalization of education as a new requirement of society. J. Appl. Res. 4, 104–108. doi: 10.47576/2949-1878_2023_4_104

  • 39

    Zinchenko, V., Ostapenko, S., and Udovichenko, H. (2021). Introduction of academic honesty as a necessary prerequisite and an important component of quality education for future economists. Rev. Rom. Educ. Multidimens. 13, 81–95. doi: 10.18662/rrem/13.1/361

  • 40

    Znatdinov, V. R., Kershengolts, A. I., and Yudina, A. M. (2024). Use of neural network technologies for modeling independent student work within university educational process. Int. Res. J. 9:88. doi: 10.60797/IRJ.2024.147.112

Keywords

artificial intelligence, digital transformation, education, quality of knowledge, questionnaire survey

Citation

Logachev M (2026) Artificial intelligence in educational assignments: issues of academic integrity. Front. Comput. Sci. 8:1729059. doi: 10.3389/fcomp.2026.1729059

Received

27 October 2025

Revised

15 December 2025

Accepted

07 January 2026

Published

03 February 2026

Volume

8 - 2026

Edited by

Siyabonga Mhlongo, University of Johannesburg, South Africa

Reviewed by

Nurkasym Kylychbekovich Arkabaev, Osh State University, Kyrgyzstan

Arathi Arakala, RMIT University, Australia

Copyright

*Correspondence: Maxim Logachev,

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
