CURRICULUM, INSTRUCTION, AND PEDAGOGY article

Front. Educ., 04 February 2026

Sec. Higher Education

Volume 11 - 2026 | https://doi.org/10.3389/feduc.2026.1716286

Constructing a course quality evaluation system based on hierarchical analysis and its application

  • School of Mining Engineering, Anhui University of Science and Technology, Huainan, Anhui, China

The fundamental issue in curriculum quality evaluation systems lies in how to scientifically and effectively measure the alignment between educational processes and student development objectives. Curriculum quality evaluation assesses whether teaching objectives are being achieved. Designing a rational and effective evaluation indicator system helps improve teaching quality, optimize curriculum reform, enhance student competence, and ensure curriculum quality. This study addressed the current research gap in higher education, where course quality evaluations rely heavily on subjective experience and lack a systematic quantitative analytical framework. It aimed to construct a quantifiable, multidimensional, and integrated comprehensive evaluation system for course quality to enhance the objectivity and scientific rigor of assessments. Using the core mining engineering course “Coal Mining” as an empirical case, it examined four key dimensions—faculty resources, teaching process, teaching resources, and teaching effectiveness—to systematically establish an evaluation indicator system for course quality. Methodologically, the study employed the analytic hierarchy process (AHP) to scientifically determine the weights of each evaluation indicator, thereby analyzing the impact of different dimensions on course quality. Subsequently, the fuzzy comprehensive evaluation method was applied to convert qualitative descriptions in the evaluation into quantitative analysis, addressing the inherent fuzziness and uncertainty in teaching evaluations. The results demonstrated that this approach not only achieves both qualitative judgments and quantitative scoring for course quality, effectively overcoming limitations of traditional evaluation methods, but also produces evaluation processes and outcomes characterized by higher reliability, rationality, and objectivity. 
The significance of this study lies in its provision of an empirical analytical model that combines theoretical grounding with practical applicability for quality diagnosis and continuous improvement in engineering education programs. The specific improvement measures proposed offer a direct reference for advancing curriculum reform and ensuring the quality of talent cultivation.

1 Introduction

Against the backdrop of deepening quality assurance in global higher education and ongoing curriculum reform, course quality evaluation has emerged as a core issue transcending national borders. The international community widely recognizes that systematic evaluation and assessment play a crucial strategic role in enhancing educational quality, equity, and efficiency. Countries are not only increasingly relying on explicit educational standards and extensively establishing national databases to monitor learning outcomes but also actively participating in international benchmarking tests to advance the global dialog on educational quality (Prima et al., 2025; Zhang et al., 2025; Wang et al., 2025). Within China’s teaching quality evaluation system, course quality evaluation plays a central role and serves as both an incentive and a guarantee for teaching quality. Traditional rating criteria are overly simplistic and rely on quantifiable indicators such as standardized test scores, pass rates, and grade points. These measures primarily assess students’ memorization and mastery of fixed knowledge points and problem-solving skills, rather than their understanding and application abilities. In response to the contradiction between the current single evaluation standard for course quality and the diversified educational goals (Liu et al., 2025), through the integration of ‘teaching-learning-evaluation’, the evaluation concept should be transformed into a concrete program that can be replicated, diffused, and learned (Chen et al., 2020). In the field of higher education, the purpose of course quality evaluation is to determine the extent to which a course achieves its teaching objectives and to promote course development and teaching reform based on the evaluation results (Li Y. R. et al., 2021). Therefore, course quality evaluation can substantially improve the level of course development. 
Only when the established course quality evaluation system is sufficiently complete and the control of course quality is sufficiently comprehensive can teaching information be accurately grasped and teaching quality improved (Zhao, 2016). A course quality evaluation system is an integrated system covering multi-level indicators. Therefore, the teaching quality of the course system must be assessed through a combination of quantitative and qualitative means. Establishing a course quality evaluation system is extremely complex and cannot be accomplished by a single evaluation element alone. When multiple evaluation subjects (such as teaching supervisors, peer experts, teaching managers, and technical support staff) participate together, each subject has a different position, knowledge structure, and cognitive level, resulting in differences in evaluation standards and focuses, which in turn affect the evaluation process and results in diverse ways. Moreover, it cannot be guaranteed that the evaluation subjects participate wholeheartedly in all stages of the teaching process, nor that all of them are fully capable of evaluating course quality. Only by improving students’ motivation to participate in course quality evaluation and accurately assessing the educational needs of the course and students’ learning tendencies can course construction be better optimized. This study drew on the research approach of Hendra et al. (2021) to construct a course quality evaluation system encompassing four dimensions of teaching. The analytic hierarchy process (AHP) was employed to determine the weights of each evaluation indicator, and the fuzzy comprehensive evaluation method was subsequently applied to obtain the course quality evaluation results.

2 Construction of a curriculum quality evaluation system

As a very important component of curriculum construction, the curriculum quality evaluation system must be measurable, independent, and universal, and it should constitute a relatively perfect, scientific, standardized, reasonable, and efficient evaluation framework (Li, 2017). The evaluation system encompasses various aspects, such as the school, the college, professional development aligned with the orientation of talent training, graduation requirements, course syllabi, teaching processes, student management, course examinations and assessments, teaching resources, and continuous improvement. These aspects serve as evaluation indicators to ensure that student learning outcomes and course quality remain central (Li D. B. et al., 2021; Wang and Jiang, 2018). The analytic hierarchy process (AHP) is a systematic, hierarchical, multi-objective decision-making method that integrates qualitative and quantitative approaches. It treats a complex multi-objective decision problem as a system, decomposing the objective into multiple goals or criteria, which are further subdivided into several levels of indicators. Using qualitative indicator fuzzy quantification methods, it calculates hierarchical single rankings (weights) and overall rankings, providing a systematic approach for multi-objective (multi-indicator), multi-party optimization decisions. The decision problem is organized into a hierarchical model, typically comprising three fundamental levels: (1) Objective Level: The highest level, representing the purpose of solving the problem—the overall goal that the AHP aims to achieve. It contains only one element. (2) Criterion Level: The intermediate level, representing the steps involved in adopting a specific plan or measure to achieve the predetermined overall goal. It may include multiple levels. (3) Alternative Level: The lowest tier, representing the various measures, strategies, or solutions available for problem resolution. 
To this end, the evaluation system is hierarchically structured from the perspectives of teaching, learning, and effectiveness, using faculty resources, teaching processes, teaching resources, and curriculum outcomes as primary indicators.
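As an illustration only, the hierarchy just described can be represented as a nested mapping from the single objective-level element to the criterion-level primary indicators; the secondary-indicator names below are assumptions, not the paper's full indicator list (given in Tables 1–4).

```python
# Illustrative sketch of the three-level AHP structure: one objective,
# four criterion-level primary indicators, and example sub-indicators.
hierarchy = {
    "Course quality": {                      # objective level (one element)
        "Faculty resources": ["teaching ability", "qualifications"],
        "Teaching process": ["teaching attitude", "teaching content"],
        "Teaching resources": ["basic", "auxiliary", "safeguard"],
        "Teaching effectiveness": ["implementation", "course standards"],
    }
}

criteria = list(hierarchy["Course quality"])  # criterion level
print(len(criteria))  # prints 4
```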

2.1 Faculty resources

Teachers are the central participants in curriculum teaching activities, overseeing curriculum development, design, and instruction. They bear the responsibility of teaching and educating students and of carrying forward human knowledge and civilization. An excellent teacher should not only demonstrate high professional ethics but also possess strong academic knowledge and skills. Moreover, teachers should prioritize their own professional development, actively engaging in career planning and training oriented toward long-term goals.

Teachers with high academic qualifications, strong professional competence, and exemplary ethical standards are the backbone of curriculum development and play a key role in improving teaching quality. This highlights that teachers must demonstrate professionalism, encompassing teaching ability, academic qualifications, professional background, and ethical standards, as shown in Table 1.

Table 1. Indicators of the hierarchy of teaching staff.

2.2 Teaching process

When evaluating the quality of the course, it is important not only to assess the quality of course design but also to observe the quality of course implementation. The implementation of the course aligns the course plan with actual practice to achieve the intended goals, involving teachers and students collaboratively constructing the course content. In this process, the concept of process quality arises, referring to the quality of the course content constructed by the two main participants, teachers and students. This quality should be evaluated in terms of teaching attitude, teaching content, teaching methods, assessment methods, and related aspects, as shown in Table 2.

Table 2. Teaching process level indicators.

2.3 Teaching resources

Teaching resources are a prerequisite for ensuring the successful completion of the teaching task and constitute essential elements of the teaching structure. Broadly, teaching resources include teachers, students, and all materials and tools that support teaching and learning. Narrowly, they refer specifically to resources that promote teaching. In this study, according to the curriculum requirements and students’ characteristics, teaching resources were mainly divided into three types of indicators: basic resources, auxiliary resources, and safeguard resources, as shown in Table 3.

Table 3. Teaching resource level indicators.

2.4 Teaching effectiveness

The effectiveness of teaching and its underlying qualities can be comprehensively evaluated through a multi-faceted, multi-level, and systematic approach. Student learning behaviors, learning outcomes, peer evaluations, and assessments of teaching reform achievements constitute the fundamental factors considered in the evaluation of teaching effectiveness (quality). To this end, a set of secondary indicators across three levels—implementation effectiveness, course standards, and teaching reform outcomes—has been proposed, as shown in Table 4.

Table 4. Teaching effect level indicators.

3 Basic steps of the comprehensive evaluation method

3.1 Construct factor set

First, construct a set of indicators for comprehensive evaluation. Let the overall evaluation factor set be U = {U1, U2, …, Um}, where m denotes the number of primary evaluation indicators. Each primary indicator Uk (k = 1, 2, …, m) can be further subdivided into several secondary indicators, forming its subset Uk = {uk1, uk2, …, ukl}, where l represents the number of secondary indicators.

3.2 Establishment of the comment set

Based on the specific characteristics of the fuzzy object to be evaluated, a rubric set is established to determine the range of the evaluation results for specific factors. In other words, the rubric set represents a partitioning of the evaluation factor’s result interval. Each rating describes a state of the evaluated factor, and all possible evaluations made by the evaluator constitute the complete set of ratings: v = {v1, v2, …, vn}, where n is the total number of rubrics.

3.3 Establishment of the fuzzy matrix

After establishing the rubric set, it is necessary to quantify the object being evaluated from the perspective of each factor. The fuzzy matrix is established by considering the affiliation degree of the corresponding fuzzy subset for the object under review from a single factor perspective:

S = ( r11 ⋯ r1m
       ⋮  ⋱  ⋮
      rn1 ⋯ rnm ).

In the matrix S, rij represents the affiliation of the evaluated object to the rating vj from the perspective of factor ui. The unique advantage of the fuzzy comprehensive evaluation method lies in the fact that it uses a set of membership vectors rather than a single scalar to characterize the performance of a factor, which is more expressive than traditional single-value evaluation models.
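To illustrate how such a membership matrix can be obtained in practice, the sketch below row-normalizes hypothetical rater tallies; the counts and dimensions are assumptions for illustration, not data from the study.

```python
import numpy as np

# Hypothetical tallies: rows are factors u_i, columns are ratings v_j;
# each entry is the number of evaluators assigning rating v_j under
# factor u_i (here 20 evaluators per factor).
tallies = np.array([
    [12, 6, 2, 0],
    [ 8, 9, 3, 0],
    [10, 7, 2, 1],
])

# r_ij = share of evaluators choosing v_j under u_i, i.e. the membership
# degree of the evaluated object in rating v_j from a single-factor view.
S = tallies / tallies.sum(axis=1, keepdims=True)
print(np.round(S, 3))  # each row is a membership vector summing to 1
```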

3.4 Determining the weights of the evaluation factors

There are multiple methods to determine the weights of evaluation factors. In this study, hierarchical analysis was employed to determine the weight of each factor in the model, denoted as W = {w1, w2, …, wn}. The hierarchical analysis determines the weights by analyzing the relative importance of each factor, followed by a normalization process to obtain the final weight values.

3.5 Synthesis of the fuzzy comprehensive evaluation matrix

In the fuzzy comprehensive evaluation method, the evaluation matrix is typically synthesized using the ‘weighted average’ approach, combining the weight vector W with the fuzzy judgment matrix S to obtain the fuzzy comprehensive evaluation result vector R. The specific calculation is shown in the following formula:

R = W · S = (w1, w2, …, wn) · ( r11 ⋯ r1m
                                 ⋮  ⋱  ⋮
                                rn1 ⋯ rnm )
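The weighted-average synthesis can be sketched numerically as follows; the weights and membership values are illustrative assumptions, not values from the study.

```python
import numpy as np

# Weighted-average synthesis R = W · S for three factors and four ratings.
W = np.array([0.4, 0.35, 0.25])              # factor weights (sum to 1)
S = np.array([[0.6, 0.3, 0.1, 0.0],          # row i: membership vector of
              [0.4, 0.45, 0.15, 0.0],        # factor i over the ratings
              [0.5, 0.35, 0.1, 0.05]])

R = W @ S                                     # fuzzy comprehensive result
print(R)  # R = [0.505, 0.365, 0.1175, 0.0125], components sum to 1
```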

4 Weight calculation of the evaluation indexes based on the AHP

The American operations researcher T. L. Saaty proposed the AHP, a decision analysis method that integrates quantitative and qualitative analysis, also referred to as ‘hierarchical analysis’ (Wang and Jiang, 2018; Zhu, 2005). Hierarchical analysis is a model and method applicable to complex systems that are difficult to quantify fully, facilitating informed decision-making.

In hierarchical analysis, a key element of the entire model is the precise determination of the weights at the criterion level. In general, however, numerous factors influence these weights, and it is difficult even for experienced experts to define them directly. Direct assignment often leads to incomplete consideration by the decision maker and to inconsistencies among the resulting weights, ultimately invalidating the entire hierarchical model. Therefore, the weights at the criterion level are usually determined by constructing a judgment matrix and comparing the factors pairwise. The ultimate goal of the evaluation system is first determined, and the influencing factors and their affiliations at all levels are organized hierarchically to form a judgment matrix.

Assume that the relative influence of n factors, X = {x1, x2, …, xn}, on a factor Z must be established. A pairwise comparison matrix can then be constructed by comparing the factors two at a time. Specifically, for any two evaluation factors xi and xj, their relative importance is compared, with the result recorded as a scale value aij. Conversely, when comparing the relative importance of xj and xi, the result is denoted as aji, where aji = 1/aij; the two values form a reciprocal relationship. Repeating this process for all pairs yields the comparison judgment matrix A = (aij)n × n. To determine the value of aij, this study assigned the entries of the judgment matrix using a 1–9 numerical scale, establishing a mapping between the numerical values and expert linguistic assessments. The specific correspondence between the numerical values and expert language is shown in Table 5. The values of the judgment matrix were then assigned based on actual inspection or consultation with experts. The relative importance of the factors at each level was first determined, followed by the calculation of the relative weights for each level. Finally, a consistency test was performed; the judgment matrix was accepted only when the consistency ratio satisfied CR < 0.10. Otherwise, the judgment matrix had to be adjusted.

Table 5. Criteria for assigning values in the judgment matrix.
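The weight derivation and consistency test just described (eigenvector weights, CR < 0.10) can be sketched as follows for one 3 × 3 judgment matrix on the 1–9 scale; the matrix entries are illustrative assumptions, not the experts' actual judgments collected in the study.

```python
import numpy as np

# Illustrative reciprocal judgment matrix: a_ji = 1 / a_ij, a_ii = 1.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Normalized principal eigenvector gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency: CI = (lambda_max - n) / (n - 1); CR = CI / RI, using
# Saaty's random index RI for matrix order n. Accept only if CR < 0.10.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]
CR = CI / RI

print(np.round(w, 3), round(CR, 3))  # CR is ≈ 0.03, below the 0.10 threshold
```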

Based on the composition of course quality and its influencing factors, this study established the indicator level of the course quality evaluation system, comprising four primary indicators, 14 secondary indicators, and 49 tertiary indicators. To ensure the scientific rigor and authority of the indicator weights, this study employed purposive sampling to carefully select 20 experts with extensive experience in mining engineering education, curriculum design, or higher education evaluation. Expert selection was primarily based on the following criteria:

1. Professional Qualifications and Experience: All experts possessed at least 8 years of teaching, research, or engineering practice experience in mining engineering or related fields, demonstrating a profound understanding of the content and quality requirements of the specialized curriculum.

2. Professional Title and Academic Qualification Composition: Among the experts, 40% held senior-level titles (professors, senior engineers with professor-level titles), 50% held associate senior titles (associate professors, senior engineers), and 10% held intermediate titles (lecturers, engineers). All experts had master’s degrees or higher, with 40% holding doctoral degrees. This composition of high professional titles and advanced academic qualifications ensured the depth and credibility of their evaluations.

3. Diverse Roles and Perspectives: The expert panel included various stakeholders: Full-time university faculty members (12 members) directly teaching Mining Engineering or related courses; academic administrators (four members); and representatives from major coal enterprises or design institutes (four members), who provided critical insights from engineering practice and talent-demand perspectives. This diversity ensured that the evaluation criteria adhered to pedagogical principles while closely aligning with industry needs.

4. Participation Willingness and Reliability: All invited experts were informed about the study objectives in advance and committed to completing the consultation diligently. A total of 20 consultation forms were distributed, with all 20 returned, achieving a 100% response rate. The expert engagement coefficient was 100%, indicating strong support and cooperation from the experts. In summary, the expert panel in this study was representative in terms of professional qualifications, academic titles, educational backgrounds, and role composition. Their collective judgment provided a reliable basis for determining the AHP weights. Based on the survey results, the degree of agreement for each weighting indicator was summarized.

Using hierarchical analysis, the evaluation system was decomposed into different hierarchical levels in the order of the target level, the criterion level, and the program level. The eigenvector of each judgment matrix was mathematically determined, and the weights of indicators at each level were calculated accordingly. The final weight values are shown in Tables 6–9.

Table 6. Teacher strength level indicator weights and approval rates.

Table 7. Curriculum resource level indicator weights and approval rates.

Table 8. Teaching process level indicator weights and approval rates.

Table 9. Teaching effectiveness hierarchy indicator weights and approval rates.

5 Application of course quality evaluation

The ‘weighted average method’ was used to construct the evaluation matrix in the fuzzy comprehensive evaluation method, taking the course ‘Coal Mining Science’ in the Mining Engineering major as an example. Teachers of this major evaluated the three-level indicators of the course according to four levels (fully satisfied: 100 points, satisfied: 80 points, basically satisfied: 60 points, and dissatisfied: 0 points), formed a fuzzy judgment matrix, and established a set of rubrics based on the specifics of the fuzzy object being evaluated. Each rubric level described the state of the evaluated factors, and the complete rubric set included all possible evaluation behaviors of the evaluator. After the rubric set was established, the evaluation content was quantified by factor. By considering the degree of membership corresponding to the fuzzy subset of the evaluated object from the perspective of a single factor, constructing a fuzzy matrix, and synthesizing it with the weight matrix obtained using the hierarchical analysis method, the final comprehensive evaluation result was 87.7634 points. Based on the analysis of the results, the following aspects should be strengthened.
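The final scoring step described above (membership vector weighted by the four level scores of 100, 80, 60, and 0 points) can be sketched as follows. The result vector R used here is an illustrative assumption; the study's own synthesis yielded 87.7634 points for the course.

```python
import numpy as np

# Level scores from the text: fully satisfied 100, satisfied 80,
# basically satisfied 60, dissatisfied 0.
scores = np.array([100.0, 80.0, 60.0, 0.0])

# Example fuzzy comprehensive result vector (assumed, sums to 1).
R = np.array([0.505, 0.365, 0.1175, 0.0125])

final = float(R @ scores)  # weighted-average final score
print(round(final, 2))     # prints 86.75 for this example vector
```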

5.1 Improvement in teachers’ teaching ability

The weights determined through the Analytic Hierarchy Process (AHP) in this study indicated that, among the four primary indicators of course quality, “faculty strength” and “teaching effectiveness” carried relatively higher weights. This finding highlights the central importance of “faculty leadership” and “student developmental outcomes” in the expert consensus. Further analysis of the sub-indicator weights revealed that “faculty’s ability to integrate cutting-edge disciplinary knowledge” and “teaching innovation and practical skills” are particularly critical. However, fuzzy comprehensive evaluation results indicated that case-based courses have room for improvement precisely in these high-weight indicators. This finding prompts the following targeted reflections and course design refinements:

1. Faculty members should actively engage in the latest disciplinary research, master cutting-edge subject knowledge, and deepen teaching practices. In particular, during instruction, they should focus on integrating recent research findings and pedagogical innovations into teaching practices to broaden students’ horizons and enrich instructional content.

2. Young faculty members are encouraged to participate in learning opportunities at production sites, design institutes, and other enterprises to achieve a closer integration of theoretical knowledge with practical field experience. High-quality platforms should be actively established to assist young core faculty in researching frontier knowledge within their disciplines.

3. Cutting-edge and contemporary course content is highly anticipated, yet current update mechanisms remain inadequate. Teachers should deepen their understanding of frontier knowledge to ensure dynamic content renewal. Teacher practice should be upgraded from observational visits to task-oriented “teaching internships” and incorporate “field case reflection” sessions to foster the integration of knowledge and practice.

4. Course portfolios should be established based on evaluation metrics, and a “micro-cycle” revision mechanism should be implemented, enabling course design to respond swiftly to assessment feedback.

5.2 Updating teaching content

The design of educational content is dynamic and time-sensitive. We should continuously adjust it to align with the current social context and changes in the broader disciplinary environment, ensuring that students acquire the most advanced knowledge with practical significance. For textbooks containing outdated traditional content, obsolete theoretical knowledge should be decisively discarded, while new technologies, contemporary ideas, and other relevant teaching content should be incorporated to optimize the configuration of teaching materials. The content should be closely linked to classroom teaching and actual engineering cases, effectively enhancing students’ ability to analyze and solve real-world engineering problems.

5.3 Innovative assessment content and methods

On the one hand, regarding assessment content, greater emphasis should be placed on evaluating knowledge application and innovation. On the other hand, assessment weighting should not solely focus on “hard metrics,” such as final exams; instead, greater importance should be assigned to “soft metrics,” such as ongoing evaluations and extracurricular activities. In terms of assessment methods, we must prioritize approaches that highlight students’ learning capabilities, flexibly adapting to course characteristics and educational objectives. Assessment methods should be tailored to different course types to ensure that evaluations effectively measure both teaching effectiveness and student learning outcomes. At the same time, outstanding innovation methodologies, assessment models, and evaluation systems from both domestic and international sources should be drawn upon. These should be appropriately adapted and synthesized into a set of methods tailored to this academic discipline (Wang et al., 2025; Prima et al., 2025).

5.4 Use of network teaching resources

Network-based teaching offers rich resources that facilitate students’ independent learning and effectively supplement course content. Colleges and universities should try to use network teaching resources to support instruction, using them as supplementary materials in addition to the core course resources to enrich the curriculum. Teachers should incorporate a variety of internet resources related to the course into the teaching platform, allowing students to freely download materials and engage in learning exchanges. In addition to using the internal network teaching resources of colleges and universities, sharing network teaching resources across regions and campuses should be promoted to maximize the benefits of network-based teaching.

6 Conclusion

This study used the mining engineering course “Coal Mining” as a case study to conduct qualitative and quantitative evaluations of course quality in China’s general higher education institutions, primarily employing the AHP and the fuzzy comprehensive evaluation method.

1. The weights of each indicator within the course quality evaluation system were rationally allocated using the AHP. Subsequently, the fuzzy comprehensive evaluation method was applied to assess the case course “Coal Mining” during the comprehensive evaluation of course quality.

2. Conducting both qualitative and quantitative evaluations of course quality using these two methods significantly enhances the reliability, rationality, and objectivity of the evaluation process. It also positively contributes to improving the practical applicability of the results.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

BF: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was supported by the Anhui Province Quality Engineering Project ‘Research and Application of Building Online Course Quality Evaluation System Based on Hierarchical Analysis’ (2020jyxm0470).

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The reviewer CZ declared a shared affiliation with the author to the handling editor at the time of review.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Chen, H. Y., Liu, W. T., Zhou, G., and Zhang, X. Y. (2020). Curriculum quality evaluation research under background of professional certification. Educ. Modern. 7, 98–101. doi: 10.16541/j.cnki.2095-8420.2020.54.028

Hendra, G. D. D., Gede, I. S., and Kadek, I. S. (2021). Digital test instruments based on Wondershare-Superitem for supporting distance learning implementation of assessment course. Int. J. Instr. 14, 945–964. doi: 10.29333/IJI.2021.14454A

Li, Y. K. (2017). The design of university course quality evaluation index system and empirical analysis. Dalian Univ. Technol. 20–28.

Li, Y. R., Hu, L. M., and Zang, B. (2021). Research on curriculum quality evaluation system of general education in universities: taking H university in Hefei as an example. J. Hefei Univ. Technol. (Social Sciences) 35, 133–137.

Li, D. B., Yan, Z. G., and Chen, J. (2021). Construction of curriculum quality evaluation system in the context of professional accreditation of engineering education. Heilongjiang Educ. (Theory & Practice) 4, 49–50.

Liu, H., Zhou, Z., Tian, Y. L., and Fan, M. M. (2025). Research on a multi-task teaching model for cryptography courses. J. High. Educ. 11, 94–99. doi: 10.19980/j.CN23-1593/G4.2025.35.020

Prima, A., Warman, Bahzar, M., and Nurlaili (2025). Basic education curriculum evaluation strategy in improving the quality of education. Asian J. Educ. Soc. Stud. 51, 702–712. doi: 10.9734/AJESS/2025/V51I102525

Wang, R., Chen, J., Fu, H. J., Zhao, J., and Fu, H. (2025). Evaluating the effectiveness of online course quality improvement based on the LDA-PMC model. Int. J. Web-Based Learn. Teach. Technol. 20, 1–20. doi: 10.4018/IJWLTT.393044

Wang, Y. S., and Jiang, X. (2018). The evaluation of teaching education quality of the general curriculum in high-level universities. J. Natl. Acad. Educ. Adm. 2, 70–77.

Zhang, Y. C., Yu, J. Y., Xia, C. R., Zhang, Y., Yu, J., Xia, C., et al. (2025). Evaluation of students’ satisfaction with OBE teaching modes in the manual therapy course by students’ evaluation of educational quality questionnaire. BMC Med. Educ. 25:1570. doi: 10.1186/S12909-025-08071-0

Zhao, C. Y. (2016). The problems and the improvement of the university course quality evaluation: based on the investigation of 49 universities. Res. Educ. Dev. 36, 44–51. doi: 10.14121/j.cnki.1008-3855.2016.23.008

Zhu, J. J. (2005). Research on some problems of the analytic hierarchy process and its application. Shenyang: Northeastern University.

Keywords: course instruction, course quality, curriculum quality, evaluation system, hierarchical analysis method

Citation: Fu B (2026) Constructing a course quality evaluation system based on hierarchical analysis and its application. Front. Educ. 11:1716286. doi: 10.3389/feduc.2026.1716286

Received: 20 November 2025; Revised: 10 January 2026; Accepted: 16 January 2026;
Published: 04 February 2026.

Edited by:

Shuai Zhang, Zhejiang International Studies University, China

Reviewed by:

Binbin Yang, Xuchang University, China
I. Kadek Suartama, Ganesha University of Education, Indonesia
Chengxing Zhao, Anhui University of Science and Technology, China

Copyright © 2026 Fu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Baojie Fu, 2375147541@qq.com
