Research Topic

The Use of Organized Learning Models in Assessment

About this Research Topic

Organized learning models (OLMs) are research-based content or cognitive frameworks that explicitly consider the relationships among precursor and successor nodes and that can be integrated with psychometric models to provide a powerful next generation of assessments. The term nodes refers to the competencies, content, practices, skills, or aspects of cognition that constitute the latent dimensions of the model. Organized learning models have also been referred to as cognitive learning models, learning progressions, learning ontologies, learning trajectories, developmental continua, and learning maps. The psychometric approaches applied to organized learning models have been referred to as cognitive diagnostic models, diagnostic classification models, and Bayesian network analysis.
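To make this structure concrete, the short Python sketch below represents a toy OLM as a directed graph in which edges run from precursor to successor nodes. The node names and relations are hypothetical, invented only for illustration; they are not drawn from any published learning map.

# A minimal sketch of an organized learning model (OLM) as a directed
# acyclic graph. Node names and precursor relations are hypothetical.

# Map each node to the precursor nodes it depends on.
OLM = {
    "count_objects": [],
    "compare_quantities": ["count_objects"],
    "add_within_10": ["count_objects"],
    "add_within_100": ["add_within_10", "compare_quantities"],
}

def precursors(node, olm):
    """Return every direct or indirect precursor of a node."""
    found = set()
    stack = list(olm[node])
    while stack:
        parent = stack.pop()
        if parent not in found:
            found.add(parent)
            stack.extend(olm[parent])
    return found

print(sorted(precursors("add_within_100", OLM)))
# -> ['add_within_10', 'compare_quantities', 'count_objects']

Even this small example suggests why the graph structure matters for assessment: evidence about a successor node carries implications about every node on the precursor paths beneath it.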

Organized learning models have been used for, and studied as components of, formative and summative assessment in many research projects. In the last decade they have increasingly been applied in operational testing programs, both with explicit integration of the learning and psychometric models (e.g., the Dynamic Learning Maps Alternate Assessment) and without it (e.g., the Smarter Balanced assessment system). Organized learning models hold the promise of providing results that better support teaching and learning.

Despite the increased use of organized learning models, there are many unanswered research questions, such as:
- What are the best methods for validating OLMs?
- What validation evidence exists for specific OLMs?
- Is there an optimal grain size in an OLM, and if so, does it vary by assessment purpose?
- What are the limits (and trade-offs) on the number of nodes per assessment?
- How many items are needed per node?
- Should items measure one or multiple nodes?
- What are the trade-offs of using one-to-one versus many-to-many relationships among OLM nodes?
- Which psychometric models work best with OLMs, and under what circumstances? (A minimal example of one such model is sketched after this list.)
- How does data sparseness impact estimation of psychometric model parameters used with OLMs?
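
As one concrete illustration of the psychometric side of these questions, the sketch below implements the DINA (deterministic inputs, noisy "and" gate) model, a widely studied diagnostic classification model, in plain Python. The Q-matrix linking items to OLM nodes, the mastery profile, and the slip and guess parameters are all hypothetical values chosen for illustration only.

import numpy as np

# Q-matrix: rows are items, columns are OLM nodes (latent attributes).
# A 1 means the item requires mastery of that node; these loadings are
# hypothetical and for illustration only.
Q = np.array([
    [1, 0, 0],  # item 1 measures node 1 only
    [0, 1, 0],  # item 2 measures node 2 only
    [1, 1, 0],  # item 3 measures nodes 1 and 2
    [1, 1, 1],  # item 4 measures all three nodes
])

# Hypothetical slip and guess parameters per item.
slip = np.array([0.10, 0.15, 0.10, 0.20])   # P(wrong | all required nodes mastered)
guess = np.array([0.20, 0.25, 0.20, 0.10])  # P(right | a required node not mastered)

def p_correct(alpha, Q, slip, guess):
    """DINA probability of a correct response for mastery profile alpha."""
    # eta[j] = 1 when the learner has mastered every node item j requires.
    eta = np.all(alpha >= Q, axis=1).astype(float)
    return eta * (1.0 - slip) + (1.0 - eta) * guess

alpha = np.array([1, 1, 0])  # mastered nodes 1 and 2 but not node 3
print(p_correct(alpha, Q, slip, guess))
# -> [0.9  0.85 0.9  0.1 ]

The Q-matrix makes several of the questions above tangible: each row is an item, each column a node, and the pattern of ones determines whether items measure one node or several and how many items touch each node.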

The goal of this special issue is to move the field of assessment forward by providing a broad view of progress made to date and identifying unresolved issues. To this end, we welcome theoretical, empirical, and policy research related to the use of OLMs in assessment. Review papers will be considered. We welcome submissions that analyze the use of OLMs with underserved learners.


Keywords: Organized learning models, cognitive learning models, learning progressions, learning ontologies, learning trajectories, developmental continua, learning maps, diagnostic classification models, cognitive diagnosis, Bayesian network analysis


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.




Submission Deadline

Manuscript submission: 31 May 2021


