
Mini Review

Front. Educ., 17 July 2020 | https://doi.org/10.3389/feduc.2020.00128

Defining the Boundaries Between Artificial Intelligence in Education, Computer-Supported Collaborative Learning, Educational Data Mining, and Learning Analytics: A Need for Coherence

  • 1Institute of Educational Technology, The Open University, Milton Keynes, United Kingdom
  • 2Smartlearning, Copenhagen, Denmark

This review provides a concise overview of four distinct research fields: Artificial Intelligence in Education (AIED), Computer-Supported Collaborative Learning (CSCL), Educational Data Mining (EDM), and Learning Analytics (LA). While all four fields focus on understanding learning and teaching with technology, each brings its own perspective on which theoretical frameworks, methods, and ontologies are appropriate. In this review we argue that researchers should be encouraged to cross the boundaries of their respective fields and work together to address the complex challenges in education.

Introduction

In the last 20 years a range of disciplines has developed within the broad field of education and technology. In the early 1980s the broad field of Artificial Intelligence in Education (AIED) emerged, aiming to use a combination of Artificial Intelligence (AI), learning theory, and educational practice to improve learning outcomes for learners using computers (Boyd et al., 1982; Holmes et al., 2019). Within AIED, various subfields of research emerged based upon the power of computing and machine learning, such as intelligent tutoring systems (Aleven and Koedinger, 2002), adaptive hypertext systems (Eysink et al., 2009; Romero et al., 2009), and Computer-Supported Collaborative Learning (CSCL). Since the early 1990s a range of CSCL publications has appeared exploring how learners and teachers could work together online using computers. A vast number of CSCL studies (e.g., Gunawardena, 1995; Roschelle and Koschmann, 1996; Fischer and Mandl, 2005; Rienties et al., 2009) have found that scaffolding, self-regulation, task design, and teaching presence are important concepts that can encourage learners to work together effectively.

In the mid-2000s a third stream of researchers (e.g., Baker and Yacef, 2009; Rosé et al., 2014) using Educational Data Mining (EDM) started to explore learning processes using bigger data sets and increased interconnections between data. Since 2011 a fourth research field, Learning Analytics (LA), has emerged, which focuses specifically on understanding complex learning processes and learning outputs using a multi-disciplinary combination of computer science, educational psychology, engineering, and the learning sciences (Ferguson, 2012; Papamitsiou and Economides, 2014). In this contribution we aim to define the potential boundaries and synergies between AIED, CSCL, EDM, and LA, and how a combined interdisciplinary perspective can help to maximize the potential of these four research fields to understand the complexities of learning and teaching using technology. This might be particularly relevant for researchers and practitioners who are new to these research fields. For a more detailed and deeper analysis of these fields, we encourage readers to consult the respective journals in Table 1.


Table 1. Overview of the four research fields of education and technology.

Four Perspectives on Computing, Learning, and Education

The boundaries between AIED, CSCL, EDM, and LA are rather blurred. In part, this is because researchers and practitioners from these respective fields look at similar, yet slightly distinct phenomena, and in part, this is because researchers often work in interdisciplinary research groups across the boundaries of their specific research focus (Jeong et al., 2014; Aldowah et al., 2019; Dormezil et al., 2019). Therefore, the characterizations of the four research fields below are by definition an oversimplification of their complex, inter-linked, and fluid perspectives, relations, methodologies, and ontologies. Given that these fields emerged, faded, merged, and re-emerged at various points in time, rather than giving a historical overview we will describe the fields in alphabetical order and in relation to the following aspects (see Table 1): (a) main aim/target, (b) educational and other underpinnings, (c) techniques and approaches, (d) society, and (e) conferences and journals.

Artificial Intelligence in Education

Although there is no single definition of AI, it broadly refers to “computers which perform cognitive tasks, usually associated with human minds, particularly learning, and problem-solving” (Baker et al., 2019, p. 10). It is an umbrella term for several methods, such as machine learning, data mining (DM), neural networks, and algorithms (Zawacki-Richter et al., 2019). Its roots can be traced back to computer science and engineering, with strong relations to economics, cognitive science, philosophy, and neuroscience (Popenici and Kerr, 2017; Holmes et al., 2019; Zawacki-Richter et al., 2019). As indicated in Table 1, the main aim of AIED is to simulate and predict learning processes. In terms of philosophical underpinning, a crucial assumption of AI, and of AIED in particular, is that any aspect of learning, or any other feature of intelligence, can be described so precisely that a machine is able to simulate it (Zawacki-Richter et al., 2019). In the last 20 years substantial progress has been made in machine learning, allowing researchers to understand, model, and simulate complex human behaviors, which are assumed to be rational. Popenici and Kerr (2017, p. 2) defined machine learning “as a subfield of artificial intelligence that includes software able to recognize patterns, make predictions, and apply newly discovered patterns to situations that were not included or covered by their initial design.” With the incredible advances of AI in other sectors (e.g., automotive, health care, manufacturing), there has recently been renewed interest in AIED (Tuomi, 2018; Zawacki-Richter et al., 2019).

For example, a review of 146 studies conducted between 2007 and 2018 (Zawacki-Richter et al., 2019) identified a range of applications of AI in higher education, including making admission decisions and course scheduling (Andris et al., 2013), assessment and feedback (Adamson et al., 2014), intelligent tutoring systems (Aleven and Koedinger, 2002), profiling and prediction of students dropping out (Rizvi et al., 2019), and student models and academic achievement (Rizvi et al., 2019). As identified by Zawacki-Richter et al. (2019), although substantial progress has been made in AIED, most studies are quantitative in nature; rely on human intervention studies with a control and an experimental group (Blanchard, 2012); lack reflection on risks, challenges, and ethical implications; and present a weak connection to relevant educational theories.
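Tree-based prediction of the kind used in dropout studies such as Rizvi et al. (2019) can be illustrated with a deliberately minimal sketch: a single-split “decision stump” rather than a full tree, trained on entirely invented data. The feature names, numbers, and thresholds below are ours for illustration only, not taken from any cited study.

```python
# Minimal "decision stump" sketch of tree-based dropout prediction.
# Real studies induce full trees (e.g., CART) over rich demographic and
# behavioral data; this toy version finds the single best split.

# Each student: (age, prior_credits), label 1 = dropped out, 0 = completed.
students = [
    ((19, 0), 1), ((22, 30), 0), ((35, 5), 1), ((24, 60), 0),
    ((41, 10), 1), ((20, 45), 0), ((28, 8), 1), ((23, 55), 0),
]

def best_stump(data):
    """Find the single feature/threshold split with the fewest errors."""
    best = None  # (errors, feature_index, threshold)
    n_features = len(data[0][0])
    for f in range(n_features):
        for threshold in sorted({x[f] for x, _ in data}):
            # Predict 1 ("will drop out") when feature value <= threshold.
            errors = sum(int((x[f] <= threshold) != bool(y)) for x, y in data)
            if best is None or errors < best[0]:
                best = (errors, f, threshold)
    return best

errors, feature, threshold = best_stump(students)

def predict(x):
    """Classify a new (age, prior_credits) pair with the learned split."""
    return int(x[feature] <= threshold)
```

On this toy data the stump learns to split on prior credits, separating both invented groups perfectly; real demographic data is of course far noisier, which is one reason full trees and careful validation are needed.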

Computer-Supported Collaborative Learning

A main aim of CSCL is to understand the complex interactions in and outside class settings. While AIED assumes that all learning can be described and simulated by machines, the CSCL literature often recognizes that learning is complex and socially constructed. McKeown et al. (2017, p. 439) argued that “(r)esearch in CSCL focuses on learning as a cognitive and/or social process and studies learning designs, learning processes, and pedagogic practices that support technology-mediated collaborative processes in communities of practice.” Given its focus on people working together, there are complex and dynamic interactions that may, or may not, be easily identifiable by computers (e.g., body language, cultural differences, emotions, linguistic styles). In order to develop and maintain a successful CSCL culture, Jeong et al. (2014) theorized that technology used for collaboration in CSCL needs to support: (1) a joint task, (2) communication, (3) sharing of resources, (4) engagement in productive processes, (5) engagement in co-construction, (6) monitoring and regulation, and (7) finding and building groups and communities. In face-to-face and blended learning scenarios this maintenance of successful discourse might be difficult to achieve, while for online settings there is a wealth of research showing the complexities of online collaboration (Fischer and Mandl, 2005; Rienties et al., 2009). For example, in a review of 180 articles published in CSCL conferences in the period 2005–2017, Xia and Borge (2019) found that most studies focused on interaction in classrooms (47%), technology implemented in classrooms (13%), technology implemented in informal settings (15%), and labs (11%). This strong focus on in-class analysis seems substantially different from AIED. Furthermore, CSCL has strong experimental and learning-science roots (Wise and Schwarz, 2017), whereby approximately half of the recent studies identified by Jeong et al. (2014) used a methodologically strong design. At the same time, several meta-reviews indicated a need for CSCL researchers to embrace more analytics and multi-level approaches to extend their methodological toolbox as well as the rigor of their studies beyond a single classroom or context (Jeong et al., 2014; Wise and Schwarz, 2017; Xia and Borge, 2019).

Educational Data Mining

The main aim of EDM could be succinctly described as analyzing data from educational systems. With the rise of educational data, EDM has gone from strength to strength (Koedinger et al., 2015; Dutt et al., 2017; Aldowah et al., 2019). Early literature reviews (Romero and Ventura, 2007, 2010) noted the need to consider pedagogical aspects when mining data from educational systems, and identified benefits for students and teachers when recommender systems are used. Building on the first EDM conference in 2008, EDM has been defined (Baker and Yacef, 2009) as “an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.” By using a range of DM techniques, EDM researchers aim to discover novel and potentially useful information in large amounts of data. As argued by a range of EDM researchers, while DM techniques are useful in big-data contexts, in education there is a need to adjust algorithms to specific contexts (Dutt et al., 2017). Koedinger et al. (2015) explained that EDM focuses on a range of research questions in the psychology of learning: (a) assessment of cognition and learning, (b) transfer of learning and discovery of cognitive models, (c) affect, motivation, and metacognition (Rosé et al., 2014), and (d) language and discourse analytics.

A desirable sequence of EDM research is to start with DM leading to new statistical models of data, followed by building an (adaptive) automated system, and finally, closing the loop by running an evidence-based experiment (Koedinger et al., 2015). In a review of 166 EDM studies, Dutt et al. (2017) identified five common clusters of studies: (1) analyzing student motivation, attitude, and behavior; (2) understanding learning styles; (3) e-learning; (4) collaborative learning; and (5) EDM using clustering. A particularly notable distinction between EDM and both CSCL and LA is EDM's lack of specific reliance on educational theory. Most EDM research is considered pedagogically and theoretically neutral, as the focus is on data discovery, testing of interventions, and optimizing models.
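The clustering strand identified by Dutt et al. (2017) can be sketched, under heavy simplification, with a hand-rolled one-dimensional k-means (k = 2) on invented weekly activity counts. Real EDM studies cluster high-dimensional log data and, as argued above, adapt their algorithms to the educational context; everything below is an illustrative assumption.

```python
# Toy sketch of clustering in EDM: group students by weekly activity
# counts using a hand-rolled one-dimensional k-means with k = 2.
# The activity counts are invented for illustration only.

activity = [2, 3, 4, 3, 40, 45, 38, 42]  # clicks per week, one per student

def kmeans_1d(xs, k=2, iters=20):
    """Lloyd's algorithm on scalars: assign to nearest centroid, re-average."""
    centroids = [min(xs), max(xs)][:k]  # spread the initial centroids apart
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda c: abs(x - centroids[c]))
            clusters[nearest].append(x)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d(activity)
```

On this toy data the algorithm separates a low-engagement group (mean 3 clicks) from a high-engagement group (mean 41.25 clicks), the kind of segmentation an EDM study might then relate to motivation or outcomes.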

Learning Analytics

The Journal of Learning Analytics defines LA as “… research into the challenges of collecting, analyzing, and reporting data with the specific intent to improve learning.” We define the main aim of LA as improving learning processes. Several higher education institutions and distance learning providers have started to explore LA dashboards that display learner and learning behavior to teachers and instructional designers in order to provide more real-time or just-in-time support to students (Jivet et al., 2018; Herodotou et al., 2020). Furthermore, several institutions have developed predictive LA approaches to help identify, as early as possible, students who may be considered “at risk” of failing, and which of those students may need additional support (Viberg et al., 2018; Herodotou et al., 2020). Some institutions are also experimenting with providing LA data directly to students in order to support their learning processes and self-regulation (Winne, 2017; Rienties et al., 2019).
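As a deliberately simplified illustration of such predictive flagging, a rule-based “at risk” indicator of the kind a dashboard might surface to teachers could look like the sketch below. The signals and thresholds are invented for illustration; production systems such as those cited above train statistical models on historical cohort data rather than using fixed rules.

```python
# Deliberately simplified sketch of an "at risk" flag, the kind of signal
# a predictive LA dashboard might surface to teachers. The signals and
# thresholds below are invented; real systems learn them from data.

def at_risk(days_since_login, assignments_missed, vle_clicks_last_week):
    """Return True when early-warning signals suggest a student may need support."""
    signals = [
        days_since_login > 14,        # prolonged absence from the platform
        assignments_missed >= 2,      # falling behind on assessment
        vle_clicks_last_week < 5,     # very low engagement this week
    ]
    return sum(signals) >= 2          # flag when two or more signals fire

# A teacher-facing dashboard could then list flagged students first,
# enabling the just-in-time support described above.
```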

As argued by a range of authors, the distinction between EDM and LA is rather unclear, as leading researchers from both fields contribute to similar themes and debates across the two fields (Aldowah et al., 2019; Dormezil et al., 2019). According to Papamitsiou and Economides (2014), the EDM and LA communities share compatible goals and a focus on the intersection of learning science and data-driven analytics. However, there are some subtle and some more explicit differences in their ontological origins, the techniques used, and, perhaps most importantly, their specific topics of interest. As argued by Papamitsiou and Economides (2014, p. 50), “LA adopts a holistic framework, seeking to understand systems in their full complexity. On the other hand, EDM adopts a reductionistic viewpoint by analyzing individual components, seeking for new patterns in data and modifying respective algorithms.” In a review contrasting 1,952 LA articles with 783 EDM articles, Dormezil et al. (2019) identified several common themes, such as “educational computing” and “student performance.” LA focuses mostly on instruction and communication, student learning objectives, and natural language processing. In contrast, EDM focuses on student performance and the technical specifications of predictive approaches, in particular “learning algorithms” and “student models.” Nonetheless, there is more overlap than distinct difference; Dormezil et al. (2019) argued that LA is probably best described as one domain with one prominent subset, that of EDM.

Discussion

This review has briefly explored the intersection between education and technology in four fields: AIED, CSCL, EDM, and LA. In the last decade tremendous progress has been made in understanding the complexities of learning and teaching with technology. With the rise and availability of big data in education and AI, substantial leaps in the conceptual, theoretical, and evidence-based understanding of learning and teaching have been made in the four fields discussed. However, as highlighted by a range of reviews, most of these innovations have remained localized in small lab studies, single courses, or specific contexts, with limited large-scale adoption within and across institutions (Viberg et al., 2018; Herodotou et al., 2020).

In order to truly make substantial leaps in the actual adoption of technology in large educational settings, achieve widespread uptake in educational institutions, and improve our understanding of the complexities of learning in ways that can advance our theoretical models, we argue that the four research fields need to break down some of the artificial barriers between their respective communities and jointly work together as one interdisciplinary research field. This can be achieved via a web of inter-related activities. First, national and international funding bodies should explicitly embrace and fund interdisciplinary research that cuts across the four (and other) fields. Second, building cross-disciplinary networking opportunities for researchers to learn from different disciplines might help to cross-fertilize different research ideas, methods, and approaches. This can be achieved “formally” by including specific tracks in conference programs, launching joint special issues, and running events together, as well as informally by encouraging research visits and invited seminars. Third, as highlighted in Table 1, substantial synergies are possible in terms of theoretical, empirical, and methodological advancement between the four fields. We argue that by bringing the best research minds together across the four fields, substantial progress can be made on some of the large challenges in education and society at large. In this direction, in the last few years we have seen several initiatives that attempt to bring these fields closer together, including the Festival of Learning and the creation of the International Alliance to Advance Learning in the Digital Era1, which brings the various societies included in Table 1 together.
In terms of next steps following this work, and given the short length of this article, a systematic and exhaustive review across the four fields would be particularly beneficial and would help establish how exactly these fields differ and overlap.

Author Contributions

All authors contributed to the article and approved the submitted version.

Funding

This article received funding from the Horizon 2020 Research and Innovation Program ERASMUS+ (KA203-2019-002).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

  1. ^ http://www.alliancelss.com/

References

Adamson, D., Dyke, G., Jang, H., and Rosé, C. P. (2014). Towards an agile approach to adapting dynamic collaboration support to student needs. Int. J. Artif. Intel. Educ. 24, 92–124. doi: 10.1007/s40593-013-0012-6


Aldowah, H., Al-Samarraie, H., and Fauzy, W. M. (2019). Educational data mining and learning analytics for 21st century higher education: a review and synthesis. Telemat. Inform. 37, 13–49. doi: 10.1016/j.tele.2019.01.007


Aleven, V. A. W. M. M., and Koedinger, K. R. (2002). An effective metacognitive strategy: learning by doing and explaining with a computer-based cognitive tutor. Cogn. Sci. 26, 147–179. doi: 10.1207/s15516709cog2602_1


Andris, C., Cowen, D., and Wittenbach, J. (2013). Support vector machine for spatial variation. Transact. GIS 17, 41–61. doi: 10.1111/j.1467-9671.2012.01354.x


Baker, R. S., and Yacef, K. (2009). The state of educational data mining in 2009: a review and future visions. JEDM J. Educ. Data Min. 1, 3–17. doi: 10.5281/zenodo.3554657


Baker, T., Smith, L., and Anissa, N. (2019). Educ-AI-Tion Rebooted? Exploring the Future of Artificial Intelligence in Schools and Colleges (London: Nesta). Available online at: https://www.nesta.org.uk/report/education-rebooted


Blanchard, E. G. (2012). “On the WEIRD nature of ITS/AIED conferences,” in Intelligent Tutoring Systems, eds S. A. Cerri, W. J. Clancey, G. Papadourakis, and K. Panourgia (Berlin: Springer).


Boyd, G., Keller, A., and Kenner, R. (1982). Remedial and second language English teaching using computer assisted learning. Comput. Educ. 6, 105–112. doi: 10.1016/B978-0-08-028111-7.50020-9


Dormezil, S., Khoshgoftaar, T., and Robinson-Bryant, F. (2019). “Differentiating between educational data mining and learning analytics: a bibliometric approach,” in LABBEC Workshop (Learning Analytics: Building Bridges Between the Education and the Computing Communities), 1–6.


Dutt, A., Ismail, M. A., and Herawan, T. (2017). A systematic review on educational data mining. IEEE Access 5, 15991–16005. doi: 10.1109/ACCESS.2017.2654247


Eysink, T. H. S., de Jong, T., Berthold, K., Kolloffel, B., Opfermann, M., Wouters, P., et al. (2009). Learner performance in multimedia learning arrangements: an analysis across instructional approaches. Am. Educ. Res. J. 46, 1107–1149. doi: 10.3102/0002831209340235


Ferguson, R. (2012). Learning analytics: drivers, developments and challenges. Int. J. Technol. Enhan. Learn. 4, 304–317. doi: 10.1504/ijtel.2012.051816


Fischer, F., and Mandl, H. (2005). Knowledge convergence in computer-supported collaborative learning: the role of external representation tools. J. Learn. Sci. 14, 405–441. doi: 10.1207/s15327809jls1403_3


Gunawardena, C. N. (1995). Social presence theory and implications for interaction and collaborative learning in computer conferencing. Int. J. Educ. Telecommun. 1, 147–166.


Herodotou, C., Rienties, B., Hlosta, M., Boroowa, A., Mangafa, C., and Zdrahal, Z. (2020). The scalable implementation of predictive learning analytics at a distance learning university: insights from a longitudinal case study. Internet High. Educ. 45:100725. doi: 10.1016/j.iheduc.2020.100725


Holmes, W., Bialik, M., and Fadel, C. (2019). Artificial Intelligence In Education: Promises and Implications for Teaching and Learning. Boston, MA: Center for Curriculum Redesign.


Jeong, H., Hmelo-Silver, C. E., and Yu, Y. (2014). An examination of CSCL methodological practices and the influence of theoretical frameworks 2005–2009. Int. J. Comput. Supp. Coll. Learn. 9, 305–334. doi: 10.1007/s11412-014-9198-3


Jivet, I., Scheffel, M., Specht, M., and Drachsler, H. (2018). “License to evaluate: preparing learning analytics dashboards for educational practice,” in Proceedings of the 8th International Conference on Learning Analytics & Knowledge (LAK’18), Sydney.


Koedinger, K., D’Mello, S., McLaughlin, E. A., Pardos, Z. A., and Rosé, C. P. (2015). Data mining and education. Wiley Interdiscip. Rev. Cogn. Sci. 6, 333–353. doi: 10.1002/wcs.1350


McKeown, J., Hmelo-Silver, C. E., Jeong, H., Hartley, K., Faulkner, R., Emmanuel, N., et al. (2017). “A meta-synthesis of CSCL literature in STEM education,” in Computer Supported Collaborative Learning, Philadelphia, PA.


Papamitsiou, Z., and Economides, A. (2014). Learning analytics and educational data mining in practice: a systematic literature review of empirical evidence. Educ. Technol. Soc. 17, 49–64.


Popenici, S. A. D., and Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Res. Pract. Technol. Enhan. Learn. 12:22. doi: 10.1186/s41039-017-0062-8


Rienties, B., Tempelaar, D. T., Nguyen, Q., and Littlejohn, A. (2019). Unpacking the intertemporal impact of self-regulation in a blended mathematics environment. Comput. Human Behav. 100, 345–357. doi: 10.1016/j.chb.2019.07.007


Rienties, B., Tempelaar, D. T., Van Den Bossche, P., Gijselaers, W. H., and Segers, M. (2009). The role of academic motivation in computer-supported collaborative learning. Comput. Human Behav. 25, 1195–1206. doi: 10.1016/j.chb.2009.05.012


Rizvi, S., Rienties, B., and Khoja, S. A. (2019). The role of demographics in online learning: a decision tree based approach. Comput. Educ. 137, 32–47. doi: 10.1016/j.compedu.2019.04.001


Romero, C., and Ventura, S. (2007). Educational data mining: a survey from 1995 to 2005. Expert Syst. Appl. 33, 135–146. doi: 10.1016/j.eswa.2006.04.005


Romero, C., and Ventura, S. (2010). Educational data mining: a review of the state of the art. IEEE Trans. Syst. Man Cybernet. Part C 40, 601–618. doi: 10.1109/TSMCC.2010.2053532


Romero, C., Ventura, S., Zafra, A., and Bra, P. D. (2009). Applying Web usage mining for personalizing hyperlinks in Web-based adaptive educational systems. Comput. Educ. 53, 828–840. doi: 10.1016/j.compedu.2009.05.003


Roschelle, J., and Koschmann, T. (1996). “Learning by collaborating: convergent conceptual change,” in CSCL: Theory and Practice of an Emerging Paradigm, ed. T. Koschmann (New Jersey: Lawrence Erlbaum Associates, Inc), 209–248.


Rosé, C. P., Carlson, R., Yang, D., Wen, M., Resnick, L., Goldman, P., et al. (2014). “Social factors that contribute to attrition in MOOCs,” in Proceedings of the First ACM Conference on Learning@Scale, Atlanta, GA.


Tuomi, I. (2018). “The impact of artificial intelligence on learning, teaching, and education,” in Policies for the Future, eds M. Cabrera, R. Vuorikari, and Y. Punie (Luxembourg: Publications Office of the European Union). Available online at: http://repositorio.minedu.gob.pe/bitstream/handle/MINEDU/6021/The%20Impact%20of%20Artificial%20Intelligence%20on%20Learning,%20Teaching,%20and%20Education.pdf?sequence=1


Viberg, O., Hatakka, M., Bälter, O., and Mavroudi, A. (2018). The current landscape of learning analytics in higher education. Comput. Human Behav. 89, 98–110. doi: 10.1016/j.chb.2018.07.027


Winne, P. H. (2017). Leveraging big data to help each learner upgrade learning and accelerate learning science. Teach. Coll. Record 119, 1–24.


Wise, A. F., and Schwarz, B. B. (2017). Visions of CSCL: eight provocations for the future of the field. Int. J. Comput. Supp. Coll. Learn. 12, 423–467. doi: 10.1007/s11412-017-9267-5


Xia, Y., and Borge, M. (2019). “A systematic review of the quantification of qualitative data in proceedings of international conferences on CSCL from 2005 to 2017,” in 13th International Conference on Computer Supported Collaborative Learning - A Wide Lens: Combining Embodied, Enactive, Extended, and Embedded Learning in Collaborative Settings, CSCL 2019, eds K. Lund, G. P. Niccolai, E. Lavoue, C. Hmelo-Silver, G. Gweon, and M. Baker et al. Lyon.


Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int. J. Educ. Technol. Higher Edu. 16:39. doi: 10.1186/s41239-019-0171-0


Keywords: artificial intelligence in education, computer-supported collaborative learning, educational data mining, learning analytics, review

Citation: Rienties B, Køhler Simonsen H and Herodotou C (2020) Defining the Boundaries Between Artificial Intelligence in Education, Computer-Supported Collaborative Learning, Educational Data Mining, and Learning Analytics: A Need for Coherence. Front. Educ. 5:128. doi: 10.3389/feduc.2020.00128

Received: 17 April 2020; Accepted: 29 June 2020;
Published: 17 July 2020.

Edited by:

Matthias Stadler, Ludwig Maximilian University of Munich, Germany

Reviewed by:

Judit García-Martín, University of Salamanca, Spain
Jessica Levy, University of Luxembourg, Luxembourg

Copyright © 2020 Rienties, Køhler Simonsen and Herodotou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Bart Rienties, bart.rienties@open.ac.uk