- Department of Journalism, Media and Communication (JMG), University of Gothenburg, Gothenburg, Sweden
This reflective essay outlines a positive-constructive pedagogy as an approach to enhancing AI literacies of different types, with a focus on the individual's personal relationship to AI technologies. It is argued that personal relationships with emerging technologies are crucial for the successful implementation of AI in society, as an understanding of AI requires personal contact with the technology and tacit knowledge about it in order to develop. AI thus poses a societal learning challenge, which can be met by giving space to the personal attitudes, affects, and reflections of media users from the outset, supporting them gradually toward a critical and deepened relationship with the tools and software they use.
Introduction
An individual's personal relationship with artificial intelligence (AI) is a decisive factor in shaping societal development. In this reflective essay, I outline a pedagogy that is individual-centered and, more particularly, focuses on the individual's relationship to AI technologies, pinpointing the positive and advantageous dimensions of these technologies. The essay is based on my personal experiences of workshops that I have run with journalism students, journalists, and other communication practitioners in 2024–2025. I focus in particular on four 2-h workshops with a similar structure, organized for academic researchers and communicators in October 2024 and February 2025, journalism students in November 2024, and public-service journalists in March 2025 in Sweden. The aim is to describe a pedagogical model representing what I call positive-constructive pedagogy (see, e.g., Seligman and Csikszentmihalyi, 2000).
I will first discuss three basic divides that typically define learners' perceptions of AI, arguing that there are at least three kinds of gaps between public presentations of AI and lived experiences that need to be taken into account in pedagogy. Following this, I will discuss why pedagogical approaches to AI education can greatly benefit from a positive-constructive approach and how the individual's perspective is treated as important in the current competence frameworks proposed by UNESCO. As a pedagogical application of these ideas, I will present the so-called 3E model, which puts these ideas into practice. The 3E model focuses on adoption and on building an initial understanding of AI technologies, which has been the focus of my workshops. Ideas concerning AI pedagogies are still a work in progress, and so is my effort to construct a pedagogical approach that would help learners become equipped with a baseline understanding of the new disruptive technologies.
Disentangling pre-conceptions of AI
I argue that there are three fundamental disparities that make AI a different kind of phenomenon in people’s everyday lives than in conceptual terms. First, there is a gap between the mediated imaginary of AI and its actual presence in daily life. Second, there is a disparity between the abstract nature of the concept and its concrete manifestations in users’ everyday experiences, seen as narrow AI. Third, there is a disproportion between organizational decision-making and individual decision-making when it comes to the selection and use of AI tools. Next, I will discuss these gaps in more detail.
Imagined vs. lived AI divide
Studies of newspaper coverage of AI have found that industry sources dominate public attention. For example, a content analysis by the Reuters Institute for the Study of Journalism (Brennen et al., 2018) showed that almost 60% of news articles across different outlets covered new industry products—from new versions of familiar gadgets such as smartphones to wearable, portable, sci-fi-like assistants and robots. These products are typically promoted and commented on by male stakeholders who stand to gain financially from AI (Brantner and Saurwein, 2021; Brennen et al., 2018; Ouchchy et al., 2020); for example, almost 12% of all articles in the Reuters study included a reference to Elon Musk. The dominance of insider and tech-expert views overshadows experts from academia and government, female experts, activists, non-governmental organizations (NGOs), and other civil-society representatives, as well as ordinary customers, educators, and citizens.
Moreover, as is well documented, popular cultural imaginaries—prominently represented in mainstream cinema through films such as The Matrix, Terminator, or WALL-E—tend to construct narratives of AI that are heavily dystopian. These portrayals frequently revolve around fears of technological singularity and scenarios in which intelligent machines surpass humans and ultimately take over control of societies. Alternatively, AI is often anthropomorphized or personified, reflecting deeply rooted cultural motifs evident in films such as Her and Simone, where emotional or romantic relationships between humans and (female) AI entities form the central narrative arc. The tendency to name chatbots and to design humanoid robots only reinforces the unfortunate word choice at the heart of the term AI: “intelligence,” a word that inherently suggests human-like cognitive capacities. These imaginaries shape our expectations even if we are conscious of their constructed nature, and they may influence our learning.
In contrast to the grand cultural narratives surrounding AI, the everyday experience of using available tools and technologies can appear rather underwhelming. Once the initial sense of wonder fades—such as witnessing a generative AI model produce a coherent text for the first time—the capabilities of the AI tools currently accessible to the general public remain markedly modest compared to the futuristic visions popularized in media and fiction. AI tools fail at tasks that are simple for a human being with an emotional repertoire, and their output is often quite formulaic and repetitive.
One productive way to challenge these dominant narratives is to begin with definitional clarity—or, perhaps more usefully, with inverse definitions. Instead of asking what AI is, a valuable entry point for AI pedagogy may be to point out what it is not. For individuals newly encountering AI—many of whom may only recently have heard of tools such as ChatGPT, even though, according to the Swedish Internet Foundation (2024), approximately one-third of the population has already used it—it may come as a surprise that the concept of AI is far from new: its origins can be traced back to the mid-1950s.
Conceptual vs. applied AI divide
Since AI has been defined variously as technologies, applications and tools, fields of knowledge, and fields of research, learners may find it challenging to grasp in conceptual terms. Our everyday experiences do not always let us distinguish AI from the software that we are using: AI can be both integrated into regular software and stand out in separate tools marketed as AI solutions. In its original formulation, AI refers to “the ability of machines to use language, develop abstractions and concepts, and handle problems usually reserved for humans and improve their own performance” (McCarthy et al., 1955). It thus refers to the capabilities of computer systems or intelligent agents to perform tasks that were once considered exclusive to human intelligence and to enhance these capabilities through autonomous learning—a trait traditionally attributed to humans. In this sense, AI is better understood as a characterization of a set of potentials to be harnessed in interaction rather than as a concrete or easily extractable entity.
Indeed, policymakers and public educators have started describing AI as an umbrella term or concept referring to a variety of technologies. The Swedish Internet Foundation (2024) suggests on its website that AI is “[a] collective term for computer programmes and tools that are designed in various ways to resemble human thinking,” adding that, “for example, computers should be able to reason and plan, learn from new information and much more.” Approaching AI as a comprehensive term may also make it easier to understand how it is manifested in different types and forms, such as generative AI (genAI). Whereas general AI is a very broad and abstract term, and even narrow AI might appear less tangible, genAI seems more appealing because of its concrete form, in particular in large language models such as ChatGPT, Copilot, or Claude, of which most citizens nowadays have some awareness. Drawing on the umbrella concept, AI can be conceptualized as a pool of resources—or, more concretely, tools—that an individual can choose to add to his or her toolkit.
Organizational vs. individual divide
There is also a gap between organizations and individuals in terms of adopting and using AI. In organizations, decisions regarding the use (or non-use) of tools, as well as in-house AI development, are made top-down at the strategic level. Individuals are typically either prohibited from or permitted to use certain tools but are rarely encouraged to choose tools based on their own preferences, largely due to concerns about privacy and data security. In informal and private contexts, however, individuals are free to choose whichever tools they prefer—and must also carry out their own risk assessments. This may not sound surprising, but it has consequences for pedagogies.
Because of this gap, professional and private uses are often two different matters. In organizational settings, which I will later call the backstage, you are not allowed to test everything freely: if the organization only uses Copilot, you need to build all your understanding on that tool. In private use, which I will call the personal stage, you can experiment and learn—that is, harness your own settings for learning. Those who actively test and learn by trial can be better equipped to encounter AI-driven technologies in the workplace, as they have developed a sense for critical prompting.
Toward positive-constructive pedagogy
Positive-constructive pedagogy is an educational approach that merges the core ideas of constructivism with a strong, affirmative focus on learners' development and wellbeing. The pedagogy seeks to evoke trust in the learner toward the technologies, despite the manifold critiques that can be leveled against them, and while acknowledging the associated risks. It highlights each learner's strengths, potential, and intrinsic motivation, while also involving the emotional and affective dimensions of both the technology relationship and learning. Even though many dimensions of technology adoption are emotional and practical—such as attitudes toward using tools, perceived ease of use, and perceived usefulness—technologies are often discussed without this individually nuanced underpinning.
Positive-constructive pedagogy encourages collaborative learning and dialogue, which allows learners to engage with diverse perspectives and co-construct understanding. Learning activities are often experiential and reflective, helping learners connect theory to practice while also examining their assumptions and insights. Teachers play a supportive role by offering affirming, constructive feedback that nurtures both competence and confidence.
Individual’s technology relationship
To conceptualize the individual's relationship to AI technologies, I outlined a model of different stages of production in the context of academic uses of AI (Jaakkola, 2024). This model suggested that there are three spheres where an individual's relationship to AI emerges and is shaped, which I called the personal stage, the backstage, and the front stage (see Figure 1):
1. Personal stage: the private sphere of personal use and informal, often self-determined learning where you can choose, test, and (mis)use tools according to your own preferences.
2. Backstage: the organizational environment, such as the workplace (a newsroom, school, public authority office, or classroom), where the organization's guidelines define the uses (which tools to use and not to use, and how) and learning efforts are connected to collective cycles of knowledge production and workflows.
3. Front stage: the public or semi-public space where the outcome of work is presented to an audience—whether as a journalist, teacher, researcher, etc.—and where both individual and organizational transparency about the uses is expected as an act of responsibility.

Figure 1. The three stages of AI uses. The images were produced with the AI tool Playground AI with prompts referring to the following concepts: “researcher examining a computer,” “researcher and a team analysing results,” and “researcher presenting results to an audience” (see Jaakkola, 2024, p. 5).
At the individual level of the personal stage, a person can freely select tools and software, engaging in exploration, testing, and experimentation without immediate organizational constraints. At this stage, the individual needs to take responsibility for his or her own actions and assess how to protect him- or herself in terms of data security and information hygiene, as well as how many resources to invest in tools and processes. In the backstage, by contrast, the employer or institutional context establishes guidelines and regulations that inform and restrict the individual's actions, demarcating the areas of acceptable and prohibited uses of AI technologies. For instance, many organizations—whether universities, companies, public authorities (such as schools), or newsrooms—implement specific policies that permit the use of certain platforms, such as Microsoft's AI tools, while restricting or prohibiting others. A common example is the preferential use of Copilot over ChatGPT, or of AI systems developed by the organization itself over external tools and services.
Finally, at the front stage, the uses at both the personal stage and the backstage need to be assessed and described to increase the transparency and truthfulness of the outcome that has been generated through co-intelligence between human actors and a machine. Some forms of communication, such as journalism, apply stricter ethical frameworks for transparency and accountability than others, such as informal communication between citizens. The practices developed for the front stage also significantly define what possibilities citizens and audiences have to assess the truthfulness of content and to establish trust in the producers of content or information.
The three stages are interconnected and cumulatively contribute to an individual's experiences with AI and, in the long term, to their AI literacy. The key is fostering critical reflection on these experiences. The workshops described here are grounded in the idea that gaining personal, experiential knowledge of AI tools is essential for understanding their implications and potential. In this essay, I argue that this individual, experience-based perspective represents a crucial component in the development of emerging pedagogical approaches aimed at supporting effective teaching and learning about AI.
Competence framework and iteration
UNESCO, the United Nations agency responsible for global educational visions, with particular attention to the Global South, has outlined competency frameworks for learners (Miao et al., 2024) and educators (Miao and Cukurova, 2024). These frameworks include a dozen different competencies across four dimensions: a human-centered mindset, ethics of AI, AI techniques and applications, and AI system design (see also Jaakkola, 2023; Deuze and Beckett, 2022; Ioscote et al., 2024).
These competencies also span three progression levels—“understanding,” “applying,” and “creating”—ranging from the foundations of knowing to skills of action and, finally, to the capacity to design systems. While there is still no consensus on whether learners in formal education need to master coding or create software themselves, or whether a sufficient level of competency can be limited to a more superficial use of tools (Moreno-León et al., 2016; Green, 2018), it is evident that acquiring creation skills contributes to a deeper understanding of reception and to transversal skills such as problem-solving (Popat and Starkey, 2019).
One of the fundamental aspects of access to and navigation of technologies is whether the user chooses to adopt certain tools or technologies or refrains from their use. Even if the user opts for use, there is a wide variety of options to choose from. Many basic functions can be carried out with the help of general-purpose large language models, such as ChatGPT, Claude, or Gemini, in which distinct functions of data management and analysis are integrated. Still, there are also specialized tools that can be used for more specific purposes, such as creating a synthetic voice or musical pieces. In this respect, models of technology adoption may cast light on what factors affect these choices (see, e.g., Lai, 2017). These models show that factors such as perceived ease of use and self-efficacy affect decisions to start using a tool (Venkatesh et al., 2003).
Frameworks for organizational learning are likewise based on iterative processes of testing and integration. As proposed by Nonaka and Takeuchi (1995), knowledge is created and managed in organizations through the processes of socialization, externalization, combination, and internalization. Their so-called SECI model implies a continuous process in which knowledge is shared, articulated, systematized, and internalized, enabling organizational learning. They emphasized that knowledge creation is not merely about processing information but about mobilizing human commitment and fostering shared understanding. Similar ideas about continuous processes of verbalization are used in models of reflective learning (Higgins, 2017) and experiential learning (Kolb, 2015). In a similar way, organizations are expected to reach AI (or other technological) readiness through dynamic processes between people, social processes, data, and technologies (Uren and Edwards, 2023).
Pedagogical application: the 3E model
The workshops were structured according to a three-part model based on principles of professional reflection, experiential learning, and tacit knowledge production. I refer to this as the 3E model, consisting of the phases Enter, Experiment, and Exit. The Enter phase is about becoming familiar with the concept of AI and approaching theoretical or conceptual knowledge from a personal perspective. Participants are encouraged to explore their preconceptions, identify and overcome fears or barriers, and build a foundational understanding. In the Experiment phase, participants engage hands-on with tools and methods, gaining first-hand insights through trial and error. The focus is on learning by doing.
In the final Exit phase, participants reflect on their experiences and attitudes, step back to see the broader picture, and collaboratively develop shared practices, such as guidelines or “house rules.” The model is not only learner-centered but also focused on positive gains and effects; only in the final phase are the risks, restrictions, and possible harms addressed and explicitly included in the picture. If the model is repeated, reflection informs re-entry. Below, I briefly touch upon some pedagogical assignments that can be applied in these phases.
Enter
To establish the individual perspective, an initial pedagogical approach to AI can aptly begin by addressing preconceptions and attitudes among learners. This includes both alleviating unfounded fears and tempering inflated expectations—two extremes that, as mentioned earlier, often characterize public perceptions shaped by media narratives. Educators and facilitators have the important task of placing AI technologies within a realistic and comprehensible context by providing factual, balanced information. Equally important is the effort to understand the subjective meanings individuals assign to AI. Each person's background, experiences, and exposure to technology shape their perceptions, influencing both their openness and their resistance to engaging with AI tools. Identifying these personal barriers and underlying attitudes—whether they stem from uncertainty, lack of knowledge, ethical concerns, or past negative experiences—is crucial.
Example 1. Learners are invited to discuss their previous experiences with ChatGPT. For what purposes have they used it, and what were the outcomes? Are they utilizing its full potential? These experiences are documented on a shared whiteboard or similar platform to make the diversity of use visible. By exchanging experiences, learners can discover how others engage with the tool and consider new ways it might be used. They may also identify gaps in their knowledge and skills. Through shared discussion, the tool becomes more approachable—more “tamed”—as learners begin to attach personal meaning to it and integrate it into their reasoning.
Learners can also be invited to list the pros and cons of their own uses or of journalistic uses more broadly. Providing an initial reflective space for participants to explore their attitudes and emotional responses to technology lays the groundwork for more focused and meaningful learning. Encouraging learners to consider their relationship with technology in general—and with AI, particularly genAI, which is currently the most prominent and user-friendly form—enables them to develop a holistic understanding of themselves as users and fosters components of self-efficacy. Once this foundation is established, the process of acquiring knowledge and practical skills through experimentation and hands-on testing, which requires a degree of confidence and self-perceived competence, tends to proceed more effectively.
Experiment
We know that ICT or computer skills cannot be taught without connecting exercises to a context. By freely experimenting with different tools, learners gain first-hand contact with and tacit knowledge about how tools are used and how they need to be mastered.
Example 2. Learners are invited to test tools focusing on different media types by creating a written text, an audio text, a visual text, and an audiovisual presentation. Potential tools are listed for learners in each category, and learners can choose which tool to experiment with. They may produce different versions by varying prompts or by comparing tools designed for the same procedure. They are then asked to share their presentations with the others.
Learners can also be asked to create visualizations of a given group of people or theme—for example, minorities or gender-related topics—to engage with the various biases and instances of cultural insensitivity that may occur in AI systems. By prompting the generation of images related to value-laden themes, such biases can be revealed and critically examined. As prompt designers, learners are empowered to influence and revise the outcomes themselves. This active engagement fosters a deeper understanding of how human–machine interaction works and helps them build confidence in navigating and shaping co-intelligence.
Exit
In the Exit phase, the outcomes of the experiments are integrated with reflection. Some critics may argue that an approach focusing on me-centered perspectives risks overlooking the broader macrostructures that shape everyday experiences, such as political, economic, and institutional forces. However, these wider structures can be meaningfully addressed through the lens of individual experience, particularly by encouraging contextualization and critical reflection. A key pedagogical objective in engaging learners with AI, therefore, lies in making visible the often-invisible infrastructures—technological, organizational, and societal—that individuals rely on in personal use (at the personal stage), within institutional or organizational boundaries (the backstage), and in their public or professional roles (the front stage). Educators can help learners situate their experiences within larger systems of power and influence, fostering both individual agency and structural awareness.
Example 3. The outcomes of the experiments are analyzed in pairs or small groups in terms of opportunities and risks. By weighing opportunities against risks, learners acquire a more balanced picture and can assess the value of the tools in question for the specific tasks they may want to accomplish—for example, whether or to what extent to use AI for the creation of a journalistic podcast and its promotion online. In this phase, the educator takes a more questioning role, attempting to guide learners toward greater critical distance from their uses and technology relationships.
While single-session workshops can only serve as an entry point to the broader process of continuous learning, they must introduce participants to the foundations of a personal relationship with AI—one that encompasses attitudes, skills, usage practices, and critical reflection. Ideally, learners leave with not only an understanding of the concepts but also a sense of direction for how to further develop and refine this relationship over time, supported by the initial scaffolded experience. Experiential and affective dimensions, which involve the pivotal component of tacit knowledge, can be complemented with the reflections brought forward by self-determined study. Self-study materials such as MOOCs (massive open online courses) and open educational resources (OERs) are widely available.
Another criticism that the positive approach, with its focus on individual relationships, may encounter is that it risks being insufficiently critical. By prioritizing the notion of AI as a resource, it may downplay the significant risks, potential harms, and disadvantages associated with AI. However, as public discourse on AI is already largely shaped by these risks, emphasizing them further may raise the thresholds for making decisions about use. Cultivating a critical distance instead involves encouraging reflection on the implications of use and non-use, the modes and contexts of application, and the broader societal and ethical frameworks in which these technologies operate. Such metaperspectives are most effectively developed through structured opportunities for reflection, which allow learners to move beyond surface-level engagement and consider deeper questions about power, responsibility, and the socio-technical dynamics of AI.
Conclusion
In this essay, I have explored how developing and strengthening an individual's trusting relationship with AI technologies can be conceptualized and translated into the pedagogical practice of the 3E model. With the threefold model of Enter, Experiment, Exit, educators can visit all stages of AI use in varying didactic settings.
The positive-constructive pedagogical approach to AI embodied in the 3E model can be contrasted with public discourses that often emphasize potential risks and pitfalls from the outset. The approach rests on the premise that critical awareness and reflective distance toward technologies are most effectively cultivated through direct, personal engagement. Hands-on experience allows learners to develop more nuanced, relevant, and proportionate insights grounded in observation and evidence. Rather than instilling fear or skepticism prematurely, this pedagogy encourages exploration and informed judgment, thereby empowering individuals to critically assess AI technologies based on their own encounters and contextualized knowledge.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
MJ: Methodology, Supervision, Writing – review & editing, Investigation, Conceptualization, Validation, Data curation, Writing – original draft, Funding acquisition, Resources, Formal analysis, Software, Visualization, Project administration.
Funding
The author(s) declare that no financial support was received for the research and/or publication of this article.
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author declares that no Gen AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Brantner, C., and Saurwein, F. (2021). Covering technology risks and responsibility: automation, artificial intelligence, robotics, and algorithms in the media. Int. J. Commun. 15, 5074–5098.
Brennen, J. S., Howard, P. N., and Nielsen, R. K. (2018). An industry-led debate: How UK media cover artificial intelligence. Reuters Institute for the Study of Journalism.
Deuze, M., and Beckett, C. (2022). Imagination, algorithms and news: Developing AI literacy for journalism. Digit. J. 10, 1913–1918. doi: 10.1080/21670811.2022.2119152
Green, S. (2018). When the numbers don’t add up: Accommodating data journalism in a compact journalism programme. Asia Pac. Media Educ. 28, 78–90. doi: 10.1177/1326365X18766767
Higgins, D. (2017). Reflective learning in management, development and education. 2nd Edn. London: Routledge.
Ioscote, F., Gonçalves, A., and Quadros, C. (2024). Artificial intelligence in journalism: A ten-year retrospective of scientific articles (2014–2023). Journal. Media 5, 873–891. doi: 10.3390/journalmedia5030056
Jaakkola, M. (Ed.) (2023). Reporting on artificial intelligence: A handbook for journalism educators. Paris: UNESCO.
Jaakkola, M. (2024). Academic AI literacy: Artificial intelligence in scholarly writing, editing, and publishing. Nord Media Network Online Educational Resources. Nordicom. doi: 10.57943/rp4x-xy56
Kolb, D. A. (2015). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Pearson Education.
Lai, P. C. (2017). The literature review of technology adoption models and theories for the novelty technology. JISTEM J. Inf. Syst. Technol. Manag. 14, 21–38. doi: 10.4301/S1807-17752017000100002
McCarthy, J., Minsky, M. L., Rochester, N., and Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. Available online at: https://raysolomonoff.com/dartmouth/boxa/dart564props.pdf (Accessed June 24, 2025).
Moreno-León, J., Robles, G., and Román-González, M. (2016). Code to learn: Where does it belong in the K-12 curriculum? J. Inf. Technol. Educ. Res. 15, 283–303. doi: 10.28945/3521
Nonaka, I., and Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford: Oxford University Press.
Ouchchy, L., Coin, A., and Dubljević, V. (2020). AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI Soc. 35, 927–936. doi: 10.1007/s00146-020-00965-5
Popat, S., and Starkey, L. (2019). Learning to code or coding to learn? A systematic review. Comput. Educ. 128, 365–376. doi: 10.1016/j.compedu.2018.10.005
Seligman, M. E. P., and Csikszentmihalyi, M. (2000). Positive psychology: an introduction. Am. Psychol. 55, 5–14. doi: 10.1037/0003-066X.55.1.5
Swedish Internet Foundation (2024). Svenskarna och AI [The Swedes and AI]. Available online at: https://svenskarnaochinternet.se/utvalt/svenskarna-och-ai-2024 (Accessed June 24, 2025).
Uren, V., and Edwards, J. S. (2023). Technology readiness and the organizational journey towards AI adoption: an empirical study. Int. J. Inf. Manag. 68:102588. doi: 10.1016/j.ijinfomgt.2022.102588
Keywords: artificial intelligence, positive-constructive pedagogy, AI literacies, personal relationship, technology education
Citation: Jaakkola M (2025) Operationalizing positive-constructive pedagogy to artificial intelligence: the 3E model for scaffolding AI technology adoption. Front. Commun. 10:1610111. doi: 10.3389/fcomm.2025.1610111
Edited by:
Kelly Merrill Jr, University of Cincinnati, United States
Reviewed by:
Janne Fagerlund, University of Jyväskylä, Finland
Cintia Ines Boll, Federal University of Rio Grande do Sul, Brazil
Iris Heung Yue Yim, University of Cambridge, United Kingdom
Copyright © 2025 Jaakkola. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Maarit Jaakkola, maarit.jaakkola@gu.se