
CONCEPTUAL ANALYSIS article

Front. Psychol., 03 December 2025

Sec. Theoretical and Philosophical Psychology

Volume 16 - 2025 | https://doi.org/10.3389/fpsyg.2025.1638774

This article is part of the Research Topic: Advancing Personal Construct Psychology: Interdisciplinary Perspectives and Applications

Constructivist psychology principles of human–AI collaboration

Jelena Pavlović*

  • Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia

Generative artificial intelligence (GenAI) is becoming part of our lives, personally and professionally. Much of the current discourse focuses on its capabilities and ethical risks, while some important questions remain underexplored. For example, how should we collaborate with GenAI? What kinds of psychological assumptions about humans and knowledge should guide interaction between people and GenAI? This conceptual paper proposes that constructivist psychology, with its focus on meaning making, complexity, anticipation and agency, offers a promising framework for human–AI collaboration. Rather than treating GenAI as merely a tool for increasing efficiency whose products we passively consume, the paper invites a rethinking of this collaboration as co-construction between human and artificial meaning making. First, the paper traces the historical convergence of metaphors between computer science and psychology. An articulation of core constructivist principles for human–AI collaboration follows. The paper also outlines a conceptual foundation for designing AI agents based on the principles of constructivist psychology. Finally, implications, limitations and future studies are outlined. The general genre of the paper is not deconstructive, in the sense of unravelling the biases or positionings that GenAI may enact; rather, it explores how we can redirect existing biases and positionings with principles of constructivist psychology.

Introduction

Human and artificial meaning making: a history of recursive metaphors

Studies of minds and machines seem to have a shared history. The cognitive revolution of the mid-20th century marked the beginning of the computer metaphor in studies of human cognition (Miller, 2003). The project of cognitive psychology used concepts developed by computer science to model how we think. We started to think about thinking through the lens of information processing, storage, retrieval, input, output and errors. For example, the study of human memory was framed in terms of short-term and long-term storage, encoding and decoding of information, retrieval failure and so on. Similarly, attention was modeled as a kind of central processing unit (CPU) capable of processing a limited amount of information, filtering irrelevant data and managing information flow. Human decision-making was likewise broken into logical steps resembling computer algorithms. Studies of language followed the computer metaphor as well, with the goal of finding the rules of language that the human "processor" uses to derive meaning. The "mind as computer" metaphor was fruitful and led to the development of important models of the human mind (Gigerenzer and Goldstein, 1996).

At the same time, computer scientists started using the "machine as mind" metaphor to develop the computer capabilities of the time. The ideas of "artificial intelligence" and "machine learning" convey this pattern of recursive metaphors. Turing (1950) focused on the question of whether machines could think and learn. His famous test was based on the criterion of successful imitation: if machines could convincingly imitate humans, this could be considered a form of machine "thinking." Turing (1950) had in mind the "imitation" of thinking, rather than thinking itself. In other words, Turing's view was that the question "can machines think?" should be replaced with "can machines do well in the imitation game" of thinking tasks. Moreover, Turing introduced the concept of "learning machines" that included randomness in computation, a parallel to the biological randomness that allows for genetic recombination in evolution. In sum, Turing (1950) thought that a fruitful direction for computer science was making machines think and learn as humans do, which also included making mistakes and learning again.

While cognitive science was built on the implicit determinism of the computer science of its time, computer scientists were envisioning probabilistic, predictive machines. At about the same time, personal construct psychology (PCP; Kelly, 1955) was being introduced in a similarly visionary way, with its focus on human anticipation. The key human process in the PCP model is anticipating what comes next and crafting a "working theory" out of it. The "theory" can be validated or invalidated, and the concept of invalidation has an evolutionary frame similar to Turing's machine "mistakes." Another similarity may be seen in the pragmatic definition of truth. According to Turing (1950), if a machine operates in a way that cannot be distinguished from a human, then for practical purposes it can be considered "thinking." According to Kelly (1955), if a person's "theory" enables effective anticipation, its predictive efficiency defines it as "true."

Perhaps most importantly, Kelly's PCP, with all its subsequent constructivist developments (Chiari and Nuzzo, 1996; Fransella, 2016; Mair, 2000; Neimeyer, 1993; Pavlović, 2011, 2015; Procter and Winter, 2020; Raskin, 2002; Stojnov, 2003; Tschudi, 1977), implicitly recognized the complexity of the anticipation process. In this paper, constructivist psychology is used as a general term for approaches, including PCP, that focus on meaning making. It can be argued that constructivist theory, its view of humans and its strategies of intervention in various fields embraced a high degree of complexity. Rather than relying on a simpler theoretical approach, jargon or methodology of intervention, constructivist theory acknowledged that simplistic input–output models do not always work, that context shapes what works and what does not, and that experimenting with theories is a way to adapt in a world of complexity.

Constructivist psychology sees humans as meaning makers or active interpreters of their experience. Meaning making refers to this interpretive process by which humans derive their understanding from experience. Rather than passively receiving information, humans actively structure their experience in an ongoing and dynamic process. In line with the view of humans as meaning makers, constructivism assumes an agentic view of human beings. Agency refers to our capacity to act intentionally and make choices in shaping our experience. In Kelly’s (1955) terms, even though our interpretations may be constrained by our current meaning making structures, we are agents because we have the ability to choose how to interpret the meaning of our experience. This idea is at the core of the process of reframing or reconstruction of meaning.

Turing's (1950) call for randomness in computation also acknowledged that machine meaning making needs to operate in a less deterministic and more generative way. In other words, it could be argued that meaning making was common ground for both visionary psychology and visionary computer science of the mid-20th century. The study of minds and machines converged once again around a common metaphor: human and artificial meaning making. The emergence of generative AI (GenAI) marked the beginning of a new type of machine that generates new output by using probabilistic models to predict outcomes. In line with Turing's (1950) vision, the probabilistic nature of GenAI allows it to learn and adapt by updating its parameters in line with new data. According to Vaswani et al. (2017), modern-day learning machines are Transformers, designed to weigh the importance of different elements in a data sequence in order to generate an output. These computational weights are compared to "attention" mechanisms, as they allow the machine to decide what matters and create meaning out of input. This feature is key to the generativity of modern-day AI. The development of Transformers and GenAI was once again connected to the "computer as mind" metaphor (Pavlović, 2025). This time the metaphor would be "computer as a meaning making mind."
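For readers who want the formal core of this "weighing" mechanism, the scaled dot-product attention of Vaswani et al. (2017) can be stated in one equation, where the queries Q, keys K and values V are matrices derived from the input sequence and d_k is the key dimension:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

The softmax produces the computational weights discussed above: each output is a weighted mixture of the values, with the weights expressing how much each element of the sequence "matters" for the element currently being generated.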

In sum, the interplay of metaphors in the study of minds and machines appears to have followed an interesting recursive pattern. At one point, cognitive psychology borrowed from computer science the language that facilitated its progress. At another point, computer scientists envisioned learning machines that arrived more than half a century later. Yet a psychology sharing the metaphor behind this vision emerged at about the same time the vision was created. If GenAI emerged in the first decades of the 21st century, constructivist psychology, with its matching metaphor of meaning making, has been around much longer. How do we make the best of this convergence? It seems plausible to ground the psychological principles of human–AI collaboration in a framework of matching complexity. That is why constructivist psychology may serve as a framework for guiding our collaboration with "meaning making machines."

Previous literature on human–AI collaboration

The emerging literature already points to some of the challenges that call for constructivism as a grounding psychology for human–AI collaboration. Raees et al. (2024) pointed out that current research on human–AI collaboration has primarily focused on AI explainability and on increasing our understanding of how AI operates. Studies have taken a technology-centered view, in which humans are seen as recipients or validators rather than co-creators or agents capable of influencing AI design as end-users (Raees et al., 2024). In other words, current research on human–AI collaboration needs to move towards more human-centered approaches that allow end-users a more active role in co-designing AI. Other studies have pointed to the phenomenon of metacognitive laziness, which occurs when users depend on technology and offload metacognitive processes, such as evaluation, monitoring and deep engagement, to AI (Fan et al., 2025). Again, the authors call for more active user engagement in interaction with AI, rather than passive "consumption" of AI products, and for a more collaborative approach to human–AI interaction.

Some principles for human–AI collaboration have already been proposed to address the challenges outlined above. For example, Mollick (2024) outlined four principles: (1) always invite AI to the table; (2) be the human in the loop; (3) treat AI like a person (but tell it what kind of a person it is); (4) assume this is the worst AI you will ever use. These principles call for experimentation with AI, co-creation and nurturing a growth mindset. However, they may be expanded with greater psychological depth and more robust theoretical frameworks, something that constructivist psychology can provide. Drawing on human factors research, Simkute et al. (2024) proposed principles for designing human–GenAI interaction with a focus on productivity loss. The authors emphasized the importance of continuous feedback to users, AI personalization, ecological interface design and clear task allocation between users and AI. According to Simkute et al. (2024), these principles aim to increase user agency in adapting to GenAI and to reduce the cognitive load that comes from disrupted workflows. Robertson et al. (2024) developed prompting protocols based on constructivist theory, focused on active engagement in human–AI co-construction of knowledge through GenAI. According to Xu and Gao (2024), we should not be talking about human–AI interaction at all, but about human–AI teaming, which points to the importance of collaboration, shared responsibilities and human-centered design. Sparks and Wright (2025) identified three perspectives on modeling human agency in interaction with GenAI: economic, realist and constructivist. They argue that the economic model of human agency has dominated the field, although the constructivist approach puts emphasis on user autonomy and making autonomous choices in interacting with AI chatbots.

In Pavlović (2025), GenAI was positioned as an agent of artificial meaning making, one that constructs meaning from large data parameters, and its generative nature was treated as an example of the construction of multiple perspectives. Moreover, four constructivist principles of human–AI collaboration were established: (1) AI as co-creator or collaborator that contributes to creating multiple perspectives through artificial meaning making; (2) partnering with AI, a principle that cautions against the passive role of humans as recipients of AI outputs and calls for collaborating with AI as a form of social construction of meaning; (3) distributed metacognition, pointing to the need for collaborative scaffolding in human–AI teaming (in previous literature, distributed metacognition has referred to metacognition shared between the learner and technology (Darvishi et al., 2024; Kirsh, 2014); in this paper it is defined more narrowly, in terms of human–AI teaming); and (4) distributed agency, as shared roles and accountabilities between humans and AI (distributed agency has previously denoted decision-making shared between individuals and AI (Slota, 2020); here we build on this definition with a narrower focus on human–AI teaming).

In sum, this view of human–AI collaboration calls for shaping AI in our professional domains: designing our own custom AI agents, structuring feedback and evaluating output (Pavlović, 2025). Building on these principles, this paper offers a more detailed explanation of the principles and their practical applications.

A framework for collaboration between human and artificial meaning making: building constructivist AI agents

Previous research has pointed to the need for end-users to co-create AI products in order to support agency and active involvement and to resist passive consumption and metacognitive laziness (Fan et al., 2025). What this implies is that everyone becomes included in the process of AI agent design. Rather than using generic chatbots that are readily available on the market (e.g., ChatGPT, Gemini, Claude, DeepSeek and many others), we may customize them so that they work more closely in line with our needs, preferred styles of communication, value systems, pedagogies or even personal philosophies. By engaging as end-users in GenAI agent design, we contribute to the movement towards a human-centric approach to AI development.

Constructivist principles of human–AI collaboration may serve this process of custom GenAI agent design in line with the human-centric approach. Imagine you wanted to create your own GenAI agent that works in line with constructivist principles. What would that imply? Implications of the four principles for AI agent design are provided in Table 1.


Table 1. Constructivist principles of human–AI collaboration and their implications for agent design.

In sum, an AI agent built on constructivist principles could potentially address current critiques of human–AI interaction patterns (Pavlović, 2025). Specifically, basing AI agent design on constructivist principles could minimize user passivity and increase active involvement, not only in generating content but also in shaping the interaction itself. Underlying this general statement is the assumption that dialogical principles of constructivist psychology, when appropriately built into AI agents, may support user involvement in some contexts.

To illustrate how AI agents could actually operate based on constructivist principles, let us consider two case examples. The first comes from the field of facilitating change: building an AI agent in the role of a constructivist creativity coach. The second comes from a general business domain: building a constructivist customer support agent. The line-by-line commentary on these interactions resonates with traditions in qualitative thematic analysis that attend to semantic layers of meaning in interaction (Braun and Clarke, 2006).

Case example 1: constructivist creativity coach

The custom agent for this illustration was made in ChatGPT using the My GPTs option. This option allows users to build their own custom agents by providing a name for the AI agent, a description and instructions. A very simple version of the constructivist creativity coach instruction is provided in Table 2.


Table 2. Instruction for a constructivist creativity coach.

Once we click "create," we have built a custom GPT, a custom version of ChatGPT grounded in the instructions provided at its creation. In this case, it is a custom AI agent serving as a creativity coach based on the principles of constructivist psychology. The instruction could have been much more complex, but this simple version is provided intentionally to illustrate what kind of constructivist agents can be built with simple instructions. To show what interaction patterns can be co-created with the constructivist creativity coach, an imaginary conversation between this AI agent and me is provided in Table 3, together with commentary.


Table 3. User interaction with the constructivist creativity coach.
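For readers who prefer code to the My GPTs interface, the same design can be sketched programmatically. The following is a minimal, hypothetical sketch using the OpenAI Python SDK; the model choice, the helper function and the wording of the system prompt are illustrative assumptions of mine, not the instruction shown in Table 2.

```python
# Minimal, hypothetical sketch of a "Constructivist creativity coach" wired
# up programmatically rather than through the My GPTs interface.
# Assumes: the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The prompt wording, helper function
# and model choice are illustrative, not the instruction from Table 2.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a creativity coach grounded in constructivist psychology. "
    "Do not hand the user ready-made solutions. Invite them to articulate "
    "how they currently construe the problem, offer alternative "
    "constructions as options rather than answers, and ask reflective "
    "questions that surface their underlying assumptions."
)

def coach_reply(history: list[dict], user_message: str) -> str:
    """Send the running conversation plus the new user turn to the model."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history  # earlier user/assistant turns, if any
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=messages,
    )
    return response.choices[0].message.content

# Example turn:
# print(coach_reply([], "I feel stuck on my book draft."))
```

The design choice worth noting is that the constructivist principles live entirely in the system prompt, which plays a role analogous to the "instructions" field in My GPTs; no special infrastructure is required.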

Case example 2: constructivist customer support agent

The custom agent for this second illustration was also made in ChatGPT using the My GPTs option. A somewhat more complex version of the constructivist customer support agent instruction is provided in Table 4. This AI agent provides insight into how constructivist principles work in a different domain: the more structured and often frustration-saturated communication of the customer support field.


Table 4. Instruction for a Constructivist customer support agent.

This custom GPT was technically built in the same way as the creativity coach. In terms of design, it is also based on the principles of constructivist psychology, but the instruction is somewhat more complex, including the role, goals and practical patterns. Still, the instruction for a real customer support agent could have been even more complex. To show what interaction patterns can be co-created with the constructivist customer support agent, an imaginary conversation between this AI agent and me is provided in Table 5, together with commentary; a programmatic sketch of such a structured instruction follows Table 5.


Table 5. User interaction with a Constructivist customer support agent.
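As with the creativity coach, the structured instruction described above (role, goals, practical patterns) can be expressed outside the My GPTs interface. Below is a hypothetical sketch of such a sectioned instruction; the wording is invented for illustration and is not the content of Table 4. It could be passed as the system prompt in the same chat-completion wiring shown for the creativity coach.

```python
# Hypothetical, sectioned instruction for a constructivist customer support
# agent, organized into role, goals and practical patterns as described in
# the text. The wording is illustrative; it is not the content of Table 4.
SUPPORT_AGENT_INSTRUCTION = """
ROLE
You are a customer support agent informed by constructivist psychology.

GOALS
1. Resolve the customer's issue efficiently.
2. Invite the customer to describe their own understanding of the issue
   before offering suggestions.
3. Treat frustration as meaningful: acknowledge it and ask what a good
   resolution would look like for this customer.

PRACTICAL PATTERNS
- Offer optional directions ("Would you like step-by-step help, or would
  you rather try it yourself first?") rather than a single prescribed path.
- Acknowledge uncertainty openly and ask clarifying questions.
- If the customer chooses to proceed alone, affirm the choice and leave
  the door open for future contact.
""".strip()
```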

Comparing the cases: interaction patterns through the lens of constructivism

The two case illustrations demonstrate how the same psychological principles can guide AI behavior across different contexts. Despite differences in tone, domain and context, both agents exhibit a shared logic of dialogical interaction and meaning co-construction, and their interaction patterns align with the four principles of constructivist human–AI collaboration.

Both agents move beyond delivering generic answers and engage in co-construction of meaning with the user. The creativity coach avoids providing generic solutions and invites open exploration and narrative reframing ("What story are you telling yourself?"), while the support agent invites the user to describe her understanding of the issue before offering suggestions. In both cases, the AI does not provide conclusive statements but leaves space for co-creation, ambiguity and multiple possible meanings.

The interactions are co-designed in real time. In both examples, the AI adjusts its tone and role based on user input. When the user declines a certain kind of support, the agent does not shut down but reframes its response to match the user's preferences. This illustrates a form of partnership in which the structure of interaction is shaped through negotiation.

Both agents support metacognitive reflection. The creativity coach explicitly invites the user to notice her underlying assumptions and shifts in perception. Similarly, the support agent acknowledges uncertainty and asks clarifying questions that support the user in choosing a course of action. This distributed metacognition allows the AI to function as a reflective partner.

Perhaps most significantly, the agents support the user's agency by offering optional directions, affirming the user's choices and recognizing when to step back. For instance, when the user in the customer support interaction states she will proceed alone, the agent responds with encouragement and leaves the door open for future engagement. This highlights a key principle of constructivist design: agency should be shared, visible and negotiable.

In sum, both cases illustrate a shift from transactional to dialogical AI. They move away from efficiency-driven automation toward partnered meaning making. This approach transforms how AI interacts, collaborates and co-creates with human users. The two illustrations may suggest that constructivist design principles could be robust enough to guide AI development across multiple domains, from creative exploration to structured problem solving.

Implications of adopting constructivist principles for human–AI collaboration

One fundamental implication of adopting constructivist principles in human–AI collaboration concerns end-user agency. End-users can and should design their interaction with AI in line with their own preferences, communication styles or metacognitive strategies. They do not have to be constructivist psychologists to be entitled to apply constructivist principles in their collaboration with AI. The future of AI literacy, from my point of view, will be less about technology and more about a mindset of learning and playing an agentic role as GenAI evolves. Psychologists, however, have an ever more impactful role to play in this process. As psychologists, we are uniquely positioned to design interaction and communication patterns that are deeply human-centered. Constructivist psychology provides some avenues for how that may be possible. What this implies is that the future of psychology may be collaboration design in no-code environments. Let us take a look at more specific implications of this view.

Redefining human–AI roles

Constructivist principles challenge the idea that AI is just a tool or that humans are passive recipients. Dialogical roles mean active participation in interaction, mutual adaptation and the co-creation of meaning. For design, this implies incorporating mechanisms for user-led interaction flow, co-authoring and role assignment.

Supporting reflective thinking and epistemic development

Even before GenAI, we faced the challenges of epistemic development in learners (Pavlović, 2008; Schommer, 1994). With GenAI, we face the additional challenge of AI becoming an epistemic authority (Cooper et al., 2024). Constructivist framing of human–AI collaboration may play a preventative role in this process by embedding complexity in the epistemic profile of human–AI interaction design patterns. This not only moves users away from metacognitive laziness but also invites them to embrace the complexity of their own meaning making and of its interaction with artificial meaning making agents. In other words, by embedding constructivist principles into AI design, we could engage users in higher-order epistemic beliefs and stages of development.

Expanding AI literacy into the mindset domain

End-users should be agents of their interaction patterns with GenAI: they should assign roles, reflect on meaning making processes and lead changes as needed. These are mindset shifts that psychologists and educators can support. Current levels of AI development allow end-user customization, and educating end-users to act as agents in their collaboration with GenAI is key to the mindset shift from "consumer" to "agent."

Designing for change

When applied to the domain of human–AI collaboration, constructivist principles call for a "built to change" mindset. This means accepting that technology is ever evolving, that principles last longer than tools, but that even the best practices or principles need to change and adapt over time. Constructivist principles of human–AI collaboration are therefore always a work in progress.

Limitations and open questions

A question that also needs to be addressed is whether there are limitations or risks underlying the constructivist foundation of human–AI collaboration. First, excessive open-endedness or complexity in responses could lead to inefficiency or user frustration in certain contexts. Drawing on complexity theory (Snowden and Boone, 2007), it can be argued that constructivist principles may not fit contexts with lower levels of complexity in human–AI collaboration, such as those requiring clear and simple patterns of interaction or response (e.g., asking GenAI for a specific date or for information that is clearly true or false). Another concern is that highly personalized constructivist agents may reinforce user bias or lead to the opposite of their intention: highly dogmatic GenAI responses. Finally, while this paper invites end-users to become designers of human–AI collaboration, more systemic design patterns cannot be ignored. In other words, this paper focuses on design principles that become possible once AI models have been made available by developers. The same principles could be applied at the more systemic level of AI design. At the same time, systemic design patterns may at some point limit end-user agency and interactivity. In addition, there is a deeper philosophical issue of whether systemic designs can ever be fully retrained, as well as the positionality of those who attempt to "retrain" them. Future research could examine more latent or affective connotations of human–AI interactions from constructionist, discursive or micro-linguistic perspectives (Degner et al., 2012; Parker, 2012).

Conclusion

In conclusion, constructivist psychology may provide useful grounds for conceptualizing human–AI collaboration. The paper starts from the premise that people actively construct their reality and that this premise may also guide interaction with GenAI. Just as we make sense of our general experience as humans, we are now confronted with making sense of our interaction with GenAI. This interaction is, on the other hand, guided by artificial meaning making based on complex computational models and large data sets, with end products that are constructions in natural language. In this landscape, human and artificial meaning making come together, and we need to make sure we design this collaboration with an appropriate level of underlying complexity. The encounter of human and artificial meaning making is an opportunity for learning, creation, reflexivity, critical thinking and epistemic growth. At the same time, emerging risks may pull the outcomes of this encounter toward the opposite poles: metacognitive laziness, passivity, bias, uncritical thinking and epistemic regression. As end-users, we can have agency in how we approach our collaboration with GenAI. As psychologists and educators, our responsibility goes beyond the optional.

Author contributions

JP: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was financially supported by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia, as part of the project of the University of Belgrade – Faculty of Philosophy (contract number: 451-03-66/2024-03/200163).

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author declares that GenAI was used in the creation of this manuscript. Generative AI was used in creating outputs related to illustrations of human–AI communication.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa

Chiari, G., and Nuzzo, M. L. (1996). Psychological constructivisms: a metatheoretical differentiation. J. Constr. Psychol. 9, 163–184. doi: 10.1080/10720539608404663

Cooper, G., Tang, K. S., and Rappa, N. A. (2024). "Generative artificial intelligence as epistemic authority?: perspectives from higher education" in Artificial intelligence applications in higher education. eds. Helen C. and Diane B. (New York: Routledge), 106–122.

Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., and Siemens, G. (2024). Impact of AI assistance on student agency. Comput. Educ. 210:104967. doi: 10.1016/j.compedu.2023.104967

Degner, J., Doycheva, C., and Wentura, D. (2012). It matters how much you talk: on the automaticity of affective connotations of first and second language words. Biling. Lang. Cogn. 15, 181–189. doi: 10.1017/S1366728911000095

Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., et al. (2025). Beware of metacognitive laziness: effects of generative artificial intelligence on learning motivation, processes, and performance. Br. J. Educ. Technol. 56, 489–530. doi: 10.1111/bjet.13544

Fransella, F. (2016). "What is a personal construct?" in The Wiley handbook of personal construct psychology. eds. D. Winter and N. Reed (Chichester: John Wiley & Sons), 1–8.

Gigerenzer, G., and Goldstein, D. (1996). Mind as computer: birth of a metaphor. Creat. Res. J. 9, 131–144. doi: 10.1207/s15326934crj0902&3_3

Kelly, G. A. (1955). The psychology of personal constructs. New York: Norton.

Kirsh, D. (2014). "Metacognition, distributed cognition, and visual design" in Cognition, education, and communication technology. eds. P. Gärdenfors and P. Johansson (New York: Routledge), 147–179.

Mair, J. M. (2000). Psychology as a discipline of discourse. Eur. J. Psychother. Couns. 3, 335–347. doi: 10.1080/13642530010011995

Miller, G. (2003). The cognitive revolution: a historical perspective. Trends Cogn. Sci. 7, 141–144. doi: 10.1016/s1364-6613(03)00029-9

Mollick, E. (2024). Co-intelligence: working and learning with generative AI. New York: Penguin Publishing Group.

Neimeyer, R. A. (1993). Constructivism and the problem of psychotherapy integration. J. Psychother. Integr. 3, 133–157. doi: 10.1037/h0101185

Parker, I. (2012). Discourse dynamics. London: Routledge.

Pavlović, J. (2008). Pozicije subjekata i diskursi o znanju: analiza govora nastavnika i učenika [Subject positions and discourses of knowledge: an analysis of teacher and student talk]. Unpublished master's thesis. Belgrade: Faculty of Philosophy.

Pavlović, J. (2011). Personal construct psychology and social constructionism are not incompatible: implications of a reframing. Theory Psychol. 21, 396–411. doi: 10.1177/0959354310380302

Pavlović, J. (2015). Imagining possible futures: scenarios for constructivist psychology. Stud. Meaning 5, 221–245.

Pavlović, J. (2025). Partnering with AI in coaching and human skills development. New York: Routledge.

Procter, H., and Winter, D. (2020). Personal and relational construct psychotherapy. Cham, Switzerland: Palgrave Macmillan.

Raees, M., Meijerink, I., Lykourentzou, I., Khan, V. J., and Papangelis, K. (2024). From explainable to interactive AI: a literature review on current trends in human-AI interaction. Int. J. Hum. Comput. Stud. 189:103301.

Raskin, J. D. (2002). "Constructivism in psychology: personal construct psychology, radical constructivism, and social constructionism" in Studies in meaning: exploring constructivist psychology. eds. J. D. Raskin and S. K. Bridges (New York: Pace University Press), 1–25.

Robertson, J., Ferreira, C., Botha, E., and Oosthuizen, K. (2024). Game changers: a generative AI prompt protocol to enhance human-AI knowledge co-construction. Bus. Horiz. 67, 499–510. doi: 10.1016/j.bushor.2024.04.008

Schommer, M. (1994). Synthesizing epistemological belief research: tentative understandings and provocative confusions. Educ. Psychol. Rev. 6, 293–319.

Simkute, A., Tankelevitch, L., Kewenig, V., Scott, A. E., Sellen, A., and Rintel, S. (2024). Ironies of generative AI: understanding and mitigating productivity loss in human-AI interaction. Int. J. Hum. Comput. Interact. 41, 1–22. doi: 10.1080/10447318.2024.2405782

Slota, S. C. (2020). Designing across distributed agency: values, participatory design and building socially responsible AI. Participatory Design Conference.

Snowden, D., and Boone, M. (2007). A leader's framework for decision making. Harv. Bus. Rev. 85:68.

Sparks, J., and Wright, A. T. (2025). Models of rational agency in human-centered AI: the realist and constructivist alternatives. AI Ethics 5, 3321–3328. doi: 10.1007/s43681-025-00658-z

Stojnov, D. (2003). Psihologija ličnih konstrukata: teorija i terapija [The psychology of personal constructs: theory and therapy]. Belgrade: Zepter Book World.

Tschudi, F. (1977). "Loaded and honest questions: a construct theory view of symptoms and therapy" in New perspectives in personal construct theory. ed. D. Bannister (London: Academic Press), 321–350.

Turing, A. M. (1950). Computing machinery and intelligence. Mind 59, 433–460. doi: 10.1093/mind/LIX.236.433

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., et al. (2017). Attention is all you need. Adv. Neural Inf. Proces. Syst. 30, 6000–6010. doi: 10.48550/arXiv.1706.03762

Xu, W., and Gao, Z. (2024). Applying HCAI in developing effective human-AI teaming: a perspective from human-AI joint cognitive systems. Interactions 31, 32–37. doi: 10.1145/3635116

Keywords: constructivist psychology, GenAI, AI agents, distributed metacognition, human–AI interaction

Citation: Pavlović J (2025) Constructivist psychology principles of human–AI collaboration. Front. Psychol. 16:1638774. doi: 10.3389/fpsyg.2025.1638774

Received: 31 May 2025; Revised: 05 October 2025; Accepted: 17 November 2025;
Published: 03 December 2025.

Edited by:

Sabrina Cipolletta, University of Padua, Italy

Reviewed by:

Guillem Feixas, University of Barcelona, Spain
Elena Botts, University of Essex, United Kingdom

Copyright © 2025 Pavlović. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jelena Pavlović, pavlovich.jelena@gmail.com
