- 1Department of Educational Technology, Hanyang Cyber University, Wangsimni-ro, Seoul, Republic of Korea
- 2Department of Education, Busan National University of Education, Gyodae-ro, Yeonje-gu, Busan, Republic of Korea
Introduction: This study explores how novice instructional designers interact with Generative AI in their instructional design processes. While existing research has primarily focused on experienced designers, there is limited understanding of how novice instructional designers (NIDs) develop relationships with AI tools.
Methods: Through Behavioral Event Interviews with seven novice instructional designers and narrative analysis using Labov's framework, this study examines their evolving patterns of AI interaction and their perceptions of AI utilization.
Results: The findings reveal three distinct interaction patterns: AI as an auxiliary tool, feedback provider, and co-creator. Participants' relationships with AI evolved from initial resistance to strategic utilization, eventually viewing AI as a collaborative partner. The study identified six developmental stages in AI adoption. A notable finding was AI's adaptable role, alternating between a More Knowledgeable Other (MKO) and a Less Knowledgeable Other (LKO) depending on context.
Discussion: These findings contribute to understanding how NIDs can effectively integrate AI while maintaining professional autonomy.
1 Introduction
GenAI has established itself as a tool performing various roles in instructional design (ID), with its potential applications in educational settings continuously expanding (Baidoo-Anu and Owusu Ansah, 2023). In the ID process, genAI not only holds value as a tool for enhancing design efficiency (Hodges and Kirschner, 2024) but can also be utilized in various aspects of ID, including setting learning objectives, designing interactive learning activities, and developing assessments (Amado-Salvatierra et al., 2023; Choi et al., 2024). Furthermore, it can serve as an auxiliary tool for creative ID activities, such as helping instructional designers brainstorm ideas (Luo et al., 2025; McNeill, 2024).
Research has examined how experienced instructional designers utilize genAI, their perceptions of it, and the opportunities and limitations of AI adoption (Amado-Salvatierra et al., 2023; Choi et al., 2024; McNeill, 2024). In contrast, there is relatively limited research exploring how novice instructional designers (NIDs) encountering genAI for the first time utilize it in the ID process, how their perceptions change, and what barriers they face in adopting genAI.
ID using genAI presents challenges different from traditional approaches. Not only does it require verification of AI-generated content accuracy, management of ethical risks, and effective prompt writing (McNeill, 2024), but it also demands critical thinking throughout the entire ID process (van den Berg and du Plessis, 2023). NIDs may therefore face particular challenges in applying genAI, a technology new to them, for ID purposes.
Considering that technology acceptance varies depending on one's experiences with new technology (Beaudry and Pinsonneault, 2010), the positive and negative experiences that NIDs have in utilizing genAI could influence their future AI adoption. Specifically, understanding what problems NIDs encounter in the early stages of AI use will provide important clues for developing ID competencies.
Based on these issues, this study aims to explore NIDs' experiences using genAI, focusing on their interactive relationship with it. From these experiences, we expect to derive strategies that help NIDs effectively adopt and employ AI. The research questions are as follows:
1. What are the patterns and characteristics of interaction between novice instructional designers and genAI in the ID process?
2. What are novice instructional designers' perceptions of the usability and utility of genAI in instructional design?
2 Theoretical backgrounds
2.1 Generative AI and instructional design
GenAI has established itself as a versatile tool in ID, with expanding applications in educational environments. Studies show that genAI contributes to various aspects of ID, including learning objectives, materials development, and assessment design (Amado-Salvatierra et al., 2023). In MOOC design, AI supports content generation, provides learner-customized materials, and automates assessment (Amado-Salvatierra et al., 2023). Research demonstrates that genAI helps instructional designers solve creative problems and reduce workload (Choi et al., 2024). However, AI cannot replace creative human intervention, and human review remains essential to address reliability and bias issues (Giannakos et al., 2025).
In ID frameworks like ADDIE, genAI serves multiple functions. During Analysis, AI automates needs analysis and survey data processing, while in Design and Development, it generates draft activities and organizes materials (Luo et al., 2025). In Implementation, AI analyzes learner interactions to adjust experiences, and in Evaluation, it supports grading and feedback (Kumar et al., 2024). However, mere automation risks undermining human creativity and expertise in the ID process (Amado-Salvatierra et al., 2023). Therefore, maintaining human-centered design strategies while utilizing AI's capabilities is crucial (Giannakos et al., 2025).
The application of genAI in ID presents both opportunities and challenges. Reliability of AI-generated content, bias in materials, and ethical concerns remain significant issues (Giannakos et al., 2025). AI-generated content may lack accuracy, and algorithms may contain inherent biases (Hodges and Kirschner, 2024). Copyright and learner data protection are critical ethical considerations (Giannakos et al., 2025). Instructional designers must acknowledge these limitations and implement critical review processes, fostering effective AI-human collaboration in ID (Amado-Salvatierra et al., 2023).
2.2 Novice instructional designers' use of genAI
Previous research on the characteristics of NIDs, who have limited experience in connecting theoretical knowledge of ID to practical implementation, has analyzed their difficulties and suggested ways to enhance their ID competencies (Ge et al., 2005; Uduma and Morrison, 2007; Ugur-Erdogmus and Cagiltay, 2019). NIDs generally tend to understand problems superficially and attempt to simplify them, finding it difficult to grasp core issues in unstructured problem situations (York and Ertmer, 2016). They also tend to rely heavily on multimedia development tools and automated ID tools, which limits creative thinking (Uduma and Morrison, 2007), and to apply ID models rigidly, which makes it difficult to respond flexibly to new problem situations (Ge et al., 2005).
These characteristics manifest specifically in each phase of the ADDIE model. In the Analysis phase, they either focus only on surface-level requirements without in-depth analysis of learner characteristics and environment or skip this phase altogether (Hoard et al., 2019). In the Design phase, they struggle with setting measurable learning objectives and rely on existing templates (Cuesta-Hincapie et al., 2024). In the Development phase, excessive reliance on familiar tools can lead to a decrease in content quality (Uduma and Morrison, 2007). In the Implementation phase, technical problem-solving and incorporation of learner feedback are insufficient (York and Ertmer, 2016), and in the Evaluation phase, they perceive it merely as a formal procedure that fails to lead to substantial ID improvements (Kenny et al., 2005). The difficulties NIDs experience in ID can impact design quality and learning outcomes.
Automated ID support tools can help overcome the difficulties faced by NIDs (Uduma and Morrison, 2007; Ugur-Erdogmus and Cagiltay, 2019). As a more sophisticated automated tool, genAI can complement novice designers' limitations through various functions such as data analysis, content generation, and feedback provision. In the Analysis phase, it can automatically analyze large-scale learner data to provide insights into learner characteristics and needs, and in the Design phase, it may support creative approaches by presenting various educational strategies and storyboard samples. In the Development phase, it can automatically generate or improve various multimedia content, and in the Implementation phase, it can collect and analyze learner feedback in real time to monitor the effectiveness of ID. In the Evaluation phase, it can measure learner outcomes and suggest improvements through assessment tool generation and automatic grading. In this way, genAI can contribute to supporting novice designers' decision-making and improving design quality at each stage of ADDIE. GenAI has the potential to complement NIDs' lack of experience while securing both creativity and efficiency in the design process.
However, genAI cannot simply solve all ID problems. First, effective prompt engineering skills based on a clear understanding of the purpose and context of ID are needed to successfully achieve the designer's intended outcome (Park and Choo, 2025). Additionally, while genAI is useful for deriving creative ideas through continuous discussion and alternative generation with NIDs (Uduma and Morrison, 2007), some studies also report that this can decrease NIDs' creativity (Giannakos et al., 2025).
Therefore, to help NIDs effectively utilize AI and develop their ID competencies in the long term, there is a need to deeply understand the experiences they have while interacting with genAI in the ID process.
3 Research methods
3.1 Research participants
The participants were students enrolled in the “Theory and Practice of Instructional Design” course during the fall 2024 semester at a graduate school in South Korea. The graduate program primarily attracts professionals who did not major in ID at the undergraduate level. These participants can be classified as NIDs: although they have conducted ID based on their practical experience, they are encountering theory-based ID through formal education for the first time.
After the semester, out of the 38 total enrollees, 11 students had experience using AI in their ID projects, and 7 of them voluntarily participated in interviews. The demographic distribution of participants is presented in Table 1.
The course was structured around team project-based learning, comprising 10 weeks of video lectures and three online real-time seminars over a 15-week period. The video lectures covered ID theories and models, supplemented by brief 5-min introductions to various AI tools for ID. While the instructor consistently encouraged the use of AI in team projects, this remained an independent choice and did not affect grading. Beyond lectures and seminars, each team received one to two online or offline mentoring sessions with the instructor to review interim outputs and receive feedback on final deliverables.
The objective of the project was to design a 16-session regular university course, with teams autonomously selecting their target audience and subject matter. Teams were required to select 1–2 key sessions (equivalent to 3 h of regular instruction) that best represented their course characteristics, developing detailed designs, learning materials, and assessment tools for these sessions.
3.2 Data generation and analysis
This study employed Behavioral Event Interviews (BEI) for data collection. BEI is a specialized interview technique that focuses on eliciting detailed accounts of specific behavioral incidents rather than generalized opinions or hypothetical scenarios (McClelland, 1998). This method was selected because it allows researchers to capture concrete experiences and actual behaviors of novice instructional designers as they interact with AI.
The BEI protocol was structured in three phases. First, participants were asked to recall and describe specific instances when they used AI during their ID projects. Second, they were prompted to elaborate on these instances using probing questions that focused on their actions, thoughts, and feelings at each stage of interaction with AI (e.g., “What exactly did you do when AI generated content that didn't meet your expectations?,” “What was going through your mind when you first tried to write prompts for the AI?”). Third, participants were asked to reflect on how these experiences affected their subsequent interactions with AI in later design tasks.
In-depth interviews were conducted via Zoom, lasting 60–90 min. All interviews were recorded and transcribed verbatim. During the interviews, participants shared their actual AI interaction screens (prompts and responses) through screen sharing, which provided valuable contextual information about their interaction patterns. This approach allowed us to observe the specific language used in prompts, the quality and nature of AI responses, and participants' immediate reactions to these responses—details that might be lost in purely retrospective accounts. Post-interview, with participant consent, we collected their AI utilization records generated during the project for additional analysis, including chat histories, saved prompts, and iterations of design documents that incorporated AI-generated content.
The collected data was analyzed using Labov's Narrative Analysis framework (Labov, 1972; Riessman, 2008). Initially, two researchers independently conducted open coding on the transcribed interview data to derive meaning units. Subsequently, codes were refined using the constant comparative analysis method, iteratively comparing and reviewing the derived codes (Strauss and Corbin, 1998).
Narrative analysis was selected as the primary analytical framework for several compelling reasons. First, the temporal dimension of narrative analysis aligns perfectly with our aim to understand how NIDs' relationships with AI evolve over time. Unlike thematic analysis, which might segment experiences into discrete categories, narrative analysis preserves the chronological flow and developmental nature of human-AI interactions. Second, narrative analysis is particularly effective for capturing the subjective experience and meaning-making processes of individuals as they encounter and adapt to new technologies (Bruner, 1991). This aspect was crucial for understanding how participants' perceptions of AI shifted from initial resistance to potential partnership. Third, Labov's structural approach to narrative analysis provides a systematic framework for comparing experiences across different participants while still honoring the unique contours of each individual story.
As Polkinghorne (1995) argues, narrative analysis is particularly valuable when studying phenomena that involve transformation and change over time—precisely what we aimed to capture in NIDs' journeys with AI. Furthermore, narrative analysis allows researchers to examine both the content of experiences (what happened) and the form of their telling (how participants construct and make sense of their experiences), providing a richer understanding of human-technology relationships than methods focused solely on content analysis.
To ensure research reliability, cross-validation between researchers was conducted, and Cohen's Kappa coefficient was calculated to verify inter-rater reliability. The initial coding showed an inter-rater reliability of 0.82, and discrepancies were resolved through researcher consensus. Additionally, member checking was performed to enhance analysis validity, confirming that the analysis results appropriately reflected participants' experiences.
The meaning units derived through open coding were then reconstructed according to Labov's six-stage narrative structure. The six stages of Labov's narrative structure are outlined in Table 2.
In this study, we analyzed each of the seven participant interviews using this six-stage structure, then integrated individual analysis results to identify common patterns and distinctive interaction characteristics in using AI. During the analysis process, one qualitative research expert provided an additional review to verify the appropriateness of the analysis. The advantages of narrative analysis include understanding individual experiences within context, capturing changes over time, and simultaneously analyzing individual cases and general patterns (Clandinin and Connelly, 2000). This enabled a comprehensive understanding of AI utilization in ID beyond mere functional aspects, incorporating meaningful, contextual, and causal relationships.
4 Research results
4.1 Alice's narrative
Orientation: Initially, Alice exhibited resistance due to unclear understanding of AI's role. “I had resistance to AI. But I thought I should understand it before rejecting it.” She had significant uncertainty about the reliability and educational appropriateness of AI-provided information, and concerns about AI potentially replacing instructional designers. However, while contemplating how new technology might impact education, she decided to experiment with AI directly.
Complicating action: Through the research project, she began utilizing AI. Starting with simple queries, she discovered AI could provide more logical responses than anticipated. “I used AI like a team member. AI was actually better than a free-riding team member.” She found it particularly interesting that AI could serve as an idea generator in instructional design. She gradually built a collaborative relationship by analyzing AI-generated content and supplementing necessary elements.
Evaluation: Through experiencing both AI's advantages and limitations, she concluded that AI should be utilized selectively. “AI's output isn't well-organized enough to use directly. I had to restructure it for use.” She recognized that AI's information quality was inconsistent and required modification to fit learner contexts. Consequently, she determined that critically reviewing and adjusting AI's outputs was essential when utilizing AI.
Resolution: She realized that more sophisticated prompt writing was crucial for effective AI utilization and progressively improved her questioning structure. She judged that having AI generate drafts and instructional designers review and modify them was appropriate. This approach was perceived to support creative thinking while reducing repetitive work in the design process.
Coda: AI can serve as a strategic tool in instructional design, complementing human creativity. However, directly applying AI's automatic outputs has limitations, and instructional designers' critical thinking and adjustment are essential. Research should continue toward developing collaborative relationships between AI and humans for effective utilization.
4.2 Brad's narrative
Orientation: With his programming expertise, Brad became interested in AI's potential while researching technology-based ID methods. Initially viewing AI as a simple information retrieval tool, he recognized its potential for optimizing ID work. “AI seemed like a simple auxiliary tool at first, but I gradually realized it required learning to use it effectively.”
Complicating action: While utilizing AI across various ID stages, he discovered it could perform roles beyond simple text generation. “When asked to expand, it provides more concrete and detailed suggestions.” Particularly, while using AI in a feedback-providing role, he developed methods to improve AI's responses through iterative feedback.
Evaluation: He emphasized that while AI can maximize productivity when well-utilized, final review by human experts is essential. “Don't trust genAI. You must always add materials and review.” He stressed that blindly trusting AI's responses is dangerous, and the instructional designer's role is to select and modify AI-provided information according to context.
Resolution: Brad improved his work process by having AI create drafts and supplementing them as an instructional designer. He concluded that analyzing various scenarios provided by AI and structuring them for actual educational implementation was most effective.
Coda: While AI is a productivity-enhancing tool, it has limitations without human review and critical thinking. Rather than unconditionally accepting AI-provided data, the process of deriving and modifying new ideas based on it is important. Future research should explore ways to utilize AI feedback more sophisticatedly.
4.3 Cathy's narrative
Orientation: As an instructional designer, Cathy maintained a cautious stance regarding the essence of education and technology's role: “I felt AI might be useful for simple tasks, but I wasn't sure it had real educational value.” While skeptical about whether AI could have real effectiveness in education beyond performing auxiliary roles in learning, she began gradually exploring AI's role and possibilities through direct utilization in ID projects.
Complicating action: She attempted experimental application in some ID processes using AI's feedback function. She took an approach of not unconditionally accepting generated content, evaluating the accuracy and educational validity of AI-provided information. “I didn't use AI as a team member from the start. I needed to recognize that AI's suggested information wasn't the answer.” The process of reviewing and modifying AI's responses as needed continued.
Evaluation: Upon discovering that AI-generated material quality was inconsistent, she felt that more refined prompts and continuous review processes were necessary. “AI sometimes lost track, so we needed a way to continuously summarize and review.” She recognized that for the collaborative relationship between AI and humans to develop, AI needed to play a complementary role.
Resolution: She determined that having instructional designers review and modify AI-provided drafts was most effective. She emphasized that human creative intervention was essential to complement AI's limitations, and her AI utilization methods became increasingly sophisticated.
Coda: While AI can play a complementary role in instructional design, using automatically generated materials directly is inappropriate. Critical thinking and adjustment processes by instructional designers are essential for effective AI utilization, and human-technology collaboration should be emphasized.
4.4 Dora's narrative
Orientation: As an instructional designer, Dora conducted various experiments to find strategies for efficiently utilizing AI. She was interested in whether AI could provide substantial help in instructional design beyond being a simple information search tool: “I was curious about how far AI could go beyond simple search functions, so I tried different ways to push its limits.”
Complicating action: She experimented with strategies to repeatedly perform ID feedback through dialogue with AI. “I repeated feedback by compiling materials, organizing them, and requesting review from AI again.” As her collaborative relationship with AI developed, she sought ways to improve AI's response quality by refining prompt strategies.
Evaluation: She realized that repeated interaction was essential to enhance AI's feedback function. “I always requested verification from AI as if getting confirmation.” A more collaborative AI utilization method formed through the process of humans adjusting and improving drafts provided by AI.
Resolution: She found it essential to review and modify AI's responses to complement AI's limitations and developed strategies to increase the reliability of AI-provided information.
Coda: AI can establish a collaborative relationship with instructional designers and can be more effectively utilized through continuous feedback and modification processes.
4.5 Elaine's narrative
Orientation: Elaine, valuing creative instructional design, was interested in how AI could complement this aspect. While questioning its reliability and creative contribution, during the initial stage she gradually explored whether AI could facilitate creative idea generation beyond simple information provision in educational contexts. “Initially I didn't trust AI, but through dialogue, I realized it could provide increasingly refined responses.”
Complicating action: Through repeated interactions with AI's ideas and feedback, Elaine discovered that AI could expand instructional designers' creative thinking. “While conversing with AI, creative ideas kept emerging.” While experimenting with integrating AI into existing ID methods, she began considering strategic utilization methods to complement AI's limitations.
Evaluation: She realized that AI's information quality was inconsistent and that the process of modifying it according to context was essential. “I thought I needed to double-check the information AI provides.” She determined that while AI's ideas could suggest creative directions, instructional designers must necessarily concretize and adjust them.
Resolution: She developed strategies to utilize AI as an auxiliary tool for creative instructional design. She increased utility by constructing new scenarios based on AI's responses and transforming these into concrete instructional designs. She emphasized that rather than directly applying AI's results, it was essential for instructional designers to complement and modify them.
Coda: While AI can be utilized as a tool to facilitate creative thinking in the ID process, instructional designers' critical thinking and adjustment are essential. Strategic approaches combining human intuition and experience are needed for effective AI utilization, and research should continue in the direction of developing collaborative relationships with AI.
4.6 Frances's narrative
Orientation: While working on early childhood safety education instructional design, Frances was interested in how AI could contribute to learner-centered instructional design. However, she was initially skeptical about AI's role. “AI might be educationally useful, but I thought it had many limitations in terms of considering learner responses.” While acknowledging AI's ability to generate customized learning materials, she questioned whether it could reflect actual learners' understanding and needs.
Complicating action: While utilizing AI in the ID process, she discovered that AI could help automate repetitive tasks and provide materials suited to learners' levels. “I saw the possibility of designing educational content itself with AI. However, AI's outputs needed review.” She began utilizing AI by generating learner-customized materials based on AI's suggested ID drafts and modifying them.
Evaluation: She emphasized that AI-provided content quality needed to be adjusted according to learners' levels and requirements. “AI saves time, but the essence of education is done by people.” She realized that rather than utilizing AI independently, it was essential for instructors to adjust it and modify it according to the learner's context.
Resolution: She determined that the most effective approach was to generate educational materials in draft form using AI, then have instructors review and adjust them according to learner characteristics. She established a strategy where AI's functions were utilized while instructors maintained final decision-making authority.
Coda: While AI can be a useful tool in instructional design, educator judgment and learner-centered design are essential. Rather than directly utilizing AI's automatically generated content, the process of supplementing and adjusting it is crucial. Future research needs to explore ways for AI to more effectively reflect learner responses.
4.7 George's narrative
Orientation: As an instructional designer, George sought to explore how AI could complement the instructor's role. Initially, his trust in AI was low, particularly questioning whether AI could make creative contributions to the ID process. “At first, I thought AI wouldn't be very useful. I saw it just as a simple automation tool.” However, he gradually discovered that AI could be useful in text generation, analysis, and structuring work.
Complicating action: While applying AI to ID projects, he realized that AI could make significant contributions to information organization and material analysis. “Initially I only used AI about 20%, but gradually the utilization ratio increased.” He particularly found it effective to use AI's pattern analysis and summary functions to organize learning materials more efficiently.
Evaluation: He emphasized that while AI could play an auxiliary role in the ID process, humans should make final creative decisions. “Even when using AI, my creative thinking must be maintained. If AI does 80% and I do 20%, there's a problem.” He recognized that rather than using AI-generated materials directly, instructional designers needed to analyze and transform them.
Resolution: He built an optimal collaboration model by utilizing AI's automation functions while having instructional designers review and adjust them. While actively utilizing AI's data analysis and summary functions, he made clear that human instructor intervention was essential.
Coda: While AI can play an auxiliary role in the ID process, instructor intervention is essential in creative problem-solving and decision-making processes. Future research tasks will include adjusting the collaborative relationship between AI and humans and exploring ways AI can be utilized more creatively.
4.8 Evolution of interaction patterns
Participants initially shared a collective skepticism regarding AI's role in education. Like Alice, both Cathy and Elaine began as AI skeptics, fearing generic, “cookie-cutter” content and questioning its educational appropriateness. Elaine specifically noted she “was quite negative” about AI's creative contribution, viewing it merely as a shortcut tool. Consequently, their early interactions were characterized by strict verification rather than open collaboration.
However, this interaction was not static; it evolved progressively from simple tool usage to sophisticated collaboration. Based on the integrated analysis, we identified three distinct developmental patterns: AI as Auxiliary Tool, AI as Feedback Provider, and AI as Co-creator. This evolution parallels the designers' shifting perceptions and the changing role of AI between Less Knowledgeable Other (LKO) and More Knowledgeable Other (MKO), as illustrated in Table 3.
Initially, participants like Alice and Cathy exhibited resistance, utilizing AI merely for information retrieval (Stage 1). However, as they engaged in iterative dialogue, they began to use AI to validate their design logic (Stage 2). Ultimately, designers such as Brad and Dora advanced to treating AI as a collaborative partner, integrating AI-generated scenarios into their core design strategy while maintaining final decision-making authority (Stage 3). This progression demonstrates that as prompt engineering skills matured, the AI's role shifted dynamically from a subordinate tool to a knowledgeable collaborator.
5 Discussion and conclusion
A key transformation was the shift in AI perception—from a simple tool to a collaborative partner. By automating repetitive tasks, AI enabled designers to focus on creativity and decisions. This aligns with studies like Stojanov (2023) and Dwivedi et al. (2023), which portray AI as a mentor-like presence in instructional design.
Establishing AI as a true partner requires strategies to offset its limitations. Participants refined how they interacted with AI and used its input from diverse perspectives, exploring its potential in scenario design and learner personalization. These findings show that AI facilitates creative thinking through a dynamic, bidirectional relationship in which designers both rely on and critically evaluate AI-generated outputs depending on context.
Also, while prior research has framed AI as a basic automation tool—often emphasizing efficiency or functional use (e.g., Holmes et al., 2019; Luo et al., 2025) and paying limited attention to its feedback functions (McNeill, 2024)—this study highlights AI's strategic roles in ID by showing how designers' identities and instructional agency are shaped through their ongoing interaction with AI.
This finding aligns with Dwivedi et al. (2023) and Stojanov (2023). Participants in this study initially used AI for simple tasks but gradually came to view it as a professional collaborator. This shift supports Stojanov's (2023) claim that AI can act as a technology-based mentor, one form of MKO.
Notably, participants reported growth in their instructional design capacity through AI interaction. From a constructivist perspective, this finding expands the notion of the MKO beyond human experts, suggesting that AI can support learners' ZPD. As participants improved their prompt engineering skills, a virtuous cycle emerged: deeper collaboration enhanced their ability to engage with AI, reinforcing its role as a dynamic learning partner. This qualitative shift in interaction is demonstrated in Box 1, which contrasts a participant's initial query with their refined, context-aware request.
This study first identified distinct features of AI as an MKO: constant availability, patient repetition, and adaptive responses based on user needs. These qualities created a safe space for instructional designers to learn at their own pace.
Interestingly, participants also perceived AI as an LKO in some situations, likening it to a novice colleague who required guidance, and actively shaped AI's outputs through their own expertise. Because AI's outputs were at times overly general or lacked contextual nuance, participants maintained critical engagement with AI, verifying and modifying its suggestions, which highlights the ongoing need for human judgment.
This fluid identity also invites a reconceptualization of NIDs not only as learners but also as active instructors who shape AI outputs, highlighting a reciprocal form of human-AI agency in which instructional designers' metacognitive awareness develops through guiding, questioning, and revising AI-generated content.
This shift in perception arose not from changes in AI performance but from the expertise required at each stage of the ID process and the participants' own capabilities. Figure 1 illustrates the fluid dynamic identified in this study. Unlike the traditional fixed MKO model, the NID-AI relationship operates on a bidirectional spectrum. As shown, the role shifts based on task complexity and user expertise, transitioning from AI guiding the novice (first arrow/MKO) to the novice correcting the AI (second arrow/LKO) through a process of contextual negotiation (oblique line).
Figure 1. The fluid MKO-LKO spectrum in AI-NID interaction. The relationship functions on a bidirectional spectrum depending on task complexity and contextual needs. The interaction flows from AI guiding the designer (Green/MKO), through a negotiation phase for critical auditing (Amber), to the designer asserting expertise to correct the AI (Red/LKO).
These findings provide important implications for the direction of instructional designers' capability development in future AI-based ID environments. When utilizing AI, instructional designers need to develop metacognitive capabilities that let them flexibly alternate between learner and mentor roles depending on the situation, optimally combining their expertise with AI's functions. Designers also remained alert to algorithmic bias; for example, Cathy caught the AI "assuming all learners had stable internet," underscoring the need for context auditing.
This study reconceptualizes generative AI as a fluid MKO/LKO partner that dynamically interacts with NIDs, illustrating how shifts from initial resistance to co-creation cultivate metacognitive growth and professional judgment. By offering a theoretical lens for understanding human–AI co-agency in ID, the findings underscore the need to prepare educators for increasingly collaborative and adaptive AI-mediated work. Longitudinal design diaries could trace whether the MKO/LKO fluidity stabilizes or cycles as designers gain expertise. As human–AI collaboration continues to advance, qualitative improvements in instructional design will require robust frameworks that guide intentional, reflective, and ethically grounded AI integration.
Data availability statement
The interview transcripts and recordings collected for this study are not publicly available in accordance with Institutional Review Board (IRB) regulations to protect participant privacy and confidentiality.
Ethics statement
The studies involving humans were approved by Hanyang Cyber University Institutional Review Board. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
SH: Methodology, Writing – review & editing, Writing – original draft. JL: Writing – review & editing, Writing – original draft, Conceptualization, Investigation.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00210717).
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Amado-Salvatierra, H. R., Chan, M. M., and Hernandez-Rizzardini, R. (2023). Combining human creativity and AI-based tools in the instructional design of MOOCs: Benefits and limitations. In Proceedings of 2023 IEEE Learning With MOOCS (LWMOOCS) (IEEE), 1–6. doi: 10.1109/LWMOOCS58322.2023.10306023
Baidoo-Anu, D., and Owusu Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. J. AI. 7, 52–62. doi: 10.61969/jai.1337500
Beaudry, A., and Pinsonneault, A. (2010). The other side of acceptance: studying the direct and indirect effects of emotions on information technology use. MIS Q. 34, 689–710. doi: 10.2307/25750701
Choi, G. W., Kim, S. H., Lee, D., and Moon, J. (2024). Utilizing generative AI for instructional design: exploring strengths, weaknesses, opportunities, and threats. TechTrends. 68, 832–844. doi: 10.1007/s11528-024-00967-w
Clandinin, D. J., and Connelly, F. M. (2000). Narrative Inquiry: Experience and Story in Qualitative Research. San Francisco, CA: Jossey-Bass.
Cuesta-Hincapie, C., Cheng, Z., and Exter, M. (2024). Are we teaching novice instructional designers to be creative? A qualitative case study. Instr. Sci. 52, 515–556. doi: 10.1007/s11251-023-09656-2
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inform. Manage. 71:102642. doi: 10.1016/j.ijinfomgt.2023.102642
Ge, X., Chen, C. H., and Davis, K. A. (2005). Scaffolding novice instructional designers' problem-solving processes using question prompts in a web-based learning environment. J. Educ. Comput. Res. 33, 219–248. doi: 10.2190/5F6J-HHVF-2U2B-8T3G
Giannakos, M., Azevedo, R., Brusilovsky, P., Cukurova, M., Dimitriadis, Y., Hernandez-Leo, D., et al. (2025). The promise and challenges of generative AI in education. Behav. Inf. Technol. 44, 2518–2544. doi: 10.1080/0144929X.2024.2394886
Hoard, B., Stefaniak, J., Baaki, J., and Draper, D. (2019). The influence of multimedia development knowledge and workplace pressures on the design decisions of the instructional designer. Educ. Technol. Res. Dev. 67, 1479–1505. doi: 10.1007/s11423-019-09687-y
Hodges, C. B., and Kirschner, P. A. (2024). Innovation of instructional design and assessment in the age of generative artificial intelligence. TechTrends. 68, 195–199. doi: 10.1007/s11528-023-00926-x
Holmes, W., Bialik, M., and Fadel, C. (2019). Artificial Intelligence in Education: Promise and Implications for Teaching and Learning. Boston, MA: The Center for Curriculum Redesign. doi: 10.1007/978-3-319-60013-0_107-1
Kenny, R., Zhang, Z., Schwier, R., and Campbell, K. (2005). A review of what instructional designers do: questions answered and questions not asked. Canad. J. Learn. Technol. (Rev. Canadienne Apprentiss. Technol.) 31, 55–67. doi: 10.21432/T2JW2P
Kumar, S., Gunn, A., Rose, R., Pollard, R., Johnson, M., and Ritzhaupt, A. D. (2024). The role of instructional designers in the integration of generative artificial intelligence in online and blended learning in higher education. Online Learn. 28, 207–231. doi: 10.24059/olj.v28i3.4501
Labov, W. (1972). Language in the Inner City: Studies in the Black English Vernacular. Philadelphia: University of Pennsylvania Press.
Luo, T., Muljana, P. S., Ren, X., and Young, D. (2025). Exploring instructional designers' utilization and perspectives on generative AI tools: a mixed methods study. Educ. Technol. Res. Dev. 73, 741–766. doi: 10.1007/s11423-024-10437-y
McClelland, D. C. (1998). Identifying competencies with behavioral-event interviews. Psychol. Sci. 9, 331–339. doi: 10.1111/1467-9280.00065
McNeill, L. (2024). Automation or innovation? A generative AI and instructional design snapshot. IICE Official Conference Proceedings, 187–194. doi: 10.22492/issn.2189-1036.2024.17
Park, J., and Choo, S. (2025). Generative AI prompt engineering for educators: practical strategies. J. Spec. Educ. Technol. 40, 411–417. doi: 10.1177/01626434241298954
Polkinghorne, D. E. (1995). Narrative configuration in qualitative analysis. Int. J. Qual. Stud. Educ. 8, 5–23. doi: 10.1080/0951839950080103
Riessman, C. K. (2008). Narrative Methods for the Human Sciences. Thousand Oaks, CA: Sage Publications.
Stojanov, A. (2023). Learning with ChatGPT 3.5 as a more knowledgeable other: an autoethnographic study. Int. J. Educ. Technol. High. Educ. 20:35. doi: 10.1186/s41239-023-00404-7
Strauss, A., and Corbin, J. (1998). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (2nd Edn.). Thousand Oaks, CA: Sage Publications.
Uduma, L., and Morrison, G. R. (2007). How do instructional designers use automated instructional design tool? Comput. Hum. Behav. 23, 536–553. doi: 10.1016/j.chb.2004.10.040
Ugur-Erdogmus, F., and Cagiltay, K. (2019). Making novice instructional designers expert: design and development of an electronic performance support system. Innov. Educ. Teach. Int. 56, 470–480. doi: 10.1080/14703297.2018.1453853
van den Berg, G., and du Plessis, E. (2023). ChatGPT and generative AI: possibilities for its contribution to lesson planning, critical thinking and openness in teacher education. Educ. Sci. 13:998. doi: 10.3390/educsci13100998
Keywords: generative AI, human-AI collaboration, instructional design, narrative analysis, novice instructional designers
Citation: Han S and Lim JY (2026) Between scaffolds and shifts: novice instructional designers' experiences with generative AI. Front. Educ. 10:1708167. doi: 10.3389/feduc.2025.1708167
Received: 18 September 2025; Revised: 27 November 2025;
Accepted: 04 December 2025; Published: 15 January 2026.
Edited by:
Chayanika Uniyal, University of Delhi, India
Reviewed by:
Rajiv Kumar Verma, University of Delhi, India
Anuradha Singhal, University of Delhi, India
Copyright © 2026 Han and Lim. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Ji Young Lim, jylim@bnue.ac.kr