- Department of Journalism, University of the Basque Country (UPV/EHU), Bilbao, Spain
The rise of Artificial Intelligence (AI) is presenting both technical and ethical challenges for media organisations, creating an urgent need for professional training. This study explores how media professionals in the Basque Country are equipping themselves to face these challenges. Using a mixed-method approach, it combines a survey of 504 active professionals with in-depth interviews with six innovation leaders from major regional media outlets. The findings reveal that only 14.1% of professionals have undergone AI training, mostly through self-learning. Larger, internationally focused companies are more proactive in providing training, while local and traditional media organisations show significant gaps. Technical and managerial roles are leading the way in adopting AI, whereas newsroom staff are notably behind. The study highlights the pressing need to enhance AI training, with a particular focus on ethical and technical aspects, both through in-house programmes and formal education pathways.
1 Introduction
The adoption of AI in newsrooms and media organisations has introduced a range of complex challenges for journalists and media professionals (Beckett et al., 2023; Coeckelbergh, 2020). As news production and distribution evolve, AI offers opportunities for efficiency and innovation, but it also brings risks and vulnerabilities (Brennen et al., 2018).
Much like previous technological advancements in the digitalisation of media, AI is a tool that requires careful consideration of its ethical implications. Challenges such as the reinforcement of biases through technology (Leiser, 2022), the spread of misinformation and manipulation (García-Marín and Salvat-Martinrey, 2022), and threats to privacy can all be mitigated with robust media literacy and targeted training. These measures empower journalists not only to use AI tools effectively but also to navigate the ethical dilemmas they present (Aissani et al., 2023; Baldessar and Zandomênico, 2024; Barceló-Ugarte et al., 2021; Helberger and Diakopoulos, 2023; Ufarte Ruiz et al., 2021).
AI, with its continuous evolution, holds the potential to transform our lives, work, and relationships. However, it also raises serious ethical and social justice concerns, particularly regarding the widening of existing disparities (e.g., race and gender) (Brundage et al., 2018) and the growing sophistication and dissemination of misinformation, especially through advanced deepfake creation (Phelan, 2022; Sohrawardi et al., 2024).
Nonetheless, AI is not solely a cause for concern among journalism professionals (Peña-Fernández et al., 2023a). In its more constructive application, it can serve as a powerful tool to combat informational disorders and malicious content (Bontridder and Poullet, 2021; Manfredi-Sánchez and Ufarte-Ruiz, 2020; Rubin, 2022). This technology has the ability to match the speed and sophistication of digital falsehoods, reducing the effort and time required by fact-checking professionals and enhancing their capacity to respond effectively to misinformation.
For AI to strengthen journalism’s role in supporting democracy (Lin and Lewis, 2022), it is vital to establish adequate regulations. These should include clear guidelines on authorship (Hofeditz et al., 2021; Díaz-Noci et al., 2024) and enforce transparency (Larsson and Heintz, 2020). AI’s integration into the media not only affects editorial workflows but also necessitates significant shifts in the skills and competencies required of journalists (Danzon-Chambaud and Cornia, 2023; Demmar and Neff, 2023; Gómez-Diago, 2022; Lopezosa et al., 2023; Noain Sánchez, 2022; Ufarte-Ruiz et al., 2020).
2 State of the art
The prominence of AI, heightened since late 2022 with the emergence of generative AI and tools like ChatGPT, has spurred significant innovation across various fields, including journalism (Bakke and Barland, 2022; García-Orosa et al., 2023; Tejedor and Vila, 2021). Although AI is not a novel concept in the media (Manfredi-Sánchez and Ufarte-Ruiz, 2020), major outlets and agencies have been automating parts of news production for the past two decades. For instance, the Big Ten Network has used automated systems for sports reporting since 2007, while the Los Angeles Times’ Quakebot has provided real-time earthquake updates since 2014. Agencies such as Reuters and the Associated Press have also automated parts of their services (Danzon-Chambaud, 2021; Sánchez-García et al., 2023).
A revolutionary shift has been the democratisation of generative AI tools, with ChatGPT as a leading example. Numerous companies have introduced similar platforms (Motlagh et al., 2023; Singh et al., 2023). The societal adoption of these technologies has been remarkably swift, with ChatGPT reaching 100 million users within just two months, outpacing TikTok (nine months) and Instagram (two and a half years) (Milmo, 2023).
The emergence of such tools can be seen as the latest phase in media’s digital transformation, redefining its nature within the framework of the Fourth Industrial Revolution (Micó et al., 2022; Dhiman, 2023). This transformation is expected to have a widespread impact across all areas of communication, from news production processes (Sánchez-García et al., 2023) to adjustments in normative values (Peña-Fernández et al., 2023b).
AI is reshaping journalism in multiple ways. By automating content production and reducing reporters’ workload, it fosters innovation and professional specialisation, allowing journalists to focus on more cognitive and creative tasks (Wu et al., 2019). Generative AI can also enable coverage of previously unprofitable topics (Atasoy et al., 2021) and deliver more personalised content (Hermann, 2022).
Integrating AI into newsrooms seems inevitable, as this technology has become essential for the evolution of both media and journalism itself (Terol, 2023; García-Orosa et al., 2023; Gutiérrez-Caneda et al., 2023).
For professionals, concerns about AI extend beyond industrial or sectoral issues such as changes in production processes, potential job losses, or shifts in required skill sets. There is considerable uncertainty about its impact on public opinion and democratic societies.
Since its release in November 2022, OpenAI’s ChatGPT has raised concerns regarding the spread of misinformation (Aydın and Karaarslan, 2023; Opdahl et al., 2023). Although the March 2023 release of GPT-4 brought improvements in source citation, misinformation remains a persistent issue (Gutiérrez-Caneda et al., 2023).
During this period, discourse among media professionals has increasingly focused on the potential negative consequences of AI in journalism (Beckett et al., 2023). Concerns include the potential for AI reliance to degrade journalistic quality (Calvo-Rubio and Rojas-Torrijos, 2024), reinforce inherent biases (Cloudy et al., 2023; Leiser, 2022), and encourage unethical practices such as content farms and plagiarism (Palacios Tapia, 2023; Subiela-Hernández and Vizcaíno-Laorga, 2023).
Despite these risks, the primary concern among professionals since the advent of generative AI remains its role in spreading false information, fostering misinformation, and exacerbating polarisation (Berrocal-Gonzalo et al., 2023; Peña-Fernández et al., 2023a).
In response to the growing social and political concerns about misinformation, governments and institutions have intensified efforts to mitigate its impact. Recent measures include the development of tools like fact-checking algorithms, detection systems (e.g., random forests), bots and chatbots, and monitoring platforms, all of which heavily rely on AI (Arias Jiménez et al., 2022; Garriga et al., 2024; Moreno Espinosa et al., 2024; Alonso González and Sánchez Gonzales, 2024). These initiatives combine information verification with media literacy programmes, ongoing training, and professional development for journalists.
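As a purely illustrative aside, the sketch below shows the kind of AI-based detection system cited above: a random-forest classifier over text features. It is not drawn from any of the referenced initiatives; the scikit-learn pipeline, the sample claims, and the labels are assumptions introduced here for illustration only.

```python
# Minimal, illustrative sketch of a random-forest misinformation detector
# of the kind cited in the literature above. Purely hypothetical: the
# claims, labels, and model settings are placeholders, not real data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Hypothetical labelled claims: 1 = verified, 0 = false or misleading
claims = [
    "The regional government approved the 2024 budget on Thursday",
    "Scientists confirm drinking seawater cures all viral infections",
    "The earthquake measured 4.2 and no damage was reported, officials said",
    "Leaked study proves the election results were fabricated by bots",
]
labels = [1, 0, 1, 0]

# TF-IDF text features feeding a random forest classifier.
model = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(claims, labels)

# Probability estimates for a new, unseen claim; real systems are trained
# on large verified corpora and complement, not replace, human fact-checkers.
print(model.predict_proba(["Officials deny reports of contaminated tap water"]))
```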
Given the technical and ethical complexities involved, training for current and future professionals has become a critical priority, particularly in a context where technological evolution is outpacing existing educational structures.
In this context, this study examines how media professionals in the Basque Country perceive the opportunities and risks associated with AI, their current training levels, and what they consider necessary to adapt to this rapidly evolving landscape.
3 Methodology
A mixed-method approach was employed, combining quantitative survey data with qualitative insights gathered through semi-structured in-depth interviews. This methodology was chosen to provide a comprehensive understanding of journalists’ and media workers’ experiences and perceptions while also quantifying trends and relationships across variables.
The survey was conducted online, with supplementary telephone support, during May and June 2024. A total of 504 responses were collected from journalists and communication sector professionals working in media, companies, or institutions in the Basque Country. Participants were identified through the Open Communication Guide provided by the Basque Government, which lists active media organisations in the region, and through the Basque Journalists’ Association.
According to existing data (Basque Government, 2022; Pérez et al., 2023), it is estimated that approximately 5,000 people work in the media sector in Euskadi. Therefore, for a 95% confidence level, the margin of error of the survey is ±4.15%.
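As a point of reference, this figure can be reproduced with the standard margin-of-error formula for a proportion with a finite population correction, assuming maximum variance (p = 0.5) and z = 1.96 for a 95% confidence level; these assumptions are not stated explicitly in the text:

\[
e = z\sqrt{\frac{p(1-p)}{n}}\sqrt{\frac{N-n}{N-1}} = 1.96\sqrt{\frac{0.25}{504}}\sqrt{\frac{5000-504}{4999}} \approx 0.041,
\]

i.e., roughly ±4.1%, matching the reported margin up to rounding in the population estimate.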
The survey employed an online panel and ensured respondent anonymity. No personal data were collected, eliminating risks related to data storage and handling. The sample included 276 men (55.1%), 223 women (44.5%), and two individuals who identified as “other” (0.4%). Variables such as years of professional experience, type of media, scope of the organisation, job role, and responsibility levels were also analysed.
In addition to the survey, six semi-structured in-depth interviews were conducted with senior figures specialising in technology, innovation, and AI from the most prominent media outlets based in the Basque Country. These interviews, lasting between 30 and 50 min, were conducted via videoconference in April and May 2024.
Based on the state of the art, this study seeks to answer the following research questions:
RQ1: What specific AI training have journalists received so far?
RQ2: What are the perceived training needs among journalists in this area?
RQ3: What are the current policies of media organisations regarding AI training, according to media leaders?
4 Results and analysis
4.1 AI training among media professionals
The findings reveal that AI training levels among journalists are generally low. Despite the growing importance of this technology in the media industry, only 14.1% of professionals have received any form of training in this area. In terms of gender, women (15.9%) reported slightly higher training rates than men (12.7%).
Significant differences in AI training are observed across different types of media (Table 1). For instance, professionals engaged in roles further from traditional journalism, such as corporate communication and advertising, show a stronger commitment to AI training.
Among respondents, 44.4% of those working in advertising agencies and 21.2% of those in communication departments reported receiving AI training, far exceeding the levels observed among those in purely journalistic roles. A more business-oriented and economically driven approach to communication activities may explain this higher investment in AI training, as it offers opportunities to optimise both time and costs in content production.
A notable difference in AI training levels is also evident among professionals working in outlets that produce purely journalistic content. Those employed in digital-native sections or outlets reported higher training rates (25.8%) compared to their counterparts in more traditional media formats, such as radio (10.8%), print (9.6%), and television (5.8%).
In summary, the closer media professionals are to traditional journalistic tasks and outlets, the lower their levels of AI training tend to be. Conversely, training is significantly higher among individuals working in digital areas of media organisations. Similarly, the further removed their roles are from core journalistic activities and the closer they are to persuasive functions—such as advertising or corporate communication—the greater the likelihood of having received AI training.
The findings also highlight a correlation between the size and scope of media organisations and the AI training their employees receive (Table 1). Larger organisations with broader dissemination areas show higher training levels, with international outlets reporting nearly double the training rate (21.7%) of local outlets (12.2%). These results suggest that the adoption of AI in media may exacerbate the existing technological gap that has emerged since the onset of media digitalisation. Smaller outlets, in particular, are less prepared to meet this challenge.
The data further confirm that professionals closest to traditional journalistic tasks, such as editors and presenters, have received less AI training than those in technical roles or in corporate communication and advertising (Table 2).
Moreover, it is unsurprising that individuals in leadership roles have, on average, received training at twice the rate of those without such responsibilities (19.9% compared to 9.4%). The disruptive nature of AI technology, its potential impact on work processes, and the availability of greater training opportunities are likely factors influencing this trend. This pattern suggests that more experienced professionals, particularly those in managerial or strategic positions, are more engaged in adopting and implementing AI technologies. Their roles often require staying updated on technological advancements to make informed strategic decisions.
Similarly, employees with longer tenures in their organisations are more likely to have received training. Their stability and seniority in the workplace, coupled with their presence in leadership roles, likely contribute to these higher training levels. Additionally, a perceived technological gap may have driven them to seek further training. These findings underline the importance of integrating AI training into the early and mid-stages of professional development to equip workers for technological challenges.
4.2 AI training policies in media organisations
AI training is a critical factor for adopting this technology effectively. Among respondents who reported never using AI, 43.3% cited a lack of training as the primary barrier.
The study found that only 8.5% of participants received AI training provided by their organisations, whether online or in person, compared to 12.9% who pursued training independently.
By organisation type, advertising agencies, digital media, and communication departments stand out for their commitment to AI training (Table 3). In contrast, traditional and journalistic media outlets have offered little training in this area to their employees.
This trend aligns with the types of roles that reported higher training levels (Table 4), which were predominantly technical and commercial positions—areas more peripheral to the production of current affairs content.
In terms of the scope of media operations, 13% of employees at international outlets reported seeking AI training independently, the highest percentage among all categories. This reflects a greater personal initiative to acquire AI skills in broader dissemination environments. However, despite the correlation between media reach and AI training, a significant proportion of professionals across all levels remain untrained. In local outlets, 87.8% of professionals reported no AI training, a figure that, while slightly lower, remains substantial in regional (86.4%), national (84%), and international (78.3%) contexts.
Technical professionals stand out as the most trained group in AI, with 66.7% having received training. This aligns with the technical nature of AI, which demands expertise in data analysis, programming, and advanced software use. In management and leadership roles, 25% of professionals reported AI training, highlighting its strategic importance for decision-making and innovation.
Media leaders, however, emphasise that an exclusively technical approach to AI is insufficient. They stress the need to integrate technical understanding with ethical considerations to address the core challenges of journalism. As one interviewee noted: “We need people in newsrooms trained in these topics, and, of course, at a technical level. But perhaps, also on an ethical level, it is important to have technical knowledge of AI […] to know when and how to use it” (I4).
Following technical roles, graphic design professionals (18.75%) and production staff (15.8%) reported higher training levels. Advertising and corporate communication professionals also showed notable rates (16.7% and 14%, respectively). Journalists in editorial roles (13.7%) and presenters (6.3%) had the lowest training rates. According to media leaders, the limited immediate application of AI in fast-paced editorial work may explain this lower adoption. As one leader remarked: “Internally, it has not affected us yet; at the moment, we are not using anything. But given the breadth of what we cover, it could be useful, yes. Something has indeed been considered” (I6).
The analysis also reveals a strong overlap between individual initiative and organisational support for training. Among those who received any form of training, 52.1% did so through both personal and company efforts, 39.4% relied solely on self-directed learning, and only 8.5% received training exclusively from their employer. This suggests that corporate training initiatives largely benefit those already motivated to learn, leaving a majority without access to such opportunities. Traditional media outlets and more journalistic roles appear to have the most room for improvement in this regard.
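As a consistency check, assuming the overall 14.1% training rate reported in Section 4.1, these within-group shares reproduce the figures given at the start of this section:

\[
0.141 \times (0.521 + 0.085) \approx 0.085, \qquad 0.141 \times (0.521 + 0.394) \approx 0.129,
\]

i.e., the 8.5% who received any company-provided training and the 12.9% who pursued any training independently.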
This self-motivated approach aligns with comments from media leaders: “It does not all depend on companies; individuals need that drive to keep training and equip themselves with more tools to do their job. Continuous training is not just a company’s responsibility; it is also part of the worker’s responsibility” (I6).
The findings underscore the need for enhanced corporate training opportunities to provide media professionals with a deeper and more practical understanding of AI. Media leaders emphasise that this training must be ongoing: “After just 2 months, you are already outdated” (I1). Thus, training efforts cannot be one-off initiatives but must be sustained over time: “We need to keep updating our skills because technology evolves so rapidly. What was cutting-edge a year ago is likely obsolete today” (I2).
Given the rapid pace of change in AI, the focus has shifted to continuous professional development. As one leader noted: “The market’s demands run far ahead of what educational institutions can offer. These are skills everyone working with AI should know. For example, you must be careful about what you provide to AI systems, as you could be sharing confidential internal company information” (I1).
While media leaders recognise the urgency of training to address AI-driven changes, they also stress the importance of foundational knowledge in technical aspects of AI (I2, I4, I5). One leader commented: “We need to improve competencies and prioritise training for technology use in general, as media organisations include teams of all ages with varying technological capacities” (I3).
5 Discussion and conclusions
This study on AI training among communication professionals in the Basque Country reveals a significant gap in specialised training, despite the growing importance of AI in the media industry (Husnain et al., 2024). The findings indicate that only a small percentage of journalists have received AI training so far, most of which was self-directed. The continuous training policies offered by companies and the self-directed learning initiatives of journalists emerge as essential needs in an environment undergoing rapid and profound transformation.
The disparities identified among different types of organisations and professional profiles highlight key areas where training efforts should be directed. Larger communication companies with broader dissemination reach have made greater investments in training their professionals, while smaller, local organisations have so far made fewer efforts in this area. This disparity suggests that AI could further widen the technological gap already created by media digitalisation, intensifying the divide between major international platforms and smaller, resource-constrained outlets.
Similarly, the data also reveal a differential adoption of AI between core and peripheral aspects of media operations. While areas adjacent to journalistic activity (technical, commercial, etc.) are adopting this technology earlier through training, professionals directly involved in content creation are lagging behind in terms of training received. This may be influenced by scepticism among journalists about the potential impact of AI on their profession (Peña-Fernández et al., 2023b).
Media leaders in the Basque Country acknowledge the need for greater ongoing training in AI (Noain Sánchez, 2022). Technological training is viewed as essential, not only to enhance individual skills but also to ensure the responsible and effective use of AI in media. In the short term, organisations are advised to provide in-house or company-supported training programmes to enable collaboration between media professionals and technical staff, fostering awareness of the risks associated with improper AI use. In the longer term, university-level training programmes should also address the broader implications of AI applications in media, focusing on its potential to generate misinformation, perpetuate biases, and exacerbate social inequalities (Opdahl et al., 2023).
In summary, this research highlights the urgent need to expand AI literacy and training among communication professionals (Deuze and Beckett, 2022; Gómez-Diago, 2022; Larrondo-Ureta and Peña-Fernández, 2024). Only through continuous and comprehensive AI training—encompassing both technical and ethical aspects—can media professionals effectively meet contemporary challenges and ensure the responsible use of AI in the media industry.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
BS: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Writing – original draft. SP-F: Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Validation, Visualization, Writing – review & editing. JP-D: Data curation, Formal analysis, Investigation, Methodology, Software, Writing – review & editing. AL-U: Conceptualization, Funding acquisition, Investigation, Project administration, Supervision, Validation, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This article is part of the research project “Impact of artificial intelligence and algorithms on online media, journalists and audiences” (PID2022-138391OB-I00) funded by the Spanish Ministry of Science, Innovation and Universities and by the European Commission NextGeneration EU/PRTR and “The Impact of Artificial Intelligence on Basque Media and Their Professionals” (US 23/10), funded by the University of the Basque Country (UPV/EHU). The authors belong to the Basque University’s Consolidated Research Group System “Gureiker” (IT-1496-22).
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that Gen AI was used in the creation of this manuscript. Generative AI was used to adapt references to Harvard format and to check English language.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Aissani, R., Abdallah, R. A. Q., Taha, S., and Al Adwan, M. N. (2023). “Artificial intelligence tools in media and journalism: roles and concerns.” in Proceedings of the 2023 International Conference on Multimedia Computing, Networking and Applications (MCNA). June, pp. 19–26. IEEE.
Alonso González, M., and Sánchez Gonzales, M. (2024). Inteligencia artificial en la verificación de la información política: Herramientas y tipología. Más Poder Local 56, 27–45. doi: 10.56151/maspoderlocal.215
Arias Jiménez, B., Rodríguez-Hidalgo, C., Mier-Sanmartín, C., and Coronel-Salas, G. (2022). “Use of chatbots for news verification” in Communication and applied technologies: Proceedings of ICOMTA 2022. Singapore: Springer Nature Singapore, 133–143.
Atasoy, B., Efe, M., and Tutal, V. (2021). Towards the artificial intelligence management in sports. Int. J. Sports Exer. Train. Sci. - IJSETS, 7, 100–113. doi: 10.18826/useeabd.845994
Aydın, Ö., and Karaarslan, E. (2023). Is ChatGPT leading generative AI? What is beyond expectations? Acad. Platform J. Eng. Smart Syst. 11, 118–134. doi: 10.21541/apjess.1293702
Bakke, N. A., and Barland, J. (2022). Disruptive innovations and paradigm shifts in journalism as a business: from advertisers first to readers first and traditional operational models to the AI factory. SAGE Open 12, 1–13. doi: 10.1177/21582440221094819
Baldessar, M. J., and Zandomênico, R. (2024). “Journalistic ethics in the face of news produced by artificial intelligence” in Creating culture through media and communication. eds. S. V. Moreira, K. Moles, L. Robinson, and J. Schulz, (Leeds: Emerald Publishing Limited), 131–137.
Barceló-Ugarte, T., Pérez-Tornero, J. M., and Vila-Fumàs, P. (2021). Ethical challenges in incorporating artificial intelligence into newsrooms. News Media Innovation Reconsidered: Ethics and Values in a Creative Reconstruction of Journalism, 138–153.
Basque Government (2022). Censo del mercado de trabajo. Specific Statistical Body of the Department of Labor and Employment. Available online at: https://www.eustat.eus/elementos/ele0021400/censo-del-mercado-de-trabajo-oferta/inf0021420_c.pdf (Accessed March 22, 2024).
Beckett, C., Sanguinetti, P., and Palomo, B. (2023). “New frontiers of intelligent journalism” in Blurring boundaries of journalism in digital media: New actors, models and practices. eds. M. C. Negreira-Rey, J. Vázquez-Herrero, J. Sixto-García, and X. López-García, (Cham: Springer International Publishing), 275–288.
Berrocal-Gonzalo, S., Waisbord, S., and Gómez-García, S. (2023). Polarización política y medios de comunicación, su impacto en la democracia y en la sociedad. Prof. Inf. 32:e320622. doi: 10.3145/epi.2023.nov.22
Bontridder, N., and Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data Policy 3:e32. doi: 10.1017/dap.2021.20
Brennen, J. S., Howard, P. N., and Nielsen, R. K., (2018). An industry-led debate: how UK media cover artificial intelligence. Reuters Institute for the Study of Journalism. Available online at: https://www.oxfordmartin.ox.ac.uk/publications/an-industry-led-debate-how-uk-media-cover-artificial-intelligence (Accessed March 22, 2024).
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., et al. (2018). The malicious use of artificial intelligence: forecasting, prevention, and mitigation. arXiv. Available online at: https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf (Accessed March 22, 2024).
Calvo-Rubio, L. M., and Rojas-Torrijos, J. L. (2024). Criteria for journalistic quality in the use of artificial intelligence. Commun. Soc. 37, 247–259. doi: 10.15581/003.37.2.247-259
Cloudy, J., Banks, J., and Bowman, N. D. (2023). The str(AI)ght scoop: artificial intelligence cues reduce perceptions of hostile media bias. Digit. Journal. 11, 1577–1596. doi: 10.1080/21670811.2021.1969974
Danzon-Chambaud, S. (2021). A systematic review of automated journalism scholarship: guidelines and suggestions for future research. Open Res. Europe 1. doi: 10.12688/openreseurope.13096.1
Danzon-Chambaud, S., and Cornia, A. (2023). Changing or reinforcing the “rules of the game”: a field theory perspective on the impacts of automated journalism on media practitioners. Journal. Pract. 17, 174–188. doi: 10.1080/17512786.2021.1919179
Demmar, K., and Neff, T. (2023). Generative AI in journalism education: mapping the state of an emerging space of concerns, opportunities, and strategies. Journal. Educ. 12, 47–58.
Deuze, M., and Beckett, C. (2022). Imagination, algorithms and news: developing AI literacy for journalism. Digit. Journal. 10, 1913–1918. doi: 10.1080/21670811.2022.2119152
Dhiman, D. B. (2023). The challenges of computational journalism in the 21st century. SSRN. Available online at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4456378 (Accessed March 22, 2024).
Díaz-Noci, J., Peña-Fernández, S., Meso-Ayerdi, K., and Larrondo-Ureta, A. (2024). The influence of AI in the media workforce: how companies use an array of legal remedies. Tripodos 55, 33–54. doi: 10.51698/tripodos.2024.55.03
García-Orosa, B., Canavilhas, J., and Vázquez-Herrero, J. (2023). Algorithms and communication: A systematized literature review. Comunicar. 74, 9–21. doi: 10.3916/C74-2023-01
García-Marín, D., and Salvat-Martinrey, G. (2022). Tendencias en la producción científica sobre desinformación en España. Revisión sistematizada de la literatura (2016-2021). adComunica. 23, 23–50. doi: 10.6035/adcomunica.6045
Garriga, M., Ruiz-Incertis, R., and Magallón-Rosa, R. (2024). Artificial intelligence, disinformation and media literacy proposals around deepfakes. Observatorio (OBS)*, 18.
Gómez-Diago, G. (2022). Perspectivas para abordar la inteligencia artificial en la enseñanza de periodismo: Una revisión de experiencias investigadoras y docentes. Rev. Lat. Comun. Soc. 80, 29–46. doi: 10.4185/RLCS-2022-1542
Gutiérrez-Caneda, B., Vázquez-Herrero, J., and López-García, X. (2023). AI application in journalism: ChatGPT and the uses and risks of an emergent technology. Prof. Inf. 32:e320514. doi: 10.3145/epi.2023.sep.14
Helberger, N., and Diakopoulos, N. (2023). The european AI act and how it matters for research into AI in media and journalism. Digit. Journal. 11, 1751–1760. doi: 10.1080/21670811.2022.2082505
Hermann, E. (2022). Artificial intelligence and mass personalisation of communication content: an ethical and literacy perspective. New Media Soc. 24, 1258–1277. doi: 10.1177/14614448211022702
Hofeditz, L., Mirbabaie, M., Holstein, J., and Stieglitz, S. (2021). Do you trust an AI-journalist? A credibility analysis of news content with AI authorship. In: ECIS Proceedings, June.
Husnain, M., Imran, A., and Tareen, H. K. (2024). Artificial intelligence in journalism: examining prospectus and obstacles for students in the domain of media. J. Asian Dev. Stud. 13, 614–625. doi: 10.62345/jads.2024.13.1.51
Larrondo-Ureta, A., and Peña-Fernández, S. (2024). La formación de periodistas en la era de la inteligencia artificial: Aproximaciones desde la epistemología de la comunicación. Anu. ThinkEPI 18:e18e11. doi: 10.3145/thinkepi.2024.e18a11
Larsson, S., and Heintz, F. (2020). Transparency in artificial intelligence. Internet Policy Rev. 9:1469. doi: 10.14763/2020.2.1469
Leiser, M. R. (2022). “Bias, journalistic endeavours, and the risks of artificial intelligence” in Artificial intelligence and the media: Reconsidering rights and responsibilities. eds. T. Pihlajarinne and A. Alén-Savikko (Cheltenham: Edward Elgar Publishing), 8–32.
Lin, B., and Lewis, S. C. (2022). The one thing journalistic AI just might do for democracy. Digit. Journal. 10, 1627–1649. doi: 10.1080/21670811.2022.2084131
Lopezosa, C., Codina, L., Pont-Sorribes, C., and Vállez, M. (2023). Use of generative artificial intelligence in the training of journalists: challenges, uses and training proposal. Prof. Inf. 32:8. doi: 10.3145/epi.2023.jul.08
Manfredi-Sánchez, J. L., and Ufarte-Ruiz, M. J. (2020). Inteligencia artificial y periodismo: Una herramienta contra la desinformación. Rev. CIDOB d’Afers Int. 124, 49–72. doi: 10.24241/rcai.2020.124.1.49
Micó, J. L., Casero-Ripollés, A., and García-Orosa, B. (2022). “Platforms in journalism 4.0: the impact of the fourth industrial revolution on the news industry” in Total journalism. Studies in big data. eds. J. Vázquez-Herrero, A. Silva-Rodríguez, M. C. Negreira-Rey, C. Toural-Bran, and X. López-García, vol. 97 (Cham: Springer), 97–112.
Milmo, D. (2023). ChatGPT reaches 100 million users two months after launch. The Guardian, 2 February. Available online at: https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-openai-fastest-growing-app (Accessed March 22, 2024).
Moreno Espinosa, P., Abdulsalam Alsarayreh, R. A., and Figuereo Benítez, J. C. (2024). El Big Data y la inteligencia artificial como soluciones a la desinformación. Doxa Comunicación, 38, 437–451. doi: 10.31921/doxacom.n38a2029
Motlagh, N. Y., Khajavi, M., Sharifi, A., and Ahmadi, M. (2023). The impact of artificial intelligence on the evolution of digital education: a comparative study of OpenAI text generation tools including ChatGPT, Bing chat, bard, and Ernie. arXiv preprint. Available online at: https://arxiv.org/abs/2309.02029 (Accessed March 22, 2024).
Noain Sánchez, A. (2022). Addressing the impact of artificial intelligence on journalism: the perception of experts, journalists and academics. Commun. Soc. 35, 105–121. doi: 10.15581/003.35.3.105-121
Opdahl, A. L., Tessem, B., Dang-Nguyen, D. T., Motta, E., Setty, V., Throndsen, E., et al. (2023). Trustworthy journalism through AI. Data Knowl. Eng. 146:102182. doi: 10.1016/j.datak.2023.102182
Palacios Tapia, A. G., (2023). La influencia de la inteligencia artificial en el periodismo: Uso de herramientas y aplicaciones. Unpublished manuscript.
Peña-Fernández, S., Meso-Ayerdi, K., Larrondo-Ureta, A., and Díaz-Noci, J. (2023a). Without journalists, there is no journalism: the social dimension of generative artificial intelligence in the media. Prof. Inf. 32:e320227. doi: 10.3145/epi.2023.mar.27
Peña-Fernández, S., Peña-Alonso, U., and Eizmendi-Iraola, M. (2023b). El discurso de los periodistas sobre el impacto de la inteligencia artificial generativa en la desinformación. Estud. Mensaje Period. 29, 833–841. doi: 10.5209/esmp.88673
Pérez, F., Broseta, B., Escribá, A., López, G., Maudos, J., and Pascual, F. (2023). Los medios de comunicación en la era digital. Fundación BBVA.
Phelan, P. (2022). Are the current legal responses to artificial intelligence-facilitated ‘deepfake’ pornography sufficient to curtail the inflicted harm? N. Engl. L. Rev. 9, 20–45.
Rubin, V. L. (2022). “Artificially Intelligent Solutions: Detection, Debunking, and Fact-Checking” in Misinformation and Disinformation (Cham: Springer). doi: 10.1007/978-3-030-95656-1_7
Sánchez-García, P., Merayo-Álvarez, N., Calvo-Barbero, C., and Diez-Gracia, A. (2023). Spanish technological development of artificial intelligence applied to journalism: companies and tools for documentation, production and distribution of information. Prof. Inf. 32:e320208. doi: 10.3145/epi.2023.mar.08
Singh, S. K., Kumar, S., and Mehra, P. S. (2023). ChatGPT and Google Bard AI: a review. In: Proceedings of the 2023 International Conference on IoT, Communication and Automation Technology (ICICAT). June, 1–6. IEEE.
Sohrawardi, S. J., Wu, Y. K., Hickerson, A., and Wright, M. (2024). “Dungeons and deepfakes: using scenario-based role-play to study journalists' behaviour towards using AI-based verification tools for video content.” In: Proceedings of the CHI Conference on Human Factors in Computing Systems, May, 1–17.
Subiela-Hernández, B. J., and Vizcaíno-Laorga, R. (2023). Retos y oportunidades en la lucha contra la desinformación y los derechos de autor en periodismo: MediaVerse (IA, blockchain y smart contracts). Estud. Mensaje Period. 29:88081. doi: 10.5209/esmp.88081
Tejedor, S., and Vila, P. (2021). Exojournalism: a conceptual approach to a hybrid formula between journalism and artificial intelligence. J. Med. 2, 830–840. doi: 10.3390/journalmedia2040048
Terol, T. M. (2023). Innovación mediática: Aplicaciones de la inteligencia artificial en el periodismo en España. Textual Vis. Media 17, 41–60. doi: 10.56418/txt.17.1.2023.3
Ufarte Ruiz, M. J., Calvo-Rubio, L. M., and Murcia Verdú, F. J. (2021). Los desafíos éticos del periodismo en la era de la inteligencia artificial. Estud. Mensaje Period. 27, 673–684. doi: 10.5209/esmp.69708
Ufarte-Ruiz, M. J., Fieiras-Ceide, C., and Túñez, M. (2020). La enseñanza-aprendizaje del periodismo automatizado en instituciones públicas: Estudios, propuestas de viabilidad y perspectivas de impacto de la IA. Anàlisi Quad. Comun. Cult. 62, 131–146.
Keywords: artificial intelligence, journalists, media, training, digital transformation
Citation: Sarrionandia B, Peña-Fernández S, Ángel Pérez-Dasilva J and Larrondo-Ureta A (2025) Artificial intelligence training in media: addressing technical and ethical challenges for journalists and media professionals. Front. Commun. 10:1537918. doi: 10.3389/fcomm.2025.1537918
Edited by:
Maria O’Brien, University of Galway, Ireland
Reviewed by:
Michał Kuś, University of Wrocław, Poland
Copyright © 2025 Sarrionandia, Peña-Fernández, Ángel Pérez-Dasilva and Larrondo-Ureta. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Simón Peña-Fernández, simon.pena@ehu.eus