AI application in journalism: ChatGPT and the uses and risks of an emergent technology

AI integration in media newsrooms is changing professional routines, required profiles and journalistic products. The acceleration in the development of this technology since the end of 2022 makes it necessary to review these tools in order to analyse their uses and risks, as well as their possible applications in the journalistic field. OpenAI launched ChatGPT, open to the public, in November 2022, and this tool has been a key element in this technological revolution. This paper analyses ChatGPT applications in journalistic newsrooms by introducing the tool into the routines of different professionals during a working day. A mixed methodology was used to carry out this experience: a benchmarking of AI tools applied to journalism was created, a walkthrough experience was carried out with ChatGPT and, based on the results obtained in these first two phases, the experiment with journalists was designed. A total of 12 journalists of different ages and sectors, with little or no previous contact with tools based on artificial intelligence, participated in this research, as the aim of the study is to observe the application in newsrooms without the need for technical training. The results show the usefulness of the tool to automate mechanical processes, rewrite texts, analyse data and even serve as a generator of content ideas. On the other hand, journalists have also perceived significant risks, such as the inaccuracy of AI and its lack of 'empathy'.


Introduction
The automation of routine and monotonous tasks in the media is a present and future reality. The network society of the third decade of the third millennium is structured around the Internet galaxy (Castells, 2001) and its rampant platformization (Van-Dijck; Poell; De-Waal, 2018), and immersed in the fourth industrial revolution, which significantly modifies production systems as well as different aspects of society (Micó; Casero-Ripollés; García-Orosa, 2022). In this context, the introduction of artificial intelligence in the processes of searching, production, dissemination and management of communication messages has established a platform of turbines that will progressively drive the exponential multiplication of communication flows and personalization, with new ethical challenges (Hermann, 2022). Today there is a certain enthusiasm for the arrival of artificial intelligence, due to its potential to transform and introduce efficiency into communicative processes, representing a technocentric vision of communication, but there are risks and challenges due to a lack of transparency from socio-legal and scientific-computing perspectives (Larsson; Heintz, 2020). These are two dimensions that need to be monitored and analysed, as far as possible, to understand and correct possible dysfunctions.
The complexity implied by the changes underway in the journalistic field not only stimulates renewed debates in the field of communication, but also drives new transformations in the communication ecosystem that bring opportunities and challenges in the context of the next technological revolution, in which it is announced that machines will drive machine learning, imitate human thoughts and behaviours and perform new cognitive functions (Samuel et al., 2022). Big companies use machine learning in their activities, both for big data analysis in general and through different strategies, especially in digital marketing and programmatic advertising. The disruptive impact of artificial intelligence, which affects all phases of the advertising process (Martínez-Martínez; Aguado-Terrón; Sánchez-Cobarro, 2022), has multiplied intermediaries and contributed to changes in the media environment, which is increasingly in need of innovative business models (Evens; Raats; Von-Rimscha, 2017) that ensure their sustainability in a scenario in which artificial intelligence affects the processes, practices and results of new companies (Chalmers; MacKenzie; Carter, 2021). Several companies have already revolutionized their business models using artificial intelligence (Mishra; Tripathi, 2021), but the models still have room for further enrichment and efficiency as renewed integrated systems are explored for the new scenario of the fourth industrial revolution (Ross; Maynard, 2021), because of the radical change it implies for society as a whole.
Artificial intelligence, which experts recognise as one of the current technologies that will drive a more efficient journalism, has entered all departments of newspaper companies and, together with different tools, is guiding many of the changes that are taking place in newsrooms. The possibilities it offers are very different, ranging from the elaboration of simple news items to the exploration of new dimensions for a piece or the improvement of interactions with users through information processing. The application of algorithms and artificial intelligence to journalism has developed at a dizzying pace in a very short period (Parratt-Fernández; Mayoral-Sánchez; Mera-Fernández, 2021), which has also resulted in significant academic interest and output. However, as the changes introduced by the latest generation of tools are very recent and their impact is highly relevant, it is necessary to monitor them periodically and in as timely a manner as possible to understand the trends and the most used tools right now. The results of this monitoring should provide perspective for the regulatory challenges facing artificial intelligence, in accordance with the socially acceptable limits of our cultural context.
In November 2022, OpenAI launched ChatGPT (Márquez, 2023), an AI tool with a conversational interface that answers users' questions and can perform actions involving natural language generation. This launch served as a starting point, and after that date many different AI initiatives accelerated and came to light. Google announced Bard, a conversational AI that is not yet available in all territories (Google, 2023); Microsoft included a 'Chat' function in its search engine with the help of OpenAI (Fernández, 2023a) that works in a similar way and offers the possibility of creating images with artificial intelligence, something already seen in DALL-E, another OpenAI system that creates images from descriptions made in natural language (OpenAI, 2023a). In March 2023, Elon Musk created his own AI company, X.AI, which he publicly revealed a month later (Prego, 2023). In May 2023, Google announced that it would include a generative AI response at the top of search results (Hazard-Owen, 2023), which led OpenAI to promise a new paid version of ChatGPT with more functionalities, such as a direct connection of the tool to the network, which up to this point could only be achieved with the help of external software.
However, the arrival of ChatGPT was also controversial and reopened ethical and deontological debates on the limits of technology use, which in some cases even reached the legal sphere. In Italy, its use was banned for not respecting data protection law (Buj, 2023). In March 2023, more than 1,000 AI leaders, including Elon Musk, Steve Wozniak (co-founder of Apple), Jaan Tallinn (co-founder of Skype) and Max Tegmark (MIT), signed a letter calling for a halt to AI training, especially for programs that may be more capable than GPT-4, for at least six months, in order to work on making current systems more robust and reliable (Aguilar, 2023). The problem highlighted in this missive is the lack of control and knowledge about the AI systems being created and the risks they may pose.
Realising that artificial intelligence could help them with some of the challenges they were facing (De-Lima-Santos; Ceron, 2022), newspaper companies began to show an early interest in the communicative processes involved in the application of natural language processing, the detection of informative trends, and the automatic production of texts (Canavilhas, 2022). According to focus studies, there is a belief among journalism experts and professionals that this technology -if current forecasts are fulfilled- will play an important role in the industry, with three main lines of application: the automation of content -textual and audiovisual-; the verification and improvement of access to information and monetization -including subscription and loyalty systems-; and the personalization of content (De-Lara; García-Avilés; Arias-Robles, 2022).
There is also a current of opinion among communication professionals and academics that this technology will not have a negative impact on the journalistic labour market (Calvo-Rubio; Ufarte-Ruiz, 2020) and that professionals need to be adequately prepared to incorporate these state-of-the-art tools into their practices. This vision, however, coexists with fears that, in some countries, many journalists may be displaced from the media (Yu; Huang; Jones, 2020). Sánchez-García et al. (2023) even point out that, following their research on AI applied to journalism in Spain, "the experts consulted draw an 'irreversible' technological reality, a 'forced change' that, however, the media receive with 'slowness', 'distrust' and 'unawareness'". Time will help to dispel doubts about the real impact on the labour market in the future, under the watchful eye of research on different geographical areas.
The potential of artificial intelligence to transform journalism will translate into tangible results if it is oriented towards the development of news pieces that are accurate and accessible, that cultivate diversity, that are relevant and timely and, in short, that contribute to a higher quality in the processes of message development, so that citizens are better informed and more satisfied with the information they receive (Lin; Lewis, 2022). Without doubting this efficiency -as AI has introduced journalism into an unknown scenario, one that must be explored along a path filled with challenges and risks- there are many questions on the horizon. This has led to important ethical, labour and social debates in the field of communication in recent years (Túñez-López; Toural-Bran; Valdiviezo-Abad, 2019), but always with the desire to understand the impact and take advantage of the benefits it can bring, such as the extension of current automated textual news to audio and video information on demand, which will favour an unstructured, non-linear consumption of the news, and will promote changes in the business model due to new ways of relating to the audience and distributing the product (Túñez-López; Fieiras-Ceide; Vaz-Álvarez, 2021). Journalists and experts perceive that AI will enhance the capabilities of journalists by saving time, increasing the efficiency of news creation processes, and thereby increasing productivity (Noain-Sánchez, 2022), while also highlighting a perceived tension between the industry and the profession regarding the hopes and pitfalls of this technology (Moran; Shaikh, 2022). The different perspectives that coexist point to the need for a better understanding of the consequences based on concrete experiences, with case studies that will then allow more global interpretations.
The need for journalists to be trained in artificial intelligence and the tools used by this technology (Gonçalves; Melo, 2022) and the ethical debates (Noain-Sánchez, 2022) are two areas of particular concern in the profession. This demonstrates the need to implement continuous updating programs aimed at working professionals, to incorporate this training into the regulated studies of journalism degrees, and to ensure continuous control and supervision of the processes carried out by AI in the journalistic field.
High-tech journalism has become the sign of the digital times of the third millennium. This trend demands competencies from journalism professionals at the crossover between technology and journalistic content creation (López-García; Vizoso, 2021). This gives rise to renewed professional profiles and new names for conceptualizing the impact of AI on the journalism industry. "Exo journalism" (Tejedor; Vila, 2021) is one of these new names, joining others like robot journalism, computational journalism, artificial journalism or automated journalism, around which there is debate: each name introduces nuances, as they are not clearly defined and delimited (Mooshammer, 2022). All this occurs at a time when many studies and contributions recommend searching for common patterns of study for a better understanding of automation in newsrooms and artificial intelligence in journalism (Danzon-Chambaud, 2021). Technology is one of the key elements in the approach to media and journalism strategy and development (Vállez; Codina, 2018), but the different tools and trends at certain times encourage denominations of more or less long life. With automation and artificial intelligence, these now seem to be grouped, preferably, under the umbrella of automated journalism, still little present in the training offer of the curricula of journalism degrees (Ufarte-Ruiz; Fieiras-Ceide; Túñez-López, 2020), although with proposals for its introduction from the applied point of view and from critical reflection (Gómez-Diago, 2022), but increasingly present in newsrooms and academic literature, with all denominations providing nuances (Cohen; Hamilton; Turner, 2011; Karlsen; Stavelin, 2013; Túñez-López; Toural-Bran; Valdiviezo-Abad, 2019; Marconi, 2020; Canavilhas, 2022; García-Orosa; Pérez-Seijo; Vizoso, 2022; Otero-González, 2022). The number of articles resulting from research in this field continues to grow, which is why we only reference a sample. The growth of academic production linking journalism and artificial intelligence is vertiginous, as shown by recent systematic reviews (García-Orosa; Canavilhas; Vázquez-Herrero, 2023).
Artificial intelligence and automation are part of the world of journalism today and are present in the newsrooms of today's digital media, because it is difficult to separate journalism from its technology -it is dependent on some kind of technology (Zelizer, 2019). Hence, although economistic views on its introduction and presence in journalism prevail, more attention needs to be paid to the ethics and ontological limits of automated journalism (Porlezza; Ferri, 2022). For AI to make contributions to good journalism -to improving the functioning of democracy (Lin; Lewis, 2022)- good regulation is necessary, which, among other things, avoids the disappearance of authorship (Krausová; Moravec, 2022), along with a rigorous transparency policy accompanied by proper monitoring of the communicative processes in which this technology is applied, in the context of the ethical discussion around current technologies (Israel; Amer, 2022). The main professional and ethical issues focus on undermining creativity, a lack of monitoring, bias, transparency, fairness, data utilization and data quality (Ali; Hassoun, 2019). Responses, therefore, should focus on AI's effects on the basic elements of journalism.
The recent emergence of so-called synthetic media (Ufarte-Ruiz; Murcia-Verdú; Túñez-López, 2023) -media that lack journalists and where all work routines depend on AI- highlights the dizzying speed at which this technology continues to develop. While progress is being made in the regulation and monitoring of the effects of the introduction of artificial intelligence in technologically mediated communication and, above all, in digital journalism, the main challenge the journalistic profession warns of is the need to know the existing tools and how to use them to avoid distortions, to understand the risks involved and to enter the complex world of challenges this technology poses for quality journalism, to which it can undoubtedly lend strength, although it introduces new risks and threats. This technology, like that which has preceded it, must be seen as a new aid and, as it raises doubts and challenges, its ethical dilemmas must be contemplated from the core values that underpin good journalism, such as truth, justice, freedom and responsibility, which must be applied by journalists who, until proven otherwise, have capabilities superior to those of machines (Ventura-Pocino, 2021). What is needed, however, is for journalists to have the ability to monitor technology, which is a training issue and therefore a priority, and should be programmed on the basis of pioneering experiences both for journalism students (Gómez-Diago, 2022) and for professionals working in journalism.

ChatGPT and AI democratization
ChatGPT is an OpenAI tool introduced in November 2022 and made freely available. It works with GPT-3, an autoregressive language model, and allows the user to converse with it through a chat. Users can access it for free and do not need programming knowledge, although there are paid versions that offer more functionalities. The accessibility and usability of the user interface, both of ChatGPT and of other OpenAI tools such as DALL-E, account for a large part of the success of these tools, together with the fact that they are open to the public and cause enthusiasm and fascination among users, which facilitates their viralization on social networks.
The recent opening of the AI tool ChatGPT to the public, at the end of 2022, has accelerated the introduction of this type of technology not only in newsrooms but also in companies in other fields. OpenAI defines it as a natural language model that interacts conversationally (in chat format), allowing it to "answer follow-up questions, admit mistakes, challenge incorrect assumptions and reject inappropriate requests" (OpenAI, 2023b).
ChatGPT is available in different languages; however, its level of interaction depends on the language being used. Another limitation is that all its knowledge comes from the data used for its training, which only extends to 2021.
ChatGPT has certain features and skills that make it especially attractive as an assistant in newsrooms and other communication-related companies. Since it was opened to the public, numerous tests have been carried out with this tool in very different fields, such as digital marketing, programming, education and journalism. Interviews have been conducted on different topics and a scientific article has even been written in collaboration with it, exploring the possibilities it offers to the field of communication and education (Pavlik, 2023).
In February 2023, OpenAI began to offer a paid version of ChatGPT under the name ChatGPT Plus. This new version offers general access to ChatGPT (even during traffic peaks, a current problem with the free version of the tool), faster responses, and priority access to new features. This version is available for $20 per month (Fernández, 2023b). In this context, the objective of this research is to draw the current landscape of the use of artificial intelligence in newsrooms through a radiography of AI tools applicable to journalism and a case study of the OpenAI ChatGPT proposal applied and tested in the journalistic context.
This AI demonstrates its potential to reduce the time it takes to produce, write, manage and disseminate journalistic content, which at the same time could result in a reduced workload for journalists.

Methods
To present the current panorama of the use of AI in the journalistic field and to determine the advantages that the appearance of the ChatGPT tool could bring to newsroom routines -the main goal of this paper- a mixed methodology with an exploratory and experimental nature was chosen. This method allows us to get to know the current panorama in depth and to test the tool.
We start with the hypothesis that (1) the incorporation of AI in newsrooms facilitates the work of journalists, streamlining and automating processes and routines, although (2) AI use for journalistic purposes can be potentially dangerous from an ethical and legal perspective, which makes it necessary to regulate these tools.The methodological design involves three approaches: benchmarking of AI tools applied to journalism, a walkthrough experience with ChatGPT and, finally, an experiment with users following the Experience Sampling Method (Berkland, 2017).

Benchmarking of AI tools in journalism
Following a literature review on automated journalism, an adaptation of the Prisma method (Page et al., 2021) was applied to carry out a systematized exploration of the existing artificial intelligence tools that can be applied in the journalistic field. The search was conducted in two ways: on a general search engine (Google) and through scientific databases (WoS and Scopus). Google searches were conducted between 15th February and 1st March 2023, with the following terms: (1) "automated journalism"; (2) journalism AND algorithms; (3) journalism AND "artificial intelligence"; (4) use of AI tools on media. These formulas were used in both English and Spanish and the top ten results of each search were selected. For the searches in scientific databases, the parameters listed in Table 1 (queries such as "inteligencia artificial" AND "periodismo", "journalism" AND "algorithms", and "periodismo" AND "algoritmos") were established, and the query was carried out during February and March 2023. A total of 129 results were selected. Of these, 100 were examined, after excluding duplicate and unrelated results, in search of journalistic initiatives and projects using AI tools. In the review of scientific articles, the abstract, methodology and results sections were examined. The tools were classified according to: functions (natural language generation, speech to text, text to speech, text to image, image to text, image recognition, data analysis); the stage of the journalistic process where they are applied (information gathering, production, distribution and verification); and possible tasks to be performed in newsrooms (transcription of interviews or analysis of data from an external source, for example). After this first classification, we proceeded with an in-depth analysis to establish what type of tools are the most used, which were the first to be introduced in newsrooms, and what the trends are in the media in terms of the use of AI. A total of 76 tools from nine countries were collected, providing a snapshot of the roles that AI plays in journalism today and allowing these roles to be tested in subsequent experiences with ChatGPT.
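The classification step described above amounts to tallying tools by category. As a minimal sketch, the records below are illustrative stand-ins (the names are tools mentioned later in this paper, but the labels assigned here are examples, not the actual benchmarking data):

```python
from collections import Counter

# Illustrative records only; the real dataset contains 76 tools from nine countries.
tools = [
    {"name": "Wordsmith", "function": "natural language generation", "stage": "production"},
    {"name": "Heliograf", "function": "natural language generation", "stage": "production"},
    {"name": "Lynx Insights", "function": "data analysis", "stage": "information gathering"},
    {"name": "Virality Oracle", "function": "data analysis", "stage": "distribution"},
]

def tally_by(records, key):
    """Count how many tools fall into each category for the given key."""
    return Counter(record[key] for record in records)

by_function = tally_by(tools, "function")  # which functions dominate
by_stage = tally_by(tools, "stage")        # which stages of the journalistic process
```

Applied to the full dataset, a tally of this kind is what supports comparisons between categories, such as which function is most common among the tools collected.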

Walkthrough
For the experimental design, a series of tests were conducted with ChatGPT between January and March 2023 to determine its capabilities, limitations and opportunities for improvement.These tests were divided into two parts: a first, completely experimental one, in which we talked with the AI to understand how it works and see what responses it offers to different inputs; and a second one in which we followed a walkthrough methodology (Mahatody; Sagar; Kolski, 2010), simulating the behaviour of a user (journalist) using the tool for the first time, without prior knowledge of its features or limitations.
During the second part, the results of the first part were taken into consideration, as well as the uses of AI applied to journalism obtained from previous benchmarking and the experiences reported in specialized media and by experts in the sector.From here, a list of possible actions was drawn up to evaluate the capabilities and opportunities offered by ChatGPT for the media.
After the exploration, and following the methodology employed by Pavlik (2023), the tool was asked to list the actions it could perform to assist in journalistic routines, to complement the study and compare them with those that had been discovered in practice (Annex I).

Experiment with journalists
To test the possibilities offered by ChatGPT applicable in newsrooms, an experiment was conducted with working journalists from different sectors and with different roles in the media. Based on the results of the benchmarking of AI tools applied to journalism and the walkthrough experience, the Experience Sampling Method (ESM) was followed to obtain data from participants while they performed their daily tasks, allowing them to report "on the nature and quality of their experiences at that moment and in their natural environment" (Berkland, 2017). This ensured that the tests performed with the AI were as similar as possible to how a user would make use of the tool if it were incorporated into their professional routines.
Twelve journalists employed in the media -eight women and four men, between 24 and 46 years old- agreed to participate. A total of 33.3% of the participants work as journalists in television, 25.0% work in the written press, another 25.0% in radio, 8.3% work in news agencies, and the remaining 8.3% in digital media.
The experiment was conducted between 20 th April and 7 th May 2023, in a non-face-to-face and asynchronous manner.
It consisted of a prequestionnaire, two experimentation phases (A and B) and a final questionnaire. In phase A, participants were guided through the experience; in phase B, they were free to test ChatGPT in any way they chose.
The prequestionnaire delimited the participants' sociodemographic profile, and their previous AI and technological knowledge and affinity in the journalistic field. The tasks that these professionals perform on a daily basis were also collected to check whether the use of ChatGPT could be adjusted to them. Considering the results of the walkthrough experience, it was estimated that the use of ChatGPT would be more appropriate for tasks related to writing, data analysis and the hierarchization or organization of information, so it was decided to focus the experiment on the use of this tool for these types of tasks.
Existing relationships with technological innovation are also a very relevant aspect of this experiment, as it was intended to measure the ability of an average journalist to use this AI tool without extensive prior training. When asked about this issue, 66.7% of respondents indicated that they find AI attractive and interesting, but they also raised risks related to privacy and ethics, among other issues; 25.0% of respondents considered the tool useful but were not enthusiastic about it; while the remaining 8.3% found it useful and very interesting for some professions but, in journalism, contemplated AI only for automated and technical tasks, believing that it can be dangerous in some cases.
All the participants had already heard about ChatGPT, but only half had used this or other AI tools. Among the reasons for not having tried any such software, participants affirmed that they had not had the time or patience, that they felt "terrible laziness", or that they did not want to "be a witness of our substitution as professionals." In the workplace, only 25.0% of participants had used AI before the experiment. They used this technology to translate texts, for brainstorming, for data analysis and to create code snippets for data processing. Even those who had not used it considered AI useful for their professional routines and, among those who already used it, implementing more systems with this technology was considered worthwhile.
In addition to ChatGPT, the participants were asked about their knowledge of other AI tools: 33.3% of respondents also knew about DALL-E, the OpenAI tool that creates images from text.Besides that, two participants knew about the Telegram bot that allows ChatGPT use from this app and one participant mentioned knowing about the AI tool Midjourney.
Before the experiment, participants were asked which AI systems they would integrate in their workplace and why. The aim behind this question was to observe whether their opinion changed after trying ChatGPT. Among the answers, the idea of using a system to (1) transcribe interviews was repeated; also mentioned was the usefulness of this technology for (2) writing and summarizing texts, (3) subtitling, (4) creating graphics, (5) generating ideas through searches made by other users, (6) data analysis and pattern detection, (7) transcribing phone calls with sources in other languages, (8) fact-checking and information verification, and (9) social media management, among other ideas.
After the initial questionnaire, participants went through phase A of the experiment. ChatGPT works with natural language, so programming knowledge is not needed to use it. However, it is important to note that this tool does not always answer in the same way, even when asked the same thing: the way in which the question is asked influences the answer. During ChatGPT's first months, numerous experts and professionals published -and continue to publish- lists of prompts or instructions to achieve specific results with the tool, and some companies have started to offer browser extensions (such as AIPRM for ChatGPT) that allow users to choose from a list of ready-made commands to ask the AI for what the user needs.
Thus, in the first phase, participants were provided with a guide that included both general orientations and a series of 'standard instructions'. Participants could choose from a list of prompts (Annex II) already prepared according to their profiles, in which they only had to modify some variables to adapt them to their professional routines. This document explained how ChatGPT worked so that they could use it whenever they wanted, preferably during their working hours, and provided them with a table containing a series of prompts related to text creation (series 1), text adaptation (series 2), and information hierarchization (series 3). This table included different customizable parameters (in colour) and a fixed part of the text (without colour). The complete list of prompts can be found in Annex II. In phase B, participants were free to use the AI tool as they saw fit. To conclude the experience, participants were asked to fill out a final form evaluating the experience, pointing out positive and negative aspects of the tool.
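Prompts combining a fixed part with customizable parameters behave like string templates. A minimal sketch, with placeholder names invented for illustration rather than taken from Annex II:

```python
from string import Template

# Hypothetical prompt in the style of series 1 (text creation): the fixed text
# stays constant, while the $-placeholders stand in for the coloured parameters.
SERIES_1 = Template(
    "Write a $length-word news piece for a $medium audience about $topic, "
    "using the following data: $data"
)

def build_prompt(template, **params):
    """Fill a fixed template with a journalist's own parameters."""
    return template.substitute(**params)

prompt = build_prompt(
    SERIES_1,
    length="300",
    medium="radio",
    topic="local election results",
    data="turnout 61%, incumbent re-elected",
)
```

Browser extensions such as the one mentioned above essentially package this pattern: the user picks a ready-made template and supplies only the variable parts.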

AI tools applied to journalism
In the table resulting from the benchmarking of AI applied to journalism, 76 tools belonging to nine countries that use this technology in the journalistic field were collected. One of the most common uses of AI in journalism is the automated writing of news whose main source is structured data. Due to the system's own limitations, the news covered in this way is simple information, based on quantitative data and following a clear scheme or structure, such as sports information, market information or election results. Examples include: Wordsmith from Automated Insights, used by the Associated Press agency to write news on sport and stock markets; Heliograf, which also writes autonomous sport and political news for The Washington Post; or Syllabs, which wrote news on the French legislative elections in 2015 for Le Monde (Laboratorio de Periodismo, 2018; García-Avilés, 2019; Manfredi-Sánchez; Ufarte-Ruiz, 2020).
Another function of AI applied to journalism is data analysis. This technology makes it possible to extract patterns and analyse information in a very short time and with great precision, to the point of being able to predict certain events. This is the case with Virality Oracle -a tool used by The Washington Post that predicts which topics will become viral (García-Avilés, 2019)- and Lynx Insights -used by Reuters, this tool compiles and analyses data so that a journalist can then write the news story (Agarwal, 2018).
After studying the range of AI tools applicable to journalism, classified in Table 2, it was concluded that these applications have very diverse functions and that they are applied in all parts of the communication process, from automated writing to data analysis, from image creation to verification. The most common function detected during the benchmarking process was content production, with 40 of the tools analysed being dedicated to this function, whether creating text news, graphics or audiovisual pieces. Another of the most common categories involves data analysis, followed by the chatbot modality. Sometimes tools combine several functions, such as chatbots that offer news verification, or news aggregators, which first need to analyse data in order to then offer personalized content. On many occasions it is external companies that offer the technology to the media: Automated Insights, Narrative Science and Monok were some of the AI companies identified.

Walkthrough experience
To gain an in-depth understanding of the possibilities of ChatGPT in the journalistic field, a series of tests were carried out with the AI, asking it to perform certain tasks based on previous results, obtaining the results shown in Table 3.
From this experience it is clear that ChatGPT's limitations are mainly: (1) its limited knowledge of the world; (2) its disconnection from the network; (3) producing incorrect information and presenting it as truthful; (4) not correctly following the instructions given by the user (for example, when asked for a text of 1,050 words, it offers a longer one); and (5) certain biases, probably derived from its training data, such as gender bias. It is also important to note that ChatGPT does not always warn of its limitations. In the case of subtitling videos or summarizing texts provided through links, the tool pretends to perform the required action, since it cannot access the Internet or 'watch' a video; on some occasions it pointed out this limitation and on others it offered invented information. On the other hand, it is also important to highlight advantages such as: (1) its ease in rewriting, restructuring or translating a text; (2) providing new ideas; and (3) generating different content (a diet, a plan for social networks or an exercise routine, among others).

Newsroom experience
The walkthrough experience provided a significant amount of data to better understand the chatbot's behaviour. However, it is necessary for working professionals without experience or specialized training in artificial intelligence to test the tool in order to avoid potential biases.
In phase A of the experiment (testing the AI tool by following instructions and choosing from a series of pre-established prompts, as shown in Annex II), the journalists indicated that the prompts that best suited their routines were focused on summarizing information and hierarchizing it within a news piece.The prompts most used during the experience were specifically those that allowed writing an interview, writing informative pieces based on data, and hierarchizing information within a news item.

Table 3 (excerpt). Results obtained from the walkthrough experience
No. | Task | Result | Observations
20 | Analyse data to rank news, decide which is translated into which language, suggest combinations of images and headlines, etc. | Yes | It is necessary to introduce criteria for it to do so in a justified and orderly manner.
21 | Publish on social media | No | It is not connected to the Internet. It can create copies and ideas, but it cannot publish anything on the network directly (except through APIs).
22 | Detect fake information | Partially | It can detect false information in text, if requested, but not in other formats because it cannot access the content. Again, the problem comes from the data it has received and from its lack of access to real-time information, which makes it wrong in some cases.
23 | Subtitle videos and/or audios in text | No | The test was done by trying to provide it with a video via link. It does not have access to the video (because it is not connected to the network) and yet it pretends that it does, making up the subtitles.
24 | Summarize videos and/or audios in text | No | It does not have access to the video.
25 | Conduct interviews (directly) | Yes | It interacts with the user as if it were an interviewer, considering their answers to elaborate and/or link to the next question. It introduces itself and says goodbye.
In phase B of the experiment, participants were free to ask the tool to do whatever they wanted. Journalists asked ChatGPT to compose emails, verify potentially fake news, summarize press releases, or rank news stories based on the expected number of hits, among other actions. In addition, they tried to adapt its writing style to the one they normally used in their media and tried to investigate to what extent the AI 'understands' the information it offers or receives.
In the final questionnaire of the experiment, journalists positively highlighted the abilities of AI to streamline and automate part of their work routines without replacing the journalists' work. Summarizing information, generating code, contributing ideas when looking for content, brainstorming and correctly writing information are some of the actions that stand out. Some of the participants also stated that they were surprised by the AI's accuracy and speed in writing, as well as by its ability to synthesize and explain complex information.
Although there were many positive aspects, the professionals also detected drawbacks and risks. On a technical level, the impossibility of introducing external content other than text, operating errors (such as difficulties in logging in, creating an account, or the tool being blocked) and the fact that its data is limited to before 2021 significantly reduce the possibilities of ChatGPT at this time and make it difficult to work with.
Regarding the analysis of the content provided by ChatGPT, participants were concerned about the ease with which it includes false information and, especially, because it does so in a format and appearance that favours it being considered relevant, credible and reliable information. Some participants also noted that the AI's headline writing is poor and that its writing of interview questions could be much improved. One participant also mentioned as a shortcoming that the tool "lacks empathy". This issue may be particularly relevant in situations in which the information may offend public sensitivities, something that the AI would not be able to detect and, therefore, would not take "special care" of when conveying the news.
However, after the experience, 50% of participants considered including ChatGPT in their work routines. Of the remaining 50%, half said they were not sure, and the other half said they would not include it. The reasons given by these participants were the possible problems of privacy and data protection, the fact that it offers incorrect information, and even the belief that the use of these tools can cancel out the human capacity for analysis and reflection. It is important to note that the AI tool itself offered this same response during the interview (Annex I).
Finally, the journalists provided their perspective on the use of artificial intelligence in newsrooms, selecting the statement that best suited their opinion among those shown in Figure 1. As can be seen, the majority option, chosen by 83.3% of participants, was "AI will not replace journalists, but it will change their professional routines".
Only one participant agreed with the statement "AI may eventually replace journalists, eliminating jobs".The same was true for the statement "AI will never replace journalists nor will it ever come to have much influence in newsrooms".
The response options shown in Figure 1 were: (A) AI may eventually replace journalists, eliminating jobs; (B) AI will never replace journalists, nor will it ever have much influence in newsrooms; (C) AI should be banned from newsrooms; and (D) AI will not replace journalists, but it will change their professional routines.

However, the statement "AI should be banned from newsrooms" was not chosen by any of the participants. Moreover, as ChatGPT itself and one of the participants in the experiment pointed out, "the excessive use of ChatGPT in routine journalism may result in a loss of human skills and knowledge, such as the ability to investigate and critically analyse". Regarding the debate on the need to regulate AI in relation to aspects such as privacy or copyright, the participants showed near-unanimity: 91.7% stated that legislation is necessary, compared to 8.3% who did not see the need for it. The only participant who chose this option did so because "he understands that these rights would fall on the person using the AI".

Discussion and conclusions
The results obtained in this research corroborate the hypotheses put forward and shed light on the panorama of AI in newsrooms, as well as on the possible future uses of these tools in the media.
The first hypothesis raised refers to the transformation caused by the integration of AI into newsrooms. The starting premise was that these tools facilitate, streamline and automate journalistic processes and routines. This is borne out by the results of the benchmarking, as well as by the walkthrough and the experiment with journalists. In all these methodological phases, AI demonstrated its potential to reduce the time spent on the production, writing, management and dissemination of journalistic content, which could result in a reduced workload for journalists.
Related to this, it is also observed that a change in professional routines may lead to a change in the required professional profiles or specialization expected of a journalist, as already observed by Salazar-García (2018).
However, the results do not seem to indicate the disappearance of jobs due to the incorporation of AI in newsrooms, at least not imminently. The benchmarking results show that in most cases these tools complement the work of journalists instead of replacing them. Journalists are still essential for providing context, reviewing possible errors and biases, and assessing the adequacy and quality of the texts produced, among other issues. This is also observed in the walkthrough with ChatGPT: in some cases it incorporates biased or false information, and it does not always correctly interpret the instructions provided. In the experiment with journalists, participants agreed on the usefulness of certain types of AI tools, and 83.3% agreed that "AI will not replace journalists, but it will modify their professional routines". The possibility of artificial intelligence eliminating jobs has already been addressed by academics (Manfredi-Sánchez; Ufarte-Ruiz, 2020) and, although this possibility is real, other research also shows that experts and professionals do not consider the disappearance of journalists or their role in newsrooms likely. Their routines will be modified, but they will not be wiped off the map. It is important to note, however, that although this more optimistic perspective exists, there are already media outlets without a single journalist on their staff, the so-called "synthetic media", which rely exclusively on AI to perform their role as reporting agents (Ufarte-Ruiz; Murcia-Verdú; Túñez-López, 2023).
The second hypothesis is related to the possible ethical, deontological and legal risks associated with the use of artificial intelligence in newsrooms, as well as the need for its regulation. This aspect has already been investigated by academics, as it involves complex dilemmas and challenges related to user privacy, the preservation of journalistic ethics and deontology, transparency about the use of AI and accountability mechanisms, algorithmic biases, the veracity of information and the safeguarding of copyright or intellectual property, among other problems (Ufarte-Ruiz; Calvo-Rubio; Murcia-Verdú, 2021; Sanahuja-Sanahuja; López-Rabadán, 2022). The results obtained through the walkthrough with ChatGPT and the experiment with journalists corroborate this second hypothesis and are consistent with previous research. The inclusion of biased or false information, the lack of sources and a particularly careful presentation, which contributes to creating an image of reliability and honesty, make it especially necessary to regulate these tools to avoid further misinformation. The fact that such AI feeds on big data and learns from its interactions with users puts users' privacy at risk. Moreover, by omitting sources in its responses, attribution and copyright also come into dispute. A total of 91.7% of the experiment participants felt that the use of AI should be regulated in these respects.
In short, this AI tool offers many solutions and opportunities applicable to the journalistic world, reducing the time needed to perform routine tasks. Among the advantages is the ability to write text in different formats, help select topics to be covered in the media or adapt texts to social networks. These are simple tasks, more related to communication than to journalistic production itself, but they can free up journalists and allow them to devote more time to work that requires more research and depth. However, this tool also has several limitations. Perhaps the most serious is that it does not always provide truthful information. Sometimes ChatGPT points out that it is not connected to the Internet and that its data is limited, so its information may not be correct or accurate. But at other times it simply invents information that it does not have, without warning the user. This means that journalists may publish inaccurate or even false information if they are not experts on the subject. The fact that the information to which it has access is limited to data up to 2021 also means that, even if the content was correct at some point, it may already be outdated when the query is made. It is important to note that, at the time of writing (May 2023), it is already possible to connect the tool to the Internet through other software, and OpenAI has announced a paid version with real-time connection to the network, so this limitation will soon be overcome. Another limitation that is likely to remain is the preponderance of the most repeated contents and ideas over less common ones, regardless of their veracity or importance, leading to a situation similar to the so-called "tyranny of the majority" in politics (Harper, 2017). According to the results obtained during the walkthrough exercise, the information provided by ChatGPT is often based on the number of times it has encountered such data and not so much on its quality. In other words, it puts the quantitative before the qualitative. This can perpetuate certain types of biased and more traditional views in some areas of knowledge.
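This quantitative-over-qualitative behaviour can be illustrated with a toy frequency count over an invented micro-corpus: if the selected answer is simply the most repeated claim, a popular myth outranks a rarer accurate statement.

```python
from collections import Counter

# Toy illustration of "quantity before quality": if a system's answer is
# whatever claim appears most often in its source data, a frequent but
# wrong claim beats a rare but correct one. The corpus is invented.
corpus = [
    "the Great Wall is visible from space",  # popular myth, repeated often
    "the Great Wall is visible from space",
    "the Great Wall is visible from space",
    "the Great Wall is not visible to the naked eye from orbit",  # accurate, rarer
]

most_common_claim, count = Counter(corpus).most_common(1)[0]
print(most_common_claim)  # the majority (mistaken) claim wins, with count 3
```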
This connects with another ethical dilemma of the use of AI: how the substitution of humans by machines in tasks inherent to their own being affects wider society. At the same time, it makes it necessary to rethink the curricula of journalism degrees, not only to adjust professional profiles, as mentioned above, but also to provide new journalists with an ethical-deontological knowledge base that allows them to deal with AI in their jobs without losing the critical and independent thinking that is essential to safeguard the values inherent to journalism (Peña-Fernández; Meso-Ayerdi; Larrondo-Ureta, 2023).

Limitations of the study and future lines of research
The continuous advances in artificial intelligence, and specifically in ChatGPT, mean that some of the noted limitations of the tool have already been overcome in its paid version, as mentioned in the body of the article. In addition, future lines of research remain open, such as the exploration of possible codes and manuals for the safe use of AI in newsrooms, or possible adaptations of ChatGPT to make it more suitable for journalistic use.
Since the sample of participants was small, the results should be considered with caution. Based on this first study, it would be interesting to carry out another experiment with a larger number of journalists to obtain comparable statistics on the use, risks and advantages of ChatGPT.

Figure 1. Participants' responses to the future of AI in newsrooms.

Table 1. Database searches

Table 2. Types of AI tools according to their function

Table 3. Results obtained from the walkthrough experience
(Detached fragment of a table entry: when writing in different formats, if it is given enough information it has no problem doing so, but if it has no information it tends to make it up; in some cases it is difficult to know what is true and what is not.)