Who interacts with communicative AI and what attitudes are displayed toward it? Sociodemographic, personality, and futurephobia variables

Communicative applications of artificial intelligence (AI) have burst into the lives of millions of people through products such as ChatGPT, Midjourney


Introduction
Advances in artificial intelligence (AI) have brought about a context of technological disruption in the field of communication. Some AI-based applications are not just tools through which people communicate and interact, but rather social actors with whom people communicate. AI chatbots have garnered significant social and media attention, especially since the launch of ChatGPT by the OpenAI research lab in late 2022. This tool reached 100 million users in just two months, compared with TikTok's nine months, Instagram's 26 months, Facebook's 54 months, or Twitter's 65 months to reach the same milestone (Pastor, 2023). Image generation software based on prompts, or instructions in natural language, has also represented a major innovation and attracted interest from users worldwide, particularly following the beta release of Midjourney in March 2022 and the removal of the waiting list for Dall-E in September of the same year.
Although AI in general, and communicative AI in particular, have great potential to contribute to the well-being of individuals and societies, there are also dangers and concerns in the realms of data protection, copyright, and malicious misuse for disinformation and manipulation purposes. In this context, it is important to understand how citizens perceive these tools, as well as the characteristics of the early adopters who are embracing them. Perceptions and use influence each other in a feedback loop and, in turn, can shape the perception of non-users, the evolution of communicative AI tools themselves, and the development of their legal and regulatory framework. However, few studies to date analyze the impact of this technology on society, and most of them are reports by private companies or governments, with limited references in the academic domain (Zhang, 2021). Additionally, nearly all these studies approach the phenomenon in a general manner or refer to earlier, non-communicative (as defined here) technologies, such as Alexa, Twitter, or Spotify (see, for instance, the pioneering study in Spain by Sánchez-Holgado; Arcila-Calderón; Blanco-Herrero, 2022).
In this context, the present study uses data from an original survey conducted in February 2023 with a diverse sample of 821 residents of Spain. From the resulting dataset, we examine general attitudes toward communicative AI and investigate the sociodemographic and personality antecedents associated with the favorable perception and use of these technologies. As personality antecedents, we identify and evaluate the possible role of the big five personality factors and of 'futurephobia,' conceptualized as a negative attitude toward the future grounded in the perception that what is to come is threatening and unavoidable.

The emergence of AI and its communicative applications
A social science approach to AI defines it as the capability of non-human machines or artificial entities to perform and solve tasks, communicate, interact, and behave logically in a manner akin to humans (Gil de Zúñiga; Goyanes; Durotoye, 2023). In short, this entails making decisions, executing tasks, and making predictions. Although the current surge in these technologies suggests a very recent advancement, their development began decades ago. The term AI itself was coined in 1956 at the Dartmouth Conference. By that time, Turing's seminal work (1950) on "Computing Machinery and Intelligence," wherein he introduced the concept of the imitation game, had already been published. Turing's contribution, which defines and characterizes the discipline of AI (Gunkel, 2012), was rooted in the challenge of discerning whether machines can think. In what is now known as the Turing test, he formulated a series of questions aimed at ascertaining whether a machine can respond in a way comparable to a human interlocutor and, as such, successfully simulate a human being.
Despite the elapsed time and the progress made in the field, a precise or widely accepted definition of AI is still lacking. Some criticisms are directed at the very concept, suggesting that these technologies are neither intelligent nor artificial. This viewpoint is presented by Evgeny Morozov, a prominent figure in the critique of technological solutionism. He contends that, on the one hand, there is nothing artificial about AI, as its training draws on prior creative and professional work by real human beings. On the other hand, he disputes the attribution of intelligence, characterizing AI systems as primarily predictive engines engaged in the search for more or less complex patterns (Morozov, 2023). The terminological debate mirrors the concerns raised by Turing (1950, p. 433), who noted that the common use of the terms "machine" and "think" poses an obstacle to answering the question of whether machines can think.
The current advancement, which encapsulates the essence of the term, revolves around the potential for AI to transcend its dependence on human intelligence to define its own existence, namely, autonomous performance. This leads us directly into what Harari (2014) defines as the "other life:" the computer programming field's attempts to emulate genetic evolution: "Many programmers dream of creating a program that could learn and evolve completely independently of its creator. In this case, the programmer would be a primum mobile, a first mover, but his creation would be free to evolve in directions neither its maker nor any other human could ever have envisaged" (Harari, 2014, p. 353).

Communicative aspects of AI
To better understand the communicative aspects of AI, we must journey back several decades, returning to Turing's work and his consideration of communication as the critical factor in artificial intelligence. According to Gunkel (2012), the point of the argument is that intelligence cannot be directly observed and thus needs verification through behaviors akin to intelligence, such as communication in a general sense or verbal conversation in particular. This entails the assumption that communication is a product of intelligence and, therefore, that whoever communicates must possess intelligence. It is not coincidental that, during the same timeframe, communication and AI boosted each other reciprocally, thanks to the contributions of linguistics, mathematics, and logic.
It has been from the year 2022 onward that the most important AI-based technological innovations incorporating (new) communicative aspects have emerged. To some extent, some of these technologies are no longer merely tools through which people communicate and interact, but rather social actors with whom people communicate. Such a perspective does not fit traditional theories that positioned technologies as mere channels (Guzman; Lewis, 2020, p. 73). This overarching paradigm shift can have profound implications for journalism, as asserted by Broussard and colleagues (2019): even within this domain, the creation and dissemination of information may cease to be an exclusively human task, posing a challenge to the balance and interaction between human intelligence and artificial intelligence (see also Chan-Olmsted, 2019). In November 2022, OpenAI launched ChatGPT, wrapping up a year in which it had already introduced the Whisper speech recognition software (in September) and Dall-E 2 (in April). ChatGPT is a specialized chatbot designed to engage in natural language conversations with human users while generating coherent responses. Only two months after its release, its estimated number of global users surpassed 100 million (Pastor, 2023). Dall-E, too, operates on the GPT (Generative Pre-trained Transformer) language model framework and is trained on an extensive dataset of images collected from the internet. This image generation AI requires prompts, or natural language inputs, provided by its users, from which the software generates photographs, paintings, and emojis in diverse styles. In May 2023, Microsoft opened its AI-powered Bing search engine to the general public, after having formed a partnership with OpenAI a few months earlier. The revamped search engine, incorporating GPT-4 and Dall-E capabilities, can interact with its users through text, photos, images, and graphics (Mehdi, 2023). In reaction to these developments, Google introduced its conversational bot, Bard, to 180 countries in May 2023. Bard relies on LaMDA, Google's large language model.
Replika's conversational technology has been in existence for a few years, functioning as an "AI companion who is eager to learn and would love to see the world through your eyes" (Luka, 2023). The San Francisco-based tech company Luka released Replika at the end of 2017. Presently, Replika operates on a freemium model and claims to have approximately two million users (Tong, 2023). Replika provides an interface that allows users to chat, exchange images, and connect through voice or video calls with a customizable humanoid avatar powered by AI. Tailored to the user's needs and preferences, Replika can serve as an advisor, a friend, or even a romantic partner.

Social perception and attitudes toward AI
AI remains a relatively unfamiliar phenomenon to over a third of the world's population, with rates of unawareness reaching nearly 60% in countries with high educational standards such as Italy and Japan (Maslej et al., 2023). Moreover, because communicative AI is relatively new and becoming more prominent in the media, people are still in the process of developing their perceptions of and attitudes toward this emerging technology. Recent studies grounded in framing theory suggest that exposure to technology-related news, science fiction films and literature, and personal conversations about technology predict attitudes toward AI and individual AI-related frames: AI as a means for social progress or as a "Pandora's box" (see Brewer et al., 2022).
According to the most recent report on AI for the year 2023, conducted by Stanford University, China stands out as the country with the most positive perception of this technology: 78% of respondents in that country state that AI offers more benefits than drawbacks to society. In the United States, only 35% of the population believes that AI has a more positive than negative impact, whereas this figure rises to 53% in Spain (Maslej et al., 2023). Globally, a report preceding the recent expansion of communicative AI suggests that people in Eastern countries exhibit more positive sentiment about AI (65%) than people in Western countries (35%) (YouGov, 2021).
The traditional attitude of citizens toward AI, both in Europe and the United States, has been cautious (European Commission, 2017; Zhang; Dafoe, 2020). In the United States, a significant portion of the population perceives positive aspects of AI use, such as its potential to make life easier and society better (31%) or to save time and enhance efficiency (13%). Concerns are also reported, such as job loss (19%), surveillance and privacy (16%), and lack of human connection (12%) (Maslej et al., 2023). In the case of European countries, acceptance of AI and task automation varies, ranging from a minimum of 7% of respondents in France to a maximum of 25% in Spain (YouGov, 2021).
Certain specific domains have been the subject of studies on perceptions of AI, with the medical and occupational fields being prominent examples. In the medical field, a review of studies conducted by Young and colleagues (2021) suggests that patients show positive attitudes toward the role of AI in aspects such as diagnosis, treatment, and patient care, although they also express concerns and prefer collaborative efforts between AI and a human.
In the occupational sphere, a study by Lichtenthaler (2020) identifies clusters of individual characteristics associated with attitudes toward AI. Those with negative attitudes toward AI tend to prioritize human relationships, show resistance to technological solutions, downplay efficiency and process optimization, and express fears of potential negative consequences of AI usage, such as job loss or data protection issues. On the other hand, those with positive perceptions of AI adopt a neutral stance toward human interactions, are open to the use of emerging technological solutions, emphasize rational choices and process optimization over empathy, relativize both the positive and negative consequences of AI usage, and take a pragmatic approach to data privacy, as long as AI adds value to their work.
Currently, ChatGPT dominates over half (52%) of the social media conversation on the topic (Maslej et al., 2023). The net sentiment of the social media community's discourse about AI-related tools (ranging from -100, completely negative, to +100, completely positive) is favorable. By the end of 2022, this net sentiment was +32 for ChatGPT, whereas Dall-E scored 11 points lower (+21) (Maslej et al., 2023). Despite efforts to comprehend attitudes toward AI, further in-depth studies are needed regarding this phenomenon in general (Zhang, 2021) and communicative AI in particular. This is why we formulate our first research question:

RQ1: What general attitudes do Spanish adults show toward communicative AI?

Attitudes and use of communicative AI: Sociodemographic antecedents
When a technological innovation enters the market, consumers do not exhibit the same attitudes toward it, nor do they adopt it at the same pace. Given that most individuals are risk averse, there is a natural tendency to delay the decision to adopt a technology until more information is available. However, this tendency is not uniform across individuals, and the literature on innovation adoption has classified consumers based on their attitudes toward innovations and their varying tendency to adopt them quickly. In his now classic theory, Rogers (2003) distinguishes five categories of consumers based on their speed of adoption: innovators, early adopters, early majority, late majority, and laggards.
One of the most intriguing aspects of research based on this typology is that these user groups seem to differ from one another in sociodemographic characteristics (Dutton; Rogers; Jun, 1987; Laukkanen; Pasanen, 2008; Rogers, 2003). Rogers (2003) himself points out that those who adopt innovations more promptly (early adopters) tend to have higher educational and economic levels.
As communicative AI is a technological disruption, it is reasonable to expect that attitudes toward and use of AI may also be influenced by sociodemographic characteristics. Recent studies based on surveys across several regions worldwide have identified differences in attitudes toward AI based on gender, with men displaying more favorable attitudes (Johnson; Tyson, 2020; Maslej et al., 2023). Similarly, higher levels of education and income also appear to be associated with more favorable attitudes toward AI (Johnson; Tyson, 2020). Regarding age, the findings are more mixed. While Johnson and Tyson (2020) suggest that younger individuals hold a more positive perception of AI, the YouGov (2021) report does not observe differentiating patterns by age.
Given the scarcity of studies on perceptions and use of communicative AI, with most of them being reports that do not account for potential spurious associations, we consider it pertinent to pose the following research question:

RQ2: What are the sociodemographic antecedents of attitudes toward (RQ2a) and use of (RQ2b) communicative AI?

Attitudes and use of communicative AI: Personality antecedents
In addition to sociodemographic characteristics, some previous studies have addressed the role of individual personality characteristics in explaining the adoption and use of technological innovations. To do so, it has been common to draw on research in the psychology of individual differences, particularly on one of the most widely accepted theoretical models for understanding personality differences, known as the big five model. These five major factors stem from several decades of research and aim to systematize, in a comprehensive and concise model, the personality traits that distinguish individuals and manifest in their cognitive, affective, and behavioral styles (McCrae; Costa, 1987).
In brief, the big five model proposes that most of the variation in human personality can be parsimoniously explained through five basic dimensions: extraversion, agreeableness, conscientiousness, emotional stability, and openness to experience (Gosling; Rentfrow; Swann, 2003; McCrae; Costa, 1987). Each dimension has two poles: for example, those who score low on extraversion are considered introverted (Gosling; Rentfrow; Swann, 2003).
In the early years of the 21st century, some studies linked extraversion and neuroticism with reduced internet usage, particularly when considering the more social functions of the internet, such as chat rooms or discussion groups. The findings of these earlier studies on online technology adoption were interpreted to suggest that the anonymity and control over disclosed information afforded by the web attracted individuals who were more solitary or had greater difficulties interacting with others (Amichai-Hamburger; Wainapel; Fox, 2002; Amichai-Hamburger; Ben-Artzi, 2003).
While the personality dimensions associated with internet use have changed as the medium itself has evolved (Correa; Hinsley; Gil de Zúñiga, 2010), research on the relationship between personality and the adoption and use of specific communicative technologies has not ceased. Thus, extraversion, openness to experience, and neuroticism seem to be related to certain uses of social media platforms such as Facebook and instant messaging (Correa; Hinsley; Gil de Zúñiga, 2010; Ross et al., 2009). In the case of the technological revolution brought by AI, it is reasonable to hypothesize that some individuals will more readily adopt this technology and hold more favorable perceptions of it: those with greater intellectual curiosity and a propensity to seek novelty, consider alternative approaches, and avoid established conventions and patterns (i.e., those with greater openness to experience; see John & Srivastava, 1999).

H1: Openness to experience will be positively associated with favorable perceptions of communicative AI (H1a) and with higher frequency of use (H1b).

For the remaining dimensions, we pose our third research question:

RQ3: What are the personality antecedents of attitudes toward (RQ3a) and use of (RQ3b) communicative AI?

Futurephobia and its possible role in attitudes and use of communicative AI
Some individuals tend to view the future in a negative way and develop thoughts and emotions connected to potential events that could affect them individually (such as becoming sick or losing a loved one) or collectively (such as a nuclear war or climate catastrophe; see Zaleski, 1996). Anxiety about the future has been defined as a set of fearful and worrisome states triggered by the anticipation (cognitive representations) of possible forthcoming changes (Zaleski, 1996). Frequently, these thoughts are linked to feelings of personal inefficacy, in the sense that "something bad will happen, regardless of one's own actions" (Zaleski, 1996, p. 165).
Anxiety about the future is associated with a pessimistic view of potential solutions to humanity's problems (Zaleski, 1996).
A similar notion has recently been put forward by García-Barnés (2022) in his essay Futurofobia. According to García-Barnés, futurephobia encompasses a set of cognitions and emotions frequently experienced by the generation of the continuous crisis, born in or after the 1980s (García-Barnés, 2022, p. 20). The futurephobic generation grew up in a world that seemed destined to thrive in terms of social well-being, where children were assumed to have a better life than their parents. This generation came to realize that the project of modernity had shattered and that, from that point onward, the socio-economic context would only deteriorate. Consequently, they found themselves living in a state of perpetual crisis, which extended to their personal lives as well. In this sense, the futurephobic generation is characterized by a fearful attitude toward a future that can only get worse, adopting conservative attitudes instead of making big bets. Futurephobia also entails an element of inevitability, associated with the idea that it is no longer possible to alter what lies ahead, and thus "our best option is for [the future] not to arrive or to arrive later" (García-Barnés, 2022, p. 19, in Spanish in the original).
Based on these ideas, we hypothesize that a higher level of futurephobia will be associated with negative attitudes toward technologies, such as AI, that emerge in the present and project into the future (with potential to improve people's lives and societal issues). As futurephobics perceive the future as irremediably worse, they will also tend to think that emerging technologies will be useless at best. Based on this perception, it is also reasonable to assume that futurephobics will be less inclined to adopt this technology and make the associated learning efforts. More formally, we propose the following hypothesis:

H2: Futurephobia will be inversely associated with favorable perceptions of communicative AI (H2a) and with its frequency of use (H2b).

Sample
The data for this study were obtained from the second wave of a larger research project addressing the impact of media uses on political knowledge and social life (budget line 2022/0000587, funded by the Universidad de La Laguna and the Spanish Ministry of Science, Innovation, and Universities). Data were gathered through an online survey hosted on Qualtrics, which was distributed to a diverse sample of residents of Spain. For the distribution of the questionnaire link, we contracted the services of Netquest, a market research company that administers online panels in more than 20 countries; in Spain, Netquest has over 100,000 registered panelists. In November 2022 (first wave, W1), Netquest distributed our questionnaire among 3,571 participants, according to age, gender, and educational level quotas that reflected the national distribution of these variables. Between February 17 and 27, 2023, we recontacted the 1,207 W1 panelists from whom we had obtained valid responses and obtained 821 valid responses in W2 (a retention rate of 68%). AI-related questions were only included in W2. This second wave is well balanced in its gender composition (49.6% female), has a mean age of 50.78 years (SD = 16.26; 8.3% aged under 25, 11.2% between 25 and 34, 17.6% between 35 and 44, 35.9% between 45 and 64, and 27% aged 65 or older), and has a median education of higher-level vocational training (M = 3.83, SD = 1.87 on a seven-point scale ranging from 1 = primary education to 7 = postgraduate and doctoral education; 10.8% reported having completed only primary education, while 39.2% have at least mid-level university education).

Variables of interest
-Communicative AI use.
We asked respondents about their frequency of use over the past weeks (from 1 = never to 5 = all the time) of three types of generative AI-based services with a communicative application, namely "AI chatbots such as ChatGPT," "AI for creating images (Dall-E, Midjourney...)," and "socialization chatbots such as Replika" (Cronbach's α = .87; M = 1.26; SD = 0.59). As inferred from the low mean value, a significant proportion of respondents (76%) never used any of these three services.

-Attitudes toward communicative AI (closed-ended).
This dependent variable explores respondents' attitudes toward the contribution of AI to personal and social development, where higher values indicate favorable attitudes and lower values unfavorable ones. We asked about their degree of agreement (from 1 = strongly disagree to 5 = strongly agree) with three statements about the same communicative AI programs and applications mentioned in the previous variable: "AI will make our lives easier," "AI will assist us in many tasks that previously required a lot of effort," and "Overall, AI will improve our quality of life" (Cronbach's α = .88; M = 3.30; SD = 0.85).
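For readers less familiar with the reliability coefficient reported for these scales, Cronbach's α can be computed from the item variances and the variance of the summed scale. The sketch below is our own illustration with toy data, not the study's dataset, and the function name is ours:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    `items` is a list of k lists, each holding one item's scores for
    the same n respondents. The variance ratio is unchanged whether
    population or sample variance is used, as long as the choice is
    consistent across numerator and denominator.
    """
    k = len(items)
    sum_item_vars = sum(pvariance(col) for col in items)
    # Total score per respondent across the k items.
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Two perfectly parallel items yield the maximum reliability of 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]])
```

In practice, α close to 1 (as with the .87 and .88 values above) indicates that the items covary strongly enough to be averaged into a single index.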

-Attitudes toward communicative AI (open-ended).
In addition to the items mentioned in the previous variable, an open-ended question was included to explore potential attitudes not covered in the study design. Thus, we sought to assess the cognitive, affective, and behavioral components of attitudes toward AI with the following statement: "Explain in your own words what you think about the programs and applications of artificial intelligence [mentioned in the previous questions] and how the idea of a world in which AI is present in everyday life makes you feel."

-Futurephobia.
Based on the notion of futurephobia described by García-Barnés (2022; see also Kantenbacher et al., 2022; Zaleski, 1996), this variable measures a negative attitude toward the future, stemming from the "feeling [that] everything that is to come is going to be worse" and that "nothing you can do will change things" (García-Barnés, 2022, pp. 19-22). We asked respondents to indicate their level of agreement (on a 5-point Likert-type scale) with the following three statements: "Future generations will have a worse future than today's generations," "What is to come in the future is probably worse than what we have today," and "It is difficult for us to do anything to escape a worse future" (Cronbach's α = .73; M = 3.41; SD = 0.89).

-Personality traits.
To assess personality traits, we employed the Ten-Item Personality Inventory (TIPI), an instrument widely used in the scientific literature that operationalizes the big five personality dimensions (Gosling; Rentfrow; Swann, 2003). The inventory consists of two items for each dimension and, since the data collection was conducted in Spain, the Spanish adaptation by Romero and colleagues (2012) was used.

-Demographic variables.
The first wave of the questionnaire included several demographic variables used as predictors in the regression models. In addition to age, gender, and educational level, which were detailed in the previous section, information on income was also collected. For this purpose, an item on net household income was included (from 1 = less than €600 to 6 = more than €3,900): M = 3.69, SD = 1.35, median = 4.00 (€1,801 to €2,700).

Statistical analyses
To test our hypotheses and answer our research questions, we first generated a zero-order correlation table for our variables of interest using the pairwise deletion procedure, with the assistance of the SPSS statistical software, version 25. Next, using the same software, we developed ordinary least squares (OLS) regression models with attitudes toward communicative AI and communicative AI use as dependent variables.
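The regression models reported here use heteroscedasticity-consistent (HC0) standard errors, which replace the usual pooled error variance with each observation's own squared residual. As a rough plain-Python illustration of that correction (the function name and toy data are ours; the actual analysis was run in SPSS), here is a bivariate OLS fit with an HC0 standard error for the slope:

```python
from math import sqrt

def ols_hc0(x, y):
    """Bivariate OLS slope plus an HC0 (White-type) standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    xc = [xi - mx for xi in x]              # mean-centered predictor
    sxx = sum(xi * xi for xi in xc)
    b1 = sum(xi * (yi - my) for xi, yi in zip(xc, y)) / sxx
    b0 = my - b1 * mx
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    # HC0: each squared residual keeps its own weight rather than
    # assuming one common error variance (homoscedasticity).
    var_b1 = sum((xi ** 2) * (ei ** 2)
                 for xi, ei in zip(xc, resid)) / sxx ** 2
    return b1, sqrt(var_b1)

# Exactly linear toy data: the slope is recovered and, since every
# residual is zero, the robust standard error is zero as well.
b1, se = ols_hc0([1, 2, 3, 4], [3, 5, 7, 9])
```

With heteroscedastic residuals, this estimator inflates or deflates the standard error to match where the noise actually is, which is the property the HCREG macro provides in SPSS.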
After examining the distribution of the dependent variables, we found that the prevalence of communicative AI use was very low in our sample. This was expected, given that the technology became available to the public just a few months before the survey was launched. To minimize the problems that this skewed distribution could cause in statistical significance testing, we followed Boulton and Williford's (2018) recommendations for cases of "true zeros." Thus, we created two new variables from the original communicative AI use variable. To construct the first, binary variable (0/1), all "never" responses were recoded as 0, while all other responses, ranging from "rarely" to "all the time," were recoded as 1. With this initial step, we divided respondents into non-users and users of communicative AI, without considering their frequency of use of the technology. In a second step, we created another variable by recoding all "never" responses as missing values. This double approach minimizes problems in calculating statistical significance and considers two separate processes: one that governs whether an individual will become an AI user or not (a dichotomous variable), and another that governs the frequency with which they will use communicative AI once they have decided to use it (a variable with "never" responses treated as missing values). In the linear models, instead of assuming homoscedasticity, we used the HCREG macro for SPSS, which provides consistent standard errors in the presence of heteroscedasticity (HC0; see Hayes; Cai, 2007).
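The two-step recoding described above can be sketched as follows (the function name and toy responses are illustrative; the study's recoding was done in SPSS):

```python
def split_zero_inflated(responses, never=1):
    """Two-part recoding for a zero-inflated frequency item
    (1 = never ... 5 = all the time): a 0/1 adoption indicator,
    plus a frequency variable in which 'never' becomes missing
    (None), so it models frequency among users only."""
    adopted = [0 if r == never else 1 for r in responses]
    freq_among_users = [None if r == never else r for r in responses]
    return adopted, freq_among_users

# Four 'never' respondents and two users of varying intensity.
adopted, freq = split_zero_inflated([1, 1, 3, 1, 5, 2])
# adopted -> [0, 0, 1, 0, 1, 1]
# freq    -> [None, None, 3, None, 5, 2]
```

The first variable then feeds a model of adoption (user vs. non-user), while the second feeds a model of usage intensity estimated only on adopters.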
Regarding the analysis of the open-ended item about attitudes toward communicative AI, we first scrutinized the manifest content of the responses without imposing prior categories. After a discussion among the study authors, we inductively generated six new variables that captured affective and cognitive aspects of attitudes toward AI, along with a seventh, more general variable, which categorized the responses as positive, negative, or neutral attitudes. Subsequently, the first and third authors of the study coded the entire sample, and the reliability of the coding process was determined using Krippendorff's alpha coefficient. The results of the reliability tests led us to discard one of the variables (skepticism/distrust) due to an unacceptable value in the statistic (.553). For the rest of the variables, acceptable Krippendorff's alpha values were obtained, ranging from a maximum of .926 (overall positive, negative, or neutral attitude) to a minimum of .701 (caution).
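For the special case relevant here (two coders, nominal categories, no missing codes), Krippendorff's alpha reduces to one minus the ratio of observed to expected disagreement in the coincidence matrix. The sketch below is our own illustration of that special case, not the tool the authors used to obtain the coefficients reported above:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(coder_a, coder_b):
    """Krippendorff's alpha for nominal data, two coders, no missing
    values: alpha = 1 - D_o / D_e."""
    # Coincidence counts: each coded unit contributes both ordered
    # pairs (a, b) and (b, a).
    o = Counter()
    for a, b in zip(coder_a, coder_b):
        o[(a, b)] += 1
        o[(b, a)] += 1
    n_c = Counter()                       # marginal totals per category
    for (c, _), cnt in o.items():
        n_c[c] += cnt
    n = sum(n_c.values())                 # = 2 * number of units
    d_obs = sum(cnt for (c, k), cnt in o.items() if c != k) / n
    d_exp = sum(n_c[c] * n_c[k]
                for c, k in permutations(n_c, 2)) / (n * (n - 1))
    return 1 - d_obs / d_exp

# Perfect agreement between the two coders gives alpha = 1.0.
alpha = krippendorff_alpha_nominal([0, 0, 1, 1], [0, 0, 1, 1])
```

Values of .70 and above are commonly treated as acceptable for exploratory coding, which is consistent with the .701 cutoff retained in this study.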

Results
The first research question (RQ1) addressed the general attitudes of Spanish adults toward communicative AI. The answer to this RQ stems from the analysis of the open-ended item. First, it is observed that many responses do not specifically focus on communicative AI, but rather refer to AI in a more general sense, even though the question focused on its communicative aspects. This tendency to generalize, or "semantic spillover," suggests that many people are still not very familiar with AI, which leads to a global perception of AI without fully understanding or differentiating its various aspects. For example, a 66-year-old male responded that "it should be used safely, moderately, and only when necessary, especially in medicine." Another respondent, a 32-year-old male, believed that AI-based technologies represent "an important advance that still has a long way to go before being fully implemented [... and that] they should be subject to the user's desire to use them or not, as in the case of vehicles with autonomous driving systems".
The lack of knowledge about AI is explicitly mentioned in up to 15.4% of the valid responses, where respondents state that they are unaware of AI in general or its communicative applications in particular: "I am not familiar with [communicative AI] in depth, only through news I hear on the radio and some conversations with family members. I would not like to depend on AI, just as it has been shown how dangerous it is to rely on gas, semiconductors, face masks, etc." (female, 59).
Due to the open-ended nature of the survey question, many of the responses identified only one of the components or dimensions of the attitude, which was often sufficient to determine whether the attitude was positive, negative, or neutral. In this regard, 32.6% of the valid responses reflected a negative attitude toward communicative AI, while 26.8% indicated a positive attitude. Another 25.2% expressed a neutral attitude, whereas 15.4% did not yet know what to think or feel. In a more detailed analysis of the affective, cognitive, and, to a lesser extent, behavioral components that constitute these attitudes, elements of "caution," "fear," "opportunity," "dehumanization," and "job threat" were identified.
Elements of caution were present in 11.3% of the valid responses, which pointed out the need for regulation or highlighted both the positive aspects and the potential risks that should be taken into account: "It can be a good tool, but it should be very well controlled" (female, 24); "I think it's great that they make things easier for us, but I believe we are being too closely monitored" (male, 39).
In 6.9% of the valid responses, we found components of fear associated with a potential misuse of communicative AI and its attendant dangers for individuals or society. A 66-year-old man noted that he had been "very happy until now [but] these artificial intelligence things […] make me a little afraid [because he is a] more traditionalist person." Other respondents focused on fears of AI gaining control and autonomously defining its own agenda: "I'm a little afraid that robots could overtake us" (female, 21); "[…] it creates a little fear for how [these technologies] evolve in their knowledge" (male, 52); "I'm scared that they could get out of control and cause global chaos" (male, 44).
Finally, some other responses alluded to the fear of humans ending up "completely foolish" (female, 42) or losing "abilities" (female, 22).
A slightly lower percentage (6.2%) perceived communicative AI as an opportunity, emphasizing its potential medium- and long-term benefits. This perceived opportunity was specified in areas such as:
- research, health, and safety: "It can be of great help in issues related to safety or health, like assisting calls to emergency services or anti-suicide chats" (female, 35);
- automation of repetitive tasks in the workplace: "AI could make certain daily tasks much easier for us, tasks that used to require more effort and are now more manageable" (male, 59);
- management of large amounts of information.
Around 6% of the respondents showed a negative attitude toward communicative AI, associated with a possible "dehumanization" (male, 24) or loss of the traits that make us human: "It will help us greatly. But there is concern that human contact will occur less and less" (male, 65); "[These technologies] impede contact with other people" (male, 71); "[...] There will be no human warmth" (male, 78).
Finally, in another smaller group of responses (3.5%), negative attitudes related to possible job destruction were detected: "They will get rid of human workers" (female, 41); "I don't like [communicative AI] very much, it would take away many jobs and make the rich even richer" (male, 37).

The second research question asked about the sociodemographic antecedents of attitudes toward (RQ2a) and use of (RQ2b) communicative AI. Similarly, RQ3 addressed the personality antecedents of attitudes toward (RQ3a) and use of (RQ3b) communicative AI. The zero-order correlation matrix in Table 1 shows that education (r = .17, p < .001), income (r = .14, p < .001), extraversion (r = .10, p < .01), and openness to experience (r = .13, p < .001) are positively associated with favorable attitudes toward AI, whereas futurephobia (r = -.10, p < .01) shows a negative correlation.
However, once we test a more rigorous regression model controlling for three blocks of predictors (demographics, personality, and futurephobia), we observe that only three independent variables show a statistically significant association with attitudes toward communicative AI (Table 2). Thus, in response to RQ2a, education (β = .104, p < .01) is a positive predictor of favorable attitudes toward communicative AI: those with a higher level of education show better attitudes toward communicative AI than those with a lower level. Age does not seem to be associated with worse perceptions of communicative AI. Regarding personality variables, higher openness to experience (RQ3a/H1a, β = .099, p < .05) is associated with better attitudes toward AI. Futurephobia, on the other hand, is a negative predictor of attitudes toward AI (H2a, β = -.108, p < .01).
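The blockwise (hierarchical) regression described above can be sketched as follows. This is an illustrative sketch on synthetic data, not the authors' analysis: the variable names, coding, and simulated effect sizes are assumptions chosen only to mirror the direction of the reported coefficients.

```python
# Blockwise OLS: demographics first, then personality, then futurephobia.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "education": rng.integers(1, 6, n),     # hypothetical 1-5 scale
    "income": rng.integers(1, 8, n),        # hypothetical 1-7 scale
    "age": rng.integers(18, 80, n),
    "openness": rng.normal(0, 1, n),
    "futurephobia": rng.normal(0, 1, n),
})
# Simulated outcome loosely mirroring the reported effect directions
df["attitude"] = (0.10 * df["education"] + 0.10 * df["openness"]
                  - 0.11 * df["futurephobia"] + rng.normal(0, 1, n))

# Block 1: demographics; Block 2: + personality; Block 3: + futurephobia
m1 = smf.ols("attitude ~ education + income + age", data=df).fit()
m2 = smf.ols("attitude ~ education + income + age + openness", data=df).fit()
m3 = smf.ols("attitude ~ education + income + age + openness + futurephobia",
             data=df).fit()
print([round(m.rsquared, 3) for m in (m1, m2, m3)])  # R² rises block by block
```

Comparing the R² increment of each block shows how much demographics, personality, and futurephobia each add to the explained variance in attitudes.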
Regarding RQ2b and RQ3b, the data in Table 1 indicate that positive attitudes toward communicative AI positively correlate with the frequency of using such technologies (r = .14, p < .001). In contrast, other demographic and personality variables are negatively related to the frequency of AI usage: age (r = -.27, p < .001), agreeableness (r = -.19, p < .001), conscientiousness (r = -.14, p < .001), and emotional stability (r = -.10, p < .01). Table 3 displays the impact of each of these independent variables in the regression models once we account for the effect of the other predictors in the model. In the simplest model (OLS model B, with a right-skewed dependent variable due to an excess of "never" responses), only positive attitudes toward communicative AI positively predict its usage (β = .117, p < .01). In contrast, age (β = -.276, p < .001), female gender (β = -.092, p < .05), income (β = -.086, p < .05), and agreeableness (β = -.121, p < .01) are negative predictors. However, due to the skewed distribution of the dependent variable, these results may be imprecise.

In summary, educational level, openness to experience, and futurephobia are associated with attitudes toward communicative AI: individuals with a higher education level, greater openness to experience, and a lower level of futurephobia are more likely to express positive attitudes toward communicative AI. This answers RQ2a and RQ3a and provides empirical support for H1a and H2a. As for the use of communicative AI, younger individuals, men, and those with more favorable attitudes toward communicative AI are more likely to decide to use it, while individuals scoring higher on agreeableness tend to use it less frequently, once they have decided to make use of it. This addresses RQ2b and RQ3b and leads us to reject H1b and H2b.

Discussion and conclusions
Since 2022, communicative applications of AI have experienced significant momentum and popularization. Tools like ChatGPT, Dall-E, Midjourney, or Replika have allowed millions of users worldwide to communicate with (and not only through) chatbots, generate images and videos, and even engage in romantic relationships with personalized avatars.
In this context of technological disruption and potential shift in communicative paradigms, this study explores citizens' attitudes toward communicative applications of AI, as well as the sociodemographic and personality characteristics that may explain their favorable perception and usage of AI.The study is particularly relevant for understanding the feedback and mutual influence processes between perceptions and usage, as well as the characteristics of innovators and early adopters.
Our analyses yield several findings that are relevant to understanding the relationships between individual-level characteristics and attitudes toward, and use of, communicative AI. First, responses to our open-ended item reveal the lack of familiarity of a large proportion of citizens with communicative applications of AI. Many of the respondents perceive AI globally, without differentiating between its multiple facets (e.g., communicative, medical, or autonomous-driving applications). We interpret this "semantic spillover" of communicative AI as a reflection of societal unawareness of its complexity and potential benefits and dangers, something that may hinder the democratic debate about the implementation and regulation of these technologies.
At a general level, negative attitudes toward communicative AI slightly predominate over neutral and positive attitudes. Some of the most prominent affective and cognitive components of these negative attitudes include fear or concern, caution, and the perception of dehumanizing aspects and job threats associated with the use of communicative AI. In contrast, those with positive attitudes toward AI emphasize the element of opportunity in connection with the challenges of humanity or with people's day-to-day struggles. Some of these perceptions resonate with the diagnosis of some of the developers of these advances, who fear that AI could go wrong and potentially pose an extinction risk to humanity (El País, 2023). Beyond these existential concerns, respondents' answers point to more tangible risks, such as privacy violations, lack of transparency, and potential discrimination and biases associated with this technology (Lepri; Oliver; Pentland, 2021). These descriptions share common elements with the findings of Brewer and colleagues (2022) in the United States, who observed that the public framed AI as a tool of progress (9%) or as a Pandora's box (13%).
In the light of these findings, developing AI literacy, as well as strengthening mechanisms of public oversight, could mitigate legitimate concerns of the public and promote the use of these technologies for the enhancement of individual and social well-being (Hermann, 2022).
Second, our findings highlight the importance of certain individual antecedents (sociodemographic characteristics and personality traits) in explaining the favorable perception of communicative AI. Our data indicate that individuals with higher education, greater openness to experience, and lower levels of futurephobia tend to perceive communicative AI more positively. The latter two findings align with our hypotheses, whereas the relationship between educational level and favorable attitudes toward communicative AI would benefit from a more detailed analysis beyond what our data can provide. One possibility is that this relationship is indirect, as higher educational attainment is often associated with higher trust in science and scientists (see, for example, Nadelson et al., 2014). This, in turn, could explain why those with higher education levels express more favorable attitudes toward tools that are essentially developed by scientists. It is also conceivable that individuals with a higher educational level are better equipped to comprehend the various facets of AI. When questioned about communicative AI, they may not consider other, potentially more problematic applications, such as military uses or autonomous driving.
Third, our study also provides evidence on the characteristics of innovators and early adopters of these communicative technologies. Our two-part model indicates, in its first part, that gender, age, and attitudes toward communicative AI are the key variables that best predict the decision to start using communicative AI. Specifically, men, younger individuals, and those with positive attitudes toward AI are more likely to adopt this technology, regardless of their frequency of use. These findings partially align with previous literature and the postulates of innovation diffusion theory, which suggest that early adopters tend to be men with high levels of education and income (Dutton; Rogers; Jun, 1987; Rogers, 2003). However, in our case, education and income did not predict either the decision to use communicative AI or its frequency of use. This lack of influence can likely be attributed to the freemium model used by many of these applications, which allows users to test them without incurring any costs. This has not been the case with many other, more expensive communication technologies (mobile, satellite radio, internet, etc.), where income has played an important role in early adoption. Once a decision has been made to use this technology, only agreeableness negatively predicts its frequency of use.
The findings of our study should be considered in the light of its limitations.Firstly, our sample, despite being diverse and nationwide, remains a convenience sample.The online questionnaire was distributed through a link to a sample of pre-registered panelists, implying that the sample may possess higher levels of digital skills than the average population.
In this regard, it is possible that the general population's levels of communicative AI usage might be lower than those observed in our study. However, this discrepancy is not a major concern, as our study is predominantly explanatory and does not aim to ascertain the exact distribution of these perceptions and behaviors among the population (see Baker et al., 2013, for a more detailed explanation). Secondly, we opted for a quantitative method of data collection, namely a survey. Our approach renders our assertions more explicit and allows the application of statistical procedures to test our hypotheses and generate explanations about the antecedents of the attitudes and behaviors under study. However, it is true that "by focusing specifically on what we'll include in our measurement of the concept, […] we also exclude any other meanings" (Babbie, 2007, p. 24).
Other qualitative research methods, such as in-depth interviews, focus groups or participant observation, could provide complementary data that would enable researchers to listen to participants in their own words and provide greater richness of meaning.Finally, another limitation stems from the challenge of analyzing a rapidly changing phenomenon that is not yet fully consolidated.Nevertheless, the research is carried out at a unique moment to study specific users, the pioneers or early adopters, whose attitudes must be captured as the technology is beginning to spread.
All in all, our work has significant implications for better understanding communicative AI and its social perception. Firstly, we contribute to filling the research gap regarding perceptions of communicative AI in Spain. Up until this point, the most prominent data on social perceptions of AI have come from reports by private companies. Our study includes a diverse sample from across Spain and employs multivariate models to reduce the possibility of reporting spurious relationships. Secondly, we propose a line of inquiry into a specific aspect of AI, its communicative applications, and endeavor to distinguish it from other uses of this technology. Additionally, we introduce a novel personality variable, namely futurephobia, into our models, which we postulate as an important antecedent of attitudes toward AI and its early adoption. Further studies can incorporate futurephobia to ascertain its significance as this and other technologies continue to develop.
In summary, our study sheds light on the attitudes of the Spanish population toward an emerging and likely disruptive phenomenon such as communicative AI. These initial attitudes, particularly those of pioneering users who interact with communicative AI, may play a relevant role in its development and, perhaps even more crucially, in the legal and regulatory framework that will govern this technology.

Table 1.
Correlations among the study variables

Note. Cell entries are zero-order pairwise correlation coefficients. n ranges from 782 to 817, differing for each pair of variables because pairwise deletion of missing data was used. Superscripts: a = p < .05, b = p < .01, c = p < .001 (two-tailed).
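As a minimal illustration of how a correlation matrix with pairwise deletion is computed, the sketch below uses pandas, whose DataFrame.corr() applies pairwise-complete observations by default, so each cell's n can differ. The data and variable names here are synthetic assumptions, not the study's dataset.

```python
# Zero-order correlations with pairwise deletion of missing data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(817, 3)),
                  columns=["attitude", "age", "use"])
# Simulate missingness on one variable
df.loc[rng.choice(817, 35, replace=False), "age"] = np.nan

corr = df.corr(method="pearson")  # pairwise deletion by default
# n per variable pair (counts of jointly non-missing observations)
mask = df.notna().astype(int)
n_pairs = mask.T @ mask
print(corr.round(2))
print(n_pairs)
```

Because deletion is pairwise rather than listwise, correlations involving the incomplete variable rest on fewer cases (782 here) than the others (817), which is why the table note reports a range of n rather than a single value.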

Table 3.
Regression models predicting communicative AI use (Models A-C)

Table 2.
Regression model predicting attitudes toward AI (Model A)

The two-part model (C and D, Table 3) recommended by Boulton and Williford (2018) solves the issue of the skewed distribution of the dependent variable. Thus, model C explains which variables contribute to the decision to use communicative AI (a dichotomous dependent variable that does not consider usage frequency). According to this model, older respondents are less likely to use communicative AI (RQ2b, odds ratio = 0.956, 95% CI [.944, .969]), as are women (RQ2b, odds ratio = 0.598, 95% CI [.396, .904]). Conversely, individuals with favorable attitudes toward communicative AI are more likely to use it (RQ2b, odds ratio = 1.367, 95% CI [1.081, 1.729]). The second part of the model (model D) pertains to the process that determines the frequency of using AI once the decision to use it is made. In this model, age (RQ2b, β = -.130, p < .10) and futurephobia (H2b, β = -.144, p < .10) approach the conventional threshold of statistical significance but do not reach it. In contrast, agreeableness (RQ3b, β = -.259, p < .01) is negatively associated with the frequency of communicative AI use, once the decision to use it is made.
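A minimal sketch of this two-part logic, assuming synthetic data and illustrative variable names (not the authors' dataset): a logistic model for the decision to use communicative AI at all, followed by OLS on usage frequency estimated among users only.

```python
# Two-part (hurdle-style) model for a right-skewed usage variable with many zeros.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "attitude": rng.normal(0, 1, n),
    "agreeableness": rng.normal(0, 1, n),
})
# Simulated adoption and frequency, mirroring the reported effect directions
p_use = 1 / (1 + np.exp(-(1.5 - 0.04 * (df["age"] - 45) + 0.3 * df["attitude"])))
df["uses_ai"] = rng.binomial(1, p_use)
df["frequency"] = np.where(df["uses_ai"] == 1,
                           3 - 0.25 * df["agreeableness"] + rng.normal(0, 1, n),
                           0)

# Part 1: who uses it at all (exponentiated coefficients are odds ratios)
part1 = smf.logit("uses_ai ~ age + female + attitude", data=df).fit(disp=0)
odds_ratios = np.exp(part1.params)

# Part 2: frequency of use, estimated on users only
part2 = smf.ols("frequency ~ agreeableness", data=df[df["uses_ai"] == 1]).fit()
print(odds_ratios.round(3))
print(part2.params.round(3))
```

Splitting the model this way avoids regressing on a zero-inflated outcome directly: the logit handles the adoption decision, and the OLS part is fit only where frequency is meaningfully defined.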