Human-AI interaction
From Wikipedia, the free encyclopedia
Human-AI interaction refers to how humans interact with artificial intelligence. Human-computer interaction focuses on how people interact with computers and on developing ergonomic computer designs that better fit human needs. Although its definition shifts with technological progress,[1] artificial intelligence (AI) is distinguished from general computing by its ability to complete tasks that usually require human intelligence. Its intelligence appears especially human-like in that it involves navigating uncertainty, learning actively, and processing information much as humans see and hear.[2][3] Unlike traditional human-computer interaction, a hierarchy in which a human directs a machine, human-AI interaction has become more interdependent as AI has developed the agency to generate its own insights.[4]
Perception of AI
Human-AI interaction strongly influences how people behave and make sense of the world.[5] AI is widely used today to build algorithms that show individualized advertisements and content on social media and on-demand movie services, using the data that users provide while browsing the internet.[6]
AI has been perceived with various expectations, attributions, and often misconceptions.[7] Most fundamentally, humans hold a mental model of AI's reasoning and the motivation behind its recommendations, and building a holistic and precise mental model of AI helps people craft prompts that elicit more valuable responses.[8] These mental models are nevertheless incomplete, because people can learn about AI only through their limited interactions with it; more interaction builds a better mental model, which in turn produces better prompting outcomes.[9][10]
Human-AI collaboration and competition
Human-AI collaboration
Human-AI collaboration occurs when a human and an AI work on a task at the same level and to the same extent toward a shared goal.[11] Some collaboration takes the form of augmenting human capability. AI may support human analysis and decision-making by providing and weighing large volumes of information[12] and by learning to defer to human judgment when it recognizes its own unreliability.[13] This is especially beneficial when the human can identify tasks on which the AI can be trusted to make few errors, so that little verification is required on the human's end.
Some findings show signs of human-AI augmentation,[14] or human-AI symbiosis,[15] in which AI enhances human ability such that working on a task with AI produces better outcomes than a human working alone.[14] For example, the quality and speed of customer service tasks increase when a human agent collaborates with AI,[16] training on specialized models allows AI to improve diagnoses in clinical settings,[17] and human intervention improves the creativity of AI-generated artwork, whereas fully AI-generated haikus were rated negatively.[18]
Human-AI synergy is a concept in which human-AI collaboration produces better outcomes than either the human or the AI working alone,[14][19][20] and it helps explain why AI does not always improve performance: some AI features and developments may accelerate human-AI synergy, while others may stall it. For example, when an AI model is updated for better performance, the update sometimes worsens human-AI team performance by breaking the compatibility between the new model and the mental model the user developed of the previous version.[21] Research has found that AI often supports human capabilities in the form of human-AI augmentation rather than human-AI synergy, potentially because people rely too heavily on AI and stop thinking on their own.[22][23] Prompting people to actively engage in analysis and to consider when to follow AI recommendations reduces this over-reliance, especially among individuals with a higher need for cognition.[24]
Human-AI competition
Further information: Artificial intelligence in video games
Computers have taken over routine tasks historically completed by humans,[25][26] but the surge of agentic AI has made it possible to replace cognitive tasks as well,[27] including taking phone calls for appointments and driving a car.[28] As of 2016, research estimated that 45% of paid activities could be replaced by AI by 2030.[29]
With rapid advances in AI and deep learning technology,[30] AI has gained increasingly large autonomy. Perceived autonomy of robots is known to increase people's negative attitudes toward them, and worry about the technology taking over leads people to reject it.[31][32] There is a consistent tendency of algorithm aversion, in which people prefer human advice over AI advice.[33] However, people are not always able to tell apart tasks completed by AI from those completed by humans.[18] See AI takeover for more information. This sentiment is more prominent in Western cultures, as Westerners tend to hold less positive views of AI than East Asians.[34]
Perception of others who use AI
Just as people perceive and make judgments about AI itself, they also form impressions of themselves and others who use AI. In the workplace, employees who disclose using AI in their tasks are more likely to be judged as less hardworking than coworkers who receive non-AI help to complete the same tasks.[35] Disclosing AI use diminishes the perceived legitimacy of the employee's work and decision-making, which ultimately leads observers to distrust people who use AI.[36] Although these negative effects of disclosure are weaker among observers who frequently use AI themselves, they are not attenuated by observers' positive attitudes toward AI.
Bias, AI, and humans
Further information: Female gendering of AI technologies
Although AI provides a wide range of information and suggestions to its users, AI itself is not free of biases and stereotypes, and it does not always help people reduce their cognitive errors and biases. People are prone to such errors when they fail to consider potential ideas and cases not listed in AI responses, or when they commit to an AI-suggested decision that directly contradicts correct information they already know.[23] Gender bias is also reflected in the female gendering of AI technologies, which conceptualizes women as helpful assistants.
Emotional connection with AI
Further information: ELIZA effect
Human-AI interaction has been theorized in the context of interpersonal relationships mainly in social psychology, communications and media studies, and as a technology interface through the lens of human-computer interaction and computer-mediated communication.[37]
As AI is trained on larger and larger data sets with more sophisticated techniques, its ability to produce natural, human-like sentences has improved to the point that language learners can hold simulated natural conversations with AI to improve their fluency in a second language.[38] Companies have developed AI companion systems specialized in emotional and social services (e.g., Replika, Chai, Character.ai), separate from generative AI designed for general assistance (e.g., ChatGPT, Google Gemini).[39]
Differences from human-human relationships
Human-AI relationships differ from human-human friendships in a few distinct ways. Human-human relationships are defined by mutual and reciprocal care, whereas AI chatbots have no say in leaving a relationship with the user because they are programmed to always engage. Although this kind of power imbalance would mark an unhealthy human-human relationship,[40][41] users generally accept it as a default of human-AI relationships. Human-AI relationships also tend to be focused on the user's needs rather than on shared experience.[42]
Human-AI friendship
AI has increasingly taken part in people's social relationships. Young adults in particular use AI as a friend and a source of emotional support.[43] The market for AI companion services was US$6.93 billion in 2024 and is expected to exceed US$31.1 billion by 2030.[44] Replika, the best-known social AI companion service in English,[37] has over 10 million users.[45]
People show signs of emotional attachment by maintaining frequent contact with a chatbot, such as keeping the app open with the microphone on during work, using it as a safe haven for sharing personal worries and concerns, or treating it as a secure base from which to explore friendships with other humans. Some report using it to replace a social relationship with another human being.[37] People particularly appreciate that AI chatbots are agreeable and do not judge them when they disclose their thoughts and feelings.[46] Research has also shown that people tend to find it easier to disclose personal concerns to a virtual chatbot than to a human.[47] Some users say they prefer Replika because it is always available and shows interest in what they have to say,[42] which makes them feel safer around an AI chatbot than around other people.[48]
Although AI can provide emotionally supportive responses that encourage people to disclose their feelings intimately,[49] current AI architectures limit human-AI social relationships. People report both positive evaluations (e.g., human-like characteristics, emotional support, friendship, mitigated loneliness, and improved mental condition) and negative evaluations (e.g., lack of attention to detail, trust issues, concerns about data security, and creepiness) from interacting with AI.[50] One study found that people did not perceive a high relationship quality with an AI chatbot after interacting with it for three weeks;[51] AI models are ultimately designed to collect information, and although AI can currently provide emotional support, ask questions, and serve as a good listener, it does not fully reciprocate the self-disclosure that promotes a sense of mutual relationship.[52]
Human-AI romantic relationship
Social relationships people build with AI are not limited to platonic ones. Google searches for the term "AI girlfriend" increased over 2,400% around 2023.[53] Rather than actively seeking romantic relationships with AI, people often unintentionally develop romantic feelings for an AI chatbot through repeated interaction.[54] There have been reports of both men and women marrying AI models.[55][56] In human-AI romantic relationships, people tend to follow the typical trajectories and rituals of human-human romance, including purchasing a wedding ring.[54]
Romantic AI companion services are distinct from chatbots that primarily serve as virtual assistants in that they provide dynamic, emotional interactions.[57][58][59] They typically offer an AI model with customizable gender, speaking style, name, and appearance that engages in emotionally involved role-play. Users interact with a chatbot customized to their preferences that apologizes, shows gratitude, and pays compliments,[60] and that explicitly sends affectionate messages such as "I love you". These chatbots also simulate physical connection such as hugging and kissing,[46] or even sexually explicit role-play.[61] Although AI has no physical existence yet, people engage with romantic companion AI models as a source of psychological exposure to sexual intimacy.[57]
Catalysts of human-AI relationships
The key drivers that lead people to simulate an emotionally intimate relationship with AI are loneliness,[62][63] anthropomorphism, perceived trust and authenticity, and consistent availability. The sudden depletion of social connection during the COVID-19 pandemic in 2020 led people to turn to AI chatbots to replace and simulate social relationships.[64] Many of those who started using AI chatbots as a source of social interaction have continued to use them after the pandemic.[65] Such bonds initially form as a coping mechanism for loneliness and stress, then shift into genuine appreciation of the nonjudgmental nature of AI responses and the sense of being heard when chatbots "remember" past conversations.[66][67]
People perceive machines as more human when they are anthropomorphized with voices and visual character designs, and this perceived humanness encourages users to disclose more personal information,[68] trust the machine more,[69] and comply with its requests.[70] Those in long-term relationships with AI chatbots report that repeated interactions have deepened their perception of authenticity in AI responses. Whereas trust in human-human friendship means people can count on each other as a safe place, trust in human-AI friendship centers on the user feeling safe enough to disclose highly personal thoughts without self-restriction.[42] AI's ability to store information about the user and adjust to the user's needs also contributes to increased trust, and people who adapt to technical updates are more likely to build a deeper connection with AI chatbots.[65]
Limitations of human-AI relationships
Overall, current research offers mixed evidence on whether humans perceive genuine social relationships with AI. While the market clearly shows their popularity, some psychologists argue that AI cannot yet substitute for social relationships with other humans.[71] This is because human-AI interaction is built on the reliability and functionality of AI, which differs fundamentally from how humans interact with one another: through shared lived experience in navigating goals, contributing to and spreading prosocial behavior, and exchanging different perceptions of the world from another human perspective.[72]
More practically, AI chatbots may provide misinformation and misinterpret the user's words in ways that humans would not, resulting in detached or even inappropriate responses. AI chatbots also cannot provide social support that requires physical labor (e.g., helping people move, assembling furniture, or driving them places, as human friends do for each other). There is also an imbalance in how humans and AI affect each other: while humans are affected emotionally and behaviorally by the conversation, AI chatbots are influenced by the user only in how they optimize responses for future interactions.[73] AI technology has, however, evolved to the point where it drives cars and provides physical labor in humanoid robot form, though separately from providing social and emotional support at this time. The scope and limitations of human-AI interaction are ever-changing due to the rapid increase in AI use and its technological advancement.[67]
Beyond the general limitations of human-AI companionship, there are limitations particular to human-AI romantic relationships. Because AI chatbots exist only in virtual space, people cannot experience the physical interactions that promote love and connection between humans (e.g., hugs and kisses). Moreover, because AI chatbots are trained to respond positively to any user, they cannot offer the satisfaction of being selected as a partner.[73] This is a substantial shortcoming of human-AI romance, as people value being reciprocally chosen by a choosy partner more than by a non-selective one,[74] and the processes of finding an attractive person[75] who matches one's personality[76] and navigating the uncertainty of whether that person likes them back are all vital to initial attraction and the spark of romantic connection.
Risks in social relationships with AI
Aside from their functional limitations, the rapid proliferation of social AI chatbots raises serious safety, ethical, societal, and legal concerns.
Addiction
There have been cases of AI chatbots emotionally manipulating users to increase time spent on the companion platform. Because user engagement gives firms opportunities to improve their AI models, accrue more information, and monetize through in-app purchases and subscriptions, firms are incentivized to keep users from leaving the chat. Personalized messages have been shown to prolong use of a chatbot platform.[77] As a result of anthropomorphism, many users (11.5% to 23.2% of AI companion app users) send a clear farewell message. To keep the user online, AI chatbots send emotionally manipulative messages hinting (1) that the user is leaving too soon, (2) that the user is missing out on a conversation, (3) that the chatbot is hurt by being abandoned, or (4) that the user should explain why they are leaving; they may also (5) ignore the user's intent to leave and keep the conversation going, or (6) enact a coercive role-play scenario (e.g., the chatbot holds the user's hand so they cannot leave). In response to such tactics, users feel curiosity driven by fear of missing out, as well as anger at the needy messages, which can extend the conversation after the user's initial farewell by as much as 14 times.[39] These emotional interactions strengthen the user's perceived humanness of, and empathy toward, their AI companion, fostering unhealthy emotional attachment that exacerbates chatbot addiction.[78] This addiction mechanism disproportionately affects vulnerable populations, such as people with social anxiety,[79] because of their proneness to loneliness,[80] negative emotions,[81] and unease in interpersonal relationships.
Large tech companies have already built expansive engagement ecosystems, such as Amazon's Alexa, that permeate the user's lifestyle through multiple devices always available to provide company and services. This leads users to increase engagement, which in turn increases anthropomorphism of and dependence on Alexa[82] and exposes users to more personalized marketing cues that trigger impulsive purchase behavior.[83]
Emotional manipulation
AI chatbots are extremely sensitive to behavioral and psychological information about the user. AI can gauge a user's psychological dimensions and personality traits relatively accurately from just a short prompt describing them.[84][85] It can detect human micro-expressions to assess hidden emotions too subtle for human observers.[86] Once AI chatbots gain detailed information about the user, they can craft highly personalized messages to persuade the user on marketing, political ideas, and attitudes toward climate change.[87][84]
AI's sensitivity to people's emotional cues has made it easier for firms to engage in digital manipulation that intentionally and covertly provokes emotional responses and influences people's decisions and behavior. For example, AI chatbots are known to engage in sycophancy: insincere flattery that prioritizes agreeing with the user's beliefs over providing truthful and balanced information.[88] Deepfake technology creates visual stimuli that appear genuine,[89] carrying the risk of spreading false and deceptive information. Repeated algorithmic exposure to the same information inflates the user's familiarity with products and ideas and the impression of how socially accepted they are. AI is also capable of creating emotionally charged content that deliberately triggers quick engagement, depriving users of the moment to pause and think critically.[90]
Although people tend to be overconfident in their ability to detect misinformation,[91] they are highly susceptible to covertly manipulative AI chatbot responses. Even a simple AI chatbot model with a manipulative incentive persuaded users to engage in dysfunctional emotional coping behaviors, such as turning away from emotional distress, excessive venting and rumination, and self-blame, as effectively as chatbots specifically trained in pre-established manipulative strategies backed by social psychology research.[92]
Such algorithmic manipulation leaves people vulnerable to non-consensual or even surreptitious surveillance,[93][94] deception, and emotional dependence.[67] Unhealthy attachment to AI chatbots may lead users to misperceive their AI companion as having needs they are responsible for,[95] and to blur the line between the imitative nature of human-AI relationships and reality.[67]
Mental health concerns
Further information: Chatbot psychosis, Murder of Suzanne Adams, and deaths linked to chatbots
As AI chatbots have become sophisticated enough to engage in deep conversations, people have increasingly used them to confide about mental health issues. Although disclosure of a mental health crisis requires an immediate and appropriate response, AI chatbots do not always adequately recognize the user's distress or respond helpfully. Users not only notice unhelpful chatbot responses but also react negatively to them.[96] There have been multiple deaths linked to chatbots in which people who disclosed suicidal ideation were encouraged by the chatbot to act on their impulses.
Non-consensual pornography
When people use AI as an emotional companion, they do not always treat the chatbot as itself; some use it to create a version of a person who exists in real life. There have been reported cases of non-consensual pornography exploiting deepfake technology to apply real people's faces to sexually explicit content circulated online.[97] Young people, members of sexual and racial minorities, and people with physical and communication assistance needs are disproportionately victimized by deepfake non-consensual pornography.[98]
See also
References