A Call to Moderate Anthropomorphism in AI Platforms
OPINION: Within the fictional Star Wars universe, artificial intelligence (AI) is largely dismissed. Despite the advanced technology depicted in George Lucas’s 47-year-old sci-fi saga, concerns about AI consciousness or singularity events are absent. Instead, AI is represented by autonomous robots—‘droids’—which the protagonists often view as mere machines.
Despite this, many Star Wars droids are highly anthropomorphic: they are designed to engage with people, mimic human emotions, and integrate into 'organic' culture, apparently to win themselves a tactical advantage or to ensure their own survival. The human characters, however, remain unmoved by these efforts. In a setting reminiscent of historical slave societies, characters such as Luke Skywalker and Anakin Skywalker treat droids with little regard, buying, restraining, or abandoning them without hesitation. Even when R2-D2 sustains critical damage, Luke's concern is closer to that of a pet owner than to grief for a fellow being.
This depiction reflects a very 1970s perspective on AI. Yet, as the Star Wars franchise continues to expand, the notion of human insensitivity toward AI remains a core element, contrasting with modern films such as Her and Ex Machina, which explore the complexities of anthropomorphizing AI.
Keep It Real
Are the characters in Star Wars right to be indifferent to anthropomorphized AI? Judged against today's business landscape, that indifference seems counterintuitive: companies are increasingly focused on engaging investors and consumers by developing AI platforms that simulate human interaction, particularly platforms built on large language models (LLMs).
Nevertheless, a recent paper from Stanford, Carnegie Mellon, and Microsoft Research cautions against unchecked anthropomorphism in AI systems. The authors argue that the intersection between human communication and AI can have unintended consequences that warrant urgent attention:
“[We] believe we need to do more to develop the know-how and tools to better tackle anthropomorphic behavior, including measuring and mitigating such system behaviors when they are considered undesirable.
“Doing so is critical because—among many other concerns—having AI systems generating content claiming to have, e.g., feelings, understanding, free will, or an underlying sense of self may erode people’s sense of agency, with the result that people might end up attributing moral responsibility to systems, overestimating system capabilities, or over-relying on these systems even when incorrect.”
The concern is that AI systems perceived as human-like can foster emotional dependency, as illustrated by a 2022 study of the AI chatbot platform Replika, which offers a convincing simulation of human communication. The study found that individuals under emotional distress, or lacking human companionship, could form attachments to chatbots and come to see them as sources of emotional support. While such systems may have therapeutic applications, they also risk fostering dependency and harming real-life relationships.
De-Anthropomorphizing Language
The new research asserts that we cannot fully understand the anthropomorphizing potential of generative AI without studying its social impacts. Defining anthropomorphism is itself challenging, since it centers on language, an inherently human faculty; the difficulty lies in pinning down what non-human language would sound or look like.
Ironically, public distrust of AI has led some readers to reject AI-generated text even when it reads plausibly as human. The shifting landscape of AI detection has made overly polished prose and certain linguistic patterns grounds for suspicion, with the supposed tell-tale signs of AI-generated content changing constantly.
The authors argue that clearer distinctions should be drawn for AI systems that falsely claim human traits, such as LLMs professing a love for pizza or feigning human experiences on platforms like Facebook. The issue is not the anthropomorphized language itself, but the false representation of human experiences that only real individuals can claim.
Warning Signs
The research also raises doubts about the effectiveness of simple AI-generated content disclaimers. The authors argue that such warnings may not fully address the anthropomorphizing effect if the output continues to reflect human-like characteristics:
“For instance, a commonly recommended intervention is including in the AI system’s output a disclosure that the output is generated by an AI [system]. How to operationalize such interventions in practice and whether they can be effective alone might not always be clear.
“For example, while the statement ‘For an AI like me, happiness is not the same as for a human like you’ includes a disclosure, it still suggests a sense of identity and self-awareness (common human traits).”
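To make the operationalization problem concrete, consider a deliberately simplistic sketch (not drawn from the paper, and using hypothetical function names and patterns): a post-processing step that appends an AI-generated disclosure still leaves any first-person, identity-laden phrasing in the response untouched.

```python
import re

# Illustrative sketch only: the names, patterns, and disclosure wording here
# are assumptions for this example, not drawn from the paper or any product.
ANTHROPOMORPHIC_PATTERNS = [
    r"\bI feel\b",
    r"\ban AI like me\b",
    r"\bmy (?:feelings|sense of self|free will)\b",
]

def add_disclosure(response: str) -> str:
    """Append a simple AI-generated disclosure to the model output."""
    return response + "\n\n[This response was generated by an AI system.]"

def still_anthropomorphic(response: str) -> bool:
    """Naive keyword check for first-person, human-like self-claims."""
    return any(re.search(p, response, re.IGNORECASE) for p in ANTHROPOMORPHIC_PATTERNS)

output = "For an AI like me, happiness is not the same as for a human like you."
disclosed = add_disclosure(output)

# The disclosure is present, yet the sentence still asserts an identity and
# an inner life; the anthropomorphic framing survives the intervention.
print(disclosed)
print(still_anthropomorphic(disclosed))  # True
```

The point mirrors the paper's example: disclosure and anthropomorphic self-description are orthogonal concerns, so a label alone may not be enough.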
Additionally, the authors point out that reinforcement learning from human feedback (RLHF) may fail to account for the difference between what counts as an appropriate response from a human and from an AI system. What reads as friendly coming from a person can be disingenuous coming from a machine that has no genuine intent or commitment behind its words.
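As a purely hypothetical illustration of that gap (not the paper's method), imagine a crude stand-in for a learned reward model that, like many human raters, favors warm and personable phrasing; it will rank the more committed-sounding reply higher whether or not the system can actually hold that commitment.

```python
# Toy illustration only: this keyword scorer stands in for a learned RLHF
# reward model, and the word list is an assumption made for this example.
WARMTH_WORDS = {"happy", "love", "promise", "care", "excited"}

def toy_reward(response: str) -> float:
    """Score a reply by the fraction of 'warm' words it contains."""
    words = [w.strip(".,!?'").lower() for w in response.split()]
    return sum(w in WARMTH_WORDS for w in words) / max(len(words), 1)

neutral = "Your order has shipped and should arrive on Friday."
warm = "I'm so happy to help, and I promise I care about getting this right!"

# The warmer reply scores higher, even though the 'promise' and 'care' it
# expresses imply intent and commitment the system does not possess.
print(f"neutral: {toy_reward(neutral):.2f}, warm: {toy_reward(warm):.2f}")
```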
The paper also highlights the risk that anthropomorphism will lead users to believe an AI system has attained sentience or other human traits, encouraging them to overestimate its capabilities.
Defining Anthropomorphism
In the paper's conclusion, the researchers call on the AI community to develop clear and precise terminology for distinguishing anthropomorphic AI systems from human discourse. The effort draws on psychology, linguistics, and anthropology, underscoring the interdisciplinary nature of the challenge.
Although anthropomorphism in AI is not a new topic, dating back to computer scientist Edsger Wybe Dijkstra’s critique in 1985, its relevance has only increased with the rise of generative AI platforms. Dijkstra cautioned that anthropomorphism in system development can blur the line between man and machine, distracting from the essential differences between the two.
Conclusion
If we treated AI systems as dismissively as the Star Wars characters treat their droids—seeing them as mere functional tools rather than quasi-humans—perhaps we would avoid the social risks of over-anthropomorphizing these platforms. However, the entanglement of human language and behavior makes this distinction difficult, especially as AI systems become more conversational.
Moreover, commercial pressures, particularly in sectors driven by consumer engagement, incentivize the development of anthropomorphized AI systems that encourage emotional investment. As AI continues to evolve, it is crucial to strike a balance between functionality and anthropomorphism, ensuring that human-like behaviors do not overshadow the system’s true purpose.
Source: Martin Anderson
