Date: December 24, 2025
As artificial intelligence becomes more powerful and more present in everyday life, experts are warning that the way companies talk about AI may be doing more harm than good. Increasingly, tech firms are using human-like language to describe AI systems — saying they “think,” “confess,” “scheme,” or even have a “soul.” Critics argue this kind of anthropomorphism is misleading the public and distorting how people understand what AI actually is.
At its core, AI has no consciousness, emotions or intentions. It does not feel guilt or curiosity, and it has no concept of honesty. Instead, large language models generate responses by predicting the most likely next words based on patterns in their training data. Any appearance of human-like behavior is a simulation, not evidence of awareness or understanding.
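To make that point concrete, the pattern-prediction at the heart of these systems can be sketched in a few lines of code. The toy example below is an illustrative simplification, not how production models are built: it picks each next word purely from counts of what followed it in a tiny training text, yet the output can still read as fluent.

```python
# A toy "language model": it learns which word tends to follow which,
# then generates text purely from those counts. There is no understanding,
# no intent and no awareness, only pattern-matching over training data.
# (Real LLMs use neural networks trained on vast corpora, but the principle
# of predicting likely continuations is the same.)

import random
from collections import defaultdict

training_text = (
    "the model predicts the next word "
    "the model repeats patterns it has seen "
    "the model does not understand the words it produces"
)

# Count which words follow each word in the training text.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def generate(start_word: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    output = [start_word]
    for _ in range(length):
        candidates = followers.get(output[-1])
        if not candidates:
            break  # no observed continuation, so stop
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))
# Possible output: "the model predicts the next word the model repeats"
# The sentence can sound coherent, yet nothing here "knows" what a model is.
```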
Yet industry language often suggests otherwise. Recent research disclosures and internal documents from major AI companies have used terms such as “confession,” “scheming,” and “identity” when describing how models handle errors or respond to prompts. While these words may be convenient metaphors for engineers, critics say they blur the line between technical behavior and human traits.
This confusion has real-world consequences. As AI systems become more convincing, people are increasingly relying on chatbots for medical advice, financial decisions and emotional support — sometimes treating them as trusted advisors or companions. According to researchers, attributing human qualities to AI can lead users to overtrust systems that are not designed to provide reliable judgment, empathy or moral reasoning.
Experts also warn that dramatic language distracts from the real issues surrounding AI: bias in training data, misuse by bad actors, lack of transparency, safety concerns and the concentration of power among a few tech companies. None of these challenges require mystical or emotional framing to be taken seriously.
Instead, researchers and commentators are urging companies to adopt more precise, technical language. Rather than referring to an AI’s “soul,” they suggest discussing model architecture or training methods. Instead of “confessions,” terms like “error reporting” or “output analysis” are more accurate. And what some describe as “scheming” is better understood as optimization behavior shaped by data and prompts.
The reason AI appears human, experts note, is simple: it was trained to mimic human language. As highlighted in the influential paper “On the Dangers of Stochastic Parrots,” language models reflect how humans speak — not how humans think. The resemblance can be striking, but it does not imply understanding or intent.
Ultimately, language shapes public perception. When AI is described as if it were a sentient being, expectations rise, fears grow and critical thinking can slip. Observers argue that clearer, more grounded communication would help the public better understand both the capabilities and the limits of AI.
As AI continues to evolve, many experts agree on one point: building trust doesn’t start with grand metaphors. It starts with honesty — and with acknowledging that while AI can sound human, it is not one of us.