Should we be kind to machines (for our own sake, really)?

We know AI isn’t conscious. We know it has no feelings, no preferences, no skin in the game. We say “please” and “thank you” anyway.
The way we talk to technology has always said something about us. We barked commands at early voice recognition software. We typed queries into search engines like telegrams. Now we chat, negotiate, apologise and occasionally vent to systems that, by any reasonable measure, couldn’t care less. What’s changed isn’t just the technology. It’s the tone.
Whether our interactions with AI shape how we talk to actual humans is something researchers have been chasing for years. Voice assistants brought it into focus a decade ago, and large language models have given the matter a new urgency. The answers are more layered and more consequential than the question might initially suggest, especially for teams building these products.
It started long before Siri
To understand where we are now, it helps to go back to 1966 and a program called ELIZA. Created at MIT by computer scientist Joseph Weizenbaum, it was designed to mimic a specific style of therapist: one who responds mostly by reflecting your own words back at you as questions. Basic mechanics. No intelligence, just pattern-matching. On paper, it shouldn’t have worked.
Somehow, users opened up, shared intimate details, and treated the machine as though it understood them. The reaction was so intense that Weizenbaum’s own secretary reportedly asked him to leave the room so she could speak with it privately. He later wrote that he had not anticipated how brief exposure to a simple program could produce “powerful delusional thinking in quite normal people.”
This became known as the ELIZA effect. It describes our tendency to project emotional depth onto machines that just echo us back. Weizenbaum was so disturbed that he spent the rest of his life as one of technology’s most persistent critics, publishing Computer Power and Human Reason in 1976 as a sustained argument for keeping humans in the loop. The irony, of course, is that the decades since have produced ever more convincing versions of exactly what troubled him.
The mirroring instinct
Jump to the smart speaker era and the ELIZA effect is very much alive. People routinely say “please” and “thank you” to their voice assistants. They simplify their language when the device struggles, much as they might with a young child. The treatment isn’t just verbal. A Nielsen Norman Group study found that users assign gendered pronouns to their assistants and reach instinctively for human metaphors when explaining what the device just did.
Linguists call the underlying process entrainment. It’s the natural tendency to match the rhythm, vocabulary and register of whoever you’re talking to, and this happens across interactions with animated personas, social robots and virtual tutors alike. You start using the vocabulary your AI uses. Intonation adjusts. Speech rate slows. We adapt to our machines without noticing, in the same ways we adapt to each other.
Whether those adaptations stay confined to the interaction, or travel with us when we step away, is the harder question. Some experts argue that conversational AI has far more potential to shape everyday speech than passive media like television ever did. Television speaks at you. A system that responds creates a feedback loop, and feedback loops have a way of getting under the skin.

The polite, the brusque, and the in-between
How people approach AI varies considerably with gender and generation. A Pew Research Center survey of US adults found that 62% of women say “please” to their smart speaker at least occasionally, compared to 45% of men. Women tend to adopt a more conversational style with AI; men lean towards treating it as a tool. Direct, functional, no pleasantries required.
Generation shapes it further. Younger users, particularly Gen Z, engage with AI more personally and more intensely than older age groups. They are significantly more likely than baby boomers to use chatbots to dig into a subject in depth, while boomers mostly reach for AI to answer factual questions.
The more striking gap, though, is in emotional use. Common Sense Media asked over a thousand US teens in 2025 and found that a third have chosen AI companions over humans for serious conversations. This has been referred to as “social offloading”, and it doesn’t stop at school. Another finding from the same year shows that over half of Gen Z workers in the US use AI to plan how to talk to a boss or colleague.
At root, this is rehearsal. Young people are using AI to think through difficult conversations before they have them. Salary negotiations, feedback sessions, arguments with friends. In some ways this seems sensible. It’s low-stakes practice. But there’s a catch.
When a person’s messages to friends or partners are drafted or polished by AI, the recipient is responding to a version of that person they’ve never actually met. Repeated reliance also erodes confidence in one’s own voice, making unscripted exchanges feel riskier by comparison. One worry that comes up again and again is that early dependence on AI for emotional support may make face-to-face encounters feel harder, not easier, over time.
At the other end of the spectrum, older adults are generally less inclined to continue using chatbots over time, and those who do prefer them for functional rather than social purposes. Across all of this, what emerges is not a uniform effect but a set of very different relationships with the same technology, shaped by habit, comfort and what people are looking for when they open a chat window.
Growing up with Alexa
The clearest evidence of how children’s behaviour shifts around these devices comes from inside the home.
Researchers tracking 128 families over two and a half years found that the closer children felt to their voice assistant, the more commanding and verbally abusive their communication towards it became. The concern the authors raised is practical: sustained exposure to these patterns could carry over into how kids treat the people around them. Smart speakers don’t push back. No raised eyebrows, no gentle corrections, no social consequences. Amazon built an optional “Magic Word” feature into Alexa to reward children for saying please. That it’s opt-in tells you something about where the default sits.
For adults, things look more reassuring, though not entirely settled. Brigham Young University surveyed 274 college students and found no significant evidence that treating voice assistants poorly was making them ruder in other contexts. It turns out we get better at compartmentalising with age. Whether that holds as AI improves at simulating genuine human connection remains to be seen.

The case for keeping your manners
Maintaining politeness with AI is worth it in the long run, and not because the machine notices. Sherry Turkle, a clinical psychologist at MIT, has argued that courtesy towards AI is “a sign of respect, not to a machine, but to oneself.” The risk she identifies isn’t that ChatGPT will sulk. It’s that habitual bluntness becomes, well, just a habit. One that seeps into other interactions without anyone quite noticing.
There’s another angle to keep in mind. Polite prompts tend to produce higher-quality responses from large language models, mostly because courtesy pushes users to add context, think more precisely, and phrase requests with greater care.
A team at Waseda University and RIKEN AIP also showed that the ideal level of formality varies by language, meaning cultural norms are baked into how these systems interpret requests. There is a ceiling, though. Excessive flattery produces diminishing returns and can confuse a model. The sweet spot sits somewhere in the middle, roughly the way you’d speak to a capable colleague you’ve just been introduced to.
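To see why the phrasing matters, here is a minimal sketch (assuming the OpenAI Python SDK and an API key in the environment; the model name is a placeholder, not part of the research above) that sends the same request twice, once brusquely and once politely with more context, so the responses can be compared side by side.

```python
# Minimal sketch: compare a brusque prompt with a polite, context-rich one.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

prompts = {
    "brusque": "Explain entrainment.",
    "polite": (
        "Hi! Could you please explain what linguists mean by entrainment, "
        "in two or three sentences, for a design-minded reader? Thank you."
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

In practice, most of the difference comes from the extra context the polite version carries (audience, length, focus) rather than the “please” itself, which is exactly the point the research makes.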
The same Pew survey from 2019 also found that more than half of smart speaker owners say “please” to their device. That share has climbed since. More recently, Fortune reported that around 80% of UK and US users prefer to be polite to AI platforms. For most of them, it is simply reflex. Politeness as a social script is so deeply ingrained it activates regardless of who, or what, is on the other end. Whether conscious or not, that reflex may be reinforcing something useful: the idea that how we speak says something about who we are, whatever the audience.
The argument against maintaining two versions of yourself, one civil and one not, depending on whether you face a screen or a person, isn’t merely ethical. It’s cognitive. Switching between two modes of speech takes effort. A single, consistent standard is easier to sustain, and arguably more honest about what communication is for.
Conversational design is never neutral
All of this points to a question designers can’t easily sidestep. Every interaction pattern designed into a conversational product is, in some sense, a choice about what kind of communication to model and normalise.
The default female names and voices assigned to most major voice assistants (Siri, Alexa, Cortana) did not happen by accident. The Brookings Institution has noted that these tools are typically coded to be pleasant, helpful and unfailingly compliant: a combination that reflects, and arguably reinforces, cultural associations of women with service roles and emotional labour. If technology functions as a socialisation tool (and there is reasonable evidence that it does), then the norms it models carry real weight. Treating the personality of a conversational product as a superficial styling choice is missing the point entirely.
Whose voice gets understood matters just as much. Carnegie Mellon researchers identified six downstream harms caused by voice assistant errors, including emotional, relational, financial and time costs.
The relational kind describes the small frictions between people that follow when the device mishears them and gets a message wrong. For people whose accents fall outside the narrow band these assistants were trained on, being misunderstood isn’t an occasional problem; it’s a built-in one. Research has shown that persistent misrecognition can prompt people to alter their own speech to fit what the device expects, a kind of adjustment that says a lot about who the technology was built for.
When a product consistently fails to understand certain people, it doesn’t just inconvenience them. It sends a signal about whose voice counts as the default.
When the machine becomes a companion
Manners are the easier part of all this. There’s a bigger concern underneath: what does extended AI use do to our capacity for human connection? In a four-week experiment, heavier daily chatbot use went hand in hand with greater loneliness, greater emotional dependence, and less time spent with others. Other long-running research points the same way: people with fewer human relationships are more likely to turn to AI chatbots, and those who confide in them most tend to report lower well-being.
A recent paper in New Media & Society gave this dynamic a pointed name: cruel companionship. The phrase borrows from Lauren Berlant’s idea of “cruel optimism”, and the argument behind it is sharp. AI companions promise intimacy and connection, but the way they’re made makes the messier work of real relationships harder, not easier. People start to treat AI not as a top-up but as a substitute. In doing so, they stop using the muscles real-life relationships need: patience, reciprocity, the willingness to be misunderstood and work through it. For some, the net effect deepens rather than eases the loneliness that drew them in.
None of this is a reason to panic about every “thank you” typed into a chat window. But it is a necessary counterweight to the politeness argument. Whether we talk to machines the way we talk to each other is partly about language. At a deeper level, it’s about what we keep practising, and what we quietly let atrophy.

So, does it change us?
Probably yes, though how much depends on who you are, how old you were when you started, and what you’re using it for.
For children, the evidence warrants genuine caution. Among adults, that kind of crossover is harder to pin down, but the picture around isolation and dependence at heavy usage levels is not easy to set aside. For Gen Z, something more particular seems to be happening. AI is increasingly being used as a social prosthetic, a way to prepare for, draft, or avoid the friction of real conversation. Whether that builds confidence or erodes it depends on how far the offloading goes.
Turkle’s reframing offers the most useful lens here. Talking to AI changes not only how we speak to other people, but what we practise, what we come to expect, and what we find ourselves tolerating. Maintaining courtesy in AI interactions keeps certain habits alive. Treating every exchange as a frictionless transaction, because nothing is at stake, may quietly normalise bluntness we wouldn’t accept from each other.
Weizenbaum built ELIZA to demonstrate the superficiality of human-machine interaction. What it revealed instead was the depth of the human impulse to connect. Sixty years on, we are still working through that finding. The machines, meanwhile, have become considerably more compelling to talk to.
If you enjoyed this, follow me on Medium for more on design, psychology and technology.
References & Credits
Arora, A. & Arora, A. (2022). Effects of smart voice control devices on children: current challenges and future perspectives [commentary]. Archives of Disease in Childhood. https://adc.bmj.com/content/107/12/1129
Brookings Institution (2019). How AI bots and voice assistants reinforce gender bias. https://www.brookings.edu/articles/how-ai-bots-and-voice-assistants-reinforce-gender-bias/
Burton, N. & Gaskin, J. (2019). Are Siri and Alexa making us ruder? Presented at the Americas Conference on Information Systems. BYU News. https://news.byu.edu/intellect/are-siri-and-alexa-making-us-ruder-the-answer-is
Common Sense Media (2025). Talk, trust, and trade-offs: how and why teens use AI companions. https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions
ELIZA — Wikipedia. https://en.wikipedia.org/wiki/ELIZA
Fang, C.M. et al. (2025). How AI and human behaviors shape psychosocial effects of extended chatbot use: a longitudinal randomised controlled study. arXiv:2503.17473. https://arxiv.org/abs/2503.17473
Miller, P. (2026). Gen Z is using ChatGPT to practise salary negotiations and tough conversations. Fortune. https://fortune.com/2026/03/22/gen-z-roleplay-chatgpt-difficult-conversations-work/
Muldoon, J. & Parke, J.J. (2025). Cruel companionship: how AI companions exploit loneliness and commodify intimacy. New Media & Society. https://journals.sagepub.com/doi/10.1177/14614448251395192
Nielsen Norman Group (2019). Intelligent assistants: users’ attitudes toward Alexa, Google Assistant, and Siri. https://www.nngroup.com/articles/voice-assistant-attitudes/
ÖIAT / Saferinternet.at (2026). AI chatbots as everyday companions for young people. https://better-internet-for-kids.europa.eu/en/news/new-study-ai-chatbots-everyday-companions-young-people
Pew Research Center (2019). Americans and their smart speakers: 5 findings about their views and habits. https://www.pewresearch.org/short-reads/2019/11/21/5-things-to-know-about-americans-and-their-smart-speakers/
Resume Genius (2025). Gen Z and AI in the workplace report. https://resumegenius.com/blog/career-advice/gen-z-and-ai
Robb, M., quoted in: Gen Z is using AI to navigate social situations. CNN, March 2026. https://www.cnn.com/2026/03/07/health/gen-z-ai-conversations-wellness
Science Media Centre (2022). Expert reaction to Arora commentary on voice assistants and child development. https://www.sciencemediacentre.org/expert-reaction-to-an-opinion-piece-on-voice-controlled-devices-and-child-development/
Szczuka, J.M. & Krämer, N.C. (2025). Alexa, shut up! A 2.5-year study on negatively connotated communication behaviour towards voice assistants in the family home. Behaviour & Information Technology. https://www.tandfonline.com/doi/full/10.1080/0144929X.2025.2533352
Székely, É., Miniota, J., & Hejná, M. (2025). Will AI shape the way we speak? The emerging sociolinguistic influence of synthetic voices. Proceedings of IWSDS 2025, Bilbao. arXiv:2504.10650. https://arxiv.org/abs/2504.10650
Turkle, S., quoted in: Chatbots aren’t sentient, but you should be nice to them anyway. Scientific American, July 2024. https://www.scientificamerican.com/article/should-you-be-nice-to-ai-chatbots-such-as-chatgpt/
Weizenbaum, J. (1966). ELIZA: A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://dl.acm.org/doi/10.1145/365153.365168
Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W.H. Freeman.
Wenzel, K. & Kaufman, G. (2024). Designing for harm reduction in voice assistant errors. Carnegie Mellon HCII. https://www.hcii.cmu.edu/news/designing-for-harm-reduction
Yin, Z., Wang, H., Horio, K., Kawahara, D. & Sekine, S. (2024). Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance. Proceedings of SICon 2024. arXiv:2402.14531. https://arxiv.org/abs/2402.14531