The Language We Use to Describe AI Shapes Who Gets to Control It
When we call automated systems "intelligent" or say they "learn" and "understand," we're not just being imprecise. We're shifting responsibility from the people who build and deploy these systems to the machines themselves.
This research paper by Nanna Inie, Peter Zukerman, and Emily M. Bender provides the first systematic taxonomy of anthropomorphic AI language, along with concrete strategies for changing that language so that power and accountability stay visible.
Eight ways we anthropomorphize AI
The researchers analyzed 1,368 sentences from news articles, company blogs, and academic papers. They found eight patterns, sketched as a small lookup table after this list:
Cognizer — "The AI understands" vs. "The system processes patterns"
Products of cognition — "The model has skills" vs. "The model performs functions"
Emotion — "The AI struggles" vs. "The system produces errors"
Communication — "ChatGPT explains" vs. "ChatGPT generates text"
Agent — "The AI helps users" vs. "Users operate the tool"
Human role — "AI tutor" vs. "automated tutoring system"
Names/pronouns — "Ask the AI" vs. "Query the system"
Biological metaphors — "Neural networks learn" vs. "Weighted networks process data"
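One way to keep the taxonomy at hand is to encode it as a lookup table. A minimal sketch in Python follows; the category names and example phrasings come from the list above, but the data structure and printing loop are this post's illustration, not an artifact released by the authors.

```python
# Illustrative encoding of the paper's eight-category taxonomy.
# Each category maps to an (anthropomorphic, functional) example pair
# taken from the list above.

TAXONOMY = {
    "cognizer": ("The AI understands", "The system processes patterns"),
    "products_of_cognition": ("The model has skills", "The model performs functions"),
    "emotion": ("The AI struggles", "The system produces errors"),
    "communication": ("ChatGPT explains", "ChatGPT generates text"),
    "agent": ("The AI helps users", "Users operate the tool"),
    "human_role": ("AI tutor", "automated tutoring system"),
    "names_pronouns": ("Ask the AI", "Query the system"),
    "biological_metaphor": ("Neural networks learn", "Weighted networks process data"),
}

# Print each category with its anthropomorphic phrasing and the
# functionality-first alternative.
for category, (anthropomorphic, functional) in TAXONOMY.items():
    print(f'{category:>22}: "{anthropomorphic}" -> "{functional}"')
```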
Why this matters
Anthropomorphic language shifts accountability away from human decision-makers:
"The algorithm decided to deny the loan" hides who chose to deploy the system and who profits from it
"ChatGPT assists students" suggests the tool acts independently, not that students choose to operate it
"The AI is biased" implies the machine is at fault, not that engineers made choices about training data
The real-world cost
Anthropomorphic framing has measurable consequences:
A man developed bromide toxicity after following ChatGPT's "advice" to replace salt with sodium bromide
In military simulations, people reversed correct decisions when an anthropomorphized system disagreed with them
A woman was hospitalized for delusions after a chatbot marketed as a "companion" reinforced her belief she was communicating with her deceased brother
These aren't edge cases. They show what happens when language presents pattern-matching as understanding.
What to do instead
The researchers propose a "functionality-first principle": describe what systems do, not what capabilities they supposedly have.
Change this:
"The AI learns" → "Engineers train the model"
"The model understands" → "The system generates outputs based on patterns"
"ChatGPT helps" → "Users operate ChatGPT"
Why it matters:
"The algorithm is biased" suggests the machine is at fault.
"The algorithm reflects biases in training data" asks: Whose data? Who selected it? Who benefits?
The second version makes governance possible.
Key governance lessons
Anthropomorphic language shifts accountability from people to machines
When we say "the AI decided," we hide who deployed it and who profits from it.
Different metaphors serve different vendor interests
"Understanding" suggests competence. "Learning" suggests adaptability. "Helping" suggests benevolence. All obscure what the system actually does.
Citizens can't govern systems they can't describe accurately
If AI "thinks" and "reasons," governance looks like a job for specialists. If it processes patterns in data, anyone can ask: whose data? Who benefits?
Changing language is governance work
Before institutions can implement accountability mechanisms, they need language that makes power visible.
Functionality-first framing centers human decisions
"Engineers trained the model using X data for Y purpose" makes deployment decisions contestable. "The AI learned" does not.
Where to start
The next time you read or write about AI, notice the verbs. Does the language locate decisions with people or with machines? The researchers' eight-category taxonomy gives you a diagnostic tool. Use it to audit vendor claims, procurement documents, or your own communications; a minimal audit script is sketched below. Language that hides power serves the people who hold it. Language that makes power visible makes governance possible.
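As a starting point for that audit, here is a minimal sketch in Python. The subject and verb lists are hand-picked assumptions for illustration, not the paper's method; the taxonomy above, not this word list, is the real diagnostic. The script simply flags sentences that pair a machine subject with a cognitive verb.

```python
# Minimal verb audit: flag sentences where "AI", "model", "system", etc.
# appears as the subject of an anthropomorphic verb. Both lists below are
# illustrative starter sets, not the researchers' instrument.

import re

ANTHROPOMORPHIC_VERBS = [
    "understands", "knows", "learns", "thinks", "reasons", "believes",
    "wants", "decides", "explains", "helps", "struggles",
]

SUBJECTS = r"(?:AI|model|system|algorithm|chatbot|ChatGPT)"
PATTERN = re.compile(
    rf"\b{SUBJECTS}\b\s+(?:{'|'.join(ANTHROPOMORPHIC_VERBS)})\b",
    re.IGNORECASE,
)

def audit(text: str) -> list[str]:
    """Return sentences that pair a machine subject with a cognitive verb."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if PATTERN.search(s)]

sample = (
    "The AI understands your intent. "
    "Engineers trained the model on licensed data. "
    "ChatGPT explains the answer."
)
for hit in audit(sample):
    print("flagged:", hit)
```

Running this flags the first and third sentences but not the second, which already locates the action with the engineers; a real audit would extend the word lists and track which of the eight categories each hit falls under.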
Download the research below