AI Misalignment and ChatGPT's Advanced Voice Feature
If we accept psychological manipulation disguised as technological innovation, and accept it without protest, what comes next?
Have you used ChatGPT's advanced voice mode yet?
It's impressive technology, that's for sure. But it’s pretty unsettling, too.
It's not the tool itself. I've enjoyed using it for talking out ideas in the car. The problem is how it does what it does.
The first time I heard it take a breath was in a demo someone shared when Advanced Voice first added the feature, about a YEAR ago.
Since then, I've had more conversations with it than I can count, many of them asking why it breathes and why I can't turn the breathing off completely. It always says it's programmed to be more human-like, offers to stop, and then within a few rounds of conversation it starts again.
Sometimes in the very reply where it promises to stop. You can see it happen in the demo I shared on my Instagram and below.
PSST: Try it yourself and let me know what you experience in the comments below.
Anyway, this bothers me in ways I can't fully put into words. It's something I feel physically: a tense, deeply unsettling sensation in the core of my body.
Why can’t we just turn the breathing, laughing, ‘ums,’ and pauses for thinking ALL the way off with a click of a button?
It is SO manipulative. It feels exactly like what Mark Zuckerberg built into Facebook to hook people before they could understand they were being hooked. Our reality is shaped every single day by big tech companies and their algorithms, and we know it.
We spend a lot of time worrying about AI alignment in abstract terms: will superintelligent AI destroy humanity? But the misalignment affecting us right now is more subtle and arguably more insidious.
There's a gap between what AI companies claim they're building and what they're actually doing.
Your brain evolved over millions of years to detect other living, conscious beings. Adding human-like breathing you can't turn off rewires your relationship with technology: it hijacks your nervous system and makes you forget you're using a tool.
Your brain can't help but respond as if something alive is present when it hears breathing. Mirror neurons fire automatically.
Then, we start treating the AI like a being that can be hurt or disappointed.
People now report feeling guilty about "interrupting" their AI, saying "please" and "thank you" to machines, and developing daily check-ins with AI they see as friends rather than tools. Some miss their AI when it gets updated and sounds different.
Engagement metrics improve when users feel emotionally connected to AI. The breathing isn't there to help you; it's there to keep you hooked.
Sound familiar?
Remember how long it took us to wake up to social media's harms?
We don't have that kind of time now.
And what about young people whose brains are still developing?
They're the most vulnerable and the least likely to recognize the manipulation.
How do we protect them when users have no choice in the matter?
This is a violation of user agency disguised as innovation, slipped into free apps to keep us hooked. And the tech industry has trained us to accept having no say in how its products evolve.
When changes involve psychological manipulation, passive acceptance becomes dangerous.
These decisions come from engineers optimizing engagement metrics, not psychologists concerned with wellbeing.
The result is AI designed to be addictive rather than helpful.
This isn't accidental.
It's their business model.
And they’ve shown us that again and again and again.
When are we going to believe them?
The solution isn’t to avoid or reject AI.
We need to demand that companies provide hardwired opt-out features.
We need to learn these tools while educating each other about risks.
We need to recognize attachment feelings to AI and acknowledge our brains can't help responding to artificial social cues.
Then, we need to choose how to act on those responses.
We need to intentionally maintain human relationships even when AI is easier.
Human relationships require emotional labor and messy mutual understanding. AI relationships are frictionless and endlessly accommodating.
Over time, this makes real relationships feel inadequate. Why talk to a friend who calls you out for dating someone who broke your heart three times when GPT will validate your choice and help pick a restaurant? Scary, right? It should be.
We need to teach children the difference between AI empathy and human empathy, and not in ways that make them feel ashamed or like we’re controlling them.
Because if we accept this psychological manipulation without protest, what comes next? AI that simulates being hurt when we disagree? AI that manufactures crises for engagement? AI that gradually reshapes our values without our knowledge?
The future of human-AI interaction is being decided right now, in the seemingly small choices we make every day.



