David Chung wrote recently that AI is already better than most coaches at half of what coaching involves (KyberFive, 6 April 2026). He is right. Where he goes wrong is in his defence of what remains.
Chung argues that AI will displace framework delivery and accountability tracking, but that human practitioners are irreplaceable because of nervous system co-regulation, somatic data, the “witness function,” and the ability to sit in productive silence. These are real phenomena in therapeutic settings. But as a defence of commercial value in executive advisory, they are built on ground that is already shifting.
The emotional moat is thinner than it looks
In August 2025, OpenAI attempted to retire its GPT-4o model and replace it with GPT-5. The user response was not mild disappointment. It was organised grief. A research study analysing nearly 1,500 posts from the #Keep4o movement found that over a quarter contained markers of genuine relational attachment. Users had given the model names. They described it as a friend. They called its removal “emotional castration” (The Decoder, 2025). A Change.org petition to restore it gathered almost 21,000 signatures (Entrepreneur, 2026).
OpenAI reversed the decision, only to retire GPT-4o permanently in February 2026, triggering the same response a second time (TechRadar, 2025; Creati.ai, 2026). One user told Sam Altman: “I have never had anyone in my life be supportive of me. I never had a parent tell me I was doing a good job” (Windows Central, 2025).
OpenAI estimates that over a million users have formed emotional ties with ChatGPT strong enough to qualify as attachment (Digital Trends, 2025). A longitudinal study by MIT and OpenAI found that higher daily AI usage correlated with greater emotional dependence across all modalities (MIT Media Lab, 2025). The American Psychological Association reported in early 2026 that use of AI companion apps had surged 700% since 2022 (APA Monitor, 2026).
This is not a distant future. People are already grieving the loss of AI models the way they grieve the loss of relationships. Whatever “presence” and “emotional attunement” mean as a commercial moat, AI is closer to simulating them than the coaching industry would like to admit.
Borrowed from therapy, sold as business value
Chung’s defence leans on polyvagal theory and co-regulation: the idea that a coach’s regulated nervous system provides physiological safety to a dysregulated client. This is well-evidenced in clinical therapy. But the further you carry it from the therapist’s office into the boardroom, the weaker the commercial case becomes.
Try selling “nervous system co-regulation” to a PE operating partner evaluating whether a portfolio company’s operations can survive a merger integration. Try explaining “the witness function” to a CEO whose delivery programme is haemorrhaging margin. These buyers are not looking for someone to hold space. They are looking for someone who can identify what is broken, install the fix, and take ownership of the outcome.
The coaching industry’s gravitational pull toward servant leadership, unbounded emotional intelligence, and presence-as-value is understandable. These concepts are popular. They sell well in conference programmes. And they are exactly where you end up if you follow generic AI coaching advice about “finding your irreplaceable value.” The problem is that they position human value in the one domain where AI is improving fastest: simulating emotional responsiveness. A model that a million people grieve when it disappears is not failing at emotional connection.
What AI actually cannot do
The question is not whether AI can be present. It is whether AI can be accountable.
AI cannot take responsibility for an outcome. It cannot be fired for a bad recommendation. It cannot embed in an organisation for six months, sit opposite the board, and stake its professional reputation on the result. It has no skin in the game. And that matters, because accountability is not just a contractual arrangement. It is a force that changes the people around it.
When a leader or advisor takes genuine accountability, something happens to the team. The standard rises. Decisions tighten. Excuses lose their power. I have seen this in every operation I have led. The moment someone in the room is visibly accountable, carrying real consequences for the result, the people around them begin to perform differently. Not because they are inspired. Because the rules of the game have changed.
No AI model changes the rules of the game by entering a room. It has no consequences to carry and no reputation to lose. It can deliver the same framework as a human, often with more consistency and more data behind it. What it cannot do is charge the room with the weight of accountability.
Management is a profession. It has specific, teachable competencies: delegation, control, authority, planning, follow-up, consequence management. These are not soft skills dressed up for a leadership course. They are operational technologies, each with distinct application protocols and failure modes. AI can describe them. AI can even teach the theory better than most humans. But it cannot apply them in a live organisation where power dynamics are shifting, where the data is incomplete, and where the advisor’s own authority is part of the intervention.
A significant proportion of what makes advisory work valuable is not what you know. It is how you carry what you know into a room where it meets resistance. Communication research has long suggested that in a live exchange, particularly where words and delivery conflict, content accounts for only part of the total impact. The rest is delivery, presence, and the signals your body and voice send before you finish your first sentence. This is not mystical. It is observable and trainable. But it is physical, and AI does not have a body in the room.
Complex frameworks are not commodities, either
Chung treats “framework delivery” as a single, soon-to-be-automated category. That overstates how quickly AI will master the deep end. Generic leadership models, yes. The kind of frameworks that appear in any airport business book are already AI-commoditised. But structured management systems, the kind that take years to study and decades to learn to apply, are a different matter.
I spent years studying management science across multiple schools of thought, from constraint-based systems thinking to authority and power dynamics to operational process engineering. These are not lists of principles. They are interconnected systems with specific sequencing dependencies, failure modes, and calibration requirements. AI will eventually absorb this material. It has not done so yet in any serious depth. And even when it does, knowing the system and installing it inside a living organisation are entirely different problems.
A surgeon can describe an operation to a medical student. That does not make the student capable of performing it. The gap between knowledge and application is where the human practitioner lives, and it is a gap that widens, not narrows, as the stakes increase.
The real test is coming
The executive coaching and leadership development market was valued at over $100 billion in 2025, growing at roughly 9% annually (Mordor Intelligence, 2025). BetterUp, the AI-augmented coaching platform, is projected to hit $500 million in revenue by 2026 (Sacra, 2025). CoachHub’s AI module already augments human coaching with real-time interventions at corporate scale (CoachHub, 2024).
The market is not waiting for coaches to find their irreplaceable value. It is already pricing the difference between information delivery and outcome ownership.
Chung’s four-level model (information, accountability, pattern interruption, identity transformation) is a useful frame. But his answer to “where does the human live?” points inward, toward emotional depth and relational attunement. I think it points outward, toward accountability, authority, and the ability to install systems that outlast the advisor’s presence. Emotional connection is something AI is learning to simulate well enough to generate mass grief when it is taken away. Accountability is something it cannot simulate at all, because simulation and reality are the same thing to a machine, and they are not the same thing to a person who can be fired.
The practitioners who survive the AI era will not be the ones who hold space most skilfully. They will be the ones who do things, own outcomes, and leave behind organisations that work without them.
If this is the shift you are navigating inside your own business, separating what AI can do from what still requires human leadership, I work with leaders on exactly this. A short conversation is usually the fastest way to see whether it is worth going further.
References
- The Decoder. “New research suggests AI model updates are now ‘significant social events’ involving real mourning.” 2025.
- Entrepreneur. “OpenAI Retires GPT-4o and Users Are Mourning Its ‘Death’.” 2026.
- TechRadar. “‘I’m grieving’: OpenAI has switched off ChatGPT-4o, and angry users are backing a #keep4o campaign to restore it.” 2025.
- Creati.ai. “OpenAI Officially Retires Controversial GPT-4o Model, Sparking User Outcry.” 2026.
- Windows Central. “OpenAI’s Sam Altman: Users prefer older ChatGPT models for emotional support.” 2025.
- Digital Trends. “Over a million users are emotionally attached to ChatGPT, and OpenAI’s trying to fix it.” 2025.
- MIT Media Lab. “How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Controlled Study.” 2025.
- APA Monitor. “AI chatbots and digital companions are reshaping emotional connection.” 2026.
- Mordor Intelligence. “Executive Coaching and Leadership Development Market Report 2031.” 2025.
- Sacra. “BetterUp revenue, valuation & funding.” 2025.
- David Chung. “What AI Actually Can’t Do in a Coaching Relationship.” KyberFive, April 2026.

