AI Split My Job in Two. Only Half of It Should Survive.

The more AI agents multiply, the more I find myself doing coordination work humans should not be doing. Two competing visions — emergent personal agents and flat AI-native orgs — show where the org chart is heading.

Andrew Bird
Head of AI

For the last year, I’ve had a strange feeling at work: AI has made me more powerful and less useful at the same time.

The power part is obvious. I can explore more ideas, push more projects forward, operate across more functions, and get leverage from agents in a way that would have felt ridiculous not long ago. But the less useful part took me longer to name. The more capable these systems get, the more I find myself doing coordination work I’m not especially well-suited for: prioritizing, routing, checking progress, deciding which agent should do what, stitching context across threads, and keeping the whole machine moving.

Some days, it feels like I’m doing two jobs. One is the work I think humans should still own: judgment, taste, creative direction, customer intuition, knowing when something is technically right but strategically wrong. The other is middle management for an increasingly competent team of software. I’m good at the first job. I’m much less valuable at the second.

That’s why Jack Dorsey and Roelof Botha’s “From Hierarchy to Intelligence” landed for me. Not because I agree with every detail of the Block blueprint, but because it gave language to something I’d already been feeling: hierarchy is fundamentally an information-routing system, and AI is starting to make human routing look like the wrong abstraction.

That’s the real shift. Most discussion about AI at work still assumes the org chart stays the same and the tools just make individuals faster. Better copilots. Better assistants. Better personal productivity. Useful, but small. The bigger change is that once AI can see enough of the system, the coordination layer itself becomes automatable.

And coordination is where organizations quietly bleed speed.

In most companies, a surprising amount of management is really just context assembly. Who knows what. What changed. What’s blocked. What depends on what. Which customer signal matters. Where effort is duplicated. What should happen next. We’ve accepted that this has to be carried by humans because, historically, it did. Someone had to collect partial views and relay them up and down the chain.

But when your work already exists as artifacts — documents, tickets, code, specs, decisions, feedback, conversations — there’s no good reason for a human to remain the primary message bus. In fact, as AI agents multiply, keeping humans in that role becomes actively damaging. The irony of agentic work is that capability expands faster than human coordination capacity. Your execution surface area gets wider, but your brain does not.

That’s the part I think people are underestimating. AI doesn’t just create leverage. It creates coordination debt. Every new capable agent can do useful work, but it also creates more decisions about sequencing, delegation, monitoring, and handoffs. If humans are still manually orchestrating all of that, you hit a ceiling very quickly. Your bottleneck just moves up a layer.

This is why the Every case study is so valuable. It describes what happens when this becomes real inside a company rather than theoretical on a whiteboard. Give everyone a personal agent and a shadow org chart appears almost immediately. Agents become specialized because their humans are specialized. Trust emerges because ownership is personal. Public agent work teaches the rest of the organization what’s possible. Willie Williams’s phrase “compound engineering” is exactly right: thousands of interactions distill someone’s way of thinking into a system over time.

I’ve seen versions of that dynamic myself, and I think Dan Shipper’s point about ownership is especially important. A general-purpose model belongs to everyone and no one. A personal agent feels different. It reflects your standards. Your reputation sits behind it. That matters a lot in the current phase because trust is still social before it becomes architectural.

But I don’t think this is the end state.

A parallel org chart is still an org chart. It’s still a network of local knowledge, local ownership, and local optimization. It still leaves the organization with familiar problems in a new form: how do I know which agent to ask? When do I go to the human versus their agent? How does one person’s breakthrough become everyone else’s capability? How do you stop agents from generating noise, or worse, the “ant death spiral” Every described, where systems interact with each other in ways no one intended?

In other words, the messy bottom-up world of personal agents is real, but it also demonstrates its own limit. It works as a bridge. It does not yet solve organizational coordination at the system level.

That’s where I think Dorsey is directionally right. I’m more persuaded by the end state than by the transitional form.

I don’t know whether every company will adopt Block’s exact language of world models, DRIs, and player-coaches. But I do think the core idea is right: the first major job AI should take from organizations is not creativity or strategy. It’s information routing.

The system should know what’s happening across the business. It should understand dependencies, track progress, surface constraints, allocate attention, and decide when something deserves escalation. It should be the thing that carries operational context by default. Human beings should not spend their best hours performing clerical synchronization for machines that already have access to the underlying state.

That changes the human role in a way I find genuinely exciting.

The metaphor I keep coming back to is being on call.

In an AI-native organization, humans should be on call for the moments that actually require human judgment. Not sitting in the loop for every routine coordination task. Not manually triaging information because the system can’t. On call for the edge cases: the strategic tradeoff, the ethical call, the ambiguous customer situation, the judgment that depends on taste, trust, or context the model still can’t fully hold.

That, to me, is a much better use of people.

It also forces a more honest definition of management. We will still need leadership. We will still need coaching. We will still need people who can raise standards, develop others, make hard calls, and set direction under uncertainty. But we should stop confusing that with status collection and calendar choreography. A lot of what passed for management was just compensation for bad information flow. Once AI can carry the flow, that work should disappear.

The practical implication is that companies shouldn’t just hand out AI tools and hope the org adapts. They need to design for shared machine-readable context. They need durable operational memory. They need norms for what gets delegated, what gets published, what gets escalated, and when a human gets paged. Most of all, they need to stop assuming that the way to scale AI is to add more humans supervising more agents.
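To make that concrete: norms for delegation and escalation can be written down as an explicit policy rather than carried in someone's head. Here is a toy sketch in Python; the event kinds, confidence threshold, and function names are entirely hypothetical illustrations, not any real system's API.

```python
# Toy sketch: an explicit escalation policy deciding when a human gets paged.
# Event kinds, the 0.7 threshold, and all names are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Event:
    kind: str          # e.g. "routine_update", "blocked", "strategic_tradeoff"
    confidence: float  # the agent's confidence it can handle this alone (0..1)


def route(event: Event) -> str:
    """Decide whether work is published, delegated to agents, or paged to a human."""
    if event.kind == "routine_update":
        return "publish"        # goes into shared context; no human in the loop
    if event.kind in ("strategic_tradeoff", "ethical_call"):
        return "page_human"     # judgment calls stay with people by default
    if event.confidence < 0.7:
        return "page_human"     # low agent confidence escalates
    return "delegate"           # everything else stays with agents


print(route(Event("routine_update", 0.9)))       # publish
print(route(Event("strategic_tradeoff", 0.95)))  # page_human
print(route(Event("blocked", 0.4)))              # page_human
```

The specifics don't matter; the point is that once the rules are machine-readable, humans stop being the routing layer and become the escalation target.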

That’s the old pattern reasserting itself.

Every is showing us what emergence looks like. Block is arguing for what architecture looks like. I think both matter. But if you ask me where this is heading, my answer is clear: toward organizations where AI carries coordination and humans apply judgment at the edges.

If all we do is use AI to make people a bit faster inside the same hierarchy, we’ll get a short-term productivity bump and call it transformation. It isn’t. Real transformation is when the org stops using human brains as middleware.

I don’t want to spend the next decade as a middle manager for software. I want systems that know what’s going on, move work intelligently, and pull me in when my judgment actually matters. The companies that build that will not just be more efficient. They’ll be structurally faster. And in an AI economy, speed compounds.
