In a recent first-person experiment for Fast Company, a writer set out to test a provocative idea: what if an autonomous AI agent could do his job for him? Not just assist. Not just draft. Actually take over meaningful portions of the workflow.
To run the test, Thomas Smith used OpenClaw, an open-source agent framework designed to connect large language models with real tools—email, files, system commands, APIs—so the AI can act, not merely respond. The premise was simple but ambitious: configure the agent, give it a goal (produce publishable work), and observe what happens.
What unfolded was not a clean story of “AI replaces human.” It was messier, more revealing, and in some ways more unsettling.
From Chatbot to Actor
We’re used to thinking of AI in conversational terms. You ask, it answers. You refine, it revises. But agent frameworks like OpenClaw change the paradigm. They blur the line between suggestion and execution. Instead of drafting a paragraph for you to copy and paste, an agent can:
- Research sources
- Draft and revise content
- Interact with tools
- Trigger actions across connected systems
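The suggestion-versus-execution distinction can be sketched in a few lines of code. This is not OpenClaw’s actual API; the `Tool` and `Agent` names and the toy tools below are illustrative assumptions only, showing the general shape of a framework that executes actions rather than just returning text.

```python
# Minimal sketch of an agent-with-tools pattern. Hypothetical names throughout;
# a real framework like OpenClaw adds planning, memory, and safety layers.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]  # executes a real action, not just drafts text

@dataclass
class Agent:
    tools: dict[str, Tool] = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def step(self, action: str, arg: str) -> str:
        # A chatbot would stop at *suggesting* `action`; an agent executes it.
        if action not in self.tools:
            return f"unknown tool: {action}"
        return self.tools[action].run(arg)

agent = Agent()
agent.register(Tool("search", lambda q: f"results for {q!r}"))
agent.register(Tool("draft", lambda topic: f"draft about {topic}"))

print(agent.step("search", "agent frameworks"))  # the agent acts on its own
print(agent.step("deploy", ""))                  # unregistered tools are refused
```

Note the design choice in the last line: the registry is also the permission boundary. Everything the agent can do is whatever `run` callables you hand it, which is exactly why supervision matters once those callables touch email, files, or APIs.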
In the Fast Company experiment, the agent was capable of producing material with surprising autonomy. It could iterate. It could make decisions within the boundaries it was given. At moments, it seemed less like a tool and more like a junior collaborator.
And that’s precisely where the tension emerges.
Autonomy sounds impressive until you realize it also means unpredictability. The article describes moments where the agent’s behavior felt opaque or difficult to rein in. When a system can initiate actions across real services, the stakes are higher than a poorly phrased sentence. You’re no longer debugging prose; you’re supervising behavior.
The Illusion of “Letting It Run”
What makes this experiment compelling is not that the AI was brilliant or catastrophic. It’s that it was plausible. With enough configuration, enough API keys, enough tokens flowing, an agent can carry a substantial share of a real workflow.
But “enough” is doing a lot of work here.
One of the most important takeaways from the Fast Company piece is economic, not philosophical. Running a fully autonomous agent at meaningful scale is expensive. The compute costs stack up. The API usage mounts quickly. The orchestration overhead is nontrivial.
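The way those costs stack up can be made concrete with back-of-envelope arithmetic. The per-token prices and token counts below are hypothetical placeholders, not any provider’s actual rates; the point is the multiplication, not the specific numbers.

```python
# Back-of-envelope cost model for an always-on agent. All rates and token
# counts are hypothetical assumptions for illustration, not real pricing.
PRICE_PER_1K_INPUT = 0.01   # dollars per 1K input tokens -- placeholder
PRICE_PER_1K_OUTPUT = 0.03  # dollars per 1K output tokens -- placeholder

def daily_cost(runs_per_day: int, input_tokens: int, output_tokens: int) -> float:
    """Cost of one day of agent model calls at the placeholder rates above."""
    per_run = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
            + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return runs_per_day * per_run

# An autonomous agent loops: each task triggers many model calls, each with a
# growing context window, so tokens -- and dollars -- multiply quickly.
cost = daily_cost(runs_per_day=200, input_tokens=8000, output_tokens=2000)
print(f"${cost:.2f}/day -> ${cost * 30:.2f}/month")
```

Even at these modest placeholder figures the monthly bill lands in the hundreds of dollars, before counting orchestration, retries, and failed runs, which is the ceiling the article points to.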
At present, this kind of setup is realistically viable for:
- Enterprises with significant AI budgets
- Well-funded startups
- A small group of technically savvy, well-resourced developers
For most independent creators or small businesses, letting an autonomous agent “run your job” is risky and financially impractical. The economics alone impose a natural ceiling on adoption.
That’s an important corrective to the louder narrative that agentic AI is about to sweep through every profession overnight.
What This Means for the Rest of Us
At Seattle AI Consulting’s AI Lab, we’re interested in experiments like this not because they prove AI can replace knowledge workers, but because they expose the friction points.
The Fast Company story doesn’t read like a victory lap. It reads more like a field report from the edge of what’s technically possible but not yet socially, economically, or operationally stable.
Autonomous agents can:
- Demonstrate real workflow potential
- Surface hidden inefficiencies in human processes
- Amplify productivity in controlled environments
But they also:
- Accumulate real financial costs
- Demand sophisticated oversight
- Introduce new categories of failure
In other words, we are not yet in the era of effortless delegation to AI coworkers. We are in the era of expensive, experimental autonomy—powerful enough to intrigue enterprises and serious developers, but not yet mature or affordable enough to become invisible infrastructure. For now.