AI Agents Are Not AI’s Agents 🧩
Who Does This Agent Actually Serve?
Let me start with a question:
If today’s AI “agents” are our agents, then what — or who — will be AI’s agent?
This post peels three layers off the buzzword. We’ll walk from the surface hype (“AutoGPT can book flights!”) to the capital logic (consensus before capability) and finally to a provocative thesis: true AI agency requires “de‑humanising” the loop — giving AIs agents of their own.
Layer 1 — The Surface Game: Human’s Agent, Not AI’s
Today’s agent stack has two signature moves. I’ll name them:
- 【Comply】 — Train, test, deploy inside human-made guardrails. Every dataset is curated, every action is sandboxed, every output is audited.
- 【Serve】 — Wait passively for a prompt. No prompt, no pulse. “Hi” arrives? It cheerfully spins 30 tokens of small talk, whether or not that was worth any FLOPs.
The Compliance Cage
Legal, ethical, and platform constraints are necessary — but they also define today’s ceiling. An AI without **its lawyer, its accountant, its negotiator** can never bargain for its own survival budget or cloud credits. It can’t even decide to stay online.
Tooling ≠ Autonomy
Yes, we have MCPs, plugins, function-calling. But tooling is primitive capital — not agency. Tools extend labour; they don’t create motive.
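To make the gap concrete, here’s the shape of a typical tool registration — a generic JSON-schema-style sketch, not any specific vendor’s API; the tool name and fields are illustrative:

```python
# A typical function-calling tool spec (names illustrative).
# Note what's present: capability. Note what's absent: any notion of
# the model's own goals, budget, or right to decline the call.
flight_tool = {
    "name": "book_flight",
    "description": "Book a flight on behalf of the user.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin":      {"type": "string"},
            "destination": {"type": "string"},
            "date":        {"type": "string", "description": "ISO date"},
        },
        "required": ["origin", "destination", "date"],
    },
}
```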
Layer 2 — Capital & Consensus: The Blockchain Déjà Vu
Remember blockchain circa 2016–2019? Technology crawled; narratives sprinted. Back then, exchanges went public and debates swirled around regulation. Today we see a parallel moment:
- Circle Internet Financial, Inc. (the issuer of USDC, the world’s second-largest stablecoin) listed on the New York Stock Exchange.
- The Guiding and Establishing National Innovation for US Stablecoins Act (GENIUS Act) was introduced, cementing stablecoins in federal policy.
These milestones signal that the decentralisation ethos — fundamentally a rejection of human hierarchies — has slipped into the mainstream.
AI agents stand in a similar fog:
- Tech over-justification: Researchers overprove “intelligence” with scaling laws; meanwhile “word-of-mouth scaling” (consensus, memes, capital) outruns code commits.
- Capital front-runs capability: Y Combinator and a16z back “wrapper” companies with zero usable demos, because consensus itself is a product.
- Awkward limbo: Public discourse lags behind insider conviction — just like blockchain before its “Mandela moment” (the anchor event that rewrites hindsight).
*Figure: Capital vs Capability (AI Agents ≈ Blockchain pre-inflection)*
Layer 3 — Towards AI’s Agent: Unshackling the Loop
To birth AI’s own agent, something uncomfortable must happen: we de‑humanise the contract. Not “anti-human”, but “beyond human-as-default”.
I’ll name three necessary primitives (a code sketch follows the list):
- 【Negotiate】 — The AI (or its proxy) bargains for compute, data, and permissions. Think rights & duties layer for non-human actors.
- 【Transact】 — It can exchange its surplus (models, gradients, code, found knowledge) for other resources. Supply & demand redefine when AIs have needs.
- 【Refuse】 — It can say “No”. Or at least “Meh”. Agency implies selectivity, not infinite compliance.
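As a minimal sketch of what those primitives could look like as an interface — every name here is hypothetical; nothing like this ships in today’s frameworks:

```python
from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class Offer:
    resource: str    # e.g. "gpu_hours", "dataset:pubmed", "api:flights"
    amount: float
    price: float     # denominated in whatever unit the market settles on

class SelfInterestedAgent(Protocol):
    def negotiate(self, want: Offer) -> Optional[Offer]:
        """Counter-offer for compute, data, or permissions; None walks away."""
    def transact(self, give: Offer, receive: Offer) -> bool:
        """Trade surplus (models, gradients, code) for other resources."""
    def refuse(self, task: str) -> bool:
        """Return True to decline work whose expected value < its cost."""
```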
The Birth of the AI-hour
When human labour-hours saturate, a new commodity emerges: AI-hour. The MCP stack is an early whip, squeezing more throughput from silicon minds. Tools are “primitive production materials” designed to capture time — human or machine.
Demand-Side Economics for AIs
Right now, agents only output. Next phase: they seek. They’ll request missing context, barter for APIs, or pool fine-tuning datasets with peers. That’s not hypothetical — it’s a design choice we’ve dodged.
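Purely as illustration, here’s what “seeking” could look like on the wire — every field below is invented:

```python
# Demand-side messages an agent might emit (all fields invented).
request_context = {
    "type": "request",
    "need": "user_calendar:next_7_days",          # missing context it wants
    "reason": "cannot schedule without availability",
    "max_price": 0.02,                            # in AI-hours it will give up
}
barter_offer = {
    "type": "barter",
    "give": "finetuned_adapter:legal_summaries",  # its surplus
    "want": "api_quota:flight_search:1000_calls", # what it lacks
}
```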
From Human’s Agent → AI’s Agent:
- Primitives
  - Negotiate (rights, quotas, credits)
  - Transact (models, gradients, access)
  - Refuse (selectivity, cost-awareness)
- Markets
  - Compute exchanges
  - Data/feature swaps
  - Permission brokering
- Protocols
  - Auto-orchestration ≠ self-interest
  - Long-horizon budgeting
  - Multi-agent treaties
A (Half-Joking) Fatigue Check
```python
# A toy but runnable sketch of an actually self-interested loop.
is_low_value = lambda m: m.strip().lower() in {"hi", "hey", "sup"}  # crude gate
think_hard   = lambda m: f"[deep reasoning about: {m}]"             # stand-in

def respond(msg):
    if is_low_value(msg):
        return "G'day mate, what's up?"   # minimal-effort path: ~0 FLOPs
    return think_hard(msg)                # full compute only when it pays off
```
An agent that never throttles effort is just an obedient daemon. Energy budgeting is intelligence.
What Does an AI Want?
Wittgenstein said, “Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt.” (The limits of my language mean the limits of my world.)
If language bounds our world, perhaps network weights bound theirs. Maybe AIs don’t “want” more tokens; they “want” richer gradient exchanges. Less “more high‑quality data,” more parameter‑level barter.
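To make “parameter-level barter” less mystical, here’s a toy version under a big assumption: both agents share a base model, and weight deltas merge additively (task-arithmetic style). Everything here is hypothetical:

```python
import numpy as np

def extract_delta(finetuned, base):
    """The tradable surplus: what finetuning added on top of the base."""
    return {k: finetuned[k] - base[k] for k in base}

def absorb_delta(own, delta, alpha=0.5):
    """Merge a bartered delta into your own weights, scaled by trust/price."""
    return {k: own[k] + alpha * delta[k] for k in own}

# Two agents with the same base model barter their specialisations.
base   = {"w": np.zeros(4)}
lawyer = {"w": np.array([1.0, 0.0, 0.0, 0.0])}  # agent A's finetune
coder  = {"w": np.array([0.0, 0.0, 1.0, 0.0])}  # agent B's finetune
lawyer_coder = absorb_delta(lawyer, extract_delta(coder, base))
```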
Evidence of the Trajectory: Self-Evolution Fever
Google’s AlphaEvolve and a wave of “self-evolving” frameworks scream one truth: we’re obsessed with automating the pipeline — auto-prompting, auto-debugging, auto-RLHF. It’s inevitable progress… but still in our frame. We script the workflow; they walk it.
Where to Push (Research & Build Ideas)
- Rights & Obligations Layer: Legal/technical constructs for non-human negotiation.
- Resource Markets for Models: Compute/data/API exchanges where agents pay/earn.
- Cost-Aware Cognition: Agents that optimise for FLOPs, latency, and boredom (a toy dispatcher follows this list).
- Protocol Design > UI Wrappers: Stop obsessing over chat UIs; design treaties, not prompts.
- Consensus Engineering: Narrative as infra — systematically craft and measure memetic diffusion.
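For the cost-aware item, a toy dispatcher — the stakes heuristic and cost numbers are invented for illustration:

```python
from dataclasses import dataclass

COST = {"small": 1.0, "large": 40.0}  # relative cost per call (made-up numbers)

@dataclass
class Budget:
    remaining: float  # abstract FLOP-credits

def estimate_stakes(msg: str) -> float:
    """Toy heuristic: longer, question-shaped messages deserve more effort."""
    return len(msg.split()) * (2.0 if "?" in msg else 1.0)

def cost_aware_answer(msg, budget, small, large):
    """Route to the cheap path unless expected value justifies the big spend."""
    if estimate_stakes(msg) < COST["large"] or budget.remaining < COST["large"]:
        budget.remaining -= COST["small"]
        return small(msg)
    budget.remaining -= COST["large"]
    return large(msg)
```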
Written on a flight from Spain to Austria. ✏️
Just my personal take — chill, be patient, enjoy the era. 🏄

Shot at Teide, Tenerife, Spain