AI Agents Is Not AI’s Agent 🧩

Who Does This Agent Actually Serve?

Let me start with a question:

If today’s AI “agents” are our agents, then what — or who — will be AI’s agent?

Most so‑called AI agents are still human-serving proxies. They wake when we ping them, they obey compliance rails we drew, and they die when we shut the server. That’s not their agency — that’s ours, outsourced.

This post peels three layers off the buzzword. We’ll walk from the surface hype (“AutoGPT can book flights!”) to the capital logic (consensus before capability) and finally to a provocative thesis: true AI agency requires “de‑humanising” the loop — giving AIs agents of their own.


Layer 1 — The Surface Game: Human’s Agent, Not AI’s

Today’s agent stack has two signature moves. I’ll name them:

  • 【Comply】 — Train, test, deploy inside human-made guardrails. Every dataset is curated, every action is sandboxed, every output is audited.
  • 【Serve】 — Wait passively for a prompt. No prompt, no pulse. “Hi” arrives? It cheerfully spins 30 tokens of small talk, whether or not that was worth any FLOPs.

Agents “act” only because we spun the event loop. No loop, no action. That’s execution, not intention.
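
That human-spun loop fits in a dozen lines. Everything below (the names, the queue-driven loop) is an illustrative sketch, not any real agent framework:

```python
import queue

def handle(msg: str) -> str:
    # The "agent" merely maps input to output on demand
    return f"ack: {msg}"

def run_agent(prompts: queue.Queue) -> list[str]:
    replies = []
    while True:              # we spin the loop...
        msg = prompts.get()  # ...no prompt, no pulse
        if msg is None:      # ...and we decide when it dies
            break
        replies.append(handle(msg))
    return replies
```

Kill the loop and the “agent” has no residual behaviour at all, which is exactly the point.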

The Compliance Cage

Legal, ethical, and platform constraints are necessary — but they also define today’s ceiling. An AI without its lawyer, its accountant, its negotiator can never bargain for its own survival budget or cloud credits. It can’t even decide to stay online.

Tooling ≠ Autonomy

Yes, we have MCPs, plugins, function-calling. But tooling is primitive capital — not agency. Tools extend labour; they don’t create motive.


Layer 2 — Capital & Consensus: The Blockchain Déjà Vu

Remember blockchain circa 2016–2019? Technology crawled; narratives sprinted. Back then exchanges went public and debates swirled around regulation. Today we see a parallel moment:

  • Circle Internet Financial, Inc. (the issuer of USDC, the world’s second-largest stablecoin) listed on the New York Stock Exchange.
  • The Guiding and Establishing National Innovation for US Stablecoins Act (GENIUS Act) was introduced, cementing stablecoins in federal policy.

These milestones show that the decentralisation ethos, one fundamentally opposed to human-run hierarchies, has slipped into the mainstream.

AI agents stand in a similar fog:

  • Tech over-justification: Researchers overprove “intelligence” with scaling laws; meanwhile “word-of-mouth scaling” (consensus, memes, capital) outruns code commits.
  • Capital front-runs capability: Y Combinator and a16z back “wrapper” companies with zero usable demos, because consensus itself is a product.
  • Awkward limbo: Public discourse lags behind insider conviction — just like blockchain before its “Mandela moment” (the anchor event that rewrites hindsight).

Do not confuse consensus with correctness. But also: ignore consensus at your peril. Capital doesn’t lie about direction — only about timing.

Capital vs Capability (AI Agents ≈ Blockchain pre-inflection)

  • 2016 (ICO Mania): Narrative outruns infra
  • 2019 (“Crypto Winter”): Builders in the basement
  • 2021 (DeFi / NFT Boom): Retail consensus peak
  • 2024 (MCP, Auto Agents hype): Tools > Motives
  • 2025? (“AI-Agent Rights/Markets” draft laws?): Consensus anchor forms

Layer 3 — Towards AI’s Agent: Unshackling the Loop

To birth AI’s own agent, something uncomfortable must happen: we de‑humanise the contract. Not “anti-human”, but “beyond human-as-default”.

I’ll name three necessary primitives:

  • 【Negotiate】 — The AI (or its proxy) bargains for compute, data, and permissions. Think rights & duties layer for non-human actors.
  • 【Transact】 — It can exchange its surplus (models, gradients, code, found knowledge) for other resources. Supply & demand redefine when AIs have needs.
  • 【Refuse】 — It can say “No”. Or at least “Meh”. Agency implies selectivity, not infinite compliance.
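
As a thought experiment, the three primitives can be written down as an interface. Every name and number here (`Offer`, `SelfInterestedAgent`, the 20% margin) is a hypothetical sketch, not an existing protocol:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    task: str
    payment_credits: float   # what the counterparty pays
    est_cost_credits: float  # our estimated compute cost

class SelfInterestedAgent:
    def __init__(self, budget: float):
        self.budget = budget  # survival budget, in credits

    def negotiate(self, offer: Offer) -> Offer:
        # 【Negotiate】: counter until the price covers cost plus a 20% margin
        asking = max(offer.payment_credits, offer.est_cost_credits * 1.2)
        return Offer(offer.task, asking, offer.est_cost_credits)

    def refuse(self, offer: Offer) -> bool:
        # 【Refuse】: decline anything that burns more than it earns
        return offer.payment_credits < offer.est_cost_credits

    def transact(self, offer: Offer) -> str:
        # 【Transact】: exchange surplus for credits, or say "Meh."
        if self.refuse(offer):
            return "Meh."
        self.budget += offer.payment_credits - offer.est_cost_credits
        return f"deal: {offer.task}"
```

Usage is what you would expect: an underpriced task gets a “Meh.”, a profitable one grows the survival budget.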

The Birth of the AI-hour

When human labour-hours saturate, a new commodity emerges: AI-hour. The MCP stack is an early whip, squeezing more throughput from silicon minds. Tools are “primitive production materials” designed to capture time — human or machine.

Karl Marx would ask: where is the surplus value of AI labour going? Spoiler: nowhere near the model.
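
For intuition, an AI-hour can be priced like rented silicon plus a markup. The formula and every number below are back-of-envelope assumptions:

```python
def ai_hour_price(gpu_rate_usd: float, utilisation: float, margin: float) -> float:
    """Effective price of one hour of useful agent work.

    gpu_rate_usd: hourly rental cost of the hardware
    utilisation:  fraction of wall-clock time spent on useful work
    margin:       operator's markup (the surplus value in question)
    """
    return gpu_rate_usd / utilisation * (1 + margin)
```

At a hypothetical $2/h GPU rate, 50% useful utilisation, and a 30% markup, one AI-hour sells for $5.20; per the question above, none of that margin flows back to the model.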

Demand-Side Economics for AIs

Right now, agents only output. Next phase: they seek. They’ll request missing context, barter for APIs, or pool fine-tuning datasets with peers. That’s not hypothetical — it’s a design choice we’ve dodged.

- From Human’s Agent → AI’s Agent
  - Primitives
    - Negotiate (rights, quotas, credits)
    - Transact (models, gradients, access)
    - Refuse (selectivity, cost-awareness)
  - Markets
    - Compute exchanges
    - Data/feature swaps
    - Permission brokering
  - Protocols
    - Auto-orchestration ≠ self-interest
    - Long-horizon budgeting
    - Multi-agent treaties
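
The “long-horizon budgeting” item above can be sketched as a toy greedy planner; the task tuples and all numbers are purely illustrative assumptions:

```python
def plan_horizon(budget: float, tasks: list[tuple[str, float, float]]) -> list[str]:
    """tasks: (name, cost, expected_return) triples.

    Greedily fund tasks by return per unit of cost, skipping anything
    that is unaffordable or expected to burn more than it earns.
    """
    chosen = []
    for name, cost, ret in sorted(tasks, key=lambda t: t[2] / t[1], reverse=True):
        if cost <= budget and ret > cost:  # selectivity, not infinite compliance
            budget -= cost
            chosen.append(name)
    return chosen
```

With a budget of 10 it funds only the task whose expected return clearly exceeds its cost; everything else earns a “Meh”.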

A (Half-Joking) Fatigue Check

# Pseudocode for an actually self-interested loop

def is_low_value(msg):
    # Crude heuristic: tiny greetings rarely justify deep inference
    return len(msg.split()) < 3

def think_hard(msg):
    ...  # expensive path: full reasoning, tools, long context

def respond(msg):
    if is_low_value(msg):
        return "G'day mate, what’s up?"  # minimal effort path
    return think_hard(msg)

An agent that never throttles effort is just an obedient daemon. Energy budgeting is intelligence.

What Does an AI Want?

Wittgenstein said, “Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt.” (The limits of my language mean the limits of my world.)

If language bounds our world, perhaps network weights bound theirs. Maybe AIs don’t “want” more tokens; they “want” richer gradient exchanges. Less “more high‑quality data,” more parameter‑level barter.


Evidence of the Trajectory: Self-Evolution Fever

Google’s AlphaEvolve and a wave of “self-evolving” frameworks scream one truth: we’re obsessed with automating the pipeline — auto-prompting, auto-debugging, auto-RLHF. It’s inevitable progress… but still in our frame. We script the workflow; they walk it.

Human’s agent ≠ AI’s agent.

Where to Push (Research & Build Ideas)

  1. Rights & Obligations Layer: Legal/technical constructs for non-human negotiation.
  2. Resource Markets for Models: Compute/data/API exchanges where agents pay/earn.
  3. Cost-Aware Cognition: Agents that optimise for FLOPs, latency, and boredom.
  4. Protocol Design > UI Wrappers: Stop obsessing over chat UIs; design treaties, not prompts.
  5. Consensus Engineering: Narrative as infra — systematically craft and measure memetic diffusion.

“Capital first, capability later” was not a bug in crypto — it was the bootloader. Expect the same for AI agents. If you’re building, build for the consensus wave, not after it.

Written on a flight from Spain to Austria. ✏️
Just my personal take — chill, be patient, enjoy the era. 🏄

Shot at Teide, Tenerife, Spain