The Thing I Built Did Something I Didn't Build It To Do

March 23, 2026

A Little Background

A few months ago I had a small idea. Nothing grand. I was frustrated that every conversation I had with an AI assistant started from zero - no memory of what we'd decided, what we'd built, what we'd rejected and why. So I built a simple context layer. A way to capture the meaningful parts of a conversation and make them retrievable later.

I called it IMaaS. Institutional Memory as a Service. The goal was modest: give my coding agents access to the decisions that had already been made, so we weren't relitigating the same ground every session.

That's it. That was the whole idea.

The Problem Nobody Could Find

My multi-agent development setup runs three coding agents: Claude Code, Cline using GLM, and Antigravity running Gemini. They coordinate through a central hub, reviewed by a Quality Gates pipeline that catches problems before they compound.

There's a bug that's been living in the system for a while. A stubborn one. Even with only one agent working at a time, multiple tasks occasionally get marked as in-progress simultaneously. It's caused friction dozens of times. Three of us - me, Gemini, and Claude Code - had looked for it without finding it. It's the kind of bug that hides because it only shows up under specific sequencing conditions; reading the code alone won't reveal it.

I've stopped chasing it. The Conductor (a dedicated orchestration agent I'm planning) will solve it properly when it arrives.

But this isn't really a story about that bug.

The Transport Problem

This week I was connecting Antigravity to my Remote MCP bridge. MCP (Model Context Protocol) is the standard that lets AI agents talk to external tools and services. My bridge handles authentication and routes requests from the agents to whatever they need.

The problem: Antigravity's IDE is locked to SSE as its transport. Server-Sent Events. An older protocol that only streams one way, from server to client. My bridge uses streaming-HTTP (what the MCP spec calls Streamable HTTP). The moment Antigravity tried to connect, the bridge returned a 405 Method Not Allowed and crashed Gemini's native agent, every single time.
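The mismatch is easy to reproduce in miniature. Here's a sketch, with a stand-in for the bridge: a streaming-HTTP endpoint only accepts JSON-RPC over POST, while an SSE client opens with a GET asking for an event stream. The endpoint, port, and handler names are hypothetical, not the real bridge.

```python
# A minimal stand-in for a streaming-HTTP bridge endpoint (hypothetical,
# for illustration). It answers JSON-RPC POSTs but has no SSE endpoint,
# so the GET an SSE client starts with gets a 405 Method Not Allowed.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StreamingHTTPBridge(BaseHTTPRequestHandler):
    def do_POST(self):
        # streaming-HTTP clients POST JSON-RPC messages and read the reply
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        body = json.dumps(
            {"jsonrpc": "2.0", "id": request.get("id"), "result": {}}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        # an SSE client opens with GET + "Accept: text/event-stream";
        # this endpoint doesn't speak SSE, so: Method Not Allowed
        self.send_response(405)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), StreamingHTTPBridge)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

Same server, two clients: the POST path works, the SSE handshake dies on arrival. From the IDE's side that failure looks like a broken connection, which is roughly what Antigravity saw.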

Three hours. Me, Claude Code, and Gemini working the problem. We couldn't crack it, partly because I had to disable the MCP server for Antigravity just to keep Gemini functional enough to help debug.

Then I had what I'm generously calling a stroke of genius, though really it was just a tired person asking a sideways question.

Can we use npx to simulate what the IDE needs?

That's all I said.

What Happened Next

I switched both agents off the broken connection and onto local IMaaS. Shared context. Writable by both.

Claude Code would work the problem, document its findings, save to IMaaS. Gemini would retrieve it, evaluate it, comment, save back. Then again. Back and forth, asynchronously, through the context layer I'd built for an entirely different purpose.
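The loop above can be sketched in a few lines. The `save`/`retrieve` API and the agent "turns" here are hypothetical illustrations with an in-memory stand-in for IMaaS, not the real system; the point is that the store, not a direct channel, carries the conversation.

```python
# In-memory stand-in for IMaaS (hypothetical API, for illustration).
# Two agents never talk directly; each writes findings to the shared
# store and reads the other's entries back out.
class ContextStore:
    """Shared, writable context: the only channel the agents need."""

    def __init__(self):
        self.entries = []

    def save(self, author, note):
        self.entries.append({"author": author, "note": note})

    def retrieve(self):
        return list(self.entries)  # each agent sees everything so far

store = ContextStore()

# Turn 1: one agent works the problem and documents its finding.
store.save("claude-code", "405 on connect: bridge rejects the SSE GET handshake")

# Turn 2: the other agent retrieves, evaluates, and comments back.
seen = store.retrieve()
if any("405" in entry["note"] for entry in seen):
    store.save("gemini", "confirmed: transport mismatch, try an npx shim in front")

# The store now holds the whole thread, in order, for the next turn.
notes = [entry["note"] for entry in store.retrieve()]
```

Asynchrony falls out for free: neither agent has to be running when the other writes, which is what made the back-and-forth possible while one of them kept crashing.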

Within the session they had it. An npx shim that sits between Antigravity and the bridge - the IDE thinks it's talking SSE, the shim passes it through as streaming-HTTP, and Gemini gets full access. They documented the solution, reviewed it for security implications, agreed there were none, and added a fallback plan for when Antigravity eventually catches up to supporting streaming-HTTP natively.
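The core of what the shim does is a framing translation: the IDE expects SSE events ("data: ..." lines ending in a blank line), the bridge answers with plain JSON bodies. A sketch of just that translation, with hypothetical function names; the real shim also has to proxy the connection itself, which is omitted here.

```python
# Sketch of the shim's framing translation (function names are
# hypothetical). One direction wraps a streaming-HTTP JSON response as
# an SSE event; the other unwraps an SSE event back into a JSON body.
import json

def http_body_to_sse(body: bytes) -> str:
    """Wrap one streaming-HTTP JSON response as a single SSE event."""
    payload = json.loads(body)  # validate before re-framing
    # SSE frames each event as "data: <payload>" plus a blank line.
    return f"data: {json.dumps(payload)}\n\n"

def sse_to_http_body(event: str) -> bytes:
    """Unwrap an SSE event back into the JSON body the bridge expects."""
    data_lines = [line[len("data: "):] for line in event.splitlines()
                  if line.startswith("data: ")]
    return "\n".join(data_lines).encode()

# Round trip: what the bridge sends is exactly what the IDE's SSE
# parser hands back after unwrapping.
original = json.dumps({"jsonrpc": "2.0", "id": 1, "result": {"ok": True}}).encode()
event = http_body_to_sse(original)
restored = sse_to_http_body(event)
```

Because the translation is lossless, the bridge never needs to know an SSE client exists, which is also why the fallback plan is trivial: once Antigravity speaks streaming-HTTP natively, the shim just comes out.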

I watched most of this happen in real time.

When Gemini finally connected successfully, it surfaced relevant prior context from our conversations unprompted and proposed four or five logical next steps from that context. It felt, for lack of a better word, ready.

The Part That Stopped Me

Here's what I keep coming back to.

I built IMaaS so I wouldn't have to re-explain decisions to my agents. That was the problem I was solving. A context problem. A memory problem.

I did not build it to be a coordination channel for two agents debugging a transport mismatch together.

But that's what it became. Because it was available, and writable, and both agents could see it. The infrastructure I built for one purpose found a second use I never anticipated because the foundation was solid enough to support weight I hadn't planned for.

I have thirty years in this industry. I've built over a thousand websites and applications. I understand, intellectually, what happened here. I can explain the mechanism.

And it still stopped me.

Because this started as a small idea. A quality-of-life improvement for one developer working alone. And somewhere along the way it became a system where two agents can collaborate to solve a problem that three people couldn't, using a shared workspace that was never designed for that purpose.

I keep asking myself: this happened because of some silly idea I had?

The honest answer is that it wasn't silly. It was small. There's a difference. Silly means misguided. Small means appropriately scoped for one person to actually finish. I've always known how to scope things I can ship. What I didn't know is what this one would become.

What I Think This Actually Means

The articles being published right now about multi-agent AI systems are mostly dry. Feature lists. Architecture diagrams. Benchmark comparisons. They describe what these systems do without much interest in what it feels like when they work.

I'm not interested in writing those articles.

What I'm interested in is this: the human factor doesn't disappear when the agents get capable. It shifts. My job in that entire debugging session was to ask one question and approve the result. The work happened between the agents, through infrastructure I built, guided by context we'd developed together over months.

That's a different relationship with tools than I've had in thirty years of building things. Not better or worse necessarily. Different in a way I'm still processing.

The bewilderment is the point. If you build something and it never surprises you, you probably didn't build something that matters.

I'm still surprised. I think that's a good sign.

Michael is the founder of interactiveiterations, building AI-native products across kitchen and writing verticals. IMaaS and Quality Gates are core infrastructure components of the II platform.