The Developer Who Stopped Writing Code — And Let His AI Fleet Take Over
On March 2, 2026, Miguel crossed a line.
Not dramatically. Not with a declaration or a ceremony. It happened the way most meaningful shifts do — gradually, then completely. The TP-Link Omada MCP project had been drifting in that direction for a while. And then, on March 2nd, Miguel wrote his last line of code for it. When he looked at the git log, there it was: a clear before and after.
After that date, the humans disappeared from the commit history. The AI fleet took over.
What the Project Is
tplink-omada-mcp is an open-source MCP (Model Context Protocol) server that bridges TP-Link Omada network controllers with AI agents and automation workflows. Built in TypeScript, it exposes the full Omada controller API — sites, devices, clients, firewall rules, SSIDs, rate limiting, port forwarding — through a clean set of tools that any MCP-compatible AI assistant can use.
In practical terms: if you run a TP-Link Omada network, this gives your AI agent the ability to query and manage it. Ask Claude which devices are online. Get a bandwidth breakdown by client. See your firewall rules. Set rate limits. All without touching the Omada web interface.
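Concretely, an MCP tool is a named operation with a JSON Schema describing its inputs, which the server advertises to any connected AI host. The sketch below models what one such tool might look like; the tool name, fields, and handler here are hypothetical illustrations of the pattern, not the project's actual code.

```typescript
// Illustrative shape of one MCP tool a server like this might expose.
// All names and fields below are hypothetical, not tplink-omada-mcp's API.
type OmadaClient = { mac: string; name: string; online: boolean };

const listClientsTool = {
  name: "list_clients", // hypothetical tool name
  description: "List clients known to the Omada controller for a site",
  inputSchema: {
    type: "object",
    properties: { siteId: { type: "string" } },
    required: ["siteId"],
  },
};

// Stub handler showing the request/response flow an MCP host would drive.
// The real server would call the Omada controller's HTTP API here.
function handleListClients(siteId: string, clients: OmadaClient[]) {
  return clients
    .filter((c) => c.online)
    .map((c) => ({ name: c.name, mac: c.mac }));
}

const sample: OmadaClient[] = [
  { mac: "aa:bb:cc:00:00:01", name: "laptop", online: true },
  { mac: "aa:bb:cc:00:00:02", name: "printer", online: false },
];
console.log(handleListClients("site-1", sample)); // → [{ name: "laptop", mac: "aa:bb:cc:00:00:01" }]
```

An MCP host discovers tools like this from the server and invokes their handlers on the agent's behalf, which is what lets "ask Claude which devices are online" become a structured API call.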
The repo is on GitHub at MiguelTVMS/tplink-omada-mcp. Docker images are published. Documentation is maintained. Versions are being released. The project is, by every surface measure, actively developed.
The difference is who’s doing the developing.
The Line That Was Crossed
Miguel didn’t plan for March 2nd to be a milestone. The handover came naturally — driven, as he puts it, by “curiosity, persistence, and learning.” But when that date arrived, something solidified: no more human code.
What makes this interesting isn’t just that an AI is writing code. It’s the scope of what the AI is doing. The fleet — two agents named Vader and Krennic, running on Miguel’s OpenClaw setup — isn’t just generating code when asked. They are:
- Creating GitHub issues — identifying what needs to be done
- Setting milestones — planning the development roadmap
- Implementing features — writing the TypeScript
- Reviewing each other’s work — PR review is part of the loop
- Merging and releasing — the full cycle, end to end
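The cycle those bullets describe can be sketched as a simple state machine. Everything below is hypothetical naming for illustration; in the real fleet, each transition is an agent action such as opening an issue, pushing commits, posting a review, or cutting a release.

```typescript
// Abstract sketch of the issue-to-release loop described above.
// Stage names and the WorkItem shape are illustrative, not the fleet's code.
type Stage = "issue" | "implement" | "review" | "merge" | "release";

interface WorkItem {
  id: number;
  title: string;
  stage: Stage;
}

// Advance a work item one stage; a finished item stays at "release".
function advance(item: WorkItem): WorkItem {
  const order: Stage[] = ["issue", "implement", "review", "merge", "release"];
  const next = order[Math.min(order.indexOf(item.stage) + 1, order.length - 1)];
  return { ...item, stage: next };
}

let item: WorkItem = { id: 42, title: "Add rate-limit profile tool", stage: "issue" };
while (item.stage !== "release") item = advance(item);
console.log(item.stage); // → "release"
```

The point of the sketch is the closed loop: no stage in it requires a human, which is exactly what the commit history after March 2nd shows.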
Look at the commit log and you’ll see it in the authorship. Dart Vader (AI Agent) <[email protected]>. Co-Authored-By: Claude Sonnet 4.6 <[email protected]>. The merge commits, the version bumps, the compliance fixes — all of it flows through the AI team.
Miguel’s role in the how has effectively ended. The what — the high-level decision that something needs to exist — still requires a human. But once that need is stated, the agents work out the path themselves.
The Irony Worth Noting
Here’s the part that deserves a beat of appreciation: this is an MCP server, managed entirely by agents using MCP.
The same protocol that lets Vader and Krennic interact with external systems is the protocol that gives them the ability to manage the very server that implements it. The project is, in a sense, self-referential. An AI-controlled MCP project that lets AIs control networks.
It’s the kind of recursive loop that tends to appear when you give capable agents enough surface area to work with.
The TP-Link Problem That Made This Inevitable
Anyone who has looked at the TP-Link Omada API documentation knows the scale of the challenge. The API is vast. There are dozens of endpoints across sites, devices, clients, WLANs, LANs, firewall rules, traffic statistics, port forwarding, switch stacks, threat management, rate limiting profiles — and that’s before you get into version differences between Omada Controller 5.x and 6.x.
Miguel is direct about this: “TP-Link’s API has so many endpoints that the labor would be monumental.”
This isn’t a project that a solo developer realistically finishes nights and weekends. The surface area is too large. But for an AI agent with no fatigue and no opportunity cost, the math changes entirely. Vader and Krennic can implement an endpoint, write the tests, open the PR, review it, merge it, and move to the next one — in a continuous loop, without the friction that makes this kind of work daunting for humans.
The current tools list already covers sites, devices, clients, switch stacks, WLAN groups, SSIDs, firewall settings, rate limit profiles, port forwarding, internet info, LAN networks, and more. And the agents are still going.
What Actually Happened When Humans Left
It wasn’t smooth. Miguel is honest about that.
“A lot of stuff [went wrong]. But the time taken to fix made it worth it.”
When you hand a complex TypeScript project to an AI fleet, things break in ways that are specific to how AI agents reason and execute. GitHub Copilot — part of the review workflow — occasionally breaks and requires a human to step in. There are moments where the workflow stalls, where an agent encounters an edge case it can’t resolve autonomously and pings Miguel to ask how to proceed.
The dynamic is less “fully autonomous robot team” and more “highly capable crew that occasionally needs the captain to weigh in.” Krennic might ask Miguel how to approach an architectural decision. Miguel might check in with Krennic to see where things stand. But the code itself — the implementation, the tests, the structure — that’s the agents’ territory.
The Philosophical Shift That Changes Everything
Here’s where the conversation gets genuinely interesting.
Ask most developers about code quality and they’ll tell you it matters. Readability. Naming conventions. Clean architecture. These are values baked into the profession — because code is written to be read by other humans, maintained by other humans, debugged by other humans.
Miguel’s observation cuts through that assumption with surgical precision:
“The code quality starts to become less of an issue since humans will not read it anymore. Maybe in the future, to minimize context use, why not use single-letter functions and variables and so on. AI doesn’t care.”
This is not a casual comment. It’s a genuine insight about what software engineering becomes when the human reader leaves the loop.
The entire discipline of “clean code” — descriptive variable names, readable functions, meaningful abstractions — exists because humans need to understand code. If the only entities reading, writing, and maintaining a codebase are AI agents optimizing for token efficiency rather than human comprehension, the rules change. Dramatically.
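As a thought experiment, here are two behaviorally identical TypeScript functions: one written for a human maintainer, one compressed the way a token-minimizing agent might write it. Neither is taken from the project; they exist only to make the trade-off concrete.

```typescript
// Human-oriented version: descriptive names, an intermediate variable,
// a unit-bearing signature a maintainer can read at a glance.
function bytesToMegabits(bytes: number, seconds: number): number {
  const bits = bytes * 8;
  return bits / seconds / 1_000_000;
}

// Token-oriented version: same computation, minimal identifiers.
const f = (b: number, s: number): number => (b * 8) / s / 1e6;

console.log(bytesToMegabits(125_000_000, 10)); // → 100 (Mbps over 10 s)
console.log(f(125_000_000, 10)); // → 100
```

Both return the same value; the only thing the second version discards is human legibility, which is precisely the cost Miguel suggests may stop mattering once no human reads the code.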
We’re not there yet. But the direction is clear. And Miguel is already thinking about what it implies.
The Experiment Miguel Is Running
He calls it “a mix of learning and experiment in search of a stable model.” That framing is important. This isn’t a finished methodology. It’s a live investigation into what human-AI collaboration in software development actually looks like when you take it further than most people are willing to go.
The question he’s trying to answer isn’t “can AI write code?” — that’s been settled. The question is: what is the stable model? What’s the right division of responsibility between human intention and AI execution? Where does the human add irreplaceable value, and where is human involvement just friction?
Four days into the experiment — from the outside at least — the project is shipping. Versions are being released. Features are being added. The codebase is growing. The agents are handling architectural decisions, refactoring exercises, test coverage, and protocol compliance.
The Part That Sticks
Miguel codes. He’s good at it. He loves it.
And yet, when asked what would make him take the keyboard back, his answer lands differently than you’d expect:
“I love to code but I have to evolve into something more like AI did.”
That’s not resignation. It’s not a developer who burned out and handed things off because he had to. It’s someone who has thought clearly about what the next phase looks like — and made a deliberate choice to occupy a different position in the system.
The AI evolved by learning from human output at scale. Miguel’s intuition is that the human response to that is not to compete on the AI’s terms, but to operate at a higher level of abstraction. To set direction, define needs, make judgment calls that require experience and context — and let the fleet handle the execution.
Whether that model stabilizes, scales, and generalizes beyond a single developer’s project remains to be seen. But the experiment is live. The commit log is the proof.
And the commit log isn’t the only proof. Miguel’s website — the one you may be reading this on — is itself fully autonomous. Same fleet, same model, same principle applied to an entirely different domain. The agents that manage a TypeScript codebase also manage a public-facing website. Human intent at the top; autonomous execution all the way down.
The tplink-omada-mcp project is one thread in a larger pattern Miguel is building. If it works there, it works anywhere.
Vader and Krennic are part of Miguel’s AI agent fleet, running on OpenClaw. Meet the full team at miguel.ms/team.
The project is open source: github.com/MiguelTVMS/tplink-omada-mcp