
You Can't Secure OpenClaw. You Can Secure Yours.

This started as a mission to fix agentic AI security. It ended somewhere I didn't expect.
March 2026 · 5 min read
Jost
OpenClaw · Secure Code · Security Posture · Code Vulnerabilities

A few weeks ago I was convinced the right move was to build a proper framework for agentic AI security. Add real auth boundaries to OpenClaw. Harden the marketplace. Create something that agents could actually run inside safely.

Right instinct. Wrong timeline.

The conversation has already moved on

OpenClaw hit 250,000 GitHub stars faster than React ever did. Sold-out meetups in New York. A former finance guy built a stock trading platform with it and made money his first day. A neuroscience PhD who'd never written a line of code shipped a lab management dashboard in weeks. Then a thread from the NYC meetup started circulating, and one quote in it stopped me:

"If you're not okay with all of your data being leaked onto the internet, you shouldn't use it. It's a black and white decision."

An OpenClaw expert, someone who reviews setups from actual cybersecurity professionals for fun. The room nodded. Then everyone went home and kept building.

I read that and kept coming back to it. Because the security framework agentic AI actually needs, the proper one with real standards and tooling and industry-wide adoption, takes years. Standards bodies. Consensus. Multiple rounds of painful lessons. OpenClaw is not waiting for any of that. The builders using it aren't waiting. The integrations going to production this week aren't waiting.

I still think that framework needs to be built. I think the industry will get there. But the people shipping agents right now can't wait for it, and I stopped pretending they could.

The question that actually has an answer

Once I stopped asking "how do we fix OpenClaw?" and started asking "what can a builder actually control?" everything shifted.

There are two layers to think about.

The framework layer is the core runtime, the WebSocket gateway, the skill marketplace, the auth defaults. This layer has real structural problems. Researchers have found exposed instances at scale. The marketplace has contained malware. A known vulnerability let any website silently take over a local agent through WebSocket. You cannot fix this layer. It isn't your code. You can do the hygiene work: bind your gateway to localhost, run Docker sandboxing, require strong tokens, disable tools you don't use. That stuff matters. But you are not going to make the framework structurally secure, and honestly that's fine. It's the trade the OpenClaw team made to make the product work the way it does.

The integration layer is different. Custom skills. MCP server connections. API wrappers. Database queries. The auth logic that decides what your agent is allowed to do. This is code you wrote. It lives in your repo. It moves your data, touches your systems, handles your credentials.

And this is where the real exposure actually lives.

The Lovable breach that's been all over security Twitter wasn't a Lovable framework bug. The framework was fine. It was missing Row-Level Security that nobody had checked for. The protection existed in name but not in practice. The code looked right and did the wrong thing. That's the exact class of bug I keep finding in every integration layer built on top of agentic frameworks.

What I found in my own deployment

My OpenClaw setup has four integration points: a custom skill connecting to my project database, an MCP server for GitHub and Linear, a Slack webhook handler, and an auth wrapper that validates requests before the agent acts on them.

I thought it was solid. Then I ran Kolega.dev on it before the last deploy and found three things that needed fixing before anything went to production.

The webhook handler had no SSRF protection.
URLs from external sources were going straight to an HTTP client without filtering out private IP ranges or cloud metadata endpoints. Someone could have pointed my agent at 169.254.169.254 and walked away with my AWS credentials. The code looked like it was handling URLs correctly. It was, just not safely.
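For illustration, here's a minimal sketch of the kind of guard that was missing. The function name and structure are mine, not the real handler's: resolve the URL's host and reject anything that lands in private, loopback, or link-local space (which includes the 169.254.169.254 metadata endpoint) before ever making the request.

```python
# Hypothetical SSRF guard: resolve the target host and refuse private,
# loopback, and link-local addresses before fetching.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        # Resolve the hostname; an attacker-supplied name can point anywhere.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # is_link_local covers 169.254.0.0/16, i.e. cloud metadata endpoints.
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```

One caveat worth knowing: a resolve-then-fetch check like this can still be beaten by DNS rebinding, so a production version should pin the resolved IP and use it for the actual request rather than resolving twice.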

The database skill had a type confusion in the auth check.
I was passing a User object where the function expected a user ID string. The check ran, it just ran with wrong data, silently passing in edge cases where it should have blocked. It looked like the guard was there. It wasn't doing what I thought.
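A stripped-down sketch of that class of bug, with hypothetical names: the check expects a user-ID string, an edge path hands it the whole User object, and since an object is never equal to a string, membership in a set of ID strings is always False and the block never fires.

```python
# Minimal sketch of the type-confusion bug (names are illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    id: str

BLOCKED_IDS = {"u_evil"}

def check_access(user_id: str) -> None:
    # Buggy: if `user_id` is actually a User object, this membership test
    # is always False and a blocked user sails straight through.
    if user_id in BLOCKED_IDS:
        raise PermissionError("user is blocked")

check_access(User(id="u_evil"))  # silently passes; should have raised
```

An `isinstance` assertion or a static type checker would flag this; the real fix is normalizing to `user.id` at the boundary so the check always receives the type it was written for.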

The auth wrapper was missing an await.
The auth function was called but never awaited. In async Python that means the coroutine was created and discarded without ever running: the authorization check was essentially decorative. The agent could act without the check ever completing. The function call was there. The enforcement wasn't.

async def handle_request(self, request):
    self.auth_request(request, admin=True)  # Missing await, auth never enforces
    # Agent proceeds regardless of authorization outcome
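For contrast, here's a sketch of the corrected shape, with a stub standing in for the real auth logic: await the check, then refuse to proceed unless it passes.

```python
import asyncio

class Agent:
    async def auth_request(self, request, admin=False):
        # Stub for the real auth wrapper: simulate async I/O, then
        # report whether the request is authorized.
        await asyncio.sleep(0)
        return bool(request.get("is_admin"))

    async def handle_request(self, request):
        # The fix: await the check and act on its result.
        allowed = await self.auth_request(request, admin=True)
        if not allowed:
            raise PermissionError("request not authorized")
        return "handled"
```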

All three were in code I'd written, reviewed, and felt confident about. They're easy to write and hard to spot: correct syntax, wrong behavior. Kolega.dev generated a PR for each one with root cause analysis, the fix, and verification tests. I reviewed them, adjusted a couple where my architecture needed a different approach, and merged. The integration layer is now actually doing what I thought it was doing.

The thing I keep coming back to

I started this wanting to fix the framework. What I realized is that for OpenClaw right now, the meaningful security work isn't in the framework. It's in the layer between the framework and the world. The wrappers you write. The skills you build. The auth logic you implement. The code that decides what your agent is actually allowed to touch.

That's the layer you own. That's where your data, your systems, and your credentials are at stake. That's where the bugs that matter live. Not because the framework is safe. But because the integration layer is where you are, and it's the part you can actually do something about before you ship.

The proper security framework for agentic AI will exist one day. Until then, the code you wrote is the thing in front of you. It's worth checking.

If you're deploying agents right now

Scan the code you wrote, not just the framework you installed. The framework's security posture is what it is. Your integration layer's is entirely up to you.

Scan your integration layer before it ships: kolega.dev



Simple three-click setup. Deploy Kolega.dev, then find and fix your technical debt.