Braden Hartsell
How to run untrusted HTML/JS safely with `allow-same-origin`

If you’ve spent any time working with iframe security, you’ve probably heard the warning: never use `allow-same-origin` with untrusted content. It’s the kind of advice that gets repeated so often it starts to feel absolute, and to be fair, there’s a very good reason for that. In the wrong setup, it can turn a sandbox into something that offers far less protection than people think.

That was exactly the problem I found myself staring at while building vibecodr.space. Vibecodr is a social platform for runnable code. People can publish HTML, CSS, and JavaScript creations that other people can open and experience live in the browser. That means the code can’t just be stored or displayed. It has to actually run, because the runtime is the product. Once I accepted that, the question I needed to answer was no longer “how do I stop untrusted JavaScript from executing?” It became “how do I let it execute without giving it access to the rest of my app?”

That shift in framing matters. Many discussions around untrusted HTML start from the assumption that execution itself is the failure. But if you’re building a product where runnable code is the whole point, execution is not the problem. The problem is execution in the wrong trust boundary. Once I started looking at it that way, the architecture became much clearer.

The first thing I had to get honest about was that `allow-same-origin` is not really the whole story. What matters is same-origin with what. If untrusted code is running in an iframe that shares an origin with your main app, then yes, `allow-same-origin` is a terrible idea. At that point, you are collapsing one of the most important boundaries you have. But if the code is running on a dedicated cross-origin runtime host that is separate from your first-party app, then the conversation changes completely.

That’s the model I use in Vibecodr. User code does not run on the main application origin. It runs on an isolated runtime origin, and that origin separation is the primary security control. That’s the part I think is often lost when people talk about iframe flags as if those flags, by themselves, are the security model. They’re not. The real model is where the code lives, what state it can reach, and what channels still exist back into the parent application.
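To make that concrete, here is a small sketch of the origin-separation check, with placeholder origins since the real Vibecodr hosts are not part of this post. The point is that `allow-same-origin` on the runtime frame only means same-origin with the runtime host, and that is only safe when the runtime host is genuinely a different origin from the app:

```javascript
// Hypothetical origins -- placeholders, not the real Vibecodr hosts.
const APP_ORIGIN = "https://app.example.com";
const RUNTIME_ORIGIN = "https://runtime.example.net";

// The boundary only holds if the runtime host is a genuinely
// different origin (scheme + host + port) from the first-party app.
function isIsolatedRuntime(appOrigin, runtimeOrigin) {
  return new URL(appOrigin).origin !== new URL(runtimeOrigin).origin;
}

// Sandbox flags the host page would put on the runtime iframe.
// `allow-same-origin` here means same-origin with RUNTIME_ORIGIN,
// never with the first-party app.
const SANDBOX_FLAGS = "allow-scripts allow-same-origin";

console.log(isIsolatedRuntime(APP_ORIGIN, RUNTIME_ORIGIN)); // true
```

If that check ever returned false, the flags on the iframe would stop being a containment tool and start being a hole, which is exactly the failure mode the usual warning is about.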

That distinction shaped almost every decision that followed. I stopped thinking of the problem as “how do I clean dangerous HTML enough to make it acceptable?” and started thinking of it as “how do I build a runtime where dangerous behavior can exist safely because it’s contained?” That leads to a very different architecture. Instead of pretending user code will remain harmless if I strip enough tags or rewrite enough markup, I assume the code is fully capable of trying things I don’t want. My job is to make sure it can only do those things inside an environment that doesn’t grant meaningful access to first-party state.

The place where this becomes especially important is messaging between the runtime and the host. I think this is one of the easiest parts of a system like this to underestimate. You can isolate the iframe perfectly by origin, and then immediately give half the trust back by building a lazy `postMessage` bridge. That was something I became much more careful about as Vibecodr evolved. I stopped thinking of `postMessage` as a convenient communication tool and started treating it like a privileged interface. That means validating the exact allowed origin, checking `event.source`, verifying message shape, establishing session trust correctly, and making sure sensitive actions are mediated rather than blindly accepted.
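A minimal sketch of what that strict bridge can look like on the host side. The runtime origin and the message vocabulary here are hypothetical, not a published Vibecodr API; the shape of the checks is what matters:

```javascript
// Hypothetical runtime origin -- a placeholder, not a real host.
const RUNTIME_ORIGIN = "https://runtime.example.net";

// Message types the host is willing to accept from the runtime.
const ALLOWED_TYPES = new Set(["ready", "resize", "capability-request"]);

// Validate a message event before acting on it. `expectedSource` is
// the contentWindow of the iframe the host created itself.
function validateRuntimeMessage(event, expectedSource) {
  if (event.origin !== RUNTIME_ORIGIN) return false;  // exact origin match
  if (event.source !== expectedSource) return false;  // must be our frame
  const msg = event.data;
  if (typeof msg !== "object" || msg === null) return false;
  if (typeof msg.type !== "string" || !ALLOWED_TYPES.has(msg.type)) return false;
  return true;
}

// Usage in the host page (browser-only, so shown as a comment):
// window.addEventListener("message", (event) => {
//   if (!validateRuntimeMessage(event, iframe.contentWindow)) return;
//   handleRuntimeMessage(event.data);
// });
```

The allowlist of message types is the piece most bridges skip: it turns "anything the frame says" into a small, reviewable vocabulary.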

That last part matters because a lot of browser capabilities feel innocuous until you imagine them being controlled by code you don’t trust. Downloads, popups, clipboard access, and similar interactions are all the sort of thing that should not simply be inherited because some user-authored runtime asked nicely. In a system like this, the runtime can ask, but the host should remain the decision-maker. For me, that has been one of the most important principles in keeping the platform both expressive and sane.
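One way to keep the host as the decision-maker is an explicit policy table that fails closed: anything the runtime asks for that is not listed is denied. The capability names and verdicts below are illustrative assumptions, not a real Vibecodr interface:

```javascript
// Host-side capability policy: the runtime can ask, the host decides.
// Names and verdicts are illustrative, not a published API.
const CAPABILITY_POLICY = {
  "clipboard-write": "prompt", // ask the user before granting
  "download": "prompt",
  "popup": "deny",
  "fullscreen": "allow",
};

function decideCapability(request) {
  const verdict = CAPABILITY_POLICY[request.capability];
  // Fail closed: anything not explicitly listed is denied.
  return verdict ?? "deny";
}

console.log(decideCapability({ capability: "popup" }));      // "deny"
console.log(decideCapability({ capability: "usb-access" })); // "deny"
```

The "prompt" verdict is where the user stays in the loop: the runtime's request becomes a question the host asks, not an action the host performs.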

Another thing I had to get comfortable with was the fact that a real HTML runtime sometimes needs to use primitives that sound alarming in isolation. If you say out loud that your system injects HTML, replays scripts, and executes code from user-authored documents, that sounds bad, because in the wrong place it absolutely would be. But context matters more than the primitive itself. I would never accept that behavior in the main app document. I can accept it inside an isolated runtime boundary whose whole reason for existing is to contain exactly that kind of execution.

That ended up being one of the most useful mental models I found while building this. Instead of asking whether a technique is universally dangerous, I ask whether it is happening in the right place. `innerHTML` in the first-party app is a hard no. Script replay inside a dedicated sandboxed runtime is a very different conversation. The same goes for other runtime behaviors that would be unacceptable if they were happening anywhere near authenticated application state.
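For context, script replay exists because scripts injected via `innerHTML` are inert by spec; inside the runtime document, and only there, each one can be copied into a freshly created `<script>` element so the browser actually executes it. This is a sketch of the idea, not Vibecodr's actual implementation:

```javascript
// Runs inside the isolated runtime document, never in the first-party app.
// Scripts inserted via innerHTML do not execute, so each one is replaced
// with a fresh <script> element that the browser will run.
function replayScripts(container, doc) {
  let replayed = 0;
  for (const old of Array.from(container.querySelectorAll("script"))) {
    const fresh = doc.createElement("script");
    for (const { name, value } of Array.from(old.attributes)) {
      fresh.setAttribute(name, value);  // preserve src, type, etc.
    }
    fresh.textContent = old.textContent; // preserve inline source, if any
    old.replaceWith(fresh);              // the fresh copy executes
    replayed += 1;
  }
  return replayed;
}
```

The reason this is tolerable at all is the boundary around it: the document doing the replaying has no path to first-party cookies, storage, or DOM.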

I also learned that fallback behavior matters a lot more than people think. It’s easy to design a strong “happy path” and then quietly leave yourself a compatibility path that weakens the boundary when something goes wrong. In my experience, those are the places where systems become less trustworthy than their architecture diagrams suggest. If the secure path is origin-isolated but the fallback path quietly drops into a weaker model, then you don’t really have the confidence you think you have. A lot of this work, for me, has been about staying honest about where the system is truly fail-closed and where there is still nuance that needs to be handled carefully.
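One way to make that fail-closed property explicit is to refuse to run user code at all when the runtime origin is missing or collapses into the app origin, rather than silently falling back to a weaker model. Origins here are placeholders:

```javascript
// Fail closed: if an isolated runtime origin cannot be confirmed,
// refuse to run user code rather than degrade to a same-origin model.
// Origins are hypothetical placeholders.
function resolveRuntimeHost(appOrigin, configuredRuntimeOrigin) {
  if (!configuredRuntimeOrigin) {
    throw new Error("No isolated runtime origin configured; refusing to run user code");
  }
  if (new URL(configuredRuntimeOrigin).origin === new URL(appOrigin).origin) {
    throw new Error("Runtime origin collapses into the app origin; refusing to run user code");
  }
  return configuredRuntimeOrigin;
}
```

Throwing instead of falling back is the whole point: a loud failure on a misconfiguration is cheaper than a quiet boundary collapse in production.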

The biggest lesson I’ve taken from all of this is that `allow-same-origin` is one of those things that only makes sense when discussed as part of a full architecture. On its own, it’s too easy to talk about it as either obviously reckless or secretly fine. I don’t think either of those takes is very useful. What matters is whether the runtime is genuinely cross-origin, whether the parent/child bridge is strict, whether capabilities are mediated, and whether your system avoids quietly reconnecting the isolated runtime back into privileged application state.

That’s why I don’t think the right rule is “always do this” or “never do this.” The right rule is narrower and less catchy: `allow-same-origin` can only be used responsibly when the rest of the boundary is doing real work. Without that, it’s a shortcut into trouble. With it, it becomes one tool inside a much more deliberate runtime model.

I built Vibecodr because I think code should be more alive on the web. I want people to be able to publish little experiments, tiny apps, visual toys, and weird browser-native creations in a way that feels social and immediate. But if you want that kind of experience to be real, then the execution model has to be real too. And once you accept real execution, you also have to accept the responsibility of building a real trust boundary around it.

That’s what led me here. Not a desire to be reckless, and not a desire to be clever for its own sake, but a desire to make something expressive without hand-waving away the risks. If you’re building in this territory too, I’d genuinely love to hear how you’re thinking about iframe isolation, runtime messaging, and capability boundaries.

And if you’re curious what this kind of platform looks like in practice, that’s what I’m building at vibecodr.space.
