How to build a plugin system on the web and also sleep well at night | Hacker News

This is one of the cleverest things I’ve seen in JS in a while.

In a nutshell, they use a same-origin iframe to ensure the plugin gets its own copy of globals (so it can’t mess up the globals your app uses), coupled with a proxy object that whitelists certain globals for the plugin to use, along with certain variables from your app.

Really rather clever, although browser vendors should consider an API for something like this, as it’s becoming such a common use case.

The Realms polyfill is a polyfill for an actual TC39 JS proposal [0]. It’s currently at stage 2. If the proposal gets accepted, you will not need the polyfill. It would also work with any JS embedding (browsers, Node, etc.) as it would be baked into the language.


You would still need the polyfill for quite a while. Just not forever.

Google Gadgets (on the iGoogle custom homepage) used a similar scheme for exposing a select API to untrusted JS running in frames. I was mostly exposed to it because Mapplets used the same backend to allow user widgets on Google Maps. The proxy Maps API objects (used to spawn map markers and so on) were similar in API to their non-proxy counterparts, though obviously every interaction with the other-frame content became asynchronous, and that led to a bunch of interesting small differences that could trip you up if you didn’t appreciate the reality of all your data being copied back and forth across that boundary.

that was an entire startup category a decade ago that raised a bunch of VC – Netvibes and Pageflakes come to mind

Interesting, but who is responsible for the compilation? The plugin developer? The app developer? The realms approach feels a bit cleaner to me… I would think it’s also easier to debug than trying to debug compiled code.

The host page must control the compilation to ensure the safety of its data. The compiled code is still JavaScript, just without any access to variables that the host does not want the embedded script to see.
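A toy sketch of the idea (real rewriters like Caja do full source analysis and validation, not just this): the simplest form of “compiling away” access is to shadow names the host wants hidden. The function name and the blocked list here are made up for illustration:

```javascript
// Toy illustration only -- real compilers like Caja do full source
// rewriting and validation. Here we merely shadow a few names so the
// embedded code can't reach them by identifier. NOTE: this alone is
// trivially escapable (e.g. via the Function constructor itself);
// it's shown only to illustrate the shadowing idea.
function compileUntrusted(source) {
  const blocked = ['process', 'require', 'globalThis'];
  // Each blocked name becomes a parameter left undefined at call time,
  // so the embedded source sees undefined instead of the host's value.
  return new Function(...blocked, `"use strict"; return (${source});`);
}

const run = compileUntrusted('typeof process');
console.log(run());
```

The output of the “compiler” is still plain JavaScript, which matches the comment above: the embedded script simply has no way to name the variables the host hid.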

Well, it’s an option, but I would probably just go with the iframe approach as it seems like less hassle + cleaner!

It has less bloat if you use lots of embedded scripts. My point is that this problem has been solved before in an even cleverer way and was used by hundreds of engineers in MySpace’s heyday, so I’m surprised the authors had not considered it.

We did! But while the approach was perfectly plausible in theory, we didn’t feel good about it.

We didn’t find anybody currently using this approach in production. Projects like Caja seemed unmaintained. So we’d have to reason about the security properties ourselves, which is hard because those approaches are more blacklist-centric (remove unsafe JS features) than whitelist-centric. They also did more than we needed, which increases the attack surface.

There are some papers on the topic that formalize JavaScript in order to write proofs about it, but they’re quite old, and some newer JavaScript features like async/await could definitely invalidate some of the assumptions behind these proofs.

While I don’t know the exact history, the Realms work does derive from Caja, it’s just the latest in this line of evolution.

The with(proxy) pattern is super clever. It looks like that was created by someone else. Kudos to whoever came up with it. It’s a really nice hack.

wow, we are literally working on this problem right now, and I made an offhand comment that it’d be nice if Figma shared how they did their plugins. thank you for this!

edit: having now read the article, this is amazing, lots of great insights here. One question to the author if you are reading: it seems like it would be a worthy idea to open source the 500-line, security-sensitive Realm-shim interface. Selfishly, we would use it, but also, we (and surely others) would add eyeballs to it to ensure it’s correct. Since it’s a small slice of the system, and agnostic to the product itself, it seems unlikely to be part of any kind of technical competitive advantage. Any plans to do so?

Hey, Greg. Former Mozillian here.

Mozilla’s very own Allen Wirfs-Brock started work on this sort of thing
back in 2011 or so, called jsmirrors. The idea at the time was to
inform the design of some reflection/sandboxing APIs that might make
their way into the TC39 spec, but nothing really came of it.

Several years later I had a use case, only with security being the
foremost concern, so I did a bunch of redesign and implementation work
on that, including bugfixes and adding good handling of primitives. So
there’s already code available (around 1100 lines here) for the kind of
membrane described in the Figma post.

(At least it’s halfway there; Allen’s original work let you poke remote
objects by way of serializing to JSON, and I neglected that area since
it wasn’t useful to my immediate use case and would have slowed me
down – it sounds like you might need to stick something like that back
in. For my own use as a consumer, I just ended up instead leaning on
implementation-dependent details that I knew I could rely on from the
way SpiderMonkey was hosted in Gecko.)

This was born in Mozilla, so it’s already MPL2, and there wouldn’t be
any licensing issues.

Caveat: this only ever targeted ES5, so changes for new JS features like
proxies, symbols, and other ES6+ stuff almost definitely violate some
invariants, but I couldn’t tell you offhand whether that could manifest
as security issues. The tests still pass.

Poke at it if you want:

    cd /keybase/public/crussell/projects/jsmirrors && npm test

… or double-click tests/ and point it to the jsmirrors/
directory if you aren’t comfortable giving unfettered access to
arbitrary pieces of code you find on the Internet (and you shouldn’t be).

I think he’s asking about the layer we built on top of the shim to copy objects in and out of it. It’s open-sourceable, we can consider it.

think about this for a second: your competitor is asking to copy your work.

If I were you, I wouldn’t assume good intentions, especially considering you are just a startup and in a race for market share.

If it were up to me, I would set a date a year or two in the future and then open source it then, but only AFTER you have a large enough lead against your competitors in the market.

There is no reason that I can see to open source the secret sauce of your entire product.

we’re not competing with Figma (though we are happy customers), and we already open source everything we do – we’re Mozilla.

Even if they were a competitor, it’s still something worth sharing. I wish more companies shared technology that made software more secure.

We’re kind of all in this together. Security shouldn’t be a feature or a competitive advantage; it should be a standard practice that all developers follow and participate in.

Everyone benefits when software is more secure.

Caja was started at Google under Mark S. Miller, and it’s the predecessor to Agoric’s current sandboxing work, which is what Figma is using here.

Interesting. I guess what is missing from the sandbox described in the article is the HTML/CSS component.

I know I’m just piling on (positively), but this was such an excellent post. Honestly, I think my biggest reaction after reading this was how amazing the engineering culture must be at this company: working at a startup, with a relatively small team, but still having the luxury of all that time to try out multiple different approaches, get feedback, and can the ones that didn’t work without making it feel like a “failure” in any way.

Major, major kudos. This is how engineering should be done.

I really like Figma’s engineering blog. I find that they do a great job introducing the concepts you need in order to understand the details of their implementation. I’m always learning something new when I read an entry.

This is the first time I’ve heard of Realms API or QuickJS, will need to keep those in mind if I ever need to write a plugin system.

Thank you very much! This is very helpful for me. I’m making , and it also needs to run users’ code. I thought an iframe was the only viable way to run third parties’ untrusted code; I had never heard of the Realms shim. I will look into it!

Thank you for sharing your project, Epiphany.

The introduction article [0] does a great job of explaining its motivation and purpose. I found myself nodding along to the points made, and the whole concept of a social publishing platform for interactive content.


Sure, here are some first impressions from someone who has no real experience with Jupyter, Observable, etc. 🙂

I’m really amazed at the seamless reading and editing experience. This is very well-made, I can see it must have been quite an effort to make it feel so effortless.

The ability for a code block to provide a “mini UI” is perfect. A reader can tweak controls and see immediate visual feedback, in the form of graphs or algorithms with adjustable variables.

One control I missed was a stop button – for the second example, I wanted to pause the generation of the 3D city/terrains, to continue reading.

I liked the dotted blue vertical lines to indicate “block focus”, to see what it contains when I click edit.

While dragging a block, I had some difficulty crossing over another block which was long – in fact, I couldn’t swap them, since dragging down didn’t trigger the scroller to start. From what I know, this relates to the “center of gravity” during drag, whether the dragged block goes under another one. As for the scroller, maybe it’s using a range that needs to be bigger, so that when I drag a block “below” the bottom of the screen, it starts scrolling.

Anyway, lots of fun and impressive work! This looks very useful for educational purposes. You’re really hitting a sweet spot with the concept – (just iterating on this phrase to get my head around it), a social publishing platform for interactive content.

Great execution, and I’m looking forward to seeing it grow.

These look awesome! Great work on the reading experience as well as the code experience, something Jupyter notebooks lack.

Yeah, this technology looks great. I’m working on which will allow users to import JavaScript/TypeScript libraries and use their exported functions in flow charts.

I’m pretty early stage but I got a desktop app running last night. I’ve got some documentation work to do but I have a PoC of Devev as a webapp which I’d like to deploy soon. I haven’t resolved all of the security issues yet so this article is a goldmine for me.


There is a way to avoid some of the pain discussed in the article with iframes as well, and it uses techniques from domain-driven design as applied to microservice architectures. So in this analogy the iframes are your “browser microservices” and the main app is also a sort of microservice and they all have to communicate with each other.

The basic idea there is that most microservice architectures are actually subtly monolithic, because they have direct communication via The Database. Basically, whenever you have a shared database you have a form of tight coupling, which defeats the point of microservices. So in DDD, you deal with communication difficulties by creating Bounded Contexts where a word means one given thing; applying that here, with the notion that “meaning” is controlled by The Database, you want to transition to “miniliths” where each set of services has its own local database.

Then your two miniliths talk to each other by passing messages back and forth; generally you want these messages to be Events (“this happened over here and I thought you should know about it, but I don’t want a response”) rather than Requests (which invite responses, and then you have to ask questions about what happens if the response is not what you expected or was never received – what happened in the middle?). You don’t have to go the whole way to an Event Sourced architecture (where your model is entirely determined by a sequence of received Events which have been stored in a database and can be used to “rehydrate” that model from scratch at any given time) to get about 80% of the value of the message-passing.

So translated to this context, basically what you have is a model of your system (possibly simplified) that you keep inside of the iframe and a model you keep in the app; the message bus communicates changes between the two but it seldom needs to serialize the whole structure at any given time. You allow plugins to interact with the in-iframe-model and it sends events “hey this happened” to the outside world, which has to respond to those events by updating its own data model. But fundamentally you have these two separate data models for the thing and they are being held together by a promise of eventual consistency through message-passing the diffs to the structure.
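The two-model, event-passing setup described above can be sketched in a few lines. Here a trivial in-memory bus stands in for `window.postMessage` between the app and the iframe; all names (`makeBus`, `shape-added`, `pluginAddShape`) are made up for illustration:

```javascript
// A trivial in-memory bus standing in for postMessage across the
// iframe boundary. Each side keeps its own model; only diffs travel.
function makeBus() {
  const handlers = [];
  return {
    subscribe: (fn) => handlers.push(fn),
    publish: (event) => handlers.forEach((fn) => fn(event)),
  };
}

const bus = makeBus();

// Host-side model: updated only by reacting to events.
const hostModel = { shapes: [] };
bus.subscribe((event) => {
  if (event.type === 'shape-added') hostModel.shapes.push(event.shape);
});

// Iframe-side model: the plugin mutates this, then announces the diff
// as an Event ("this happened") rather than a Request.
const pluginModel = { shapes: [] };
function pluginAddShape(shape) {
  pluginModel.shapes.push(shape);
  bus.publish({ type: 'shape-added', shape });
}

pluginAddShape({ kind: 'rect', w: 10, h: 20 });
console.log(hostModel.shapes.length);
```

The host never serializes the whole plugin model; it stays eventually consistent by applying the diffs as they arrive.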

Really nice explanation of how domain-driven design can inform the architecture of browser-based plugins/microservices/miniapps.

It makes sense to separate concerns by passing messages between isolated layers, and I can see how it could apply to other domains similarly. To summarize for my own understanding:

– “Minilith” (love the term!) with a model of your system (possibly simplified) that you keep inside of the iframe

– App with its own model

– The message bus communicates changes between the two, but it seldom needs to serialize the whole structure at any given time. Eventual consistency through message-passing the diffs

For some applications where I want to give users scripting capability, I’ve been thinking of a simplified DSL with its own “VM”, passing some isolated objects from the host app to render results. The architecture you described, of “miniliths” with their own state (or in-memory database) passing messages to the host (or to each other) sounds like a sane and flexible approach.

> Basically whenever you have a shared database you have a form of tight coupling which defeats the point of microservices.

With all the different interpretations of what is a microservice, a common one is that having a shared database makes it not a microservices architecture. But yeah, people call them that regardless.

Your product’s concept looks great. I’ve used UE4 Blueprints for high-level prototyping while dropping down to C++ for stuff that needs tuned performance – it’s an insanely powerful combination. The web could use something like that.

Related work by the AMP team is their worker-dom project:

The gist is to mirror a subset of DOM APIs in workers and project changes back out to the main page.

As far as I know a few companies have tried similar methods, but most write proprietary APIs, rather than using the DOM.

Still in development but the examples are promising.
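The mirroring idea in the comment above can be sketched as a fake element that records mutations for the main thread to replay; the helper name and mutation shape here are made up, not worker-dom’s actual API:

```javascript
// Sketch of the worker-dom idea: worker-side code sees a tiny fake DOM
// that records mutations; the main page would replay them on real nodes.
function makeRecordingElement(tag, mutations) {
  return {
    tag,
    setAttribute(name, value) {
      mutations.push({ op: 'setAttribute', tag, name, value });
    },
    set textContent(text) {
      mutations.push({ op: 'text', tag, text });
    },
  };
}

const mutations = []; // in a real setup these would be postMessage'd out
const el = makeRecordingElement('div', mutations);
el.setAttribute('class', 'plugin-box');
el.textContent = 'hello';

console.log(mutations.length);
```

Because the worker only ever emits a mutation log, the main page stays in control of what actually touches the real DOM.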

Has anyone tried running the solution? It doesn’t seem to work…
The code below results in the console logging the real document object, and the code hits a ReferenceError when trying to log the ‘a’ variable.
Calls to p.whateverPropYouMakeUp result in the log ‘get for target: …’

    const proxyHandler = {
      get(target, name){
        console.log(`get for target: `, target, name);
        return 'tacos';
      }
    };

    const p = new Proxy({}, proxyHandler);

    with (p){
      console.log(`document with proxy: `, document);
      console.log(`access random property: `, a);
    }
Pardon my ignorance, but what if my JS was “for (;;) {}”? Can this handle heavy-CPU plugins? Maybe in a service/web worker? Part of the Realm API being a good use case for plugins I assume would include this kind of isolation but I admit to not having looked in detail.

Excellent question! Being able to interrupt plugins would be nice, but it was a requirement we had to drop. Instead, we show UI that makes it clear that a plugin froze rather than Figma, in the hope that the user can understand.

WebWorkers have the same issues as iframes, in that they are basically like separate processes where data needs to be copied via message-passing. But if we had gone with the iframe approach, we’d consider sticking the plugin code inside a WebWorker inside the iframe.

A web worker would come with the downsides of their first iframe-based solution. (Web workers are basically iframes that have no visual portion and are guaranteed to run in a separate thread.)

This post was such a ride.

At one point they found a fucking legitimate reason to compile a JavaScript interpreter to WebAssembly in order to run JavaScript!

Plugins. The security nightmare of desktop applications, and the main reason the Google and Apple app stores are banning developer accounts left and right.
Like Schneier always said, “you can always create security that’s unbreakable to you, but smarter ones will find holes in it.” I’m curious how this one will hold up long term – say, a year from now.

I think the nice thing about their approach is that it’s built on top of the same sandboxing that underpins the very fundamental primitives of the web. If this approach is broken, the same-origin policy of the web is broken.

Compared to “native” approaches, where every application needs to implement their own solution, this seems far more durable (and easily fixed).

iframe is already broken. If I get you to hold my code inside an iframe, and what I put there is ActiveX malware, your users are going to blame you for their stolen accounts / BTC-mining worms. Windows is still a good target for malware creators despite the rise of mobile consumers.

I was actually talking to their lead developer for the plugin architecture on Tuesday, and he said that it’s likely very easy to allow this internally for org-created plugins, or self-uploaded ones (similar to developer mode for Chrome extensions), but it’s an issue having it available in the wider market. Security concerns are a big one: as mentioned in this post, since this runs in the browser you can easily make requests as the user, and scripts that run on non-user-directed actions can be sneakily dangerous. The other thing is they want to make sure performance stays high, as slowdowns caused by plugins are often attributed to the program itself rather than the plugin.

That said, I’m totally fine if they isolate any event-based plugin as something you have to upload yourself, or that is bound strictly to your org using the app. As long as there’s some way to do it, designers will find a way, but without it I don’t see it being nearly as flexible as Sketch.

That’s a very interesting topic! Does anyone know of other resources (blog posts or books) talking about how to build such extensibility in a SaaS app?

Obviously, there is lots of inspiration to be drawn from apps we use every day, such as GitHub, JIRA, etc., but this behind-the-scenes view is very informative.

(Author here)

There is material on the internet that is relevant to the topic, but it’s quite hard to piece together. After all, there are only a handful of players right now who need to build an API that isn’t a REST API.

Among big names, I can think of Zendesk (uses iframes), Coda (runs third-party code on their servers IIRC, isolated via server mechanisms), and Salesforce (not sure exactly what they do, but I think they also use Realms as a component of their system).

There are a couple of academic papers on JavaScript isolation, but you’ll have to do a lot of work to figure out how relevant they are. Be sure to check the publication date.

The folks at Agoric are probably the leading experts actively working on untrusted code isolation in a browser environment right now. I would follow them if you want to hear about the latest new tech:

We’re investigating this now for extensibility of Hubs, the 3D avatar-based communication tool we’re developing at Mozilla. Our thinking + diligence so far line up entirely with what Figma outlined here – however, we are still in the planning stages, not building anything yet, so this post was greatly appreciated in revealing a lot of insights we were missing!

I use iframes with my software to separate user accounts and other “SaaS boilerplate” into one web app that then proxies your web app and uses an iframe’s srcdoc to serve the content within a template regardless of which server it came from. Using the srcdoc attribute lets you skip an additional request and mask the server’s address, but it comes with some additional API restrictions like not having document.location.

I believe this approach was pioneered by Facebook some years ago in the earliest incarnation of their app platforms. There was no iframe sandboxing so they had an intermediate contrived language called “FBML” which compiled to a subset of HTML they allowed. That was the platform where Zynga made their fortune on Farmville originally, just before the iPhone.

An online design tool. Think of it as sketch/photoshop/illustrator in the browser.

Thanks. For a single-person or maybe two-person team working in the same office, would you recommend Figma or Sketch?

Both would work well assuming you are using them for UI design. Note that Sketch is Mac only though. Although Figma runs in the browser, they also have Electron-based desktop apps for Mac and Windows.

Purchasing Sketch gives you one year of updates. If you decide not to renew the subscription, the app is still yours to keep and run – but you won’t get any new updates. Figma follows the SaaS model of subscription, but they do have a free tier.

Sketch also has a lot more tutorials and plugins than Figma at the moment, in case that is important.

Our designers use Figma, and I’ve had to interact with it a fair bit as a developer. It’s great. 🙂 I can’t speak to feature parity with sketch (I’ve used both though), but I really like the cross-platform nature of Figma, and its performance has been great for the projects I’ve worked with.

I have used Sketch a lot and can recommend it quite highly. I’m not a big fan of in-browser tools so it’s not a knock on Figma at all, I haven’t used it beyond a very short demo.

edit: I’ve used Sketch from a design and a front-end dev’s perspective, and it was nice to use from both.

I don’t have a huge amount of experience, but Figma is a real pleasure to use. Performance-wise it was great compared with Sketch when I tried it. YMMV

It is a vastly superior alternative to Sketch or Illustrator.

Used to work in an ad office making ads with Freehand and Illustrator and Photoshop. Figma is a wet dream compared to tools of that day. Could not have even dreamed of something like it back then.

This is hyperbole for sure – there are some amazing things that Figma does and I love using it, but it’s still missing a lot of things that Sketch and Illustrator have. On the plugin side, which is what this article was about, Sketch is leagues ahead of Figma. File system access and native app functionality mean that plugins can go beyond the constraints of a browser. Even within the web’s constraints, though, Sketch is still leading in the support it has. An example of this is an events API – Figma doesn’t have any way to listen for insertions of components and react accordingly. This vastly limits the scope of what Figma plugins can do – they can only be proactive, not reactive. Often we want plugins that just work seamlessly, and right now Figma doesn’t support that.

Illustrator, on the other hand, is still vastly better at what it’s named after – illustration. Figma has great basic vector functionality that will cover 99% of your design needs, but little around illustration needs.

I’m really excited for Figma and what comes out of it, but it still has a long way to go to catch up. I think it has a better foundation though.

I was only really sold on Figma because it could actually read Sketch files. That was an absolute blessing when I worked on a Windows machine. It’s improved loads since then but ultimately I’d still take a native app over a browser-based one.

With Figma’s deep use of WebAssembly and canvas, it’s much closer to a native app than a web app. Certainly it’s much less webappy than most.

I would not call it vastly superior to Sketch. I still find sketch to fit my workflow better.

Third-party and/or untrusted javascript is obviously a massive security and privacy hole if you don’t put it in an iframe.