DEV Community

Alastair Coote
To JSC or not to JSC: running JavaScript on iOS in 2020

Did you know that iOS comes with a JavaScript engine bundled into the OS? JavaScriptCore (JSC) is the same engine used inside Safari, wrapped in a very interesting API that lets you mix JavaScript and native values together. Most importantly, when it was introduced in iOS 7 it was also the only way that Apple allowed you to execute downloaded code (outside of a webview).
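To give a flavour of that API, here's a minimal sketch of mixing native and JS values with JSC from Swift. The `greet` function and its strings are illustrative names of my own, not anything from a real codebase:

```swift
import JavaScriptCore

let context = JSContext()!

// Expose a native Swift closure to JavaScript.
// @convention(block) lets it bridge to an Objective-C block that JSC understands.
let greet: @convention(block) (String) -> String = { name in
    return "Hello, \(name)!"
}
context.setObject(greet, forKeyedSubscript: "greet" as NSString)

// Call it from JavaScript, then pull the result back out as a native value:
let result = context.evaluateScript("greet('iOS')")
print(result?.toString() ?? "")  // "Hello, iOS!"
```

Values cross the boundary as `JSValue` objects, which can be converted back to native strings, numbers, arrays and so on.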

They loosened that restriction in 2017 to allow other interpreters. When they did, a few prominent JavaScript-centric libraries switched JS engines. So why did they ditch such a fully featured, OS-bundled JS engine? If you're using JavaScript in your app, should you? I recently went down this rabbit hole myself and thought I ought to document that process in case it'll help the next poor soul.

Why would I even want to run JavaScript in my native app?

To try to preempt any outrage: you don't have to! No-one is forcing you to! But the App Store release cycle is a fickle, time-consuming thing, and sometimes you want to be able to act outside of it. You can often do that with JSON configuration files, but sometimes you need more advanced logic than JSON can provide.

Speaking for myself: I've spent the majority of my career working in the news industry, and we often need to move more quickly than the App Store approval process will let us. Usually that means using a webview, but I'm very interested in scenarios where we might want to move beyond that.

JavaScriptCore: fast, except when it isn't

JSC is bundled with macOS too, and I was curious to benchmark it to see how it compares to V8, which powers Node and Chrome. To my surprise, when running V8's Web Tooling Benchmark JavaScriptCore performed slightly better than Node 14.

Node 14 vs JavaScriptCore results

(The V8 Web Tooling Benchmark is a few years old and isn't really all that indicative of real-world code; plus, this is running on a powerful MacBook rather than an iPhone. So don't read anything into the raw numbers, and don't read too much into the comparisons. The point here is that the two engines are "ballpark similar".)

Test                Node 14         JavaScriptCore
acorn               12.09 runs/s    12.78 runs/s
babel               9.06 runs/s     10.07 runs/s
babel-minify        13.08 runs/s    16.99 runs/s
babylon             13.76 runs/s    14.64 runs/s
buble               7.97 runs/s     6.64 runs/s
chai                18.48 runs/s    14.84 runs/s
coffeescript        9.38 runs/s     7.32 runs/s
espree              3.99 runs/s     8.14 runs/s
esprima             10.50 runs/s    12.32 runs/s
jshint              12.29 runs/s    6.75 runs/s
lebab               14.95 runs/s    10.82 runs/s
postcss             8.44 runs/s     10.29 runs/s
prepack             9.21 runs/s     11.73 runs/s
prettier            9.36 runs/s     11.51 runs/s
source-map          12.68 runs/s    13.91 runs/s
terser              21.80 runs/s    22.96 runs/s
typescript          9.92 runs/s     14.04 runs/s
uglify-js           6.60 runs/s     4.83 runs/s
Geometric mean      10.58 runs/s    10.95 runs/s

But there's an important wrinkle here: JavaScriptCore on macOS (and inside web browsers on iOS) uses a just-in-time (JIT) compiler to boost performance. For security reasons, JIT is disabled when we use JSC inside iOS apps. Running that same benchmark with --useJIT=false is sobering: a 7.5x slowdown.

with JIT vs without JIT results
Test                jsc             jsc --useJIT=false
acorn               12.78 runs/s    0.82 runs/s
babel               10.07 runs/s    1.44 runs/s
babel-minify        16.99 runs/s    1.80 runs/s
babylon             14.64 runs/s    1.33 runs/s
buble               6.64 runs/s     1.94 runs/s
chai                14.84 runs/s    3.37 runs/s
coffeescript        7.32 runs/s     0.10 runs/s
espree              8.14 runs/s     0.62 runs/s
esprima             12.32 runs/s    1.34 runs/s
jshint              6.75 runs/s     2.15 runs/s
lebab               10.82 runs/s    1.96 runs/s
postcss             10.29 runs/s    1.70 runs/s
prepack             11.73 runs/s    1.91 runs/s
prettier            11.51 runs/s    1.64 runs/s
source-map          13.91 runs/s    2.02 runs/s
terser              22.96 runs/s    5.11 runs/s
typescript          14.04 runs/s    2.24 runs/s
uglify-js           4.83 runs/s     1.28 runs/s
Geometric mean      10.95 runs/s    1.47 runs/s

So, that's annoying. Motivation to switch engines? Perhaps. No JS engine is allowed to use JIT on iOS, but V8 only sees a 3.6x slowdown when JIT is disabled.

why the hell not: node vs node --jitless
Test                node            node --jitless
acorn               12.09 runs/s    1.28 runs/s
babel               9.06 runs/s     2.42 runs/s
babel-minify        13.08 runs/s    3.71 runs/s
babylon             13.76 runs/s    1.76 runs/s
buble               7.97 runs/s     3.14 runs/s
chai                18.48 runs/s    4.95 runs/s
coffeescript        9.38 runs/s     4.00 runs/s
espree              3.99 runs/s     1.25 runs/s
esprima             10.50 runs/s    1.43 runs/s
jshint              12.29 runs/s    3.84 runs/s
lebab               14.95 runs/s    3.41 runs/s
postcss             8.44 runs/s     2.73 runs/s
prepack             9.21 runs/s     3.09 runs/s
prettier            9.36 runs/s     3.12 runs/s
source-map          12.68 runs/s    3.02 runs/s
terser              21.80 runs/s    9.29 runs/s
typescript          9.92 runs/s     3.27 runs/s
uglify-js           6.60 runs/s     2.73 runs/s
Geometric mean      10.58 runs/s    2.89 runs/s
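For reference, here's roughly how these four configurations are invoked. This is a sketch: it assumes the benchmark's bundled entry point is dist/cli.js, and that a jsc binary is on your PATH (on macOS, jsc ships inside the JavaScriptCore framework rather than in /usr/bin).

```shell
# V8 (Node), with and without the JIT:
node dist/cli.js
node --jitless dist/cli.js

# JavaScriptCore, with and without the JIT:
jsc dist/cli.js
jsc --useJIT=false dist/cli.js
```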

But that's not all: V8 has an extra trick up its sleeve that JavaScriptCore doesn't.

Bytecode caching

There's another JavaScript engine metric we haven't touched on yet: startup time. Depending on what you're doing with your JS engine this might matter a lot: if your UI is controlled by JS you're going to have to wait for engine startup before you can show anything to the user.

This isn't a new problem for JS engines: startup time is a big deal in web browsers too. JavaScriptCore, V8 and SpiderMonkey all generate "bytecode" when parsing your JavaScript code. It's kind of a midway point between the text source and the machine code that actually gets executed, and it can be loaded a lot more quickly than the source can be parsed. So the engines cache that bytecode rather than re-parse the source every time, cutting startup time on return visits.

Unfortunately, JavaScriptCore's external API does not expose that functionality. The V8 API does. NativeScript (a framework for creating cross-platform UIs in JavaScript) cited this as one of its big reasons for switching from JSC to V8. Facebook, because it's Facebook, created an entirely new JavaScript engine named Hermes, with generating bytecode at app compile time as a key goal. (Hermes is pitched as an Android library, but an eagle-eyed dev seems to have found proof that it's being used on iOS too.)

To test out how much of a difference it can make, I took another JS engine, QuickJS, for a spin (more on this engine later). The Web Tooling Benchmark we ran above is an insanely big 32MB .js file. It takes QuickJS 2.9 seconds to parse the original text, but loading bytecode generated from that text takes only 150ms. I certainly hope the JavaScript you want to run in your app isn't 32MB, but suffice to say, the performance benefit is notable.
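The bytecode round-trip in QuickJS looks roughly like this. A sketch against the QuickJS C API, assuming you're building against quickjs.h; error handling is omitted for brevity:

```c
#include <string.h>
#include "quickjs.h"

int main(void) {
    JSRuntime *rt = JS_NewRuntime();
    JSContext *ctx = JS_NewContext(rt);
    const char *src = "1 + 2";

    /* Parse and compile only (no execution), yielding a function object. */
    JSValue fn = JS_Eval(ctx, src, strlen(src), "<cache>",
                         JS_EVAL_FLAG_COMPILE_ONLY);

    /* Serialize the compiled function to a bytecode buffer; this is
       what you'd write to disk at build or install time. */
    size_t len;
    uint8_t *buf = JS_WriteObject(ctx, &len, fn, JS_WRITE_OBJ_BYTECODE);
    JS_FreeValue(ctx, fn);

    /* Later (e.g. at app startup): deserialize and run, skipping the parser. */
    JSValue loaded = JS_ReadObject(ctx, buf, len, JS_READ_OBJ_BYTECODE);
    JSValue result = JS_EvalFunction(ctx, loaded); /* consumes `loaded` */

    JS_FreeValue(ctx, result);
    js_free(ctx, buf);
    JS_FreeContext(ctx);
    JS_FreeRuntime(rt);
    return 0;
}
```

The 150ms figure above is the cost of the JS_ReadObject + JS_EvalFunction path, versus 2.9 seconds for JS_Eval on the raw source.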

So we should all use V8, right? It's faster on both counts!

No. Please no. Why not? Well, because the library is 7.5MB. With no way to share third-party libraries between apps, that means adding sizable bulk to your app. JavaScriptCore is a system library, so it adds 0MB to any app that uses it.

The part where I admit I have no good answer

As is often the case, the answer to "what JS engine should I use on iOS?" is "it depends". I don't think the NativeScript team are necessarily wrong in choosing to bundle V8 with apps. Since JavaScript controls the entire app, startup time is a huge priority for them. And maybe 7.5MB isn't that big of a deal when it's something that powers everything.

But I made a different decision. I'm creating a cross-platform JavaScript "worker" library called Esperanto that wraps a JS engine with some utility and lifecycle APIs that aim to make integrating JS into your existing app safe and simple. It's still extremely early days (and it's a side project, so patience is required here) but I want your cost/benefit calculation in using Esperanto to be a simple one. If you only want to use JavaScript in one tiny corner of your app then demanding 7.5MB of extra code bulk is absurd. So I'm sticking with JavaScriptCore. Just because there are faster alternatives doesn't mean JSC is slow. If it performs well enough for my needs then great. That said, I do have a possible escape hatch.

An escape hatch: QuickJS

Esperanto runs on Android too, and Android has no built-in JS engine (despite shipping an instance of V8 inside the System WebView package; c'mon, Google!). And bundling JavaScriptCore on Android would take up about as much space as V8 does on iOS. So for Android I've turned to an engine called QuickJS, which manages to pack an entire engine (complete with ES2020 compatibility) into ~200KB of code. Not as good as 0KB, but close.

Despite the name, it isn't objectively quick (I've benchmarked it at slightly less than half the performance of JIT-less JSC), but in terms of bang for kilobyte buck it's pretty unbeatable. And it lets you cache bytecode. Which will end up mattering more for Esperanto: startup time or overall performance? I'm not sure. But given that I have to develop a QuickJS version for Android anyway, I can flip a switch and use QuickJS on iOS in the future if I want to. Or maybe I'll develop a feature letting you choose whether you want to run QuickJS, V8 or JSC depending on how you're using the library. We'll see.

Some general advice on using JS inside your apps

Engine choice aside, I've been experimenting in this area for a while and have some advice that might be useful:

  • Be careful with those dependencies. This applies to JavaScript development in general, but NPM is a blessing and a curse. There's an incredible wealth of libraries out there at your fingertips. But JavaScriptCore isn't a browser, nor is it Node. Don't bulk it out with multi-megabyte dependencies. Use Bundlephobia a lot.
  • Defer to native where you can. JavaScriptCore's JSExport functionality is a really awesome way of seamlessly blending Objective-C/Swift classes with your JS code. Use it. If you know you're going to be performing a particularly intensive operation (say, cryptography), don't NPM-install a JS library to do it. Write a native function and expose it in the JS environment. As much as you possibly can, think of your JS as the glue between different native APIs rather than its own world. (This is the principle I've applied in Esperanto, where the cross-platform APIs are all implemented in Rust.)
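The JSExport pattern mentioned above looks roughly like this. A hedged sketch: the NativeCrypto class and its method are illustrative names of my own, and the body is a placeholder rather than real cryptography:

```swift
import JavaScriptCore

// Declare which native members should be visible to JavaScript.
@objc protocol NativeCryptoExports: JSExport {
    func sha256(_ input: String) -> String
}

@objc class NativeCrypto: NSObject, NativeCryptoExports {
    func sha256(_ input: String) -> String {
        // Real code would call CommonCrypto here; placeholder for the sketch.
        return String(input.reversed())
    }
}

let context = JSContext()!
context.setObject(NativeCrypto(), forKeyedSubscript: "crypto" as NSString)

// JS now sees the native object; the heavy lifting stays in Swift:
context.evaluateScript("var hash = crypto.sha256('hello');")
```

Only members declared in the JSExport-conforming protocol are exposed to the JS side, which keeps the boundary explicit.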

Footnote: reasons to be mildly optimistic

The JavaScriptCore team have clearly recognised the lack of bytecode cache support in the external API as a weakness: they've created an API that will let you specify a directory to store cached bytecode in, and you can see the history of the cache being implemented. But open-source JavaScriptCore is not iOS JavaScriptCore: Apple chooses what to bring in and when. It's too late for any of this to make it into iOS 14, so fingers crossed we'll get a nice surprise in 2021.

Top comments (2)

graham s. pumpkins

Where did you find a build of V8 for iOS? Or did you build it yourself?
I've integrated JavaScriptCore, Chakra, V8, Duktape and QuickJS in my engine (for Windows, iOS, macOS, Android, Magic Leap and HoloLens, with a mix of engines on different platforms). But V8 has largely fallen by the wayside as building is such a pain, and my libraries end up being 50-150MB, not 7!

Alastair Coote

I never did find a build for iOS. My 7MB figure came from J2V8, which works on Android.

I would imagine that means you could get something similar for iOS, but I'm not any closer to knowing how than you are!