
Using V8 code caching to minimize app load time on Android

December 3, 2015 — by Georgi Atanasov

There are three reasons for the NativeScript framework to exist: native UX, performance, and a cleaner, easier programming model for cross-platform mobile applications.

To achieve excellence in those three areas, a lot of work has gone into the NativeScript framework over the last two years, and we are already pretty happy with its current state. One thing we wanted to improve in recent months was the loading time on Android devices. We spent weeks profiling and optimizing client and Telerik mobile applications, and we managed to cut the loading time on Android by a factor of four.

In this blog post I will shed more light on one of the techniques we used to accomplish this result. We knew Google's V8 JavaScript engine is a great virtual machine: it works flawlessly and is easily programmable. What we didn't know is that it is actually a mighty beast. Let me tell you why.

The idea

Recently I came across a blog post describing a technique the V8 authors call "Code Caching". In a nutshell, V8 can reuse already compiled JavaScript code to boost performance significantly. In other words, we can persist the result of the first-time compilation of a script (yes, V8 hands back all the needed data after compiling), save it to a file, and load the compiled code directly from that file on subsequent application runs.

I did some further research on this feature and found another interesting post in which the author shares the performance improvements code caching brings. As the author's comparison shows, we are talking about a multifold improvement:

[Chart: compile-time comparison with and without code caching (Code_Caching_Hashseed)]

Of course I was eager to run the experiments and integrate the feature into our Android Runtime. It was trivial to implement (yes, V8 is easy to program), and I ran my experiments against an internal Reddit reader app we have. As you may know, NativeScript's cross-platform modules already contain roughly 900 KB of pure JavaScript code, so caching could potentially bring a huge improvement.

The results

[Chart: script compile times with lazy compilation enabled (Script_Compile_Lazy)]

As expected, saving the compilation result to a file for each script adds some overhead to the first run but pays off on subsequent runs. To my surprise, though, while the relative improvement was significant, it was barely perceptible in practice, mainly because the total compilation time was short to begin with.

To lazy or not to lazy

I did further research about other implementations of this feature, just to ensure that I wasn’t doing something wrong. In a GitHub issue about Node.js script loading performance someone mentioned lazy compilation.

V8 is smart enough not to compile a function until it is called for the first time. But that means additional compilation work is deferred to later in the application's lifecycle. Additionally, code caching adds no value for lazily compiled code, since that code was never compiled in the first place. This explains why compiling the NativeScript modules did not take much time: part of the code is simply not processed up front.

Fortunately, V8 has a "--nolazy" startup flag that disables lazy compilation. I enabled it for these measurements so I could more accurately capture the entire time spent in compilation:

[Chart: script compile times with "--nolazy" enabled (Script_Compile_NoLazy)]

Now the results are more in line with what I expected. While we "sacrifice" ~150 ms on the first run, we gain roughly a 10x improvement on subsequent runs. Comparing with the previous chart, compilation takes more time when "--nolazy" is enabled. That additional time does not disappear without the flag, however; it is merely deferred and is paid later in the application's lifecycle, for example while loading the user interface, when V8 has to compile the functions it skipped earlier.

Give it a try

This feature shipped with v1.5.0 of NativeScript, but it is not enabled by default. We decided to let you choose whether to accept a slower first run of the application in exchange for greatly improved subsequent runs. All you need to do is modify the application's package.json (app/package.json) like so:

"android": {
    "v8Flags" : "--nolazy --expose_gc",
    "codeCache" : "true"
}


You may also experiment with the "--nolazy" flag and see what works best for you. Please share your feedback with us on what the default behavior should be: should we leave code caching disabled, or enable it by default instead?

A quick note on the other V8 flag, "--expose_gc": it is currently needed by the NativeScript modules and enables direct calls to V8's garbage collector. Because we are synchronizing two garbage collectors (JavaScript and Java), it is sometimes necessary to manually hint the JavaScript side about memory pressure on the Android side.

What comes next

We are always looking for ways to improve the performance of NativeScript. For instance, in "Improving V8's performance" the author describes another very interesting V8 feature, the custom startup snapshot, which also has the potential to further improve the Android Runtime's loading time. We've been able to make some progress there as well, but I'll save that for a future post. Stay tuned :)