Symphonious

Living in a state of accord.

Modernising Our JavaScript – Why Angular 2 Didn’t Work

At LMAX we value simplicity very highly, and since most of the UI we need for the exchange is fairly straightforward settings and reports, we’ve historically kept away from JavaScript frameworks. Instead we’ve stuck with jQuery and Bootstrap along with some very simple common utilities and patterns we’ve built ourselves. Mostly this has worked very well for us.

Sometimes though we have more complex UIs where things dynamically change or state is more complex. In those cases things start to break down and get very messy. The simplicity of our libraries winds up causing complexity in our code instead of avoiding it. We needed something better.

Some side projects had used Angular and a few people were familiar with it, so we started out trialling Angular 2.0. While it was much better for those complex cases, the framework itself introduced so much complexity and cost that it was unpleasant to work with. We had two main issues:

  1. Build times were too slow
  2. It wasn’t well suited to dropping an Angular 2 component into an existing system, as opposed to having everything live in the Angular 2 world

Build Times

This was the most surprising problem – Angular 2 build times were painfully slow. We found we could build all of the Java parts of the exchange before npm could even finish installing the dependencies for an Angular 2 project – even with all the dependencies in a local cache and using npm 5’s --offline option. We use buck for our build system and it does an excellent job of only building what has changed and caching results, so most of the time we could avoid the long npm install step, but it still needed to run often enough that it was a significant drain on the team’s productivity.

We did evaluate yarn and pnpm, but neither was workable in our particular situation. Both were faster at installing dependencies, but still far too slow.

The lingering question here is whether the npm install was so slow because of the sheer number of dependencies or because something about those dependencies was slow. Anecdotally it seemed like rxjs took forever to install, but other issues led us away from Angular before we fully understood this.

Even when the npm install could be avoided, the actual compile step was still slow enough to be a drain on the team. The projects we were using Angular on were quite new, with a fairly small amount of code. Running through the development server was fast, but a production mode build was slow.

Existing System Integration

The initial projects we used Angular 2 on were completely from scratch, so they could do everything the Angular 2 way. On those projects productivity was excellent and Angular 2 was generally a joy to use. When we tried to build onto our existing systems using Angular 2, things were much less pleasant.

Technically it was possible to build a single component on a page using Angular 2 with other parts of the page using our older approach, but doing so felt fairly unnatural. The Angular 2 way is significantly different to how we had been working, and since Angular 2 provides a full suite of functionality it often felt like we were working against the framework rather than with it. Re-using our existing code within an Angular 2 component felt wrong, so we were being pushed towards duplicating code that worked perfectly well and that we were happy with, just to make it fit “the Angular 2 way”.

If we intended to rewrite all our existing code using Angular 2 that would be fine, but we’re not doing that. We have a heap of functionality that’s already built, works well and is unlikely to need changes for quite some time. It would be a huge waste of time for us to go back and rewrite everything just to use the shiny new tool.

Angular 2 is Still Great

None of this means that Angular 2 has irretrievable faults – it’s actually a pretty great tool to develop with. It just happens to shine most if you’re all-in with Angular 2, and that’s never going to be our situation. I strongly suspect that even the build time issues would disappear if we could approach the build differently, but changing large parts of our build system and the development practices that work with it just doesn’t make sense when we have other options.

I can’t see any reason why a project built with Angular 2 would need or want to migrate away, nor would I rule out Angular 2 for a new project. It’s a pretty great framework: it provides a ton of functionality that you can just run with and has excellent tooling. Just work out early on how your build will fit in and whether it’s going to be too slow.

For us though, Angular 2 didn’t turn out to be the wonderful new world we hoped it would be.

Unit Testing JavaScript Promises with Synchronous Tests

With Promises/A+ spreading through the world of JavaScript at a rapid pace, there’s one little detail that makes them very hard to unit test: any chained actions (via .then()) are only called when the execution stack contains only platform code. That’s a good thing in the real world, but it makes unit testing much more complex because resolving a promise isn’t enough – you also need to empty the execution stack.

The first, and best, way to address these challenges is to take advantage of your test runner’s asynchronous test support. Mocha, for example, allows you to return a promise from your test and waits for it to either resolve, indicating success, or reject, indicating failure:

it('should pass asynchronously', function() {
    // Mocha treats the returned promise as the test result: resolved
    // means pass, rejected means fail.
    return new Promise((resolve, reject) => {
        setTimeout(resolve, 100);
    });
});
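If the code under test hands the promise back to you, assertions just become another link in the chain before you return it. As a sketch (fetchValue here is a hypothetical promise-returning function):

it('should fetch the value', function() {
    // fetchValue is hypothetical – any function returning a promise works.
    return fetchValue().then(value => expect(value).to.be(42));
});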

This works well when you’re testing code that returns the promise to you, so you can chain any assertions you need and return the final promise to the test runner. However, there are often cases where promises are used internally by a component, which this approach can’t reach. For example, a UI component that periodically makes requests to the server to update the data it displays.

Sinon.js makes it easy to stub out the HTTP request using its fake server and the periodic updates using a fake clock, but if promises are used, sinon’s clock.tick() isn’t enough to trigger chained actions. They’ll only execute after your test method returns, and since there’s no reason, and often no way, for the UI component to pass a promise for its updates out of the component, we can’t just depend on the test runner. That’s where promise-mock comes in. It replaces the normal Promise implementation with one that allows your unit test to trigger callbacks at any point.
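To make the problem concrete, a test along these lines gets stuck without something like promise-mock (startPeriodicUpdates and the /data URL are hypothetical stand-ins for the component under test):

const clock = sinon.useFakeTimers();
const server = sinon.fakeServer.create();
server.respondWith('GET', '/data',
        [200, {'Content-Type': 'application/json'}, '{"value": 3}']);

startPeriodicUpdates();  // hypothetical component that polls /data via setTimeout
clock.tick(5000);        // fire the scheduled request
server.respond();        // deliver the canned response
// The component's .then() callbacks still haven't run at this point,
// so asserting on its state here would fail.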

Let’s avoid all the clock and HTTP stubbing by testing this very simple example, which uses a Promise internally:

// asyncThing.js
let value = 0;
module.exports = {
    setValueViaImmediatePromise: function (newValue) {
        return new Promise((resolve, reject) => resolve(newValue))
                .then(result => value = result);
    },
    getValue: function () {
        return value;
    }
};

Our test is then:

const asyncThing = require('./asyncThing');
const PromiseMock = require('promise-mock');
const expect = require('expect.js');
describe('with promise-mock', function() {
    beforeEach(function() {
        PromiseMock.install();
    });
    afterEach(function() {
        PromiseMock.uninstall();
    });
    it('should set value asynchronously and keep internals to itself', function() {
        asyncThing.setValueViaImmediatePromise(3);
        Promise.runAll();
        expect(asyncThing.getValue()).to.be(3);
    });
});

We have a beforeEach and afterEach to install and uninstall the mocked promise; then, when we want the promise callbacks to execute in our test, we simply call Promise.runAll(). In most cases, promise-mock combined with sinon’s fake HTTP server and fake clock is enough to let us write easy-to-follow, synchronous tests that cover asynchronous behaviour.

Keeping our tests synchronous isn’t just about making them easy to read though – it also means we’re in control of how asynchronous callbacks interleave. So we can write tests to check what happens if action A finishes before action B and tests for what happens if it’s the other way around. Lots and lots of bugs hide in those areas.
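As a sketch of what that looks like (component, start and result are hypothetical – the point is that the test decides when each promise resolves):

it('should cope with action B finishing before action A', function() {
    let resolveA, resolveB;
    // Hand the component two promises whose resolution we control.
    component.start(
            new Promise(resolve => resolveA = resolve),
            new Promise(resolve => resolveB = resolve));
    resolveB('b');
    Promise.runAll();  // run B's callbacks first
    resolveA('a');
    Promise.runAll();  // then A's
    expect(component.result()).to.be('a');
});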

PromiseMock.install() Not Working

All that sounds great, but I spent a long time trying to work out why PromiseMock.install() didn’t ever seem to change the Promise implementation. I could see that window.Promise === PromiseMock was true, but without the window prefix I was still getting the original promise implementation (Promise !== PromiseMock).

It turns out that’s because we were using babel’s transform-runtime plugin, which was very helpfully rewriting references to Promise to use babel’s polyfill version without the polyfill needing to pollute the global namespace. The transform-runtime plugin has an option to disable this:

['transform-runtime', {polyfill: false}]
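In a .babelrc that ends up looking something like this (only the plugins entry is shown; the rest of your config stays as it is):

{
    "plugins": [
        ["transform-runtime", {"polyfill": false}]
    ]
}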

With that, promise-mock worked as expected.

Using WebPack with Buck

I’ve been gradually tidying up the build process for UI stuff at LMAX. We had been using a mix of RequireJS and browserify – both pretty sub-optimally configured. Obviously, when you have too many ways of doing things the answer is to introduce another new way, so I’ve converted everything over to webpack.

(Obligatory xkcd 927, “Standards”: there are 14 competing standards; we need to develop one universal standard that covers everyone’s use cases; now there are 15 competing standards…)

Webpack is most often used as part of a node or gulp/grunt build process, but our overall build process is controlled by Buck, so I’ve had to work it into that setup. I was also keen to minimise the amount of code that had to be changed to support the new build process.

The final key requirement, which had almost entirely been missed by our previous UI build attempts, was the ability to easily create reusable UI modules that are shared by some, but not all, projects. Buck shuns the use of repositories in favour of a single source tree with everything in it, so an internal npm repo wasn’t going to fly.

While the exact details are probably pretty specific to our setup, the overall shape of the build is likely to be useful elsewhere. We have separate buck targets (using genrule) for a few different key stages:

Build node_modules

We use npm to install third party dependencies to build the node_modules directory we’ll need. We do this in an offline way by checking in the npm cache, as CI doesn’t have internet access, but it’s pretty unsatisfactory. Checking in the node_modules directory was tried previously, but both svn and git have massive problems with the huge number of files it winds up containing.
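As a rough sketch, the genrule for this stage looks something like the following (the real rule has more plumbing, and the cache configuration is omitted):

genrule(
    name = 'node_modules',
    srcs = ['package.json', 'npm-shrinkwrap.json'],
    out = 'node_modules',
    # Install offline from the checked-in cache and expose the result as
    # this rule's output.
    cmd = 'cp $SRCS $TMP && cd $TMP && npm install --offline && cp -r node_modules $OUT',
)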

yarn has much better offline support and other benefits as well, but its offline support requires a cache, and the cache has every package already expanded, so it winds up with hundreds of thousands of files to check in and deal with. Further investigations are required here…

For our projects that use Angular 2, this is actually the slowest part of our entire build (UI and server side). rxjs seems to be the main source of issues as it takes forever to install. Fortunately we don’t change our third party modules often and we can utilise the shared cache of artefacts so developers don’t wind up building this step locally too often.

Set Up a Workspace and Generate webpack.config.js

We don’t want to have to repeat the same configuration for webpack, typescript, karma etc for every UI thing we build, so our build process generates them for us, tweaking things as needed to account for the small differences between projects. It also grabs the node_modules from the previous step and installs any of our own shared components (with npm install <local/path/to/component>).
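Installing a shared component from elsewhere in the source tree is then just a local-path install (the path here is made up):

npm install ../../shared/ui-components/data-grid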

Build UI Stuff

Now we’ve got something that looks just like most stand-alone JavaScript projects – config files at the root, source ready to combine/minify etc. At this point we can just run things out of node_modules, so we have a target to build with ./node_modules/.bin/webpack, run tests with ./node_modules/.bin/karma or start the webpack dev server.
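In other words, the targets boil down to commands along these lines (flags illustrative):

./node_modules/.bin/webpack --config webpack.config.js       # production bundle
./node_modules/.bin/karma start karma.conf.js --single-run   # run the tests once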

Buck can then pick up those results and integrate them where they’re needed in the final outputs ready for deployment.

Finding What Buck Actually Built

Buck is a weird but very fast build tool that happens to be rather opaque about where it actually puts the things you build with it. They wind up somewhere under the buck-out folder, but there’s no guarantee where, and everything under there is considered buck’s private little scratch pad.

So how do you get build results out so they can be used? For things with built-in support, like Java libraries, you can use ‘buck publish’ to push them out to a repo, but that doesn’t work for things you’ve built with a custom genrule. In those cases you could use an additional genrule build target to actually publish, but it would only run when one of its dependencies has changed. Sometimes that’s an excellent feature, but it’s not always what you want.

Similarly, you might want to actually run something you’ve built. You can almost always use the ‘buck run’ command to do that, but it will tie up the buck daemon while it’s running, so you can’t run two things at once.
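For example (target name made up):

buck run //exchange/ui:dev-server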

For ultimate flexibility you really want to just find out where the built file is, which thankfully is possible using ‘buck targets --show-full-output’. However, it outputs both the target and its output:

$ buck targets --show-full-output //support/bigfeedback:bigfeedback
//support/bigfeedback:bigfeedback /code/buck-out/gen/support/bigfeedback/bigfeedback.jar

To get just the output file we need to pipe it through:

cut -d ' ' -f 2-

Or as a handy reusable bash function:

function findOutput() {
    # $BUCK is assumed to point at the buck executable (or a wrapper script)
    $BUCK targets --show-full-output ${1} | cut -d ' ' -f 2-
}
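Which makes grabbing an output a one-liner:

JAR=$(findOutput //support/bigfeedback:bigfeedback)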


Replacing Symlinks with Hardlinks

Symlinks have been causing me grief lately. Our build tool, buck, loves creating symlinks but publishes corrupt cache artefacts for any build rule that includes a symlink amongst its output.

We also wind up calling out to npm to manage JavaScript dependencies, and it has an annoying (for us) habit of resolving symlinks when processing files and then failing to find required libraries, because the node_modules folder was back where the symlink was, not with the original file. Mostly this problem is caused by buck creating so many symlinks.

So it’s useful to be able to get rid of symlinks, which can be done with the handy -L or --dereference option to cp. Then instead of copying the symlink you copy the file it points to. That avoids all the problems with buck and npm, but wastes lots of disk space and means that changes to the original file are no longer reflected in the new copy (so watching files doesn’t work).

Assuming our checkout is on a single file system (which seems reasonable) we can get the best of both worlds by using hard links. cp has a handy option for that too: -l or --link. But since buck gave us a symlink to start with, it just gives us a hard link to the symlink that points to the original file.

So combining the two options, cp -Ll, should be exactly what we want. And if you’re using coreutils 8.25 or above, it is: cp will dereference the symlink and create a hard link to the original file. If you’re using coreutils prior to 8.25, cp will just copy the symlink. Hitting a bug in coreutils is pretty much the definition of the world being out to get you.
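So on a new enough coreutils this does the job (paths illustrative):

cp -RLl buck-out/gen/ui/dist deploy/ui

On a coreutils before 8.25, though, that same command just copies the symlinks.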

Fortunately, we can work around the issue with a bit of find magic:

find ${DIR} -type l -exec bash -c 'ln -f "$(readlink -m "$0")" "$0"' {} \;

‘find -type l’ will find all symlinks. For each of those we execute some bash, reading from the inside out: dereference the symlink with readlink -m, then use ln to create a hard link, with the -f option to force it to overwrite the existing symlink.

Good times…