Living in a state of accord.

Moolah Diaries – Principles

It’s always important to have some basic principles in mind before you start writing code for a project. That way you have something to guide your technology choices, development practices and general approach to the code. They’re not immovable – many of them won’t survive the first encounter with actual code – but identifying your principles helps to make a code base much more coherent.

Naturally, for the Moolah rebuild I just jumped in and started writing code without much thought, let alone guiding principles, but a few have emerged: lessons from the first version, personal philosophy and early experiences.

Make the Server Smart

In the first version of Moolah, the server was just a dumb conduit to the database – in fact for quite some time it didn’t even parse the JSON objects, just stuffed them into the database and pulled them back out again. That missed a lot of opportunities for optimisation and keeping the client light and fast.

Ultimately this isn’t so much about client vs server, it’s more about backend vs frontend. In a browser the data storage options are limited (though much better now than 5 years ago) but the server has a proper database so can filter and work with data much faster. Sure I could re-implement fast indexing in the browser but why bother?

Provide Well-Defined, Client-Agnostic APIs

I want to be able to work on the client and server independently, potentially replacing one outright without changing the other. That way if one of my shiny tool choices doesn’t pan out well it’s a lot easier to ditch it, even if that means completely rebuilding the client or server. It also means I can play around with native applications should I want to in the future.

Deploy Early, Deploy Often and Deploy Automatically

This is basically the only sane way to work, but I’m often too lazy to do it properly with side projects. Moolah still has a few missing elements but has come far enough that I’m likely to stick with this and do it properly.

Test at Multiple Levels

Side projects that involve playing with shiny new toys often wind up lacking any form of tests and that comes back to hurt pretty quickly. I’ve made an effort to have a reasonable level of tests in place right from the start. There’s been a big cost to that as I wrestle with the various testing frameworks and understand the best ways to approach testing Moolah, but that cost will be paid back over time and is much lower than trying to wrestle with those tools and a big, test-unfriendly code base.

I’m also fairly inclined to only believe something is actually working when I have quite high level tests for it, including going right through to the real database. Unit tests are a design tool, integration tests are where you flush out most of the bugs and I’ve been seeing that even this early.

Fail Fast

If a tool, library or approach isn’t going to work I want to find out quickly and ditch it quickly. I’m trying lots of new things so have had to be pretty ruthless at ripping them back out, not just if they don’t work, but if they introduce too many gotchas or hoops to jump through.

Have Fun

There’s a degree to which I need the Moolah rebuild to reach feature parity with the old system fairly quickly so I can switch over, but this is a side project so it’s mostly about having fun. I’ve let myself get distracted with building a fancy pre-login landing page even though it’s way more than I’ll likely ever need.

I’m not sure this is a particularly meaningful list in general but for this particular project at this particular time they seem to be serving me well.

Moolah Diaries – Background

When I moved back to Australia about 5 years ago, I suddenly had a much more complex task of tracking our money. I was consulting, so had to set aside money to pay tax at the end of the year, we were buying our first house and having just moved countries needed to buy pretty much everything so budgeting was critical. I found a reasonable looking money tracking application and started recording all our transactions there and using it to manage our money.

That worked for a year or so but I wound up frustrated that it couldn’t produce exactly the view of data I wanted and couldn’t forecast into the future quite in the way I wanted. So I wound up writing my own because it seemed like fun and would let me play around with shiny new technologies.

In those crazy days backbone was all the rage and bootstrap was the latest must-have look, so I went with those. I also built it with the intention that it could run entirely as a standalone, offline webapp, using a backbone plugin to store data in local storage. It could sync data across devices via a web server, but the intention was to support going offline without any notice or loss of functionality.

I very imaginatively called my new thing “finance” because why waste time thinking of a name when there’s code to write. However, when I set up a domain to sync through, StartSSL (who used to provide free, trusted SSL certs but are no longer trusted) refused to issue a cert for a domain containing the word finance. Apparently you have to pay if you’re doing anything to do with financial transactions. I tried money and a few other variants before finally discovering that “moolah” got through the filters, and so the project became known as Moolah.

As with most personal projects, it fairly quickly reached the point where it could do everything I needed, including the particular reports, graphs and views I couldn’t get from the previous software. Everything was great, and I was happy.

So I stopped changing the code.

Time moved on, no one talks about backbone anymore, bootstrap’s still doing pretty well but I never liked it much and I kept on adding all my transactions into Moolah. As the number of transactions increased, some of the underlying design decisions were challenged and all the “I’ll improve that later when I need to” things started to become needed. But it had been years since I’d worked on the code and now it was an unfamiliar, legacy code base that some imbecile had written while obviously more interested in playing with shiny new things than writing maintainable code.

Since it was designed to work offline, every transaction was sent down to the browser so it had all the information. And due to one too many annoying bugs in the backbone local storage plugin, it didn’t so much sync as just download everything every time you opened the page. The size of the data was pretty insignificant, but the number of transactions to be filtered and totalled up meant that eventually it just wasn’t worth trying to use on my phone, then Safari started struggling and eventually even Chrome on my very beefy dev box struggled.

I’m sure I could work through the old code, fix up a bunch of its poor assumptions and keep it chugging along, but this is spare-time coding and that doesn’t sound at all fun. Instead, I’m going to rebuild it from scratch using lots of shiny new technologies but without making the same mistakes as last time.

Except the one about using shiny new technologies that may or may not last. I’m totally making that mistake again because it’s just too much fun. But I will try to avoid going too crazy with it.

And since my blog is far too neglected these days, I’m going to try and write about it from time to time. We’ll see how that goes…

Modernising Our JavaScript – Why Angular 2 Didn’t Work

At LMAX we value simplicity very highly and since most of the UI we need for the exchange is fairly straight forward settings and reports we’ve historically kept away from JavaScript frameworks. Instead we’ve stuck with jQuery and bootstrap along with some very simple common utilities and patterns we’ve built ourselves. Mostly this has worked very well for us.

Sometimes though we have more complex UIs where things dynamically change or state is more complex. In those cases things start to break down and get very messy. The simplicity of our libraries winds up causing complexity in our code instead of avoiding it. We needed something better.

Some side projects had used Angular and a few people were familiar with it, so we started out trialling Angular 2.0. While it was much better for those complex cases, the framework itself introduced so much complexity and cost that it was unpleasant to work with. Predominantly we had two main issues:

  1. Build times were too slow
  2. It wasn’t well suited for dropping an Angular 2 component into an existing system rather than having everything live in Angular 2 world

Build Times

This was the most surprising problem – Angular 2 build times were painfully slow. We found we could build all of the Java parts of the exchange before npm could even finish installing the dependencies for an Angular 2 project – even with all the dependencies in a local cache and using npm 5’s --offline option. We use buck for our build system and it does an excellent job of only building what has changed and caching results, so most of the time we could avoid the long npm install step, but it still needed to run often enough that it was a significant drain on the team’s productivity.

We did evaluate yarn and pnpm but neither were workable in our particular situation. They were both faster at installing dependencies but still far too slow.

The lingering question here is whether the npm install was so slow because of the sheer number of dependencies or because something about those dependencies was slow. Anecdotally it seemed like rxjs took forever to install, but other issues led us away from Angular before we fully understood this.

Even when the npm install could be avoided, the actual compile step was still slow enough to be a drain on the team. The projects we were using Angular on were quite new with a fairly small amount of code. Running through the development server was fast, but a production mode build was slow.

Existing System Integration

The initial projects we used Angular 2 on were completely from scratch so could do everything the Angular 2 way. On those projects productivity was excellent and Angular 2 was generally a joy to use. When we tried to build onto our existing systems using Angular 2, things were much less pleasant.

Technically it was possible to build a single component on a page using Angular 2 with other parts of the page using our older approach, but doing so felt fairly unnatural. The Angular 2 way is significantly different to how we had been working, and since Angular 2 provides a full suite of functionality it often felt like we were working against the framework rather than with it. Re-using our existing code within an Angular 2 component felt wrong, so we were being pushed towards duplicating code that worked perfectly well and that we were happy with, just to make it fit “the Angular 2 way”.

If we intended to rewrite all our existing code using Angular 2 that would be fine, but we’re not doing that. We have a heap of functionality that’s already built, working great and unlikely to need changes for quite some time. It would be a huge waste of time for us to go back and rewrite everything just to use the shiny new tool.

Angular 2 is Still Great

None of this means that Angular 2 has irretrievable faults – it’s actually a pretty great tool to develop with. It just happens to shine most if you’re all-in with Angular 2, and that’s never going to be our situation. I strongly suspect that even the build time issues would disappear if we could approach the build differently, but changing large parts of our build system and the development practices that work with it just doesn’t make sense when we have other options.

I can’t see any reason why a project built with Angular 2 would need or want to migrate away. Nor would I rule out Angular 2 for a new project. It’s a pretty great framework, provides a ton of functionality you can just run with and has excellent tooling. Just work out how your build will work, and whether it’s going to be too slow, early on.

For us though, Angular 2 didn’t turn out to be the wonderful new world we hoped it would be.

Unit Testing JavaScript Promises with Synchronous Tests

With Promises/A+ spreading through the world of JavaScript at a rapid pace, there’s one little detail that makes them very hard to unit test: any chained actions (via .then()) are only called once the execution stack contains nothing but platform code. That’s a good thing in the real world, but it makes unit testing much more complex because resolving a promise isn’t enough – you also need to empty the execution stack.

The first, and best, way to address these challenges is to take advantage of your test runner’s asynchronous test support. Mocha for example allows you to return a promise from your test and waits for it to be either resolved to indicate success or rejected to indicate failure. For example:

it('should pass asynchronously', function() {
    return new Promise((resolve, reject) => {
        setTimeout(resolve, 100);
    });
});
This works well when you’re testing code that returns the promise to you, so you can chain any assertions you need and then return the final promise to the test runner. However, there are often cases where promises are used internally to a component, which this approach can’t solve. For example, a UI component that periodically makes requests to the server to update the data it displays.

Sinon.js makes it easy to stub out the HTTP request using its fake server, and the periodic updates using a fake clock, but if promises are used sinon’s clock.tick() isn’t enough to trigger chained actions. They’ll only execute after your test method returns, and since there’s no reason, and often no way, for the UI component to pass a promise for its updates out of the component, we can’t just depend on the test runner. That’s where promise-mock comes in. It replaces the normal Promise implementation with one that allows your unit test to trigger callbacks at any point.

Let’s avoid all the clock and HTTP stubbing by testing this very simple example of code using a Promise internally:

let value = 0;
module.exports = {
    setValueViaImmediatePromise: function (newValue) {
        return new Promise((resolve, reject) => resolve(newValue))
                .then(result => value = result);
    },
    getValue: function () {
        return value;
    }
};
Our test is then:

const asyncThing = require('./asyncThing');
const PromiseMock = require('promise-mock');
const expect = require('expect.js');

describe('with promise-mock', function() {
    beforeEach(function() {
        PromiseMock.install();
    });

    afterEach(function() {
        PromiseMock.uninstall();
    });

    it('should set value asynchronously and keep internals to itself', function() {
        asyncThing.setValueViaImmediatePromise(3);
        Promise.runAll();
        expect(asyncThing.getValue()).to.be(3);
    });
});
We have a beforeEach and afterEach to install and uninstall the mocked promise, then when we want the promise callbacks to execute in our test, we simply call Promise.runAll().  In most cases, promise-mock combined with sinon’s fake HTTP server and stub clock is enough to let us write easy-to-follow, synchronous tests that cover asynchronous behaviour.

Keeping our tests synchronous isn’t just about making them easy to read though – it also means we’re in control of how asynchronous callbacks interleave. So we can write tests to check what happens if action A finishes before action B and tests for what happens if it’s the other way around. Lots and lots of bugs hide in those areas.

PromiseMock.install() Not Working

All that sounds great, but I spent a long time trying to work out why PromiseMock.install() didn’t ever seem to change the Promise implementation. I could see that window.Promise === PromiseMock was true, but without the window prefix I was still getting the original promise implementation (Promise !== PromiseMock).

It turns out that’s because we were using babel’s transform-runtime plugin, which was very helpfully rewriting references to Promise to use babel’s polyfill version without the polyfill needing to pollute the global namespace. The transform-runtime plugin has an option to disable this:

['transform-runtime', {polyfill: false}]
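In a .babelrc, that option sits in the plugins array – a sketch of the surrounding config, which is assumed rather than taken from our actual setup:

```json
{
  "plugins": [
    ["transform-runtime", { "polyfill": false }]
  ]
}
```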

With that, promise-mock worked as expected.

Using WebPack with Buck

I’ve been gradually tidying up the build process for UI stuff at LMAX. We had been using a mix of RequireJS and browserify – both pretty sub-optimally configured. Obviously when you have too many ways of doing things the answer is to introduce another new way, so I’ve converted everything over to webpack.

(xkcd’s “Standards”: Situation: there are 14 competing standards. We need to develop one universal standard that covers everyone’s use cases. Situation: there are 15 competing standards…)

Webpack is most often used as part of a node or gulp/grunt build process but our overall build process is controlled by Buck so I’ve had to work it into that setup. I was also keen to minimise the amount of code that had to be changed to support the new build process.

The final key requirement, which had almost entirely been missed by our previous UI build attempts, was the ability to easily create reusable UI modules that are shared by some, but not all, projects. Buck shuns the use of repositories in favour of a single source tree with everything in it, so an internal npm repo wasn’t going to fly.

While the exact details are probably pretty specific to our setup, the overall shape of the build is likely to be useful elsewhere. We have separate buck targets (using genrule) for a few different key stages:

Build node_modules

We use npm to install third party dependencies to build the node_modules directory we’ll need. We do this in an offline way by checking in the npm cache, as CI doesn’t have internet access, but it’s pretty unsatisfactory. Checking in the node_modules directory was tried previously, but both svn and git have massive problems with the huge number of files it winds up containing.
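The rough shape of such a target might look like this – the name, srcs and cache layout are illustrative assumptions, not our actual build files:

```python
# BUCK file sketch - install third party deps offline from a checked-in cache
genrule(
    name = 'node_modules',
    srcs = ['package.json', 'npm-shrinkwrap.json', 'npm-cache'],
    cmd = 'cd $TMP && cp -r $SRCDIR/* . && ' +
          'npm install --offline --cache ./npm-cache && ' +
          'cp -r node_modules $OUT',
    out = 'node_modules',
)
```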

yarn has much better offline support and other benefits as well, but its offline support requires a cache, and the cache has every package already expanded, so it winds up with hundreds of thousands of files to check in and deal with. Further investigation is required here…

For our projects that use Angular 2, this is actually the slowest part of our entire build (UI and server side). rxjs seems to be the main source of issues as it takes forever to install. Fortunately we don’t change our third party modules often and we can utilise the shared cache of artefacts so developers don’t wind up building this step locally too often.

Setup a Workspace and Generate webpack.config.js

We don’t want to have to repeat the same configuration for webpack, typescript, karma etc for every UI thing we build. So our build process generates them for us, tweaking things as needed to account for the small differences between projects. It also grabs the node_modules from the previous step and installs any of our own shared components (with npm install <local/path/to/component>).
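A generated webpack.config.js might be as simple as the following sketch – the entry point, output paths and loaders here are illustrative, not what our generator actually emits:

```javascript
// Sketch of a generated webpack.config.js; names and paths are assumptions.
module.exports = {
    entry: './src/main.js',
    output: {
        path: __dirname + '/dist',
        filename: 'bundle.js'
    }
};
```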

Build UI Stuff

Now we’ve got something that looks just like most stand-alone JavaScript projects – config files at the root, source ready to combine/minify etc. At this point we can just run things out of node_modules. So we have a target to build with ./node_modules/.bin/webpack, run tests with ./node_modules/.bin/karma or start the webpack dev server.

Buck can then pick up those results and integrate them where they’re needed in the final outputs ready for deployment.