Symphonious

Living in a state of accord.

Modernising Our JavaScript – Vue.js To The Rescue

I’ve previously written about how Angular 2 didn’t work out for us; our second attempt was to use Vue.js, which has been far more successful. The biggest difference from Angular is that Vue is a much smaller library. It has far less functionality included in the box, but a wide range of plugins to flesh out functionality as needed. That avoided the two big issues we had with Angular 2:

  • Build times were faster because there were way fewer dependencies to install, manage and bundle together. Not using TypeScript may have also helped, but I never had a clear indication of how much that affected build time.
  • It integrated much more easily with our existing system because it didn’t try to do everything. We just kept using the bits of our system that worked well.

We’ve been so happy with how Vue.js fits in for us, we’re now in the process of replacing the things we had built in Angular 2 with Vue.js versions.

We set out looking for a more modern UI framework primarily because we wanted the data binding functionality they provide. As expected, that’s been a very big benefit for any parts of the UI that are even slightly more involved than simple CRUD. We were using mustache for our templates, and the extra power and flexibility of Vue’s templating has been a big advantage. There is a risk of making the templates too complex and hard to understand, but that’s mitigated by how easy it is to break out separate, narrowly focused components.
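
For example, a template along these lines (a hypothetical sketch; the transaction fields and the isOverdue() and formatAmount() methods are illustrative, not our actual code) keeps the logic declarative where mustache would have needed the data pre-computed:

<table>
    <!-- v-for, dynamic classes and method calls in expressions are all things
         a logic-less mustache template can't express directly. -->
    <tr v-for="transaction in transactions"
        :class="{overdue: isOverdue(transaction)}">
        <td>{{ transaction.date }}</td>
        <td>{{ transaction.payee }}</td>
        <td>{{ formatAmount(transaction.amount) }}</td>
    </tr>
</table>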

In fact, the component model has turned out to be the single biggest advantage of Vue over jQuery. We did have some patterns built up around jQuery that enabled component-like behaviour, but they were very primitive compared to what Vue provides. We’ve already got a growing library of reusable components that all fit in nicely with the existing look and feel of the UI.

The benefit of components is so great that I’d use Vue even for very straightforward UIs that jQuery by itself could handle simply. Vue adds almost no overhead in terms of complexity and makes the delineation of responsibilities between components very clear, which often leads to unexpected reusability. With mixins it’s also possible to reuse cross-cutting concerns easily.
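
For example, a cross-cutting concern like periodically refreshing data can be captured once in a mixin and dropped into any component. A minimal sketch (the component, its refresh() contract and the interval are all hypothetical):

// A hypothetical mixin: keeps a component's data fresh by polling.
const autoRefresh = {
    created: function () {
        // Each component supplies its own refresh() method.
        this._refreshTimer = setInterval(() => this.refresh(), 30000);
    },
    destroyed: function () {
        clearInterval(this._refreshTimer);
    }
};

// Mixing it in is a one-liner in any component that needs it.
const TransactionList = {
    mixins: [autoRefresh],
    methods: {
        refresh: function () {
            // reload this component's data from the server
        }
    }
};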

All those components wind up being built in .vue files which combine the HTML, JavaScript and styles for the component into one file. I was quite sceptical of this at first, but Vue provides a good justification for the decision and in practice it works really well, as long as you are a bit disciplined about splitting things out into separate files if they become at all complex. Typically I try to keep the code in the .vue file entirely focused on managing the component state and split out the details of interacting with anything external (e.g. calling server APIs and parsing responses) into helper files.
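
A sketch of the shape that takes (the component and the transactionApi helper module are hypothetical):

<!-- TransactionList.vue: the component manages its own state, while the
     details of talking to the server live in a separate helper module. -->
<template>
    <ul>
        <li v-for="transaction in transactions">{{ transaction.description }}</li>
    </ul>
</template>

<script>
    // transactionApi is a hypothetical helper that wraps the server API calls.
    import { fetchTransactions } from './transactionApi';

    export default {
        data: function () {
            return { transactions: [] };
        },
        created: function () {
            fetchTransactions().then(transactions => this.transactions = transactions);
        }
    };
</script>

<style scoped>
    li { padding: 2px 0; }
</style>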

Ultimately, it’s the component system that is really bringing us the most value which is a bit of a surprise given we had expected data-binding to be the real powerhouse. And data-binding is great, but it’s got nothing on the advantages of a clear component system that’s at just the right level of opinionated-ness for our system. We’re not only building UIs faster, but the UIs we build are better because any time we spend polishing a component applies everywhere it’s used.

I’m really struggling to think of a case where I wouldn’t use Vue now, and if I found one it would likely only be because one of the other similar options (e.g. Angular 2) was a better fit for that case.

Moolah Diaries – Principles

It’s always important to have some basic principles in mind before you start writing code for a project. That way you have something to guide your technology choices, development practices and general approach to the code. They’re not immovable – many of them won’t survive the first encounter with actual code – but identifying your principles helps to make a code base much more coherent.

Naturally, for the Moolah rebuild I just jumped in and started writing code without much thought, let alone guiding principles, but a few have emerged anyway, whether as lessons from the first version, personal philosophy or early experiences.

Make the Server Smart

In the first version of Moolah, the server was just a dumb conduit to the database – in fact for quite some time it didn’t even parse the JSON objects, just stuffed them into the database and pulled them back out again. That missed a lot of opportunities for optimisation and keeping the client light and fast.

Ultimately this isn’t so much about client vs server; it’s more about backend vs frontend. In a browser the data storage options are limited (though much better now than 5 years ago), but the server has a proper database so it can filter and work with data much faster. Sure, I could re-implement fast indexing in the browser, but why bother?
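
As a sketch of the idea (a hypothetical Express route and schema, with app and db assumed to exist; this is not Moolah’s actual code):

// The database does the filtering and ordering, so the client never has to
// download or scan the full transaction list.
app.get('/api/transactions', (req, res) => {
    db.query(
        'SELECT * FROM transactions WHERE date BETWEEN ? AND ? ORDER BY date',
        [req.query.from, req.query.to],
        (err, rows) => err ? res.status(500).end() : res.json(rows)
    );
});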

Provide Well-Defined, Client-Agnostic APIs

I want to be able to work on the client and server independently, potentially replacing one outright without changing the other. That way if one of my shiny tool choices doesn’t pan out well it’s a lot easier to ditch it, even if that means completely rebuilding the client or server. It also means I can play around with native applications should I want to in the future.
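
In practice that mostly means the client’s only knowledge of the server is a plain HTTP-and-JSON contract. A minimal sketch (the endpoint shape and the render() function are hypothetical):

// The client depends only on the HTTP contract, not on any server code,
// so either side can be rebuilt without touching the other.
fetch('/api/transactions?from=2017-01-01&to=2017-01-31', {
    headers: {'Accept': 'application/json'}
})
    .then(response => response.json())
    .then(transactions => render(transactions));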

Deploy Early, Deploy Often and Deploy Automatically

This is basically the only sane way to work, but I’m often too lazy to do it properly with side projects. Moolah still has a few missing elements but has come far enough that I’m likely to stick with this and do it properly.

Test at Multiple Levels

Side projects that involve playing with shiny new toys often wind up lacking any form of tests and that comes back to hurt pretty quickly. I’ve made an effort to have a reasonable level of tests in place right from the start. There’s been a big cost to that as I wrestle with the various testing frameworks and understand the best ways to approach testing Moolah, but that cost will be paid back over time and is much lower than trying to wrestle with those tools and a big, test-unfriendly code base.

I’m also fairly inclined to only believe something is actually working when I have quite high-level tests for it, including going right through to the real database. Unit tests are a design tool; integration tests are where you flush out most of the bugs, and I’ve been seeing that even this early.
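
At the API level that looks something like this sketch (supertest is just one way to drive it; the route and the app module path are hypothetical):

// A hypothetical integration test that goes through the full HTTP stack
// and down to the actual database rather than stubbing either out.
const request = require('supertest');
const app = require('../src/app');

describe('GET /api/transactions', function() {
    it('returns the transactions stored in the real database', function() {
        return request(app)
            .get('/api/transactions')
            .expect(200)
            .expect(res => {
                if (!Array.isArray(res.body)) {
                    throw new Error('expected an array of transactions');
                }
            });
    });
});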

Fail Fast

If a tool, library or approach isn’t going to work I want to find out quickly and ditch it quickly. I’m trying lots of new things so have had to be pretty ruthless at ripping them back out, not just if they don’t work, but if they introduce too many gotchas or hoops to jump through.

Have Fun

There’s a degree to which I need the Moolah rebuild to reach feature parity with the old system fairly quickly so I can switch over, but this is a side project, so it’s mostly about having fun. I’ve let myself get distracted building a fancy pre-login page even though it’s way more than I’ll likely ever need.

I’m not sure this is a particularly meaningful list in general but for this particular project at this particular time they seem to be serving me well.

Moolah Diaries – Background

When I moved back to Australia about 5 years ago, tracking our money suddenly became a much more complex task. I was consulting, so had to set aside money to pay tax at the end of the year; we were buying our first house; and, having just moved countries, we needed to buy pretty much everything, so budgeting was critical. I found a reasonable-looking money tracking application and started recording all our transactions there, using it to manage our money.

That worked for a year or so, but I wound up frustrated that it couldn’t produce exactly the view of the data I wanted and couldn’t forecast into the future quite the way I wanted. So I ended up writing my own, because it seemed like fun and would let me play around with shiny new technologies.

In those crazy days Backbone was all the rage and Bootstrap was the latest must-have look, so I went with those. I also built it with the intention that it could run entirely as a standalone, offline webapp, using a Backbone plugin to store data in local storage. It could sync data across devices via a web server, but the intention was to support going offline without any notice or loss of functionality.

I very imaginatively called my new thing “finance”, because why waste time thinking of a name when there’s code to write. However, when I set up a domain to sync through, StartSSL (who used to provide free, trusted SSL certs but are no longer trusted) refused to issue a cert for a domain containing the word finance. Apparently you have to pay if you’re doing anything to do with financial transactions. I tried money and a few other variants before finally discovering that “moolah” got through the filters, and so the project became known as Moolah.

As with most personal projects, it fairly quickly reached the point where it could do everything I needed, including the particular reports, graphs and views I couldn’t get from the previous software. Everything was great, and I was happy.

So I stopped changing the code.

Time moved on: no one talks about Backbone anymore, Bootstrap’s still doing pretty well (though I never liked it much), and I kept on adding all my transactions into Moolah. As the number of transactions increased, some of the underlying design decisions were challenged, and all the “I’ll improve that later when I need to” things started to become needed. But it had been years since I’d worked on the code, and now it was an unfamiliar, legacy code base that some imbecile had written while obviously more interested in playing with shiny new things than writing maintainable code.

Since it was designed to work offline, every transaction was sent down to the browser so it had all the information. And due to one too many annoying bugs in the Backbone local storage plugin, it didn’t so much sync as just download everything every time you opened the page. The size of the data was pretty insignificant, but the number of transactions to be filtered, totalled up and so on meant that eventually it just wasn’t worth trying to use on my phone; then Safari started struggling, and eventually even Chrome on my very beefy dev box struggled.

I’m sure I could work through the old code, fix up a bunch of its poor assumptions and keep it chugging along, but this is spare-time coding and that doesn’t sound at all fun. Instead, I’m going to rebuild it from scratch using lots of shiny new technologies, but without making the same mistakes as last time.

Except the one about using shiny new technologies that may or may not last. I’m totally making that mistake again because it’s just too much fun. But I will try to avoid going too crazy with it.

And since my blog is far too neglected these days, I’m going to try and write about it from time to time. We’ll see how that goes…

Modernising Our JavaScript – Why Angular 2 Didn’t Work

At LMAX we value simplicity very highly, and since most of the UI we need for the exchange is fairly straightforward settings and reports, we’ve historically kept away from JavaScript frameworks. Instead we’ve stuck with jQuery and Bootstrap, along with some very simple common utilities and patterns we’ve built ourselves. Mostly this has worked very well for us.

Sometimes, though, we have more complex UIs where things change dynamically or state is harder to manage. In those cases things start to break down and get very messy. The simplicity of our libraries winds up causing complexity in our code instead of avoiding it. We needed something better.

Some side projects had used Angular and a few people were familiar with it, so we started out trialling Angular 2.0. While it was much better for those complex cases, the framework itself introduced so much complexity and cost that it was unpleasant to work with. Predominantly, we had two main issues:

  1. Build times were too slow
  2. It wasn’t well suited to dropping an Angular 2 component into an existing system rather than having everything live in the Angular 2 world

Build Times

This was the most surprising problem – Angular 2 build times were painfully slow. We found we could build all of the Java parts of the exchange before npm could even finish installing the dependencies for an Angular 2 project – even with all the dependencies in a local cache and using npm 5’s --offline option. We use buck for our build system and it does an excellent job of only building what needs to be changed and caching results, so most of the time we could avoid the long npm install step, but it still needed to run often enough that it was a significant drain on the team’s productivity.

We did evaluate yarn and pnpm, but neither was workable in our particular situation. They were both faster at installing dependencies, but still far too slow.

The lingering question here is whether the npm install was so slow because of the sheer number of dependencies or because something about those dependencies was slow. Anecdotally it seemed like rxjs took forever to install, but other issues led us away from Angular before we fully understood this.

Even when the npm install could be avoided, the actual compile step was still slow enough to be a drain on the team. The projects we were using Angular 2 on were quite new, with a fairly small amount of code. Running through the development server was fast, but a production mode build was slow.

Existing System Integration

The initial projects we used Angular 2 on were completely from scratch, so they could do everything the Angular 2 way. On those projects productivity was excellent and Angular 2 was generally a joy to use. When we tried to build onto our existing systems using Angular 2, things were much less pleasant.

Technically it was possible to build a single component on a page using Angular 2 with other parts of the page using our older approach, but doing so felt fairly unnatural. The Angular 2 way is significantly different to how we had been working, and since Angular 2 provides a full suite of functionality, it often felt like we were working against the framework rather than with it. Reusing our existing code within an Angular 2 component felt wrong, so we were being pushed towards duplicating code that worked perfectly well and that we were happy with, just to make it fit “the Angular 2 way”.

If we intended to rewrite all our existing code using Angular 2 that would be fine, but we’re not doing that. We have a heap of functionality that’s already built, works great and is unlikely to need changes for quite some time. It would be a huge waste of time for us to go back and rewrite everything just to use the shiny new tool.

Angular 2 is Still Great

None of this means that Angular 2 has irretrievable faults; it’s actually a pretty great tool to develop with. It just happens to shine most when you’re all-in with Angular 2, and that’s never going to be our situation. I strongly suspect that even the build time issues would disappear if we could approach the build differently, but changing large parts of our build system and the development practices that work with it just doesn’t make sense when we have other options.

I can’t see any reason why a project built with Angular 2 would need or want to migrate away, nor would I rule out Angular 2 for a new project. It’s a pretty great library, provides a ton of functionality that you can just run with and has excellent tooling. Just work out early on how your build will fit in and whether it’s going to be too slow.

For us though, Angular 2 didn’t turn out to be the wonderful new world we hoped it would be.

Unit Testing JavaScript Promises with Synchronous Tests

With Promises/A+ spreading through the world of JavaScript at a rapid pace, there’s one little detail that makes promises very hard to unit test: any chained actions (via .then()) are only called when the execution stack contains only platform code. That’s a good thing in the real world, but it makes unit testing much more complex because resolving a promise isn’t enough – you also need to empty the execution stack.

The first, and best, way to address these challenges is to take advantage of your test runner’s asynchronous test support. Mocha for example allows you to return a promise from your test and waits for it to be either resolved to indicate success or rejected to indicate failure. For example:

it('should pass asynchronously', function() {
    return new Promise((resolve, reject) => {
        setTimeout(resolve, 100);
    });
});

This works well when you’re testing code that returns the promise to you: you can chain any assertions you need and return the final promise to the test runner, as in this sketch (fetchBalance() is a hypothetical promise-returning function under test):
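
it('should fetch the balance', function() {
    // The assertion chains onto the promise and the combined promise is
    // returned for Mocha to wait on.
    return fetchBalance('main-account')
        .then(balance => expect(balance).to.be(42));
});

However, there are often cases where promises are used internally to a component, which this approach can’t solve. For example, a UI component that periodically makes requests to the server to update the data it displays.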

Sinon.js makes it easy to stub out the HTTP request using its fake server and the periodic updates using a fake clock, but if promises are used, sinon’s clock.tick() isn’t enough to trigger chained actions. They’ll only execute after your test method returns, and since there’s no reason, and often no way, for the UI component to pass a promise for its updates out of the component, we can’t just depend on the test runner. That’s where promise-mock comes in. It replaces the normal Promise implementation with one that allows your unit test to trigger callbacks at any point.

Let’s avoid all the clock and HTTP stubbing by testing this very simple example of code using a Promise internally:

let value = 0;
module.exports = {
    setValueViaImmediatePromise: function (newValue) {
        return new Promise((resolve, reject) => resolve(newValue))
                .then(result => value = result);
    },
    getValue: function () {
        return value;
    }
};

Our test is then:

const asyncThing = require('./asyncThing');
const PromiseMock = require('promise-mock');
const expect = require('expect.js');
describe('with promise-mock', function() {
    beforeEach(function() {
        PromiseMock.install();
    });
    afterEach(function() {
        PromiseMock.uninstall();
    });
    it('should set value asynchronously and keep internals to itself', function() {
        asyncThing.setValueViaImmediatePromise(3);
        Promise.runAll();
        expect(asyncThing.getValue()).to.be(3);
    });
});

We have a beforeEach and afterEach to install and uninstall the mocked promise, then when we want the promise callbacks to execute in our test, we simply call Promise.runAll().  In most cases, promise-mock combined with sinon’s fake HTTP server and stub clock is enough to let us write easy-to-follow, synchronous tests that cover asynchronous behaviour.

Keeping our tests synchronous isn’t just about making them easy to read though – it also means we’re in control of how asynchronous callbacks interleave. So we can write tests to check what happens if action A finishes before action B and tests for what happens if it’s the other way around. Lots and lots of bugs hide in those areas.
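
As a sketch of that kind of test (the component, its refresh() method and currentVersion() are hypothetical; sinon’s fake server answers the pending requests in whichever order we choose):

it('should keep the newest data when responses arrive out of order', function() {
    const server = sinon.fakeServer.create();
    component.refresh(); // fires request A
    component.refresh(); // fires request B

    // Answer request B first and flush its promise callbacks...
    server.requests[1].respond(200, {'Content-Type': 'application/json'}, '{"version": 2}');
    Promise.runAll();
    // ...then answer request A, simulating out-of-order completion.
    server.requests[0].respond(200, {'Content-Type': 'application/json'}, '{"version": 1}');
    Promise.runAll();

    expect(component.currentVersion()).to.be(2);
    server.restore();
});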

PromiseMock.install() Not Working

All that sounds great, but I spent a long time trying to work out why PromiseMock.install() didn’t ever seem to change the Promise implementation. I could see that window.Promise === PromiseMock was true, but without the window prefix I was still getting the original promise implementation (Promise !== PromiseMock).

It turns out that’s because we were using babel’s transform-runtime plugin, which was very helpfully rewriting references to Promise to use babel’s polyfill version without the polyfill needing to pollute the global namespace. The transform-runtime plugin has an option to disable this:

['transform-runtime', {polyfill: false}]
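
In context, that option sits in the plugins list of the project’s Babel 6 config. A sketch of the relevant part of a .babelrc (the es2015 preset is an assumption about the rest of the setup):

{
    "presets": ["es2015"],
    "plugins": [
        ["transform-runtime", {"polyfill": false}]
    ]
}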

With that change, promise-mock worked as expected.