Symphonious

Living in a state of accord.

Moolah Diaries – Vuex for Front End State

Most of the work I’ve done on Moolah so far has been on the server side – primarily fairly boring setup work and figuring out how to sensibly use a number of libraries that were new to me. The most interesting work, however, has been on the front end. I’ve been using Vue.js and Vuetify for the UI after Vue’s success in the day job. The Moolah UI has much more data interdependence between components than we’ve needed at work, though, so I’ve introduced Vuex to manage the state in a centralised and more disciplined way.

I really quite like the flow – Vue components still own any state related directly to the UI, like whether a dialog is shown or hidden, but all the business model is stored in and managed by the Vuex store. The Vue components dispatch actions, which perform computation, make requests to the backend or do whatever else is required, then commit any changes to the store (via mutations). The usual Vue data binding then kicks in to update the UI to reflect those changes.

The big advantage of this is that it naturally pulls business logic out of .vue files, preventing them from getting too big. Without Vuex, that basically depends on having the discipline to notice when a .vue file is doing too much and then untangling and splitting out the business logic. Vuex provides a much clearer and more consistent way to delineate business logic from view code: you can’t modify state directly from the Vue component, so it becomes natural to split logic out into an action.

Vuex’s module support also makes it easy to keep your Vuex store from becoming a big ball of mud that does everything.
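A minimal sketch of what that split might look like – the module names, state shapes and mutations here are hypothetical, and in a real app the modules would be registered via new Vuex.Store({ modules: … }):

```javascript
// Hypothetical sketch: each concern gets its own namespaced module so the
// root store stays small. These are plain objects, exactly as Vuex expects.
const transactionsModule = {
    namespaced: true,
    state: () => ({ transactions: [] }),
    mutations: {
        addTransaction(state, transaction) {
            state.transactions.push(transaction);
        }
    }
};

const accountsModule = {
    namespaced: true,
    state: () => ({ accounts: [] }),
    mutations: {
        addAccount(state, account) {
            state.accounts.push(account);
        }
    }
};

// Exercise a mutation directly to show each module owns only its own slice:
const state = transactionsModule.state();
transactionsModule.mutations.addTransaction(state, { amount: 42 });
```

Because mutations are plain functions over a module-local state object, they stay easy to test without spinning up a full store.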

However, I’m still searching for a good, efficient way to calculate and update the current balance for each transaction. The actual calculation is simple enough – the balance for any transaction is the sum of the amount of every transaction before it in the account. Simplistically we could just start from the first transaction and iterate through calculating all the balances in a single O(n) pass. However, recalculating the balance for every transaction on each change is incredibly wasteful and is a big part of why the original version of Moolah takes so long to get started – it’s calculating all those balances. Each transaction balance actually only depends on two things, the transaction amount and the balance of the previous transaction. Since most new or changed transactions are at or near the very end of the transaction list, we should be able to avoid recalculating most of the balances.

I don’t think Vue/Vuex’s lazy evaluation will be able to avoid a lot of extra recalculation, not least because the only way to represent this would be a transactionsWithBalances computed property, and since it would output the entire list of transactions it would recalculate every balance on every change.

However, it’s reasonably straightforward to build the lazy evaluation manually – but where does that sit in the Vuex system? I’m guessing the pre-calculated balance is just a property of every transaction in the state, and actions take responsibility for updating any balances they might have affected.

I’m leaning towards having a dedicated ‘updateBalances’ action that can be triggered at the end of any action that changes transactions and is given the first transaction that requires its balance recalculated. Since every transaction after that depends on the balance of the one before, they’ll also need updating.
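As a sketch of that idea (illustrative only – assuming each transaction in the store carries an amount and a cached balance, and the action identifies the first stale transaction by index):

```javascript
// Recompute balances only from the first stale transaction to the end of the
// list, seeding from the previous transaction's already-correct balance.
function updateBalances(transactions, firstStaleIndex) {
    let balance = firstStaleIndex > 0
        ? transactions[firstStaleIndex - 1].balance
        : 0;
    for (let i = firstStaleIndex; i < transactions.length; i++) {
        balance += transactions[i].amount;
        transactions[i].balance = balance;
    }
}

const transactions = [
    { amount: 100, balance: 100 },
    { amount: -30, balance: 70 },
    { amount: 50, balance: 0 } // newly added; balance not yet calculated
];
updateBalances(transactions, 2); // only the tail is touched
```

Since most new or changed transactions are near the end of the list, firstStaleIndex is usually close to transactions.length and very little work is done.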

I think that works and am now reminded about how useful it is to write a diary like this as a way to think through issues like this.

Modernising Our JavaScript – Vue.js To The Rescue

I’ve previously written about how Angular 2 didn’t work out for us; our second attempt was to use Vue.js, which has been far more successful. The biggest difference from Angular is that Vue is a much smaller library. It includes far less functionality in the box but has a wide range of plugins to flesh out functionality as needed. That avoided the two big issues we had with Angular 2:

  • Build times were faster because there were far fewer dependencies to install, manage and bundle together. Not using TypeScript may have also helped, but I never had a clear indication of how much that affected build time.
  • It integrated much more easily with our existing system because it didn’t try to do everything. We just kept using the bits of our system that worked well.

We’ve been so happy with how Vue.js fits in for us, we’re now in the process of replacing the things we had built in Angular 2 with Vue.js versions.

We set out looking for a more modern UI framework primarily because we wanted the data-binding functionality such frameworks provide. As expected, that’s been a very big benefit for any parts of the UI that are even slightly more than simple CRUD. We were using Mustache for our templates, and the extra power and flexibility of Vue’s templating has been a big advantage. There is a risk of making the templates too complex and hard to understand, but that’s mitigated by how easy it is to break out separate, narrowly focused components.

In fact, the component model has turned out to be the single biggest advantage of Vue over jQuery. We did have some patterns built up around jQuery that enabled component-like behaviour, but they were very primitive compared to what Vue provides. We’ve already got a growing library of reusable components that all fit in nicely with the existing look and feel of the UI.

The benefit of components is so great that I’d use Vue even for very straightforward UIs that jQuery by itself could handle simply. Vue adds almost no overhead in terms of complexity, and it makes the delineation of responsibilities between components very clear, which often leads to unexpected re-usability. With mixins it’s also possible to reuse cross-cutting concerns easily.

All those components wind up being built in .vue files which combine HTML, JavaScript and styles for the component into one file. I was quite sceptical of this at first but Vue provides a good justification for the decision and in practice it works really well as long as you are a bit disciplined at splitting things out into separate files if they become at all complex. Typically I try to have the code in the .vue file entirely focused on managing the component state and split out the details of interacting with anything external (e.g. calling server APIs and parsing responses) into helper files.
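As an illustration of that split, here’s a hypothetical helper a .vue file might import – the function name and response shape are made up for the example:

```javascript
// transactionApi.js (hypothetical) – parsing the server response lives here,
// so the .vue file only deals with ready-to-bind objects.
function parseTransactions(responseBody) {
    return JSON.parse(responseBody).map(raw => ({
        id: raw.id,
        amount: Number(raw.amount), // assume the server sends amounts as strings
        description: raw.description || ''
    }));
}

// The component would call this with the body of a fetch/XHR response:
const parsed = parseTransactions(
    '[{"id": 1, "amount": "12.50", "description": "Coffee"}]'
);
```

The component’s code then stays focused on state and presentation, and the helper can be unit tested without mounting any component at all.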

Ultimately, it’s the component system that is really bringing us the most value which is a bit of a surprise given we had expected data-binding to be the real powerhouse. And data-binding is great, but it’s got nothing on the advantages of a clear component system that’s at just the right level of opinionated-ness for our system. We’re not only building UIs faster, but the UIs we build are better because any time we spend polishing a component applies everywhere it’s used.

I’m really struggling to think of a case where I wouldn’t use Vue now, and if I found one it would likely only be because one of the other similar options (e.g. Angular 2) was a better fit for that case.

Modernising Our JavaScript – Why Angular 2 Didn’t Work

At LMAX we value simplicity very highly and since most of the UI we need for the exchange is fairly straight forward settings and reports we’ve historically kept away from JavaScript frameworks. Instead we’ve stuck with jQuery and bootstrap along with some very simple common utilities and patterns we’ve built ourselves. Mostly this has worked very well for us.

Sometimes, though, we have more complex UIs where things change dynamically or state is more involved. In those cases things start to break down and get very messy. The simplicity of our libraries winds up causing complexity in our code instead of avoiding it. We needed something better.

Some side projects had used Angular and a few people were familiar with it, so we started out trialling Angular 2.0. While it was much better for those complex cases, the framework itself introduced so much complexity and cost that it was unpleasant to work with. We had two main issues:

  1. Build times were too slow
  2. It wasn’t well suited for dropping an Angular 2 component into an existing system rather than having everything live in Angular 2 world

Build Times

This was the most surprising problem – Angular 2 build times were painfully slow. We found we could build all of the Java parts of the exchange before npm could even finish installing the dependencies for an Angular 2 project – even with all the dependencies in a local cache and using npm 5’s --offline option. We use Buck for our build system, and it does an excellent job of rebuilding only what has changed and caching results, so most of the time we could avoid the long npm install step, but it still needed to run often enough that it was a significant drain on the team’s productivity.

We did evaluate yarn and pnpm but neither were workable in our particular situation. They were both faster at installing dependencies but still far too slow.

The lingering question here is whether the npm install was so slow because of the sheer number of dependencies or because something about those particular dependencies was slow. Anecdotally it seemed like rxjs took forever to install, but other issues led us away from Angular before we fully understood this.

Even when the npm install could be avoided, the actual compile step was still slow enough to be a drain on the team. The projects we were using Angular on were quite new, with a fairly small amount of code. Running through the development server was fast, but a production-mode build was slow.

Existing System Integration

The initial projects we used Angular 2 on were completely from scratch, so they could do everything the Angular 2 way. On those projects productivity was excellent and Angular 2 was generally a joy to use. When we tried to build onto our existing systems using Angular 2, things were much less pleasant.

Technically it was possible to build a single component on a page using Angular 2 while other parts of the page used our older approach, but doing so felt fairly unnatural. The Angular 2 way is significantly different from how we had been working, and since Angular 2 provides a full suite of functionality it often felt like we were working against the framework rather than with it. Re-using our existing code within an Angular 2 component felt wrong, so we were being pushed towards duplicating code that worked perfectly well and that we were happy with, just to make it fit “the Angular 2 way”.

If we intended to rewrite all our existing code using Angular 2 that would be fine, but we’re not doing that. We have a heap of functionality that’s already built, works great and is unlikely to need changes for quite some time. It would be a huge waste of time to go back and rewrite everything just to use the shiny new tool.

Angular 2 is Still Great

None of this means that Angular 2 has irretrievable faults – it’s actually a pretty great tool to develop with. It just happens to shine most if you’re all-in with Angular 2, and that’s never going to be our situation. I strongly suspect that even the build-time issues would disappear if we could approach the build differently, but changing large parts of our build system and the development practices that work with it just doesn’t make sense when we have other options.

I can’t see any reason why a project built with Angular 2 would need or want to migrate away, nor would I rule out Angular 2 for a new project. It’s a pretty great library, provides a ton of functionality that you can just run with and has excellent tooling. Just work out how your build will work, and whether it will be too slow, early on.

For us though, Angular 2 didn’t turn out to be the wonderful new world we hoped it would be.

Unit Testing JavaScript Promises with Synchronous Tests

With Promises/A+ spreading through the world of JavaScript at a rapid pace, there’s one little detail that makes them very hard to unit test: any chained actions (via .then()) are only called once the execution stack contains only platform code. That’s a good thing in the real world, but it makes unit testing much more complex because resolving a promise isn’t enough – you also need to empty the execution stack.
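A tiny demonstration of that detail – even an already-resolved promise defers its .then() callback until the current stack has emptied:

```javascript
const order = [];
Promise.resolve().then(() => order.push('then'));
order.push('sync');
// Taken synchronously, this snapshot never contains 'then' – the chained
// callback only runs after the current execution stack unwinds.
const snapshot = order.slice();
```

This is exactly why a test can’t simply resolve a promise and assert immediately: the assertion runs before any of the chained callbacks do.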

The first, and best, way to address these challenges is to take advantage of your test runner’s asynchronous test support. Mocha for example allows you to return a promise from your test and waits for it to be either resolved to indicate success or rejected to indicate failure. For example:

it('should pass asynchronously', function() {
    return new Promise((resolve, reject) => {
        setTimeout(resolve, 100);
    });
});

This works well when you’re testing code that returns the promise to you, so you can chain any assertions you need and return the final promise to the test runner. However, there are often cases this approach can’t handle, where promises are used internally to a component – for example, a UI component that periodically makes requests to the server to update the data it displays.

Sinon.js makes it easy to stub out the HTTP request using its fake server, and the periodic updates using a fake clock, but if promises are used, Sinon’s clock.tick() isn’t enough to trigger chained actions. They’ll only execute after your test method returns, and since there’s no reason, and often no way, for the UI component to pass a promise for its updates out of the component, we can’t just depend on the test runner. That’s where promise-mock comes in. It replaces the normal Promise implementation with one that allows your unit test to trigger callbacks at any point.

Let’s avoid all the clock and HTTP stubbing by testing this very simple example of code using a Promise internally:

let value = 0;
module.exports = {
    setValueViaImmediatePromise: function (newValue) {
        return new Promise((resolve, reject) => resolve(newValue))
                .then(result => value = result);
    },
    getValue: function () {
        return value;
    }
};

Our test is then:

const asyncThing = require('./asyncThing');
const PromiseMock = require('promise-mock');
const expect = require('expect.js');
describe('with promise-mock', function() {
    beforeEach(function() {
        PromiseMock.install();
    });
    afterEach(function() {
        PromiseMock.uninstall();
    });
    it('should set value asynchronously and keep internals to itself', function() {
        asyncThing.setValueViaImmediatePromise(3);
        Promise.runAll();
        expect(asyncThing.getValue()).to.be(3);
    });
});

We have a beforeEach and afterEach to install and uninstall the mocked promise; then, when we want the promise callbacks to execute in our test, we simply call Promise.runAll(). In most cases, promise-mock combined with Sinon’s fake HTTP server and stub clock is enough to let us write easy-to-follow, synchronous tests that cover asynchronous behaviour.

Keeping our tests synchronous isn’t just about making them easy to read though – it also means we’re in control of how asynchronous callbacks interleave. So we can write tests to check what happens if action A finishes before action B and tests for what happens if it’s the other way around. Lots and lots of bugs hide in those areas.

PromiseMock.install() Not Working

All that sounds great, but I spent a long time trying to work out why PromiseMock.install() didn’t ever seem to change the Promise implementation. I could see that window.Promise === PromiseMock was true, but without the window prefix I was still getting the original promise implementation (Promise !== PromiseMock).

It turns out, that’s because we were using babel’s transform-runtime plugin which was very helpfully rewriting references to Promise to use babel’s polyfill version without the polyfill needing to pollute the global namespace. The transform-runtime plugin has an option to disable this:

['transform-runtime', {polyfill: false}]

With that, promise-mock worked as expected.

Safely Encoding Any String Into JavaScript Code Using JavaScript

When generating a JavaScript file dynamically it’s not uncommon to have to embed an arbitrary string into the resulting code so it can be operated on. For example:

function createCode(inputValue) {
    return "function getValue() { return '" + inputValue + "'; }";
}

This simplistic version works great for simple strings:

createCode("Hello world!");
// Gives: function getValue() { return 'Hello world!'; }

But breaks as soon as inputValue contains a special character, e.g.

createCode("Hello 'quotes'!");
// Gives: function getValue() { return 'Hello 'quotes'!'; }

You can escape single quotes but it still breaks if the input contains a \ character. The easiest way to fully escape the string is to use JSON.stringify:

function createCode(inputValue) {
    return "function getValue() { return " +
        JSON.stringify(String(inputValue)) +
        "; }";
}

Note that JSON.stringify even adds the quotes for us. This works because a JSON string is a JavaScript string, so if you pass a string to JSON.stringify it will return a perfectly valid JavaScript string complete with quotes that is guaranteed to evaluate back to the original string.

The one catch is that JSON.stringify will happily stringify JavaScript objects and numbers as well, not just strings, so we need to force the value to be a string first – hence the String(inputValue).
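As a quick round-trip check of that approach (using new Function here purely to evaluate the generated code in this sketch):

```javascript
function createCode(inputValue) {
    return "function getValue() { return " +
        JSON.stringify(String(inputValue)) +
        "; }";
}

// Quotes and backslashes in the input survive the trip through the
// generated code and evaluate back to the original string.
const tricky = "Hello 'quotes' and \\backslashes\\!";
const code = createCode(tricky);
const run = new Function(code + " return getValue();");
const result = run();
```

Whatever string goes in, the generated getValue() returns exactly that string back out.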