Symphonious

Living in a state of accord.

Moolah Diaries – Automating Deployment from Travis CI

Thanks to Oncle Tom’s “SSH deploys with Travis CI”, I’ve now fully automated Moolah deployment for both the client and server.

Previously the client was deployed by a cron job that pulled any changes from GitHub, built and ran the tests and, if that all worked, resynced the built resources to Apache’s docroot. That meant my not-particularly-powerful server was duplicating the build and test steps already happening in Travis CI, and there was up to a 10 minute delay before changes went live.

The server didn’t have automated deployments because it has database tests that need an actual database to run against. I’m crazy, but not crazy enough to run database tests anywhere near real production data.

Now, with Travis CI triggering the deployment after the tests have passed, everything happens automatically from a git push and all the tests run on a nicely isolated server.

I’ve deliberately split the deployment in two – committed to the Moolah codebase is just enough to push the artefacts over to the server and trigger its deployment script. The deployment script on the server is managed as part of its Puppet configuration and controls where things are finally deployed to, manages database migrations and so on. That gives a nice clear delineation between the development of the service and the details of how it’s deployed in this particular environment, and a simple, clear interface between the two. Now I can change how things work in either the code or the server setup without thinking too much about the other half.

Fun with CommonJS

I’ve recently started a little side project to create a Java implementation of a CommonJS compiler: commonjs-java, which is available on GitHub under the Apache 2 license. Its output is targeted at browsers, bundling all the dependencies together into one JavaScript file.

There are plenty of utilities that can do that for you on the command line or as part of a build process (e.g. browserify), and in most cases you should stick with them, but I had a couple of requirements that drove me to want a Java implementation:

  1. Support no-build-step-required hot-deploy of changes to scripts
  2. Have full control over how dependencies are located and loaded

Hot deploying changes can be achieved very smoothly with a build system if you set up a way to watch for changes and automatically recompile, but it tends to require some messing about to get set up right.

I also wanted to be able to deploy the resulting web apps as a standalone jar, so I wanted to load all the web resources from the classpath. When running from an IDE during development they’d be read straight off the filesystem, but when deployed they’d come out of the single jar file.
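A minimal sketch of that idea (not the actual commonjs-java code – the class and constructor here are just for illustration) is to check the filesystem first so edits show up immediately during development, then fall back to the classpath so the same code works when everything is packaged into a single jar:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class WebResourceLoader {
  private final Path developmentRoot;

  public WebResourceLoader(Path developmentRoot) {
    this.developmentRoot = developmentRoot;
  }

  public InputStream open(String resourcePath) throws IOException {
    Path onDisk = developmentRoot.resolve(resourcePath);
    if (Files.exists(onDisk)) {
      return Files.newInputStream(onDisk); // development: read straight off the filesystem
    }
    InputStream fromClasspath = getClass().getClassLoader().getResourceAsStream(resourcePath);
    if (fromClasspath == null) {
      throw new IOException("Resource not found: " + resourcePath);
    }
    return fromClasspath; // deployed: read from inside the single jar
  }
}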

Finally, it seemed like something interesting to build…

In the end I’m really quite happy with how it all worked out. We use Spark as our framework, so it only takes a little glue code to route requests through Spark and have commonjs-java compile the JavaScript on the fly. The script compilation step doesn’t add any noticeable delay during development, and I have a toggle that enables the compiled script to be cached to avoid constant recompiling once deployed.
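The glue code ends up being roughly this shape – a sketch only, where the CommonJsCompiler class and the scripts.cache property are stand-ins for whatever entry point and toggle the real code uses:

import static spark.Spark.get;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ScriptRoutes {
  // Toggle caching once deployed; during development every request recompiles.
  private static final boolean CACHE_SCRIPTS = Boolean.getBoolean("scripts.cache");
  private static final ConcurrentMap<String, String> CACHE = new ConcurrentHashMap<>();

  public static void main(String[] args) {
    CommonJsCompiler compiler = new CommonJsCompiler(); // hypothetical entry point

    get("/js/:module", (request, response) -> {
      response.type("application/javascript");
      String module = request.params(":module");
      return CACHE_SCRIPTS
          ? CACHE.computeIfAbsent(module, compiler::compile)
          : compiler.compile(module);
    });
  }
}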

The CommonJS compilation turned out to be quite straightforward. The biggest part is identifying all the require calls, but Rhino can provide an AST which makes that easy. There are some interesting details about how to inject the require, module and exports variables at runtime, as well as providing fully isolated contexts for the modules, but certainly nothing ground-breaking.
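As a rough sketch of the require-detection step (the real commonjs-java code will differ in the details), walking Rhino’s AST and collecting the string argument of each require("...") call looks something like:

import java.util.ArrayList;
import java.util.List;
import org.mozilla.javascript.Parser;
import org.mozilla.javascript.ast.AstRoot;
import org.mozilla.javascript.ast.FunctionCall;
import org.mozilla.javascript.ast.Name;
import org.mozilla.javascript.ast.StringLiteral;

public class RequireFinder {
  public static List<String> findRequires(String source, String sourceName) {
    AstRoot root = new Parser().parse(source, sourceName, 1);
    List<String> dependencies = new ArrayList<>();
    root.visit(node -> {
      if (node instanceof FunctionCall) {
        FunctionCall call = (FunctionCall) node;
        if (call.getTarget() instanceof Name
            && "require".equals(((Name) call.getTarget()).getIdentifier())
            && call.getArguments().size() == 1
            && call.getArguments().get(0) instanceof StringLiteral) {
          dependencies.add(((StringLiteral) call.getArguments().get(0)).getValue());
        }
      }
      return true; // keep walking into child nodes
    });
    return dependencies;
  }
}

At runtime each module body is then wrapped in a function taking require, module and exports as parameters – the standard CommonJS trick – which is what keeps modules isolated from each other and from the global scope.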

Adding source map support was more interesting. The source map spec certainly doesn’t give a lot of detail or guidance, so there was quite a lot of experimentation and back and forth.

The final piece to add is a minification step. My first attempt at that used Uglify2 running in Java 8’s JavaScript engine, but it was far too slow – I suspect mostly because of Java’s JavaScript engine. The next attempt will be to try YUI, which is written in Java.

Use More Magic Literals

In programming courses one of the first things you’re taught is to avoid “magic literals” – numbers or strings that are hardcoded in the middle of an algorithm. The recommended solution is to extract them into a constant. Sometimes this is great advice, for example:

if (amount > 1000) {
  checkAdditionalAuthorization();
}

would be much more readable if we extracted an ADDITIONAL_AUTHORIZATION_THRESHOLD constant – primarily so the magic 1000 gets a name.
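That refactoring might look like:

private static final int ADDITIONAL_AUTHORIZATION_THRESHOLD = 1000;

if (amount > ADDITIONAL_AUTHORIZATION_THRESHOLD) {
  checkAdditionalAuthorization();
}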

That’s not a hard-and-fast rule though. For example:

value.replace(PATTERN_TO_REPLACE, token)

is dramatically less readable and maintainable than:

value.replace("%VALUE%", token)

Extracting a constant in this case just reduces the locality of the code, forcing the reader to jump around unnecessarily to understand it.

My rule of thumb is that you should extract a constant only when:

  • it’s reasonably easy to think of a good name for the constant – one that adds meaning to the code OR
  • the value is required in multiple places and is likely to change

Arbitrary tokens like %VALUE% above are generally unlikely to change – it’s an implementation choice – so I’d lean towards preserving the locality and not extracting a constant even when they’re used in multiple places. The 1000 threshold for additional authorisation, on the other hand, is clearly a business rule and therefore likely to change, so I’d go to great lengths to avoid duplicating it (and would consider making it a configuration option).
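Making the threshold configurable could be as simple as reading it from a system property – the property name here is made up purely for illustration:

// Falls back to 1000 if the property isn't set.
private static final int ADDITIONAL_AUTHORIZATION_THRESHOLD =
    Integer.getInteger("payments.additionalAuthorizationThreshold", 1000);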

Obviously these are just rules of thumb so there will be plenty of cases where, because of the specific context, they should be broken.

Playing with Ruby on Rails

I’ve been playing around with Ruby on Rails recently, partly to explore Rails itself and partly to take a run at a web app I’ve been considering (which I’ve open sourced because why not?).

It turns out the last time I played with it was back in 2005, and slightly amusingly my thoughts on it haven’t changed all that much. The lack of configuration is still good, but the amount of magic involved makes it hard to understand what’s going on. The ease of finding documentation has improved dramatically – 10 years of blog posts really help. I’m still using TextMate and it’s still annoying that I can’t click a method name to jump to its definition – I hear good things about RubyMine, but I’m not keen to invest that kind of money in what may be a very short-lived experiment.

The two big changes are that I’ve got 10 years more development experience and the quality of gems seems to have improved significantly. The extra experience means I pick things up a lot faster, understand them more deeply and am a lot less convinced that the way I’m used to is the only right way. The improved quality of gems makes it far less likely that I’ll waste a heap of time struggling with a poorly written gem and instead can drop it in and see benefits straight away.

While all the hipsters may consider Rails old hat now and have moved on to Node or Go, from what I’ve seen it has matured extremely well and is at a fairly ideal point in the hype cycle – still getting a lot of attention, embedded enough that loads of people have experience with it and code depends on it so it will never go away, but old and boring enough to be well documented with good libraries and not changing dramatically every week.

End to End Tests @ LMAX Update

A little while back I said that LMAX ran around 11,000 end-to-end tests in around 50 minutes. Since then we’ve deployed some new hardware to run our continuous integration on, plus continued building new stuff, and we’re now running about 11,500 tests in under 20 minutes.

A large part of the speed boost comes from extra VM instances, but the increased RAM allocation available to each VM has also allowed us to raise a number of limits in the system, so we can now run more tests concurrently against each VM instance.

We’re currently running 61 instances of the exchange using virtual machines hosted by four Dell FX2s chassis, three-quarters populated with FC630s. That gives us 480 cores and 4.5TiB of RAM. That’s certainly no small investment, but we consider it excellent value for money because of the boost in productivity and confidence it gives our development team (not to mention the boost in confidence and reliability it gives our clients).