Below you will find pages that utilize the taxonomy term “LMAX”
Modernising Our JavaScript – Supporting OpenSource
At LMAX, we’ve been using Vue.js for all our new JavaScript work very successfully for a while now. Our existing products used Vue.js for the logic but continued to use Bootstrap as the component library to ensure the UI remained consistent. For anything new though, we started using Vuetify to give us better UIs much more easily.
Vuetify is close to reaching its 1.0 release now, but we’ve been using it for a while. When we first started with it, the vast majority of development and support was being done by the project founder, John Leider, in his spare time. Migrating UI libraries is painful, so having an early-stage project without much of a community (at the time) was a pretty big risk for us, but the code was excellent and John was obviously very committed to the project, being very responsive to bug reports and questions. So LMAX jumped on board as a platinum supporter of Vuetify, making regular monthly payments to support Vuetify development.
Internationalising Vue Components
As Vue gradually spreads through our JavaScript, it’s reached the point where we need to support internationalisation. We already have a system for localising numbers and dates that we can reuse easily enough with Vue filters. We also have an existing system for translations, but it’s tied into the mustache templates we’ve used to date so we’ll need to adapt or change it somehow to work with Vue.
In mustache we simply wrap any strings to be translated with {{#i18n}}…{{/i18n}} markers and a webpack loader takes care of providing a translation function as the i18n property of the state for the compiled template.
Modernising Our JavaScript – Why Angular 2 Didn’t Work
At LMAX we value simplicity very highly, and since most of the UI we need for the exchange is fairly straightforward settings and reports, we’ve historically kept away from JavaScript frameworks. Instead we’ve stuck with jQuery and Bootstrap along with some very simple common utilities and patterns we’ve built ourselves. Mostly this has worked very well for us.
Sometimes though we have more complex UIs where things change dynamically or state is more involved. In those cases things start to break down and get very messy. The simplicity of our libraries winds up causing complexity in our code instead of avoiding it. We needed something better.
Finding What Buck Actually Built
Buck is a weird but very fast build tool that happens to be rather opaque about where it actually puts the things you build with it. They wind up somewhere under the buck-out folder, but there’s no guarantee where, and everything under there is considered Buck’s private little scratch pad.
So how do you get build results out so they can be used? For things with built-in support like Java libraries you can use ‘buck publish’ to push them out to a repo, but that doesn’t work for things you’ve built with a custom genrule. In those cases you could use an additional genrule build target to actually publish, but it would only run when one of its dependencies has changed. Sometimes that’s an excellent feature, but it’s not always what you want.
Testing@LMAX – Isolate UI Tests with vncserver
One reason that automated UI tests can be unreliable is that they tend to be sensitive to what else is on screen at the time, and even to things like the current screen size. Developers running the tests locally also find it annoying to have windows opening and closing on their machine while the tests run, and they can’t do anything else because their clicking might interfere with the test.
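As a rough sketch of the idea (not LMAX’s actual setup): with a VNC server already running on a spare display, the browser that WebDriver launches can be pointed at that display instead of the local desktop by setting DISPLAY on the driver service. The display number and the use of ChromeDriver here are assumptions for illustration.

```java
import java.util.Map;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeDriverService;

public class IsolatedBrowser {

    // Assumes `vncserver :42` (or Xvfb) has already been started; :42 is arbitrary.
    private static final String VNC_DISPLAY = ":42";

    public static WebDriver openIsolatedBrowser() {
        // The browser process launched by chromedriver inherits this environment,
        // so its windows render on the VNC display rather than the local desktop.
        ChromeDriverService service = new ChromeDriverService.Builder()
                .usingAnyFreePort()
                .withEnvironment(Map.of("DISPLAY", VNC_DISPLAY))
                .build();
        return new ChromeDriver(service);
    }
}
```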
Testing@LMAX – Screenshots with Selenium/WebDriver
When an automated UI test fails, it can be hard to tell exactly what went wrong just from the failure message. The failure message typically just says that some element the test was looking for wasn’t found, but it doesn’t tell you what was there. Was there an error message displayed instead? Was the operation still executing? Did something completely unexpected happen instead?
To answer those questions our DSL automatically captures a screenshot when any UI operation fails and we include a link to it in the failure message. That way, when someone reviews the test result they can see exactly what was on screen, which typically makes it straightforward to identify what went wrong and fix it.
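A minimal sketch of that kind of capture using Selenium’s TakesScreenshot interface (the helper and its directory handling are illustrative, not the DSL’s actual code):

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public final class FailureScreenshots {

    // Capture the current page and return a link suitable for including in the
    // assertion failure message.
    public static String captureForFailure(WebDriver driver, Path screenshotDir, String testName) {
        try {
            File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Path target = screenshotDir.resolve(testName + ".png");
            Files.copy(screenshot.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
            return "Screenshot: " + target.toUri();
        } catch (Exception e) {
            // Never let screenshot capture mask the original test failure.
            return "Screenshot capture failed: " + e.getMessage();
        }
    }
}
```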
Testing@LMAX – Compatibility Tests
Once an application goes live, it is absolutely essential that any future changes are able to work with the existing data in production, typically by migrating it as changes are required. That existing data and the migrations applied to it are often the riskiest and least tested parts of the system. Mistakes in a migration will at best cause a multi-hour outage while backups are restored, and more likely will subtly corrupt data, producing incorrect results that may go unnoticed for long periods, making it impossible to roll back. At LMAX we reduce that risk by running compatibility tests.
Testing@LMAX – Making Test Output Useful
Just like production code, you should assume things are going to go wrong in your tests, and when they do you want good logging to help track down what happened and why. So just like production code, you should use a logging framework within your DSL, use meaningful log levels and think about what information you’d need in the logs if something went wrong (and what you don’t). There are also a few things we’ve found very useful specifically in our test logging.
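As a hypothetical sketch of what that looks like inside a DSL method (the class and operation are made up, and SLF4J is just one reasonable choice of logging framework):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TradingDsl {

    private static final Logger LOG = LoggerFactory.getLogger(TradingDsl.class);

    // Log the intent of each DSL operation at INFO so a failing run reads as a
    // narrative of what the test was doing; reserve DEBUG for fine-grained detail.
    public void placeOrder(String accountAlias, String instrument, long quantity) {
        LOG.info("Placing order for {} on {} (quantity {})", accountAlias, instrument, quantity);
        try {
            // ... resolve the alias and drive the UI or API here ...
        } catch (RuntimeException e) {
            LOG.error("Failed to place order for {} on {}", accountAlias, instrument, e);
            throw e;
        }
    }
}
```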
Testing@LMAX – Introducing ElementSpecification
Today LMAX Exchange has released ElementSpecification, a very small library we built to make working with selectors in Selenium/WebDriver tests easier. It has three main aims:
- Make it easier to understand selectors by using a very English-like syntax
- Avoid common pitfalls when writing selectors that lead to either brittle or intermittent tests
- Strongly discourage writing overly complicated selectors.
Essentially, we use ElementSpecification anywhere that we would have written CSS or XPath selectors by hand. ElementSpecification will automatically select the most appropriate format to use – choosing between a simple ID selector, CSS or XPath.
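For context, these are the kinds of hand-written locators it replaces and the three formats it chooses between, written directly with Selenium’s By API (the selector strings are purely illustrative, not taken from real tests):

```java
import org.openqa.selenium.By;

public class HandWrittenSelectors {

    // A simple ID selector: the cheapest and least brittle option when an ID exists.
    static final By BY_ID = By.id("orders-table");

    // A CSS selector: concise, but easy to make brittle by encoding too much page structure.
    static final By BY_CSS = By.cssSelector("#orders-table tr.filled td.price");

    // An XPath selector: the most powerful format, and the easiest to overcomplicate.
    static final By BY_XPATH =
            By.xpath("//table[@id='orders-table']//tr[contains(@class, 'filled')]/td[contains(@class, 'price')]");
}
```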
Testing@LMAX – Replacements in DSL
Given our DSL makes heavy use of aliases, we often have to provide a way to include the real name or ID as part of some string. For example, an audit record for a new account might be:
Created account 127322 with username someUser123.
But in our acceptance test we’d create the user with:
registrationAPI.createUser("someUser");
someUser is just an alias: the DSL creates a unique username to use, and the system assigns a unique account ID that the DSL keeps track of for us. So to write a test that the new account audit record is written correctly, we need a way to build the expected audit string.
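One way to do that, sketched here with an invented &lt;field:alias&gt; placeholder syntax (this is not LMAX’s actual DSL, just an illustration of the replacement idea): the expected string references the alias, and the DSL substitutes the real values it recorded when the account was created.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch only: the <field:alias> placeholder syntax is invented for this example.
public class AliasReplacer {

    private static final Pattern PLACEHOLDER = Pattern.compile("<(accountId|username):([^>]+)>");

    private final Map<String, String> accountIdsByAlias;
    private final Map<String, String> usernamesByAlias;

    public AliasReplacer(Map<String, String> accountIdsByAlias, Map<String, String> usernamesByAlias) {
        this.accountIdsByAlias = accountIdsByAlias;
        this.usernamesByAlias = usernamesByAlias;
    }

    // Replace each placeholder with the real value the DSL recorded for that alias.
    public String resolve(String template) {
        Matcher matcher = PLACEHOLDER.matcher(template);
        StringBuilder result = new StringBuilder();
        while (matcher.find()) {
            String alias = matcher.group(2);
            String value = "accountId".equals(matcher.group(1))
                    ? accountIdsByAlias.get(alias)
                    : usernamesByAlias.get(alias);
            matcher.appendReplacement(result, Matcher.quoteReplacement(value));
        }
        matcher.appendTail(result);
        return result.toString();
    }
}
```

With something like that in place, the expected record can be written as “Created account &lt;accountId:someUser&gt; with username &lt;username:someUser&gt;.” and resolved just before the assertion.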
Testing@LMAX – Abstraction by DSL
At LMAX, all our acceptance tests are written using a high level DSL. This gives us two key advantages:
- Tests can focus on what they’re testing with the details and mechanics hidden behind the DSL
- When things change we can update the DSL implementation to match, avoiding the need to change each test that happens to touch on the area.
The DSL that LMAX uses is probably not what most people think of when hearing the term DSL – it doesn’t attempt to read like plain English, just simplifies things down significantly. We’ve actually open sourced the simple little library that is the entrance-way to what we think of as the DSL – creatively named simple-dsl. It’s essentially the glue between what we write in an acceptance test and the plain-java implementation behind that.
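As a rough sketch of that glue (the DslParams/RequiredParam/OptionalParam names follow simple-dsl’s documented style, but treat the exact API and package shown here as an assumption rather than a verified example):

```java
import com.lmax.simpledsl.DslParams;
import com.lmax.simpledsl.OptionalParam;
import com.lmax.simpledsl.RequiredParam;

public class RegistrationApiDsl {

    // Called from a test as something like:
    //   registrationAPI.createUser("name: someUser", "password: secret");
    public void createUser(String... args) {
        DslParams params = new DslParams(args,
                new RequiredParam("name"),
                new OptionalParam("password"));

        String alias = params.value("name");
        String password = params.value("password");

        // ... map the alias to a unique real username and call the actual
        // registration API behind the scenes ...
    }
}
```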
Making End-to-End Tests Work
The Google Testing Blog has an article “Just Say No to More End-to-End Tests” which winds up being a rather depressing evaluation of the testing capabilities, culture and infrastructure at Google. For example:
Let’s assume the team already has some fantastic test infrastructure in place. Every night:
- The latest version of the service is built.
- This version is then deployed to the team's testing environment.
- All end-to-end tests then run against this testing environment.
- An email report summarizing the test results is sent to the team.
If your idea of fantastic test infrastructure starts with the words “every night” and ends with an email being sent, you’re doomed. Bryan Pendleton does a good job of analysing and correcting the details, so I won’t go over that ground again. Instead, let me provide a view of what reasonable test infrastructure looks like.
Testing@LMAX – Testing in Live
Previously in the Testing@LMAX series I’ve mentioned the way we’ve provided isolation between tests, allowing us to run them in parallel. That isolation extends all the way up to supporting a multi-tenancy module called venues which allows us to essentially run multiple, functionally separate exchanges on a single deployment of the LMAX Exchange.
We use the isolation of venues to reduce the amount of hardware we need to run our three separate liquidity pools (LMAX Professional, LMAX Institutional and LMAX Interbank), but that’s not all. We actually use the isolation venues provide to extend our testing all the way into production.
Testing@LMAX – Test Isolation
One of the most common reasons people avoid writing end-to-end acceptance tests is how difficult it is to make them run fast. Chief amongst those difficulties is the time required to start up the entire service and shut it down again. At LMAX, with the full exchange consisting of a large number of different services, multiple databases and other components, start-up is far too slow to be done for each test, so our acceptance tests are designed to run against the same server instance without interfering with each other.
Testing@LMAX – Distributed Builds with Romero
LMAX has invested quite a lot of time into building a suite of automated tests to verify the behaviour of our exchange. While the majority of those tests are unit or integration tests that run extremely fast, in order for us to have confidence that the whole package fits together in the right way, we have a lot of end-to-end acceptance tests as well.
These tests deliver a huge amount of confidence and are thus highly valuable to us, but they come at a significant cost because end-to-end tests are relatively time consuming to run. To minimise the feedback cycle we want to run these tests in parallel as much as possible.
Background Logging with the Disruptor
Peter Lawrey posted an example of using the Exchanger class from core Java to implement background logging. He briefly compared it to the LMAX disruptor and, since someone requested it, I thought it might be interesting to show a similar implementation using the disruptor.
Firstly, let’s revisit the very high level differences between the exchanger and the disruptor. Peter notes:
This approach has similar principles to the Disruptor. No GC using recycled, pre-allocated buffers and lock free operations (The Exchanger not completely lock free and doesn’t busy wait, but it could)
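For comparison, here is a minimal background logger built on the Disruptor. This is a sketch against the current Disruptor DSL API rather than the exact code discussed in the post, and the LogEvent/BackgroundLogger names are just for illustration:

```java
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class BackgroundLogger {

    // Pre-allocated, recycled event: no new wrapper objects are created per log message.
    static class LogEvent {
        String message;
    }

    private final Disruptor<LogEvent> disruptor;

    public BackgroundLogger() {
        // Ring buffer size must be a power of two.
        disruptor = new Disruptor<>(LogEvent::new, 1024, DaemonThreadFactory.INSTANCE);
        // A single background handler drains the ring buffer and writes the messages out.
        disruptor.handleEventsWith((event, sequence, endOfBatch) -> System.out.println(event.message));
        disruptor.start();
    }

    public void log(String message) {
        // Claim a slot, fill the recycled event and hand it to the background thread.
        disruptor.getRingBuffer().publishEvent((event, sequence, msg) -> event.message = msg, message);
    }

    public void stop() {
        disruptor.shutdown();
    }
}
```

The ring buffer pre-allocates and recycles the LogEvent instances, which is the “no GC using recycled, pre-allocated buffers” property mentioned in the quote above.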
The Disruptor Wizard is Dead, Long Live the Disruptor Wizard!
As of this morning, the Disruptor Wizard has been merged into the core LMAX Disruptor source tree. The .NET port had included the wizard style syntax for quite some time and it seemed to be generally popular, so why make people grab two jars instead of one?
I also updated it to reflect the change in terminology within the Disruptor. Instead of Consumers, there are now EventProcessors and EventHandlers. That better reflects the fact that consumers can actually add additional values to the events. Additionally, the ProducerBarrier has been merged into the ring buffer itself and the ring buffer entries are now called events. Again, that better reflects the fact that the programming model around the disruptor is most often event based.
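To illustrate, the wizard-style wiring looks roughly like this with the current API (the event type and handler names here are made up for the example):

```java
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class WizardStyleWiring {

    static class TradeEvent {
        long value;
    }

    public static void main(String[] args) {
        // Handler names are illustrative only.
        EventHandler<TradeEvent> journaller = (event, sequence, endOfBatch) -> { /* write to journal */ };
        EventHandler<TradeEvent> replicator = (event, sequence, endOfBatch) -> { /* replicate to backup */ };
        EventHandler<TradeEvent> businessLogic = (event, sequence, endOfBatch) -> { /* process the event */ };

        Disruptor<TradeEvent> disruptor = new Disruptor<>(TradeEvent::new, 1024, DaemonThreadFactory.INSTANCE);

        // Wizard-style wiring: journalling and replication run in parallel off the
        // ring buffer; business logic only sees an event once both have handled it.
        disruptor.handleEventsWith(journaller, replicator).then(businessLogic);
        disruptor.start();
        disruptor.shutdown();
    }
}
```

The fluent handleEventsWith(...).then(...) chain is the wizard-style syntax referred to above: it expresses the dependency graph between handlers directly.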
Martin Fowler on the LMAX Architecture
Martin Fowler has posted an excellent overview of the LMAX architecture which helps put the use-cases and key considerations that led to the LMAX disruptor pattern in context.
Good to see some focus being put on the programming model that LMAX uses as well. While conventional wisdom suggests that to get the best performance you have to make heavy use of concurrency and multithreading, the measurements LMAX made showed that the cost of contention and communication between threads was too great. As such, the programming model at LMAX is largely single-threaded, achieving the best possible performance while avoiding the complexities of multithreaded programming.
LMAX Disruptor – High Performance, Low Latency and Simple Too
The LMAX disruptor is an ultra-high performance, low-latency message exchange between threads. It’s a bit like a queue on steroids (but quite a lot of steroids) and is one of the key innovations used to make the LMAX exchange run so fast. There is a rapidly growing set of information about what the disruptor is, why it’s important and how it works – a good place to start is the list of articles and, for the ongoing stuff, follow LMAX Blogs. For really detailed stuff, there’s also the white paper (PDF).