At LMAX, all our acceptance tests are written using a high-level DSL. This gives us two key advantages:
- Tests can focus on what they’re testing, with the details and mechanics hidden behind the DSL
- When things change, we can update the DSL implementation to match, avoiding the need to change every test that happens to touch on the affected area.
The DSL that LMAX uses is probably not what most people think of when they hear the term DSL – it doesn’t attempt to read like plain English; it just simplifies things down significantly. We’ve actually open-sourced the simple little library that is the entrance-way to what we think of as the DSL – creatively named simple-dsl. It’s essentially the glue between what we write in an acceptance test and the plain-Java implementation behind it.
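To make the mechanics concrete, here’s a minimal sketch of the kind of argument parsing such a library performs. This is an illustration of the style only – the class and method names are invented for the example, not simple-dsl’s actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of "name: value" argument parsing in the simple-dsl style.
// Names here are illustrative, not the library's real API.
public class DslArgs {
    private final Map<String, String> values = new HashMap<>();

    public DslArgs(String... args) {
        for (String arg : args) {
            int colon = arg.indexOf(':');
            if (colon >= 0) {
                // "quantity: 5" -> name "quantity", value "5"
                values.put(arg.substring(0, colon).trim(),
                           arg.substring(colon + 1).trim());
            }
        }
    }

    public String value(String name) {
        return values.get(name);
    }
}
```

A DSL method can then pull out just the parameters it cares about, which is what lets tests supply only the arguments relevant to them.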
As a simple example, here’s a test that creates a user and an instrument, then places an order:

registrationAPI.createUser("user");
adminAPI.marketOperations.createInstrument("name: instrument");
tradingAPI.placeOrder("instrument", "quantity: 5", "type: market",
                      "expectedStatus: UNMATCHED");
Overall, the aim is to have the acceptance tests written at a very high level – focussing on what should happen but leaving the how to the DSL implementation. The tradingAPI.placeOrder call is a good example of this: it’s testing that when the user places an order on an instrument with no liquidity, it won’t be matched. In the DSL that’s actually a two-step process: first place the order and receive a synchronous OK response to say the order was received; then, when the order reaches the matching engine, an asynchronous event is emitted to say the order was not matched. We could have made that two separate calls in the acceptance test, but that would have exposed too much detail about how the system works when what we really care about is that the order is unfilled – how that’s reported is an implementation detail.
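As an illustration of how one DSL call can hide that two-step conversation, here’s a hypothetical sketch. The gateway and event-feed interfaces are invented for the example and stand in for whatever the real system exposes:

```java
// Hypothetical sketch: one DSL method hiding a synchronous acknowledgement
// followed by an asynchronous status event. Interfaces are illustrative.
public class PlaceOrderDsl {
    interface TradingGateway {
        // returns an order id once the synchronous OK response arrives
        String placeOrder(String instrument, int quantity, String type);
    }
    interface EventFeed {
        // blocks until the matching engine reports the order's outcome
        String waitForOrderStatus(String orderId);
    }

    private final TradingGateway gateway;
    private final EventFeed events;

    public PlaceOrderDsl(TradingGateway gateway, EventFeed events) {
        this.gateway = gateway;
        this.events = events;
    }

    // One DSL call, two system interactions: place, then await the async outcome.
    public void placeOrder(String instrument, int quantity, String type,
                           String expectedStatus) {
        String orderId = gateway.placeOrder(instrument, quantity, type);
        String actual = events.waitForOrderStatus(orderId);
        if (!expectedStatus.equals(actual)) {
            throw new AssertionError(
                "Expected " + expectedStatus + " but was " + actual);
        }
    }
}
```

The test only ever sees the single call with its expectedStatus parameter; the protocol stays inside the DSL.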
However, that does mean that the implementation of the DSL is an important part of the specification of the system. The acceptance tests express the user requirements, and the DSL expresses the technical details of those requirements.
Model the Types of Users and Interactions
All our acceptance tests extend a base class, DslTestCase, which exposes a number of public variables that act as the entry points to the system (registrationAPI, adminAPI and tradingAPI in the example above). Each of these roughly represents a way that a certain type of user interacts with the system. So registrationAPI works with the API exposed by our registration gateway – the same API that our sign-up process on the website talks to. adminAPI uses the same APIs our admin console talks to, and tradingAPI is the API that both our UI uses and that many of our clients interact with directly.
We also have UI variants like adminUI and tradingUI that use Selenium to open a browser and test the UI as well.
Our system tends to have a strong correlation between the type of user and the entry point they use, so our DSL mostly maps to the gateways into our system; in other systems it may be more appropriate to focus on the type of user regardless of their entry point into the system. Again, the focus should be on what happens more than how, and the way you categorise functions in the DSL should aid you in thinking that way.
That said, our top-level DSL concepts aren’t entirely restricted to just the system entry point they model. For example, the registrationAPI.createUser call in the example will initially talk to the system’s registration API, but since a new account isn’t very useful until it has funds, it then talks to the admin console to approve the registration and credit some funds into the user’s account. There’s a large dose of pragmatism involved in the DSL implementation, with the goal being to make it easy to read and write the acceptance tests themselves, and we’re willing to sacrifice a little design purity to get that (but only a little).
Top-level concepts often further categorise the functionality they provide. For example, our admin console, which adminAPI drives, has a lot of functionality and is used by a range of user types, so it sub-categorises into things like marketOperations, customerServices, risk, etc.
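Structurally, that sub-categorisation can fall out of plain object composition. A rough sketch – all class and field names here are invented for illustration, not LMAX’s actual code:

```java
// Rough sketch of an entry point composed of sub-categories.
// All names are illustrative.
public class AdminApiDsl {
    public final MarketOperationsDsl marketOperations = new MarketOperationsDsl();
    public final CustomerServicesDsl customerServices = new CustomerServicesDsl();
    public final RiskDsl risk = new RiskDsl();

    public static class MarketOperationsDsl {
        public String createInstrument(String... args) {
            // stubbed: a real implementation would drive the admin console's API
            return "createInstrument" + java.util.Arrays.toString(args);
        }
    }
    public static class CustomerServicesDsl { /* customer-facing admin actions */ }
    public static class RiskDsl { /* risk-management actions */ }
}
```

Tests then read adminAPI.marketOperations.createInstrument(...), keeping the categorisation visible at the call site.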
Add Reusable Components to the DSL
One of the signs that people don’t understand the design of our DSL is when they extract repeated sequences of test steps into a private method within the test itself. On the surface this seems like a reasonable idea, allowing that sequence of actions to be reused by multiple tests in the file. But if the sequence is useful in many test cases within one file, and significant enough to be worth the indirection of extracting a method, it’s almost inevitably useful across many files.
Instead of extracting a private method, put reusable pieces into the DSL itself. Then they’ll be available to all your tests. More importantly, though, you can make that method fit the DSL style properly – in our case, using simple-dsl to pass parameters instead of a fixed set of method parameters.
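As a sketch of the difference, here’s what promoting a helper into the DSL might look like. The workflow name, steps and parameters are invented for illustration; the point is the varargs "name: value" style with defaults, rather than a fixed method signature:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a reusable sequence promoted into the DSL. Instead of a private
// helper with a fixed signature, it takes "name: value" strings so tests
// supply only the parameters they care about. All names are illustrative.
public class WorkflowsDsl {
    private final StringBuilder log = new StringBuilder();

    // e.g. workflows.createFundedUser("name: bob", "balance: 100.00")
    public void createFundedUser(String... args) {
        Map<String, String> params = parse(args);
        String name = params.getOrDefault("name", "user");
        String balance = params.getOrDefault("balance", "1000.00");
        // stubbed: record the steps a real implementation would drive
        log.append("createUser(").append(name).append(") ");
        log.append("approveRegistration(").append(name).append(") ");
        log.append("credit(").append(name).append(", ").append(balance).append(")");
    }

    public String log() { return log.toString(); }

    private static Map<String, String> parse(String... args) {
        Map<String, String> values = new HashMap<>();
        for (String arg : args) {
            int colon = arg.indexOf(':');
            if (colon >= 0) {
                values.put(arg.substring(0, colon).trim(),
                           arg.substring(colon + 1).trim());
            }
        }
        return values;
    }
}
```

Because the parameters are named and optional, adding a new one later doesn’t break the existing call sites the way changing a fixed method signature would.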
One of our top-level concepts in the DSL is ‘workflows’. It bundles together broader sequences of actions that cut across the boundaries of any one entrance point, and it’s a handy home for many of the reusable functions we split out. The downside is that it’s currently a real grab bag of random stuff and could do with some meaningful sub-categorisation. Naming is hard…
Design to Avoid Intermittency
The way the DSL is designed is a key weapon in the fight against intermittency. The first rule is to design each function to appear synchronous as much as possible. The LMAX Exchange has a highly asynchronous design, but our DSL hides that as much as possible.
The most useful pattern for this is that whenever you provide a setter-type function, it should automatically wait and verify that the effect has been fully applied by checking the equivalent getter-type API. So the end of the DSL implementation for registrationAPI.createUser is a waiter that polls our broker service until the account actually shows up there with the initial balance we credited. That way the test can carry on and place an order immediately without being intermittently rejected for lack of funds.
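The waiter itself can be as simple as a poll loop with a deadline. A generic sketch – the timings are assumptions to tune for your own system:

```java
import java.util.function.BooleanSupplier;

// Generic "waiter": poll a condition until it holds or a deadline passes.
// A setter-style DSL method ends with a wait on the matching getter, so the
// test only continues once the effect is actually visible.
public class Waiter {
    public static void waitUntil(BooleanSupplier condition, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(
                    "Condition not met within " + timeoutMillis + "ms");
            }
            try {
                Thread.sleep(50); // poll interval; an assumed, tunable value
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new AssertionError("Interrupted while waiting", e);
            }
        }
    }
}
```

A createUser-style DSL method would end with something like Waiter.waitUntil(() -> broker.hasAccount("bob"), timeout), where the broker lookup is whatever getter your system exposes.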
The second key pattern applies when verifying values. We produce a lot of reports as CSV files, so we originally had DSL like:
adminAPI.finance.downloadCashflowReport("date: today", "rememberAs: myCsvFile");
adminAPI.finance.verifyCashflowReportContains("csvFile: myCsvFile", "amount: 10.00");
Apart from being pretty horrible to read, this leads to a lot of intermittency because our system doesn’t guarantee that cash flows are recorded to the database immediately – it’s done asynchronously, so it’s only guaranteed to happen within a reasonable time. Instead, it’s much better to write:
adminAPI.finance.verifyCashflowReportContains("date: today", "amount: 10.00");
Then, inside the DSL, you can use a waiter to poll the cashflow CSV until it contains the expected value, or until whatever you define as a reasonable time elapses and the test times out and fails. Again, having the test focus on what and the DSL deal with how allows us to write better tests.
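That verify-with-waiter shape can be sketched like this – the report download is stubbed in as a supplier, and all names are invented for the example:

```java
import java.util.function.Supplier;

// Sketch of a polling verify: re-fetch the report until it contains the
// expected row or the time limit expires. Names and timings are illustrative.
public class ReportVerifier {
    public static void verifyReportContains(Supplier<String> downloadReport,
                                            String expectedRow,
                                            long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        String latest = "";
        while (System.currentTimeMillis() <= deadline) {
            latest = downloadReport.get(); // download a fresh copy each attempt
            if (latest.contains(expectedRow)) {
                return; // the asynchronous write has landed
            }
            try {
                Thread.sleep(50); // poll interval; an assumed, tunable value
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw new AssertionError(
            "Report never contained '" + expectedRow + "'; last copy: " + latest);
    }
}
```

Crucially, the download happens inside the loop: each attempt fetches a fresh copy of the report, which is what makes the eventually-consistent write safe to assert on.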
Don’t Get Too Fancy with the DSL
The first thought most people have when they see our DSL is that it could be so much better if we used static types and chained method calls to get the compiler to validate more and have refactoring tools work well. It sounds like a great idea – our simple string-based DSL seems far too primitive to work in practice – but we’ve actually tried it the other way as well, and it’s not as great as it sounds.
Inevitably, when you try to make the DSL too much like English, or try to get the compiler more involved, you add quite a lot of complexity to the DSL implementation. That makes it a lot harder to maintain, so the cost of your acceptance tests goes up – exactly the opposite of what you were intending.
The trade-offs will vary considerably depending on which language you’re using for your tests, and the best style of DSL to create will vary with them. I strongly suspect, though, that regardless of language the best DSL is a radically simple one – it’s just that different things are radically simple in different languages.
This was meant to be a quick article before getting on to what I really wanted to talk about but suddenly I’m 1500 words in and still haven’t discussed anything about the implementation side of the DSL.
It turns out that while our DSL might be simple and something we take for granted, it’s a huge part of what makes our acceptance tests easily maintainable instead of gradually becoming a huge time sink that prevents any change to the system. My intuition is that those people who have tried acceptance tests and found them too expensive to maintain have failed to find the right style of abstraction in the DSL they use, leaving their tests too focused on how instead of what.