Symphonious

Living in a state of accord.

Testing@LMAX – Isolate UI Tests with vncserver

One reason that automated UI tests can be unreliable is that they tend to be sensitive to what else is on screen at the time, and even to things like the current screen size. Developers running the tests locally also find it annoying to have windows opening and closing on their machine while the tests run, and they can’t do anything else in the meantime because their clicking might interfere with the test.

At LMAX we solve that by isolating tests in their own X session, created using vncserver. We simply start vncserver with:

vncserver :20 -geometry 1600x1200

Then set DISPLAY=:20 as an environment variable when starting WebDriver’s Firefox instance:

FirefoxBinary firefoxBinary = new FirefoxBinary();
firefoxBinary.setEnvironmentProperty("DISPLAY", ":20");
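
For completeness, here’s a minimal sketch of wiring that up into a driver, assuming the Selenium 2.x-era FirefoxDriver constructor that accepts a FirefoxBinary and FirefoxProfile directly:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxBinary;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;

// Sketch: start Firefox on the isolated X session (Selenium 2.x-era API).
static WebDriver firefoxOnDisplay(final String display) {
    final FirefoxBinary firefoxBinary = new FirefoxBinary();
    firefoxBinary.setEnvironmentProperty("DISPLAY", display);
    return new FirefoxDriver(firefoxBinary, new FirefoxProfile());
}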

The Firefox window then pops up in its own isolated X session. We can still use a VNC client to watch as the test runs, but we can also let it run in the background and continue using the machine for other things. In CI it allows us to run UI tests on a headless server.

Since we run a number of tests in parallel, in CI we start a number of vncserver instances and allocate a different one to each running test to ensure they’re completely isolated.
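
The allocation itself can be as simple as handing out display numbers from a pool. Here’s a sketch – the pool contents and helper name are illustrative, not our actual build setup:

import java.util.Arrays;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.openqa.selenium.firefox.FirefoxBinary;

// Sketch: one pre-started vncserver display per concurrently running test,
// so no two browsers ever share a screen.
private static final Queue<Integer> FREE_DISPLAYS =
    new ConcurrentLinkedQueue<>(Arrays.asList(20, 21, 22, 23));

static FirefoxBinary browserBinaryOnFreeDisplay() {
    final Integer display = FREE_DISPLAYS.poll();
    if (display == null) {
        throw new IllegalStateException("No free X display available for this test");
    }
    final FirefoxBinary binary = new FirefoxBinary();
    binary.setEnvironmentProperty("DISPLAY", ":" + display);
    return binary;
}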

Simple, but incredibly effective.

Testing@LMAX – Screenshots with Selenium/WebDriver

When an automated UI test fails, it can be hard to tell exactly what went wrong just from the failure message. It typically says that some element the test was looking for wasn’t found, but not what was there instead. Was there an error message displayed? Was the operation still executing? Did something completely unexpected happen?

To answer those questions our DSL automatically captures a screenshot when any UI operation fails and includes a link to it in the failure message. That way, when someone reviews the test result they can see exactly what was on screen, which typically makes it straightforward to identify what went wrong and fix it.
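
The shape of that is roughly as follows. This is a sketch rather than our actual DSL code, and captureScreenshot() stands in for whichever capture mechanism is in use:

// Sketch: wrap each UI operation so any failure automatically captures a
// screenshot and the rethrown failure message links to it.
private void performUiOperation(final String description, final Runnable operation) {
    try {
        operation.run();
    } catch (final AssertionError | RuntimeException e) {
        final String screenshotLink = captureScreenshot(); // hypothetical helper
        throw new AssertionError(description + " failed, screenshot: " + screenshotLink, e);
    }
}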

Until recently we’d been using the convenient and helpful-looking TakesScreenshot.getScreenshotAs method that WebDriver provides. For example:

((TakesScreenshot) webDriver).getScreenshotAs(
    new SaveScreenshotOutputType(pngFilename));

As expected, this creates a PNG image in the specified location that looks for all the world like a screenshot of the browser content. Unfortunately, it’s lying.

WebDriver actually does something very clever and gets the browser to render the page content into a canvas element and then saves that as the PNG file. This is an extremely close approximation of what the page looks like with two important exceptions:

  1. It doesn’t respect the viewport size so body content is never scrolled off-screen.
  2. Any browser chrome or random other windows that have popped up aren’t shown.

Both of these things can be an issue – the scrolled-off-screen one being the most problematic. Modern WebDriver quite accurately simulates a user clicking and typing keys, so if something’s not on screen it can’t be clicked. When your test fails because an element was “present but not visible” and the screenshot shows it as very clearly visible, hilarity ensues. Very frustrating hilarity.

To fix this we’ve started taking honest-to-goodness screenshots. Since all our tests get their own X session (courtesy of vncserver), their windows are completely isolated from each other and a dump of the entire screen will capture precisely what a real user would see, browser chrome and scrolling included. Linux provides an entertaining array of options for capturing screenshots from the command line, but the one that happened to be already installed was import, part of the ImageMagick suite. We simply execute:

import -display :20 -window root screenshot.png

where :20 is the X display this particular test has been allocated and screenshot.png is where we want the screenshot to wind up.
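
In the test harness that just means shelling out. A sketch of how that can look (the helper name is ours for illustration):

import java.io.IOException;

// Sketch: capture the entire X display allocated to this test using
// ImageMagick's `import`, failing loudly if the command doesn't succeed.
static void captureFullScreen(final String display, final String outputFile)
        throws IOException, InterruptedException {
    final Process process = new ProcessBuilder(
            "import", "-display", display, "-window", "root", outputFile)
        .inheritIO()
        .start();
    if (process.waitFor() != 0) {
        throw new IOException("import exited with status " + process.exitValue());
    }
}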

Since the WebDriver screenshot can be useful as well – for example, finding out that an error message is displayed at the top of the screen – we continue to grab that too.

Finally, for completeness we grab a dump of the DOM to an HTML file so we can later inspect what IDs, classes, attributes etc. are present, including any hidden elements. webDriver.getPageSource() makes that easy and we append an extra HTML comment that includes webDriver.getCurrentUrl() for good measure.
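
That boils down to a few lines. A sketch (the file naming is illustrative):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.openqa.selenium.WebDriver;

// Sketch: dump the rendered DOM alongside the screenshots, appending the
// current URL as an HTML comment for context.
static void dumpDom(final WebDriver webDriver, final String htmlFilename) throws IOException {
    final String dom = webDriver.getPageSource()
        + "\n<!-- Captured from " + webDriver.getCurrentUrl() + " -->\n";
    Files.write(Paths.get(htmlFilename), dom.getBytes(StandardCharsets.UTF_8));
}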

Testing@LMAX – Compatibility Tests

Once an application goes live, it is absolutely essential that any future changes are able to work with the existing data in production, typically by migrating it as changes are required. That existing data and the migrations applied to it are often the riskiest and least tested functions in the system. Mistakes in a migration will at best cause a multi-hour outage while backups are restored, and more likely will subtly corrupt data, producing incorrect results that may go unnoticed for long periods, making it impossible to roll back. At LMAX we reduce that risk by running compatibility tests.

Sanitised Data

The first requirement for testing data migrations is testing data that is as production-like as possible. It’s almost never a good idea to bring actual production data into development, and it’s definitely never going to happen for a finance company like LMAX. Instead, we have a sanitiser that runs in production and generates a very heavily sanitised version of production data that still has the same “shape”. For example, it has the same number of accounts, the same distribution of open positions across instruments, null values in the same places there are null values in production, and so on. However, it replaces anything that’s even vaguely personally identifiable or sensitive. If in doubt, sanitise it out.

Despite being heavily sanitised, the data is still treated as something to be secured, but it can be brought into our development, testing and staging environments.

We have multiple production environments so we have sanitised data that matches the shape of each of those environments.

Can Migrations Run?

Once we have sanitised data, the most basic check is to confirm the migration process will actually complete successfully. This ensures the release will go through but doesn’t give us any real confidence that the migration did the right thing. For such a primitive check it’s surprisingly effective, as it picks up the common errors of changing columns to NOT NULL when the production data actually does have null values, or adding a unique constraint to a table whose content isn’t actually unique.

Did Migrations Work?

The obvious next step is to write tests to confirm that migrations actually worked. Our compatibility test jobs are set up so that we can easily write a JUnit test that runs after the migrations complete, letting us verify the resulting state.

The most direct form of test is an example-based test. We select some examples from the production dataset and write assertions to check that a specific bit of the data migrated in the way we expect. The downside is that we’re dealing with live production data which is regularly updated, so it’s possible that our examples will change after we’ve written the test and then, correctly, migrate to something different from what we expect. Still, these are often useful to run once as a sanity check when developing the migration, and then delete.

Slightly more generic: we can write tests that assert constraints that must be true immediately after the migration completes. For example, when we made permissions more fine-grained we needed to assign a new type of payment role to every account that used real money, but not to any demo accounts (which use pretend money). We can write a test to verify that migration worked correctly quite easily; however, once the migration goes out, admin users may add or remove the role for different accounts and the constraint would no longer hold. For cases like that we simply delete the test once the migration has gone live, at which point it’s done its job anyway. We also mark the test with a @ValidUntil annotation that makes it clear the test has a limited lifetime, in case we forget to delete it.
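
As a rough illustration – the table and column names here are invented, and the exact form of @ValidUntil will differ from ours – such a test might look like this, using the assertNoRowsMatch helper described below:

@ValidUntil("2016-06-01") // hypothetical signature; marks when the test is expected to be removed
@Test
public void shouldHaveAssignedPaymentRoleToEveryRealMoneyAccount() {
  assertNoRowsMatch(
    "SELECT a.account_id " +
    "FROM account a " +
    "LEFT JOIN account_role r ON a.account_id = r.account_id " +
    "  AND r.role_name = 'PAYMENT' " +
    "WHERE a.account_type = 'REAL_MONEY' AND r.account_id IS NULL");
}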

Finally, we can often identify constraints that should always be valid and write permanent tests for them. These are extremely powerful, testing not just that our current migration works correctly but that no future migration breaks that expectation.

Did Something Unexpected Happen?

The compatibility tests that should always hold true have an additional benefit – they give us early feedback that the production data has diverged from expectations for some reason, typically because a bug slipped through. We can then investigate, fix the bug to prevent any more data issues and work out how to clean up the problem.

Obviously, finding issues only after production data has gone wrong is not something we ever want, but it’s still a useful safety net if something slips through all our pre-release testing and the database schema constraints we use. Typically when we do find issues they are minor inconsistencies that don’t cause any problems now, but are like little time bombs just waiting for a future release to assume they can’t possibly happen. So even getting feedback that late in the process often allows us to avoid any user-noticeable effects.

Making Them Easy

We have a base class we extend our compatibility tests from, which makes it easy to get a connection to the database and has a few handy utilities for asserting things. By far the most useful, however, is the assertNoRowsMatch method. It does exactly what it says – takes an SQL query and asserts that no rows match. If any do, it prints them out so you get really useful debug information to start investigating the problem. For example:

@Test
public void shouldHaveAPrincipalForEveryAccount() {
  assertNoRowsMatch(
    "SELECT a.account_id, a.name " +
    "FROM account a " +
    "LEFT JOIN principal p ON a.principal_id = p.principal_id " +
    "WHERE p.principal_id IS NULL");
}

If we’ve somehow wound up with an account with no principal that could log into it, the test will print the account ID and name so we can investigate what happened and clean up the data.
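
A minimal sketch of what such a helper can look like (ours differs in detail, and connection() stands in for however the base class exposes the database connection):

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import static org.junit.Assert.fail;

// Sketch: run the query and fail with the matching rows in the message so
// investigation can start straight from the test output.
protected void assertNoRowsMatch(final String sql) throws Exception {
  try (Statement statement = connection().createStatement();
       ResultSet results = statement.executeQuery(sql)) {
    final ResultSetMetaData metaData = results.getMetaData();
    final StringBuilder rows = new StringBuilder();
    while (results.next()) {
      for (int column = 1; column <= metaData.getColumnCount(); column++) {
        rows.append(metaData.getColumnLabel(column))
            .append('=').append(results.getObject(column)).append(' ');
      }
      rows.append('\n');
    }
    if (rows.length() > 0) {
      fail("Expected no rows to match:\n" + sql + "\nbut found:\n" + rows);
    }
  }
}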

Testing@LMAX – Making Test Output Useful

Just like production code, you should assume things are going to go wrong in your tests, and when they do you want good logging to help track down what happened and why. So just like production code, you should use a logging framework within your DSL, use meaningful log levels and think about what info you’d need in the logs if something went wrong (and what you don’t). There are also a few things we’ve found very useful specifically in our test logging.

Log Alias to Real Name Mappings

Since the DSL uses aliases, if we want to poke around in the exchange manually to understand why a test failed, we need to know the real names to use. So whenever we create a real name for an alias we log some information about it. For example when creating an instrument:

21:28:53,916 WARN  …rdersWithSuppliedInstructionIds [AdminAPI] Created instrument 'instrument' (actual name: 'instrument-54369ih64k63', externalId: 180423, internalId: 180422) on tradeReportingGroup 1003

All the key information we need is in that log statement – the alias, real name plus unique identifiers (externalId and internalId).
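
In the DSL this is just a matter of logging at the point the real name is generated. A sketch – the logging framework, naming scheme and method shown are assumptions for illustration, not the actual DSL code:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch: generate a unique real name for an alias, remember the mapping,
// and log it so a failed test can be investigated manually.
private static final Logger LOGGER = LoggerFactory.getLogger("AdminAPI");
private final Map<String, String> realNamesByAlias = new HashMap<>();

public String createInstrument(final String alias) {
    final String realName = alias + "-" + UUID.randomUUID().toString().substring(0, 12);
    realNamesByAlias.put(alias, realName);
    LOGGER.info("Created instrument '{}' (actual name: '{}')", alias, realName);
    return realName;
}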

Name Your Threads

We use a custom JUnit test runner (via the @RunWith annotation) so we can run tests within a test suite in parallel. With tests running in parallel, all their output gets mixed up and becomes hard to read, so recently we started setting the test thread names to the name of the test case.

// Inside our custom runner: name the thread after the test before evaluating
// it so all log output can be attributed to the right test.
private void executeTest(
  final FrameworkMethod method,
  final Description description) throws Throwable {
    Thread.currentThread().setName(
      getThreadName(description.getMethodName()));
    methodBlock(method).evaluate();
}

We actually trim the method name to 30 characters (cutting off the start rather than the end, which tends to work better with the way we name tests) so we get output like:

21:28:47,356 INFO  …tityAndPriceAndNoStopLossOffset [TestContext] Created PartyCode XMCS (alias: marketMaker)
21:28:47,356 INFO  …erWithZeroSuppliedInstructionId [TestContext] Created PartyCode U6HK (alias: marketMaker)
21:28:47,356 INFO  …ctionIdIfFirstOrderHasCompleted [TestContext] Created PartyCode 3DS6 (alias: marketMaker)
21:28:47,356 INFO  …StopLossOffsetAndAStopLossPrice [TestContext] Created PartyCode XFY8 (alias: marketMaker)
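
The trimming itself is just string slicing. A sketch – the exact length and ellipsis handling in our runner may differ:

// Sketch: keep the end of the method name, since our test names get more
// specific towards the end.
private static String getThreadName(final String methodName) {
  final int maxLength = 30;
  if (methodName.length() <= maxLength) {
    return methodName;
  }
  return "…" + methodName.substring(methodName.length() - maxLength);
}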

There are a few cases where we spawn additional threads within a test (typically to pull data from long poll or other push event channels). In those cases we generally pass the thread name down with an additional prefix (e.g. LongPoll-…StopLossOffsetAndAStopLossPrice) so we can still associate that output with the right test.

Time Traveller Names

The way we allow time travelling tests to run in parallel is reasonably complex – only one thread ever actually executes the time travel and there’s a bunch of cross-thread coordination – so our thread names aren’t as useful in that little area of code. As such we give each test we’re currently running a time traveller name so we get log output from the Tardis like:

The Doctor asking to travel to mondayOpen (Mon Mar 14 07:00:00 UTC 2016)
Clara asking to travel to mondayOpen (Mon Mar 14 07:00:00 UTC 2016)
Captain Jack asking to travel to mondayOpen (Mon Mar 14 07:00:00 UTC 2016)
Rory asking to travel to mondayOpen (Mon Mar 14 07:00:00 UTC 2016)
Missy asking to travel to mondayOpen (Mon Mar 14 07:00:00 UTC 2016)
Amy asking to travel to mondayOpen (Mon Mar 14 07:00:00 UTC 2016)
River asking to travel to mondayOpen (Mon Mar 14 07:00:00 UTC 2016)
Rose asking to travel to mondayOpen (Mon Mar 14 07:00:00 UTC 2016)
Time travelling to: mondayOpen (Mon Mar 14 07:00:00 UTC 2016)

We may have gotten a little carried away with the Doctor Who theme but having a name for each time traveller makes it far easier to understand what each test is waiting for.

Testing@LMAX – Introducing ElementSpecification

Today LMAX Exchange has released ElementSpecification, a very small library we built to make working with selectors in Selenium/WebDriver tests easier. It has three main aims:

  • Make it easier to understand selectors by using a very English-like syntax
  • Avoid common pitfalls when writing selectors that lead to either brittle or intermittent tests
  • Strongly discourage writing overly complicated selectors.

Essentially, we use ElementSpecification anywhere that we would have written CSS or XPath selectors by hand. ElementSpecification will automatically select the most appropriate format to use – choosing between a simple ID selector, CSS or XPath.

Making selectors easier to understand doesn’t mean making locators shorter – CSS is already a very terse language. We actually want to use more characters to express our intent so that future developers can read the specification without having to decode CSS. For example, the CSS:

#data-table tr[data-id='78'] .name

becomes:

anElementWithId("data-table")
.thatContainsA("tr").withAttributeValue("data-id", "78")
.thatContainsAnElementWithClass("name")

Much longer, but if you were to read the CSS selector to yourself, it would come out a lot like the ElementSpecification syntax. That allows you to stay focussed on what the test is doing instead of pausing to decode the CSS. It also reduces the likelihood of misreading a vital character and misunderstanding the selector.

With ElementSpecification essentially acting as an adapter layer between the test author and the actual CSS, it’s also able to avoid some common intermittency pitfalls. In fact, the reason ElementSpecification was first built was that really smart people kept attempting to locate an element with a class name using:

//*[contains(@class, 'valid')]

which looks ok, but incorrectly also matches an element with the class ‘invalid’. Requiring the class attribute to exactly match ‘valid’ is too brittle because it will fail if an additional class is added to the element. Instead, ElementSpecification would generate:

contains(concat(' ', @class, ' '), ' valid ')

which is decidedly unpleasant to have to write by hand.
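
If you ever do need to build that predicate by hand, it reduces to a small helper along these lines (an illustration only, not part of ElementSpecification’s API):

// Sketch: build the whitespace-padded XPath predicate that matches a class
// token exactly, so 'valid' never matches an element whose class is 'invalid'.
static String hasClassToken(final String className) {
  return "contains(concat(' ', @class, ' '), ' " + className + " ')";
}

// Usage: "//*[" + hasClassToken("valid") + "]"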

The biggest benefit we’ve seen from ElementSpecification though is the fact that it has a deliberately limited set of abilities. You can only descend down the DOM tree, never back up and never across to siblings. That makes selectors far easier to understand and avoids a lot of unintended coupling between the tests and incidental properties of the DOM. Sometimes it means augmenting the DOM to make it more semantic – things like adding a “data-id” attribute to rows as in the example above. It’s surprising how rarely we need to do that, and how useful those extra semantics wind up being for a whole variety of other reasons anyway.