Moolah Diaries – Multi-tenant Support for Testing
Experience at LMAX has very clearly demonstrated the benefits of good test isolation, so one of the first things I added to Moolah was multi-tenant support. Each account is a completely isolated little world which is perfect for testing.
Given that I’ve completely punted on storing passwords by delegating authentication to Google (and potentially other providers in the future, since it uses bell to handle third-party authentication), there’s actually no user creation step at all, which makes it even easier. As a result, it’s not just the acceptance tests that benefit from the isolation; the database tests no longer need to delete everything in the database to give themselves a clear workspace either.
Testing@LMAX – Isolate UI Tests with vncserver
One reason that automated UI tests can be unreliable is that they tend to be sensitive to what else is on screen at the time, and even to things like the current screen size. Developers running the tests locally also find it annoying to have windows opening and closing on their machine while the tests run, and they can’t do much else in the meantime because their own clicking might interfere with the test.
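The excerpt doesn’t include the setup itself, but the core idea is to point the browser at a display owned by vncserver rather than at the developer’s own desktop. A minimal sketch using the Selenium 2-era Java API, assuming a vncserver is already running on display :20:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxBinary;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;

public class IsolatedDisplayExample {
    public static void main(String[] args) {
        // Assumes a vncserver (or Xvfb) is already running on display :20,
        // e.g. started with "vncserver :20" before the test run.
        FirefoxBinary binary = new FirefoxBinary();
        binary.setEnvironmentProperty("DISPLAY", ":20");

        // Launch the browser on the isolated display so test windows never
        // appear on, or get disturbed by, the developer's own desktop.
        WebDriver driver = new FirefoxDriver(binary, new FirefoxProfile());
        try {
            driver.get("http://localhost:8080/");
        } finally {
            driver.quit();
        }
    }
}
```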
Testing@LMAX – Screenshots with Selenium/WebDriver
When an automated UI test fails, it can be hard to tell exactly what went wrong just from the failure message. The failure message typically just says that some element the test was looking for wasn’t found, but it doesn’t tell you what was there. Was there an error message displayed instead? Was the operation still executing? Did something completely unexpected happen instead?
To answer those questions, our DSL automatically captures a screenshot when any UI operation fails and we include a link to it in the failure message. That way, when someone reviews the test result they can see exactly what was on screen, which typically makes it straightforward to identify what went wrong and fix it.
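The DSL plumbing isn’t shown in this excerpt, but the capture itself is plain WebDriver. A minimal sketch, assuming the driver implements TakesScreenshot and using a hypothetical reportsDir for the output:

```java
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class ScreenshotOnFailure {
    // Wraps a UI operation and, if it fails, attaches a screenshot reference
    // to the failure message before rethrowing.
    public static void runWithScreenshot(WebDriver driver, Path reportsDir, String name, Runnable operation) {
        try {
            operation.run();
        } catch (RuntimeException | AssertionError e) {
            try {
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                Path target = reportsDir.resolve(name + ".png");
                Files.copy(shot.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
                throw new AssertionError(e.getMessage() + " (screenshot: " + target + ")", e);
            } catch (Exception screenshotFailure) {
                throw e; // screenshot capture is best-effort; keep the original failure
            }
        }
    }
}
```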
Testing@LMAX – Compatibility Tests
Once an application goes live, it is absolutely essential that any future changes are able to work with the existing data in production, typically by migrating it as changes are required. That existing data and the migrations applied to it are often the riskiest and least tested functions in the system. Mistakes in a migration will at best cause a multi-hour outage while backups are restored, and more likely will subtly corrupt data, producing incorrect results that may go unnoticed for long periods, making it impossible to roll back. At LMAX we reduce that risk by running compatibility tests.
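The harness itself isn’t described in this excerpt, so the following is only a rough sketch of the idea, assuming a Flyway-managed schema (not necessarily what LMAX uses): restore a database from a snapshot taken with the previous release, run the current release’s migrations over that realistic data, and then assert nothing was broken.

```java
import org.flywaydb.core.Flyway;

public class MigrationCompatibilityTest {
    public static void main(String[] args) {
        // Hypothetical: this database has been restored from a snapshot taken
        // with the previous release's schema and representative data.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost/compat", "test", "test")
                .load();

        // Apply the current release's migrations on top of the old data.
        flyway.migrate();

        // ...then run assertions over the migrated data (row counts, invariants,
        // spot-checked records) to catch subtle corruption before production does.
    }
}
```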
Testing@LMAX – Making Test Output Useful
Just like production code, you should assume things are going to go wrong in your tests, and when they do you want good logging to help track down what happened and why. So, just like production code, you should use a logging framework within your DSL, use meaningful log levels and think about what info you’d need in the logs if something went wrong (and what you don’t). There are also a few things we’ve found very useful specifically in our test logging.
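As a small illustration of the principle (the class and helper names here are hypothetical, not LMAX’s), a DSL method using SLF4J might log the alias-to-real-name mapping it generates, since that’s exactly the detail you want when investigating a failure:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RegistrationDsl {
    private static final Logger LOGGER = LoggerFactory.getLogger(RegistrationDsl.class);

    public void createUser(String alias) {
        // Hypothetical helper that turns the alias into a real, unique username.
        String username = uniqueUsernameFor(alias);

        // INFO for the high-level step the test performed, including the mapping
        // from alias to the real name the system will report in its own logs.
        LOGGER.info("Creating user {} (alias {})", username, alias);

        // ... call the registration API here ...
    }

    private String uniqueUsernameFor(String alias) {
        return alias + System.nanoTime();
    }
}
```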
Alert Dialogs Do Not Appear When Using WebDriverBackedSelenium
With Selenium 1, JavaScript alert and confirmation dialogs were intercepted by the Selenium JavaScript library so they never appeared on-screen and were accessed using selenium.isAlertPresent(), selenium.isConfirmationPresent(), selenium.chooseOkOnNextConfirmation() and similar methods.
With Selenium 2 aka WebDriver, the dialogs do appear on screen and you access them with webDriver.switchTo().alert() which returns an Alert instance for further interactions.
However, when you use WebDriverBackedSelenium to mix Selenium 1 and WebDriver APIs – for example, during migrations from one to the other – the alerts don’t appear on screen and webDriver.switchTo().alert() always throws a NoAlertPresentException. This makes migration problematic because selenium.chooseOkOnNextConfirmation() doesn’t have any effect if the next confirmation is triggered by an action performed directly through WebDriver.
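For reference, this is what pure-WebDriver dialog handling looks like once the migration is complete (the helper below is just a sketch):

```java
import org.openqa.selenium.Alert;
import org.openqa.selenium.NoAlertPresentException;
import org.openqa.selenium.WebDriver;

public class AlertHandling {
    // Accepts a confirmation dialog via the WebDriver API and returns its text,
    // or null if no dialog was present.
    public static String acceptConfirmation(WebDriver webDriver) {
        try {
            Alert alert = webDriver.switchTo().alert();
            String text = alert.getText();
            alert.accept();
            return text;
        } catch (NoAlertPresentException e) {
            return null;
        }
    }
}
```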
Testing@LMAX – Introducing ElementSpecification
Today LMAX Exchange has released ElementSpecification, a very small library we built to make working with selectors in selenium/WebDriver tests easier. It has three main aims:
- Make it easier to understand selectors by using a very English-like syntax
- Avoid common pitfalls when writing selectors that lead to either brittle or intermittent tests
- Strongly discourage writing overly complicated selectors
Essentially, we use ElementSpecification anywhere that we would have written CSS or XPath selectors by hand. ElementSpecification will automatically select the most appropriate format to use – choosing between a simple ID selector, CSS or XPath.
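ElementSpecification’s own API isn’t shown in this excerpt; for context, these are the kinds of hand-written WebDriver selectors it stands in for, in the three formats it chooses between (the locators themselves are made up):

```java
import org.openqa.selenium.By;

public class HandWrittenSelectors {
    // A simple ID selector: the cheapest and least brittle option when available.
    public static final By BY_ID = By.id("accountSummary");

    // A CSS selector for the common case of drilling into a component.
    public static final By BY_CSS = By.cssSelector("#accountSummary .balance");

    // XPath: the most powerful and also the easiest to make brittle or
    // overly complicated, which is exactly the pitfall being discouraged.
    public static final By BY_XPATH = By.xpath("//div[@id='accountSummary']//span[contains(@class, 'balance')]");
}
```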
Travis CI
Probably the best thing I’ve discovered with my recent playing is Travis CI. I’ve known about it for quite some time, even played with it for simple projects but never with anything with any real complexity. Given that this project uses rails, which wants a database for pretty much every test, and that the database has to be postgres because I’m using its jsonb support, plus capybara and phantomjs for good measure, this certainly isn’t a simple project to test.
Testing@LMAX – Replacements in DSL
Given our DSL makes heavy use of aliases, we often have to provide a way to include the real name or ID as part of some string. For example, an audit record for a new account might be:
Created account 127322 with username someUser123.
But in our acceptance test we’d create the user with:
registrationAPI.createUser("someUser");
someUser is just an alias: the DSL creates a unique username to use, and the system assigns a unique account ID that the DSL keeps track of for us. So to write a test that checks the new account audit record is written correctly, we need a way to build the expected audit string.
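The actual replacement mechanism isn’t shown in this excerpt, but a minimal sketch of the idea, with entirely hypothetical names, is a registry of alias-to-real-value mappings that expands placeholders in the expected string:

```java
import java.util.HashMap;
import java.util.Map;

public class AliasRegistry {
    // Hypothetical store of alias -> real value mappings recorded as the DSL
    // creates users and the system assigns IDs.
    private final Map<String, String> values = new HashMap<>();

    public void record(String key, String realValue) {
        values.put(key, realValue);
    }

    // Expands placeholders such as <someUser.accountId> in an expected string.
    public String resolve(String template) {
        String result = template;
        for (Map.Entry<String, String> entry : values.entrySet()) {
            result = result.replace("<" + entry.getKey() + ">", entry.getValue());
        }
        return result;
    }
}
```

A test could then assert against something like resolve("Created account <someUser.accountId> with username <someUser.username>.") and compare it to the audit record the system actually wrote.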
Testing@LMAX – Abstraction by DSL
At LMAX, all our acceptance tests are written using a high level DSL. This gives us two key advantages:
- Tests can focus on what they’re testing with the details and mechanics hidden behind the DSL
- When things change we can update the DSL implementation to match, avoiding the need to change each test that happens to touch on the area.
The DSL that LMAX uses is probably not what most people think of when hearing the term DSL – it doesn’t attempt to read like plain English, just simplifies things down significantly. We’ve actually open sourced the simple little library that is the entrance-way to what we think of as the DSL – creatively named simple-dsl. It’s essentially the glue between what we write in an acceptance test and the plain-java implementation behind that.
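As a rough sketch of the shape this takes (hypothetical names, not simple-dsl’s actual API): the test calls a high-level method expressing intent, and the DSL implementation hides alias resolution and the mechanics of talking to the system.

```java
public class TradingDsl {
    // Hypothetical driver interface standing in for the plain-java
    // implementation that sits behind the DSL.
    public interface ExchangeDriver {
        String realUsernameFor(String alias);
        void placeOrder(String username, String instrument, String price);
    }

    private final ExchangeDriver driver;

    public TradingDsl(ExchangeDriver driver) {
        this.driver = driver;
    }

    // The test reads as intent only: tradingDsl.placeOrder("someUser", "EURUSD", "1.1000");
    public void placeOrder(String userAlias, String instrument, String price) {
        // The DSL hides the mechanics: alias resolution, request building,
        // waiting for responses. When those mechanics change, only this layer
        // changes, not every test that places an order.
        driver.placeOrder(driver.realUsernameFor(userAlias), instrument, price);
    }
}
```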
Making End-to-End Tests Work
The Google Testing Blog has an article “Just Say No to More End-to-End Tests” which winds up being a rather depressing evaluation of the testing capabilities, culture and infrastructure at Google. For example:
Let’s assume the team already has some fantastic test infrastructure in place. Every night:
- The latest version of the service is built.
- This version is then deployed to the team's testing environment.
- All end-to-end tests then run against this testing environment.
- An email report summarizing the test results is sent to the team.
If your idea of fantastic test infrastructure starts with the words “every night” and ends with an email being sent, you’re doomed. Bryan Pendleton does a good job of analysing and correcting the details so I won’t cover that ground again. Instead, let me provide a view of what reasonable test infrastructure looks like.
Testing@LMAX – Testing in Live
Previously in the Testing@LMAX series I’ve mentioned the way we’ve provided isolation between tests, allowing us to run them in parallel. That isolation extends all the way up to supporting a multi-tenancy module called venues which allows us to essentially run multiple, functionally separate exchanges on a single deployment of the LMAX Exchange.
We use the isolation of venues to reduce the amount of hardware we need to run our three separate liquidity pools (LMAX Professional, LMAX Institutional and LMAX Interbank), but that’s not all. We actually use the isolation venues provide to extend our testing all the way into production.
Testing@LMAX – Test Isolation
One of the most common reasons people avoid writing end-to-end acceptance tests is how difficult it is to make them run fast. Primary amongst these is the time required to start up the entire service and shut it down again. At LMAX, with the full exchange consisting of a large number of different services, multiple databases and other components, start-up is far too slow to be done for each test, so our acceptance tests are designed to run against the same server instance and not interfere with each other.
Testing@LMAX – Distributed Builds with Romero
LMAX has invested quite a lot of time into building a suite of automated tests to verify the behaviour of our exchange. While the majority of those tests are unit or integration tests that run extremely fast, in order for us to have confidence that the whole package fits together in the right way we have a lot of end-to-end acceptance tests as well.
These tests deliver a huge amount of confidence and are thus highly valuable to us, but they come at a significant cost because end-to-end tests are relatively time-consuming to run. To minimise the feedback cycle, we want to run these tests in parallel as much as possible.
Injecting Stubs/Mocks into Tests with Require.js
I’ve recently embarked on a fairly complex new application, a large part of which is a webapp written in JavaScript. The application uses require.js to handle modules and loading of dependencies and we want to be able to unit test our JavaScript.
In order to test specific pieces of the application, we want to be able to inject stubs or mocks into the module being tested. For example, if we had a module:
Java AWT Robot and Windows Remote Desktop
Problem
If you’re attempting to use the Java AWT Robot to help automate tests, you may find it convenient to log in via remote desktop to watch the tests execute and verify they work well. You may also find that they work fine while you’re watching them but then inexplicably start failing whenever you aren’t. Basically, your test is a Weeping Angel from Doctor Who.
What’s most likely happening is that when you disconnect from the remote desktop session, the console on the Windows box is left in a locked state and you need to enter the user password before you can resume interacting with programs on the main display. Since the AWT Robot is doing a very effective job of acting like a user, it also can’t interact with the programs you intend it to, and instead is interacting, probably quite ineffectively, with the login dialog box.
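For anyone unfamiliar with the Robot, here’s a minimal example of the kind of interaction involved (the coordinates and keys are arbitrary); every one of these events needs an unlocked, interactive desktop to land on the application under test rather than the lock screen:

```java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

public class RobotSmokeTest {
    public static void main(String[] args) throws AWTException {
        Robot robot = new Robot();
        robot.setAutoDelay(50); // small pause between events, like a human

        // Move to a (made-up) button location and click it. If the console
        // session is locked after disconnecting remote desktop, these events
        // go to the lock screen instead of the application under test.
        robot.mouseMove(200, 300);
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

        // Type a key into whatever currently has focus.
        robot.keyPress(KeyEvent.VK_ENTER);
        robot.keyRelease(KeyEvent.VK_ENTER);
    }
}
```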