Living in a state of accord.

Background Logging with the Disruptor

Peter Lawrey posted an example of using the Exchanger class from core Java to implement background logging. He briefly compared it to the LMAX disruptor and, since someone requested it, I thought it might be interesting to show a similar implementation using the disruptor.

Firstly, let’s revisit the very high-level differences between the Exchanger and the disruptor. Peter notes:

This approach has similar principles to the Disruptor. No GC using recycled, pre-allocated buffers and lock free operations (The Exchanger not completely lock free and doesn't busy wait, but it could)

Two key differences are:

  • there is only one producer/consumer in this case; the disruptor supports multiple consumers.
  • this approach re-uses a much smaller buffer efficiently. If you are using ByteBuffer (as I have in the past) an optimal size might be 32 KB. The disruptor library was designed to exploit large amounts of memory on the assumption it is relatively cheap and can use medium sized (MBs) to very large buffers (GBs). e.g. it was designed for servers with 144 GB. I am sure it works well on much smaller servers. ;)

Actually, there’s nothing about the Disruptor that requires large amounts of memory. If you know that your producers and consumers are going to keep pace with each other well and you don’t have a requirement to replay old events, you can use quite a small ring buffer with the Disruptor. There are a lot of advantages to having a large ring buffer, but it’s by no means a requirement.

It’s also worth noting that the Disruptor does not require consumers to busy-spin; you can choose a blocking wait strategy, or strategies that combine busy-spinning and blocking to handle both spikes and lulls in event rates efficiently.

There is also an important advantage of the Disruptor that wasn’t mentioned: it will process events immediately if the consumer is keeping up. If the consumer falls behind, however, it can process events in a batch to catch up. This significantly reduces latency while still handling spikes in load efficiently.

The Code

First let’s start with the LogEntry class. This is a simple value object that is used as our entries on the ring buffer and passed from the producer thread over to the consumer thread.

One note on Peter’s Exchanger-based implementation: the use of StringBuilder in the LogEntry class is actually a race condition and not thread safe. Both the publishing side and the consumer side attempt to modify it, and depending on how long the publishing side takes to write the log message to the StringBuilder, it can be processed and then reset by the consumer side before the publisher is complete. In this implementation I’m using a simple String instead to avoid that problem.

The one Disruptor-specific addition is that we create an EventFactory instance which the Disruptor uses to pre-populate the ring buffer entries.
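As a minimal sketch of what that LogEntry might look like (field and method names are my assumption, the actual code is in the Gist linked below), the immutable String message is what avoids the StringBuilder race described above, and newInstance stands in for the EventFactory the Disruptor calls to pre-populate the ring buffer:

```java
// Sketch of a LogEntry value object (names assumed, not taken from the Gist).
class LogEntry {
    private String message;   // an immutable String avoids the StringBuilder race
    private long timestamp;

    // Called by the producer while it owns the claimed slot.
    void set(String message, long timestamp) {
        this.message = message;
        this.timestamp = timestamp;
    }

    String getMessage() { return message; }
    long getTimestamp() { return timestamp; }

    // Stand-in for the com.lmax.disruptor.EventFactory<LogEntry> instance the
    // Disruptor uses to pre-populate every slot in the ring buffer.
    static LogEntry newInstance() { return new LogEntry(); }
}
```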

Next, let’s look at the BackgroundLogger class that sets up the process and acts as the producer.

In the constructor we create an ExecutorService which the Disruptor will use to execute the consumer threads (a single thread in this case), then the disruptor itself. We pass in the LogEntry.FACTORY instance for it to use to create the entries and a size for the ring buffer.

The log method is our producer method. Note the use of a two-phase commit: first we claim a slot, then copy our values into that slot’s entry, and finally we publish the slot, ready for the consumer to process. We could also have used the Disruptor.publish method, which rolls the two-phase commit into a single call and can make this simpler for many use cases.
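The shape of that two-phase commit can be illustrated with a deliberately simplified, single-producer toy, a stand-in for illustration only, not the Disruptor’s actual RingBuffer:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy single-producer ring buffer showing the claim / copy / publish phases.
// An illustration of the idea only, NOT the real Disruptor code.
class ToyRingBuffer {
    private final String[] entries;                          // pre-allocated slots
    private long nextSequence = 0;                           // single producer: no CAS needed
    private final AtomicLong published = new AtomicLong(-1); // highest visible sequence

    ToyRingBuffer(int size) { entries = new String[size]; }

    long claim() { return nextSequence++; }                  // phase 1: claim a slot

    void set(long seq, String msg) {                         // copy values into the slot
        entries[(int) (seq % entries.length)] = msg;
    }

    void publish(long seq) { published.set(seq); }           // phase 2: make it visible

    String get(long seq) {                                   // consumer side
        while (published.get() < seq) Thread.onSpinWait();   // wait until published
        return entries[(int) (seq % entries.length)];
    }
}
```

Until publish is called, the consumer never sees the half-written entry, which is exactly why the publisher-side race in the Exchanger version cannot occur here.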

The producer doesn’t need to do any batching, as the Disruptor will do that automatically if the consumer is falling behind. There are also APIs that allow batching on the producer side, which can improve performance if it fits your design (here it’s probably better to publish each log entry as it comes in).

The stop method uses the new shutdown method on the Disruptor, which takes care of waiting until all consumers have processed all available entries for you, though the code for doing it yourself is quite straightforward. Finally, we shut down the executor.

Note that we don’t need a flush method, since the Disruptor is always consuming log events as quickly as the consumer can process them.

Last of all, the consumer, which is almost entirely implementation logic:

The consumer’s onEvent method is called for each LogEntry put into the Disruptor. The endOfBatch flag can be used as a signal to flush written content to disk. This allows very large buffer sizes to be used, so that writes to disk are batched when the consumer is running behind, while still ensuring that our valuable log messages get to disk as quickly as possible.
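A sketch of that handler logic (names assumed, the real code is in the Gist; a StringBuilder stands in for the real file writer so the example is self-contained):

```java
// Standalone stand-in for an EventHandler<LogEntry>-style consumer: writes are
// buffered and flushed only when endOfBatch is true, so a catching-up batch
// costs a single flush rather than one per event.
class LogEntryHandler {
    static class LogEntry { String message; }   // minimal event for the sketch

    private final StringBuilder pending = new StringBuilder(); // buffered output
    private final StringBuilder out;            // stand-in for the file writer

    LogEntryHandler(StringBuilder out) { this.out = out; }

    // Same shape as com.lmax.disruptor.EventHandler.onEvent.
    public void onEvent(LogEntry event, long sequence, boolean endOfBatch) {
        pending.append(event.message).append('\n');
        if (endOfBatch) {        // flush once per batch, not once per event
            out.append(pending);
            pending.setLength(0);
        }
    }
}
```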

The full code is available as a Gist.

The Disruptor Wizard is Dead, Long Live the Disruptor Wizard!

As of this morning, the Disruptor Wizard has been merged into the core LMAX Disruptor source tree. The .NET port had included the wizard style syntax for quite some time and it seemed to be generally popular, so why make people grab two jars instead of one?

I also updated it to reflect the change in terminology within the Disruptor. Instead of Consumers, there are now EventProcessors and EventHandlers. That better reflects the fact that consumers can actually add additional values to the events. Additionally, the ProducerBarrier has been merged into the ring buffer itself and the ring buffer entries are now called events. Again, that better reflects the fact that the programming model around the disruptor is most often event based.

It doesn’t make much difference for the wizard API, except that:

  • The consumeWith method has been changed to handleEventsWith
  • The getProducerBarrier method has been replaced with a start method which returns the ring buffer. This clears up the confusion caused by getProducerBarrier also acting as the trigger to start the event handler threads; now the method name is explicit about the fact that it has side-effects.

Martin Fowler on the LMAX Architecture

Martin Fowler has posted an excellent overview of the LMAX architecture which helps put the use-cases and key considerations that led to the LMAX disruptor pattern in context.

Good to see some focus being put on the programming model that LMAX uses as well. While conventional wisdom suggests that to get the best performance you have to make heavy use of concurrency and multithreading, the measurements LMAX did showed that the cost of contention and communication between threads was too great. As such, the programming model at LMAX is largely single-threaded, achieving the best possible performance while avoiding the complexities of multithreaded programming.

LMAX Disruptor – High Performance, Low Latency and Simple Too

The LMAX disruptor is an ultra-high performance, low-latency message exchange between threads. It's a bit like a queue on steroids (but quite a lot of steroids) and is one of the key innovations used to make the LMAX exchange run so fast. There is a rapidly growing set of information about what the disruptor is, why it's important and how it works – a good place to start is the list of articles and for the on-going stuff, follow LMAX Blogs. For really detailed stuff, there's also the white paper (PDF).

While the disruptor pattern is ultimately very simple to work with, setting up multiple consumers with the dependencies between them can require a bit too much boilerplate code for my liking. To make it quick and easy for 99% of cases, I've whipped up a simple DSL for the disruptor pattern. For example, to wire up a "diamond pattern" of consumers:

Diamond pattern of consumers. C3 depends on both C1 and C2 completing.

(Image blatantly stolen from Trisha Gee's excellent series explaining the disruptor pattern)

In this scenario, consumers C1 and C2 can process entries as soon as the producer (P1) puts them on the ring buffer (in parallel). However, consumer C3 has to wait for both C1 and C2 to complete before it processes the entries. In real life this might be because we need to both journal the data to disk (C1) and validate the data (C2) before we do the actual business logic (C3).
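The rule behind that dependency can be sketched as follows: the barrier guarding C3 only exposes sequences up to the minimum sequence that all of its predecessors have completed. This is a simplified illustration of the idea, not the library's implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of dependency gating: a downstream consumer (C3) may only
// process entries up to the minimum sequence completed by its predecessors
// (C1 and C2), however far ahead the producer has published.
class SequenceGate {
    private final AtomicLong[] dependents;

    SequenceGate(AtomicLong... dependents) { this.dependents = dependents; }

    long availableSequence() {
        long min = Long.MAX_VALUE;
        for (AtomicLong seq : dependents) {
            min = Math.min(min, seq.get());
        }
        return min;
    }
}
```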

With the raw disruptor syntax, these consumers would be created with the following code:

Executor executor = Executors.newCachedThreadPool();
BatchHandler handler1 = new MyBatchHandler1();
BatchHandler handler2 = new MyBatchHandler2();
BatchHandler handler3 = new MyBatchHandler3();
RingBuffer ringBuffer = new RingBuffer(ENTRY_FACTORY, RING_BUFFER_SIZE);
ConsumerBarrier consumerBarrier1 = ringBuffer.createConsumerBarrier();
BatchConsumer consumer1 = new BatchConsumer(consumerBarrier1, handler1);
BatchConsumer consumer2 = new BatchConsumer(consumerBarrier1, handler2);
ConsumerBarrier consumerBarrier2 =
    ringBuffer.createConsumerBarrier(consumer1, consumer2);
BatchConsumer consumer3 = new BatchConsumer(consumerBarrier2, handler3);
ProducerBarrier producerBarrier =
    ringBuffer.createProducerBarrier(consumer3);
executor.execute(consumer1);
executor.execute(consumer2);
executor.execute(consumer3);
We have to create our actual handlers (the three MyBatchHandler instances), plus consumer barriers and BatchConsumer instances, and actually execute the consumers on their own threads. The DSL can handle pretty much all of that setup work for us, with the end result being:

Executor executor = Executors.newCachedThreadPool();
BatchHandler handler1 = new MyBatchHandler1();
BatchHandler handler2 = new MyBatchHandler2();
BatchHandler handler3 = new MyBatchHandler3();
DisruptorWizard dw = new DisruptorWizard(ENTRY_FACTORY, RING_BUFFER_SIZE, executor);
dw.consumeWith(handler1, handler2).then(handler3);
ProducerBarrier producerBarrier = dw.createProducerBarrier();

We can even build parallel chains of consumers in a diamond pattern:

Two chains of consumers running in parallel with a final consumer dependent on both.

(Thanks to Trish for using her fancy graphics tablet to create a decent version of this image instead of my original finger painting on an iPad…)

dw.consumeWith(handler1a, handler2a);
dw.after(handler1b, handler2b).consumeWith(handler3);
ProducerBarrier producerBarrier = dw.createProducerBarrier();

The DSL is quite new so any feedback on it would be greatly appreciated and of course feel free to fork it on GitHub and improve it.