Living in a state of accord.

The Single Implementation Fallacy

As my colleague and favorite debating opponent Danny Yates noted:

We got into a bit of a debate at work recently. It went a bit like this:

“Gah! Why do we have this interface when there is only a single implementation?”

(The stock answer to this goes:) “Because we need the interface in order to mock this class in our tests.”

“Oh no you don’t, you can use the FingleWidget [insert appropriate technology of your mocking framework of choice here – e.g. JMock ClassImposteriser]! I’m smarter than you!”

“Well, yes, you can. But if you’ve correctly followed Design for Extension principles, you’ve made the class final, right? And you definitely can’t mock that! Hah! I’m smarter than you!”

“Ah ha! But you could always use the JDave Unfinaliser Agent! I’m so smart it hurts!”

I tend to side with Danny that using the unfinaliser agent is a bad idea, but I also have to question the benefit of declaring a class final in the first place. However, let’s first cover why I think single implementation interfaces are an “enterprisey” anti-pattern in a little more detail.

Why Single Implementation Interfaces Are Evil

Interface Separation or Interface Duplication

The main argument people raise in favour of having interfaces for everything, even if there’s only one implementation, is that it separates the API from the implementation. In practice with languages like Java, however, this is simply not true. The interface has to be entirely duplicated in the implementation and the two are tightly coupled. Take the code:

public interface A {
  String doSomething(int p1, Object p2);
}

public class AImpl implements A {
  public String doSomething(int p1, Object p2) { ... }
}

This is a pretty clear violation of Don’t Repeat Yourself (DRY). The fact that the implementation name is essentially the same as the interface is a clear indication that there’s actually only one concept here. If there had been a vision of multiple implementations that work in different ways the class name would have reflected this (e.g. LinkedList vs ArrayList or FileReader vs StringReader).

As a general rule, if you can’t think of a good name for your class (or method, variable, etc) you’ve probably broken things down in the wrong way and you should rethink it.

Extra Layers == Extra Work

The net result of duplicating the API is that each time you want to add or change a method on the interface you have to duplicate that work and add it to the class as well. It’s a small amount of time but distracts from the real task at hand and amounts to a lot of unnecessary “busy work” if you force every class to have a duplicate interface. Plus if you subscribe to the idea of code as inventory, those duplicated method declarations are costing you money.

Also, as James Turner pointed out:

Unneeded interfaces are not only wasted code, they make reading and debugging the code much more difficult, because they break the link between the call and the implementation.

This is probably the biggest problem I have with single implementation interfaces. When you’re tracking down a difficult bug you have to load a lot of stuff into your head all at once – the call stack, variable values, expected control flow vs actual, and so on. Having to make the extra jump through a pointless interface on each call can be the straw that breaks the camel’s back and cause the developer to lose track of vital context information. It’s doubly bad if you have to jump through a factory as well.

Library Code

Many people argue that in library code, providing separate interfaces is essential to define the API and ensure nothing leaks out accidentally. This is the one case where I think it makes sense to use an interface as it frees up your internal classes to use method visibility to let classes collaborate “behind the scenes” and have a clean implementation, without that leaking out to the API.

A word of warning, however: one of the fatal mistakes you can make in a Java library is to provide interfaces that you expect the code using the library to implement. Doing this makes it extremely difficult to maintain backwards compatibility – if you ever need to add a method to that interface, compatibility is immediately and unavoidably broken. On the other hand, providing an abstract class that you expect to be extended allows new methods to be added more easily, since they can often provide a no-op implementation and maintain backwards compatibility. Abstract classes do limit the choices the code using the library can make, though, so neither option is a clear-cut winner.
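To make the compatibility argument concrete, here is a minimal sketch (the `FileVisitor`/`CountingVisitor` names are invented for illustration, not from any real library). Had the callback been published as an interface, adding `visitDirectory` later would break every existing implementor; as an abstract class, the library can add it with a no-op default and old subclasses keep compiling:

```java
// Hypothetical library callback type. Because it is an abstract
// class rather than an interface, the library author can grow it.
abstract class FileVisitor {
    public abstract void visitFile(String path);

    // Added in a later release: existing subclasses simply inherit
    // this no-op default and remain source- and binary-compatible.
    public void visitDirectory(String path) {
        // no-op by default
    }
}

// A client class written against the original, one-method version.
class CountingVisitor extends FileVisitor {
    int filesSeen;

    @Override
    public void visitFile(String path) {
        filesSeen++;
    }
    // No visitDirectory override needed – the default covers it.
}
```

The trade-off the paragraph mentions still applies: `CountingVisitor` has now spent its single superclass slot on `FileVisitor`.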

Why Declaring Classes Final is Pointless

So at last we come back around to the original problem of needing to mock out classes during testing but being unable to because they’re marked final. There seem to be two main reasons that people like to make classes final:

  1. Classes should be marked final unless they are explicitly designed for inheritance.
  2. Marking a class final provides hints to HotSpot that can improve performance either by method inlining or using faster method call dispatch algorithms (direct instead of dynamic).

Designing for Extension

I have a fair bit of sympathy for the argument that classes should be final unless designed for inheritance, but for shared code within a development team it has a very critical flaw – it’s trivial to just remove the word final and carry on, so people will. Let’s face it, if you look at a class and think “I can best solve my problem by extending this class”, a silly little keyword that may have just been put there out of habit is not going to stop you. You’d need to also provide a clear comment about why the class isn’t suitable for extension, but in most cases such a reason doesn’t exist – extension just hadn’t been thought about yet, so the class is inherently not designed for extension. Besides which, if you have the concept of shared code ownership then whoever extends the class is responsible for making any design changes required to make it suitable for use as a base class. Most likely, though, they have already looked at the class and decided it’s suitable for extension as-is, which is why they are trying to do just that.

Perhaps what would be better is to require any class that is designed for extension to have a @DesignedForExtension annotation, then use code analysis tools (like Freud) to fail the build if a class without that annotation is extended. That makes the default not-extendable, which is more likely to be correct, and still lets you mock the object for testing. You would, however, want an IDE plugin to make the error show up immediately, but it does seem like a nice way to get the best of all worlds.
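The annotation itself would be trivial to define. This is only a sketch of the idea – the annotation and the classes here are invented, and the build-time check that enforces it is left to a tool like Freud:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker for classes that are deliberately designed for extension.
// CLASS retention keeps it in the bytecode for analysis tools
// without requiring runtime reflection.
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
@interface DesignedForExtension {
}

// Extending this class would pass the hypothetical build check...
@DesignedForExtension
class TemplateReport {
    protected String title() {
        return "Report";
    }
}

// ...while extending an unannotated class would fail the build.
class MonthlyReport extends TemplateReport {
    @Override
    protected String title() {
        return "Monthly Report";
    }
}
```

Nothing stops extension at compile time – that is the point: the analysis tool, not the language, enforces the rule, so tests can still subclass and mock freely.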

Final Classes Go Faster

I found myself immediately suspicious of this claim – it may have been true once, but HotSpot is a seriously clever JIT engine and advances at an amazing pace. Many people claim that HotSpot can inline final methods, and it can – but it can also inline non-final methods. That’s right: it will automatically work out that only one version of the method exists and go right ahead and inline it for you.
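For example, nothing in the sketch below is final, yet while only one loaded class provides `add`, HotSpot’s class hierarchy analysis can treat the call site as monomorphic and inline it once the loop is hot (the class names are my own; the effect can be observed with `-XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining`):

```java
// Deliberately non-final class with a non-final instance method.
class Accumulator {
    private long total;

    // Still an inlining candidate: HotSpot sees no other override.
    void add(long value) {
        total += value;
    }

    long total() {
        return total;
    }
}

class InlineDemo {
    // A hot loop with a monomorphic call site for add().
    static long sumTo(int n) {
        Accumulator acc = new Accumulator();
        for (int i = 1; i <= n; i++) {
            acc.add(i);
        }
        return acc.total();
    }
}
```

If a second subclass of Accumulator is later loaded, HotSpot simply deoptimises and recompiles – the safety net that makes inlining non-final methods possible in the first place.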

There is also a slight variant of this that claims that since dynamic method dispatch is more expensive, marking a method as final means the JVM can avoid the dynamic dispatch for that method. Marking a class final effectively makes all of its methods final, so every method would get the benefit.

My reasoning is that if HotSpot can work out that it can safely inline a method, it clearly has all the information required to avoid the dynamic dispatch as well. I can’t, however, find any reference that definitively shows it does that. Fortunately, I don’t need to. Remember back at the start we said we had to introduce an interface to make things testable? That means changing our code from:

final class A { public void doSomething() { ... } }
A a = new A();

to:

interface A { void doSomething(); }
final class AImpl implements A { public void doSomething() { ... } }
A a = new AImpl();

Since we’ve duplicated the method declaration, there is no guarantee that the only version of doSomething is in AImpl, since any class could implement interface A and provide a version of doSomething. We’re right back to relying on HotSpot doing clever tricks to enable method inlining and avoiding dynamic method dispatch.

There simply can be no performance benefit to declaring a class final if you then refer to it via an interface rather than the concrete class. And if you refer to it as the concrete class you can’t test it.


There shouldn’t be anything too surprising here – less code is better, simpler architectures work better, and never underestimate how clever HotSpot is. Slavishly following the rule of separating the interface from the implementation doesn’t make code any more testable, it doesn’t reduce coupling between classes (since they still call the same methods), and it does create extra work. So why does everyone keep doing it?

Oh and, nyah, nyah, I’m so smart it hurts…