Living in a state of accord.

The Single Implementation Fallacy

As my colleague and favorite debating opponent Danny Yates noted:

We got into a bit of a debate at work recently. It went a bit like this:

“Gah! Why do we have this interface when there is only a single implementation?”

(The stock answer to this goes:) “Because we need the interface in order to mock this class in our tests.”

“Oh no you don’t, you can use the FingleWidget [insert appropriate technology of your mocking framework of choice here - e.g. JMock ClassImposteriser]! I’m smarter than you!”

“Well, yes, you can. But if you’ve correctly followed Design for Extension principles, you’ve made the class final, right? And you definitely can’t mock that! Hah! I’m smarter than you!”

“Ah ha! But you could always use the JDave Unfinaliser Agent! I’m so smart it hurts!”

I tend to side with Danny that using the unfinaliser agent is a bad idea, but I also have to question the benefit of declaring a class final in the first place. However, let’s first cover why I think single implementation interfaces are an “enterprisey” anti-pattern in a little more detail.

Why Single Implementation Interfaces Are Evil

Interface Separation or Interface Duplication

The main argument people raise in favour of having interfaces for everything, even if there’s only one implementation, is that it separates the API from the implementation. However, in practice with languages like Java, this is simply not true. The interface has to be entirely duplicated in the implementation and the two are tightly coupled. Take the code:

public interface A {
  String doSomething(int p1, Object p2);
}

public class AImpl implements A {
  public String doSomething(int p1, Object p2) { ... }
}

This is a pretty clear violation of Don’t Repeat Yourself (DRY). The fact that the implementation name is essentially the same as the interface is a clear indication that there’s actually only one concept here. If there had been a vision of multiple implementations that work in different ways the class name would have reflected this (e.g. LinkedList vs ArrayList or FileReader vs StringReader).

As a general rule, if you can’t think of a good name for your class (or method, variable, etc) you’ve probably broken things down in the wrong way and you should rethink it.

Extra Layers == Extra Work

The net result of duplicating the API is that each time you want to add or change a method on the interface you have to duplicate that work and add it to the class as well. It’s a small amount of time but distracts from the real task at hand and amounts to a lot of unnecessary “busy work” if you force every class to have a duplicate interface. Plus if you subscribe to the idea of code as inventory, those duplicated method declarations are costing you money.

Also, as James Turner pointed out:

Unneeded interfaces are not only wasted code, they make reading and debugging the code much more difficult, because they break the link between the call and the implementation.

This is probably the biggest problem I have with single implementation interfaces. When you’re tracking down a difficult bug you have to load a lot of stuff into your head all at once – the call stack, variable values, expected versus actual control flow, and so on. Having to make the extra jump through a pointless interface on each call can be the straw that breaks the camel’s back and cause the developer to lose track of vital context. It’s doubly bad if you have to jump through a factory as well.

Library Code

Many people argue that in library code, providing separate interfaces is essential to define the API and ensure nothing leaks out accidentally. This is the one case where I think it makes sense to use an interface as it frees up your internal classes to use method visibility to let classes collaborate “behind the scenes” and have a clean implementation, without that leaking out to the API.
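
As a sketch of that pattern (all names here are illustrative, not from the post): the interface forms the published API, while the implementation and its helper collaborate through package-private visibility that never leaks out. In a real library the interface would be public; visibility is simplified here so the example stays self-contained.

```java
// Hypothetical library sketch. The interface is the published API; the
// implementation collaborates with a helper entirely "behind the scenes"
// via package-private (default) visibility.
interface MessageSender {                     // public in a real library
    boolean send(String message);
}

class DefaultMessageSender implements MessageSender {
    private final Wire wire = new Wire();     // internal collaborator

    @Override
    public boolean send(String message) {
        // Package-private call: invisible to API consumers.
        return wire.push(message);
    }
}

class Wire {
    // No access modifier: only classes in this package can use it,
    // so it never becomes part of the API surface.
    boolean push(String message) {
        return message != null && !message.isEmpty();
    }
}
```

Consumers only ever see `MessageSender`; `Wire` can be refactored or removed without any API impact.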

A word of warning however: one of the fatal mistakes you can make in a Java library is to provide interfaces that you expect the code using the library to implement. Doing this makes it extremely difficult to maintain backwards compatibility – if you ever need to add a method to that interface compatibility is immediately and unavoidably broken. On the other hand, providing an abstract class that you expect to be extended allows new methods to be added more easily since they can often provide a no-op implementation and maintain backwards compatibility. Abstract classes do limit the choices the code using the library can make though so neither option is a clear cut winner.
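
A minimal sketch of the difference, with made-up names: adding a method to a user-implemented interface breaks every implementor, while an abstract class can grow a no-op method without breaking existing subclasses.

```java
// Hypothetical callback types illustrating the compatibility trade-off.

// If version 1 ships this interface and users implement it, adding a
// new method in version 2 instantly breaks every existing implementor.
interface ConnectionListener {
    void onConnect();
}

// Shipping an abstract class instead lets version 2 add onDisconnect()
// with a no-op body; existing subclasses keep compiling and running.
abstract class ConnectionAdapter {
    public void onConnect() {}
    public void onDisconnect() {}   // added in "version 2", no-op default
}

// A subclass written against version 1 is untouched by the addition.
class MyListener extends ConnectionAdapter {
    private boolean connected;
    @Override public void onConnect() { connected = true; }
    public boolean isConnected() { return connected; }
}
```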

Why Declaring Classes Final is Pointless

So at last we come back around to the original problem of needing to mock out classes during testing but being unable to because they’re marked final. There seem to be two main reasons that people like to make classes final:

  1. Classes should be marked final unless they are explicitly designed for inheritance.
  2. Marking a class final provides hints to HotSpot that can improve performance either by method inlining or using faster method call dispatch algorithms (direct instead of dynamic).

Designing for Extension

I have a fair bit of sympathy for the argument that classes should be final unless designed for inheritance, but for shared code within a development team it has a critical flaw – it’s trivial to just remove the word final and carry on, so people will. Let’s face it: if you look at a class and think “I can best solve my problem by extending this class”, a silly little keyword that may have just been put there by habit is not going to stop you. You’d also need to provide a clear comment about why the class isn’t suitable for extension, but in most cases such a reason doesn’t exist – extension just hadn’t been thought about yet, so the class is inherently not designed for extension. Besides, if you have shared code ownership then whoever extends the class is responsible for making any design changes required to make it suitable as a base class. Most likely, though, they have already looked at the class and decided it’s suitable for extension as-is, which is why they’re trying to do just that.

Perhaps what would be better is to require any class that is designed for extension to have a @DesignedForExtension annotation, then use code analysis tools (like Freud) to fail the build if a class without that annotation is extended. That makes the default non-extendable, which is more likely to be correct, and still lets you mock the object for testing. You would, however, want an IDE plugin so the error shows up immediately, but it does seem like a nice way to get the best of all worlds.
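
The convention might look something like this. The annotation name comes from the post; everything else is an illustrative assumption – Freud (or a similar tool) would supply the actual build-time rule, not this code:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker documenting a deliberate extension point; a static-analysis
// rule would fail the build when a class WITHOUT it is extended.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface DesignedForExtension {}

@DesignedForExtension
class TemplateReport {
    protected String title() { return "Report"; }          // the extension point
    public final String render() { return "== " + title() + " =="; }
}

// Extending an annotated class passes the (hypothetical) build check.
class SalesReport extends TemplateReport {
    @Override protected String title() { return "Sales"; }
}
```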

Final Classes Go Faster

I found myself immediately suspicious of this claim – it may have been true once, but HotSpot is a seriously clever little JIT engine and advances at an amazing pace. Many people claim that HotSpot can inline final methods, and it can – but it can also inline non-final methods. That’s right: it will automatically work out that only one version of the method exists and go right ahead and inline it for you.
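
You can watch this happen yourself. The sketch below (names made up) hammers a non-final method in a loop; run it with HotSpot’s diagnostic flags `-XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining` and, once the loop is hot, the call typically shows up as inlined even though nothing is marked final. The exact log format varies between JVM versions.

```java
// Demonstrates a devirtualizable call site: Greeter is NOT final, but
// only one implementation is ever loaded, so HotSpot can inline it.
class Hot {
    static class Greeter {
        String doSomething(int p) { return "n=" + p; }
    }

    static String run(int iterations) {
        Greeter g = new Greeter();
        String last = "";
        for (int i = 0; i < iterations; i++) {
            last = g.doSomething(i);   // virtual call, still inlinable
        }
        return last;
    }

    public static void main(String[] args) {
        // java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining Hot
        System.out.println(run(1_000_000));
    }
}
```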

There is also a slight variant of this claim: since dynamic method dispatch is more expensive, marking a method as final means the JVM can avoid the dynamic dispatch for that method. Marking a class final effectively makes all its methods final, so every method would get the benefit.

My reasoning is that if HotSpot can work out that it can safely inline a method, it clearly has all the information required to avoid the dynamic dispatch as well. I can’t, however, find any reference that definitively shows it does that. Fortunately, I don’t need to. Remember back at the start we said we had to introduce an interface to make things testable? That means changing our code from:

final class A { public void doSomething() { ... } }
A a = new A();

to:

interface A { void doSomething(); }
final class AImpl implements A { public void doSomething() { ... } }
A a = new AImpl();

Since we’ve duplicated the method declaration, there is no guarantee that the only version of doSomething is in AImpl, since any class could implement interface A and provide a version of doSomething. We’re right back to relying on HotSpot doing clever tricks to enable method inlining and avoiding dynamic method dispatch.

There simply can be no performance benefit to declaring a class final if you then refer to it via an interface rather than the concrete class. And if you refer to it as the concrete class you can’t test it.


Conclusion

There shouldn’t be anything too surprising here – less code is better, simpler architectures work better, and never underestimate how clever HotSpot is. Slavishly following the rule of separating the interface from the code doesn’t make code any more testable, it doesn’t reduce coupling between classes (since they still call the same methods), and it does create extra work. So why does everyone keep doing it?

Oh and, nyah, nyah, I’m so smart it hurts…

  • SteveL says:

    Once you get into distributed systems, interfaces are almost a requirement (at least for RMI and (sort of) WS-*), as are final classes. Why the final? They ensure that what gets serialized can be handled at the far end. With non-final classes, someone can extend the class, push it down the wire, and the far end will get something it can’t handle.

    That’s for distribution, with proxy classes, interfaces that extend Remote, etc. In-VM code doesn’t need to be so optimistic. Oh, and where you do plan to distribute the code, you need to test it over VMs, ideally on different hosts, on the targeted network infrastructure (IPv4 or IPv6, HTTP proxies, long-haul emulation via relay proxies, broken DNS/rDNS etc). Most people don’t bother with this, which is why their code doesn’t work so well in the real world.

    June 19, 2011 at 2:09 pm
  • Adrian Sutton says:

    True, there can be technical reasons to do it, such as RMI (though personally I tend to shy away from those forms of distributed system tools for various reasons). However, what I’m referring to is purely a policy decision, either to separate the API from the implementation or to be able to mark classes as final. It’s not driven by requirements like RMI etc.

    June 19, 2011 at 3:00 pm
  • Amir says:

    Hi Adrian,

    Thanks for the Freud mention :)

    I tend to always go back to the basic reasons of why we write (or not) a piece of code / a keyword / an access modifier….

    code is a formal solution to a software problem.

    Therefore my take on the “redundant interface” debate is – you write an interface if it is part of the solution; you don’t if it’s not.
    An interface should define a type, i.e. in our Java world, an abstract description of certain behaviour or operations.

    So – yes – An interface just because you need to mock a class is not a good excuse. That’s because mocking and tests in general have nothing to do with the solution.
    They’re tools that allow us to design, document and ensure high quality code. They are not what you would draw on the whiteboard when you try to explain how you solved the problem.

    Which means I agree…. but I don’t count implementations.

    On the “final” class issue – I, again, look at the code as a solution.
    IMO perhaps the biggest mistake made with the Java language was to make “non-final” the default and “final” an optional modifier.

    It’s equivalent to making the default authorization rule “allow all” and then disabling some, instead of the other way around. It would have made much more sense if we had an “inheritable” or “overridable” keyword (must be some better word than those two…).

    You don’t expect that by default all objects should be inherited. Only specific ones should.

    So regardless of the HotSpot magic, if “final” was the default we would not have this debate at all. Sadly it isn’t. So we need to decide on a convention.

    I believe that using final everywhere is perhaps stating the obvious – but it is what you want the solution to really be (unless inheritance really is part of your solution).

    June 19, 2011 at 4:03 pm
  • Steven Shaw says:

    I must agree. I’ve seen this kind of thing before. It is a bit irritating. One project I worked on was inundated with these kinds of classes. It was partly the particular coding style/process that was adopted (100% test coverage, DI, no statics, etc.) and partly the test framework we were using (which didn’t have support for mocking classes). I’d say that in about 85% of cases, each interface only had a single method too.

    Some of this code survives to this day as an all-but-unknown open source project. I plucked out an example:

    public interface CollectionSubtractor {
      List subtract(List l1, List l2);
    }

    public class DefaultCollectionSubtractor implements CollectionSubtractor {
      public List subtract(List l1, List l2) {
        List result = new ArrayList(l1);
        for (Object o : l2) {
          result.remove(o);
        }
        return result;
      }
    }
    What is striking is that this example is of a simple function, “subtract”! In Java, such a function would often be implemented as a static method (but in this case that was disallowed by the coding practices [1]). Some might note that the whole implementation can more-or-less be replaced by List.removeAll :). So it all seems like a lot of boilerplate. On the flip side, if there were other components injected into the implementation then it wouldn’t be the same simple “function” any longer. The argument for keeping code in this form is that some implementations may require components to be injected.

    In Scala, this kind of thing could perhaps be written inline as “list1.filterNot(list2 contains _)” and the developer would have moved on. Alternatively, it could be implemented as a method of a “singleton” object (in Scala there are no statics) and would therefore have “passed” the coding practices. This was perhaps a poor example, as the original code can become quite inefficient if list1 is long. It might be better implemented with a different data structure. Again, in that case, it is arguable that it’s nice to have a single place in the code to go to in order to make things efficient again (or at least to change the method signature and go chasing all the compilation errors). That said, even what seem like useful principles and practices can lead to significant code bloat. On the whole, I say: down with the single implementation class, don’t be afraid of static utility functions (no problem testing those), and perhaps take up a better language and library such as Scala :).


    October 11, 2011 at 8:42 am
  • Christian Gruber says:

    Side note on library use of interfaces and binary compatibility: Java 8 will include some form of “defender methods” or “extension methods” on interfaces, which will allow a default implementation to be supplied when new methods are added, so existing implementations of the interface remain binary-compatible, valid implementors.

    This should at least reduce that burden for Library and API developers. Your other points are well taken.

    June 22, 2012 at 5:19 pm
