I’m a big proponent of automated testing, and by extension of unit testing. Automated testing is the number one tool in my toolbox for avoiding or fixing technical debt. But it can also become a force that blocks you from fixing that technical debt. Here’s why.

Two Schools

There are two schools in unit testing and TDD: the London school and the Chicago school.

The London School of TDD

The London school is also called the mockist school, because it relies heavily on mocks. Say you have a class whose function depends on a second class, and that second class in turn depends on a class that makes a database call:

If you were to test the function in Class A, you would mock the function of Class B that Class A calls:

These types of tests often check the result of the call to Class A, but also verify that the correct function in Class B was called, with the correct arguments.
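As a minimal sketch of such a test in Python (the class names `ClassA`/`ClassB` and the pricing logic are hypothetical, purely for illustration):

```python
from unittest.mock import Mock

# Hypothetical chain: ClassA depends on ClassB, which in the real
# system would depend on a third class that hits the database.
class ClassB:
    def get_price(self, product_id: str) -> int:
        raise NotImplementedError  # real version would query the database

class ClassA:
    def __init__(self, b: ClassB):
        self.b = b

    def total_price(self, product_id: str, quantity: int) -> int:
        return self.b.get_price(product_id) * quantity

def test_total_price_london_style():
    # Mock out ClassB entirely; the database is never touched.
    mock_b = Mock(spec=ClassB)
    mock_b.get_price.return_value = 10

    result = ClassA(mock_b).total_price("apple", 3)

    assert result == 30                                # check the result...
    mock_b.get_price.assert_called_once_with("apple")  # ...and the interaction
```

The last line is what makes this test London-style: it verifies not just the outcome, but that `ClassA` talked to `ClassB` in exactly the expected way.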

The Chicago School of TDD

The Chicago school would take a different approach. In this school, we would only mock the database call. Some might use an in-memory database, or a small database that is created for the test and torn down afterwards. The diagram then looks like this:

In this school, you don’t care about the implementation of Class A. So there are no checks to see if and how Class B was called. You only verify the return value of Class A. Of course, this is slightly different if there is no return value. In that case, you might verify that the database received the correct command. But anything in between is not verified.
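Sketched in the same hypothetical terms, a Chicago-style test uses the real `ClassB` and `ClassC` and only substitutes the database (here with a simple in-memory dictionary):

```python
# Hypothetical sketch: the same ClassA -> ClassB -> ClassC chain,
# but only the database at the bottom is replaced.
class InMemoryDatabase:
    def __init__(self, rows: dict):
        self.rows = dict(rows)

    def find_price(self, product_id: str) -> int:
        return self.rows[product_id]

class ClassC:
    def __init__(self, db: InMemoryDatabase):
        self.db = db

    def fetch_price(self, product_id: str) -> int:
        return self.db.find_price(product_id)

class ClassB:
    def __init__(self, c: ClassC):
        self.c = c

    def get_price(self, product_id: str) -> int:
        return self.c.fetch_price(product_id)

class ClassA:
    def __init__(self, b: ClassB):
        self.b = b

    def total_price(self, product_id: str, quantity: int) -> int:
        return self.b.get_price(product_id) * quantity

def test_total_price_chicago_style():
    # Real ClassB and ClassC; only the database is faked.
    a = ClassA(ClassB(ClassC(InMemoryDatabase({"apple": 10}))))
    # Only the return value is verified — no checks on *how* A uses B.
    assert a.total_price("apple", 3) == 30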

In both schools, you would also write tests for Class B and Class C.

Outside-In vs Inside-Out

The London school matches well with an Outside-In approach. This means we start at the outer edges of our application, for example an HTTP endpoint. We can implement the endpoint (say, in a Controller class) and simply verify that it calls a (mock) service correctly. We’re not interested in the implementation of that service yet. First, we want to finish the HTTP endpoint/controller.

Then, we move to the service and write tests to verify that it calls its dependencies correctly as well. Again, we use mocks because we haven’t implemented those underlying dependencies yet.

And so we move down until we have implemented all our layers.
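A small sketch of that first outside-in step, with a hypothetical `OrderController` and a not-yet-implemented `OrderService` (all names invented for illustration):

```python
from unittest.mock import Mock

# Hypothetical outside-in sketch: we start at the HTTP edge. The
# controller is implemented first; OrderService is still just an
# interface that we mock.
class OrderService:
    def place_order(self, product_id: str, quantity: int) -> str:
        raise NotImplementedError  # not implemented yet

class OrderController:
    def __init__(self, service: OrderService):
        self.service = service

    def post(self, request: dict) -> dict:
        order_id = self.service.place_order(
            request["product_id"], request["quantity"]
        )
        return {"status": 201, "order_id": order_id}

def test_controller_calls_service_correctly():
    service = Mock(spec=OrderService)
    service.place_order.return_value = "order-1"

    response = OrderController(service).post(
        {"product_id": "apple", "quantity": 3}
    )

    assert response == {"status": 201, "order_id": "order-1"}
    service.place_order.assert_called_once_with("apple", 3)
```

Once the controller is done, we descend a layer and repeat the pattern: implement `OrderService`, mocking whatever it depends on.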

An Inside-Out approach takes a different route. It starts by implementing the “lowest” classes in the dependency chain. Once these have been implemented correctly (verified by unit tests), we can move up one layer. Because we’re (pretty) sure the dependencies have been implemented correctly, we can use them directly in our tests. There is no need for mocks.

The only things we would want to mock are external dependencies like file systems, databases and third-party services. That is why the Inside-Out approach and the Chicago school are a good match.

Objection!

Purists might now say that the Chicago school isn’t writing unit tests but integration tests. And that would be a violation of the test pyramid principle. True. But I dare say it doesn’t matter. In both cases, you should have a suite of automated tests that drive your design and provide a safety net for changes.

But here’s the thing: the more you’ve used mocks, the more difficult refactoring your architecture can become. This is especially true when you’re working with legacy code.

Why Mocks Can Inhibit Refactoring

If your tests depend heavily on mocks, they know a lot about the internal workings of the units you’re testing. When you refactor the inner logic of a single function, that won’t give you much trouble. But when you want to refactor the interactions between components, you also have a collection of tests to refactor.

This can be a large task that either takes a long time, or holds developers back from doing it in the first place.
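To make this concrete, here is a hypothetical sketch of the earlier example after a refactor in which `ClassB` was folded away and `ClassA` talks to the price source directly:

```python
# Hypothetical sketch of the refactoring problem: ClassA used to
# delegate to ClassB, but after a refactor it talks to the price
# source directly.
class PriceSource:
    def get_price(self, product_id: str) -> int:
        return {"apple": 10}[product_id]  # stand-in for the real lookup

class ClassA:
    def __init__(self, prices: PriceSource):
        self.prices = prices

    def total_price(self, product_id: str, quantity: int) -> int:
        # ClassB was folded into ClassA during the refactor.
        return self.prices.get_price(product_id) * quantity

# A Chicago-style test survives the refactor, because it only
# checks the observable result:
def test_total_price_still_passes():
    assert ClassA(PriceSource()).total_price("apple", 3) == 30
```

A London-style test that asserted `mock_b.get_price.assert_called_once_with(...)` against the old `ClassB` would now have to be rewritten, even though the observable behaviour is identical. Multiply that by every interaction test touching the refactored seam and you see where the cost comes from.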

The London School And Legacy Code

Legacy code tends to contain a lot of tightly coupled code that is difficult to refactor, let alone to mock so that refactoring becomes possible. As such, it is easier to write outside-in tests. This gives you a testing ice cream cone, which is considered an anti-pattern:

The testing ice cream cone: lots of end-to-end tests, fewer integration tests, and even fewer unit tests.

Once you have a good suite of automated tests that provide your safety net for refactoring, you can start the actual refactoring.

You could then move towards a mockist approach, but why would you? You already have your tests in place and they allow you to refactor and improve the code.

You can improve your design and, where necessary, write unit tests for smaller units that may or may not use mocks. This means you can move from the initial testing ice cream cone to a testing hourglass:

Why The Hourglass Can Be Okay

In your legacy project, you’re now actively working on improving the architecture and you have automated tests to ensure the ride goes relatively smoothly. Constructing a testing pyramid could have been practically impossible. And now that you have an hourglass, there might not be a reason to evolve to a pyramid.

There are some requirements, however. Your full suite of tests should:

  • run fast
  • be reliable
  • run on developer machines and the build server
  • be independent of each other

And they should allow you to refactor the code without having to refactor tens or hundreds of tests.

If that is the case, then I don’t see what added value the testing pyramid would give you. The whole idea behind the pyramid was to reduce the flakiness, brittleness and slow performance that end-to-end tests traditionally had. But with modern tools and modern machines, this has improved significantly.
