
Author: Leonardo Proietti, Software Development Manager in Engineering @ LendInvest

We’re test lovers here at LendInvest. Having a few experienced TDD practitioners amongst our ranks, testing for us is simply a good way to write clean, working code.

During our hiring process, we ask each candidate to solve a problem related to our domain, getting them to write some code (like a kata), and encouraging them to use TDD if they’re comfortable with it. As a result, a good percentage of the solutions are actually written using a test-first approach.

And it’s always a pleasure to review code covered with tests.

You set up the environment, run the tests and “yes!” all the tests are green.

Then you take a closer look at the code. Test doubles are implemented properly. Each unit test covers the smallest unit. But you can’t find an integration test that proves the code works as per the specification. In BDD terminology, you can’t find a scenario.

It’s fine, you think; I can write the integration test on my own. But then you find that the code doesn’t work.


Because unit tests are not enough.

Whatever your unit test coverage, it’s unlikely you can cover all the cases your code is going to handle once in production. And even if you could, you still couldn’t rely solely on them. You also need integration and functional tests.

Obviously, you don’t want to cover all cases with functional tests, but a good balance is to use functional tests for the happy paths and use unit testing for the rest.
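To make that balance concrete, here’s a minimal sketch in Python (the domain function and the numbers are invented for illustration): one functional test drives the happy path end to end, while unit tests pick up the edge cases.

```python
def monthly_repayment(principal: float, annual_rate: float, months: int) -> float:
    """Hypothetical loan calculation, used only to illustrate the testing balance."""
    if months <= 0:
        raise ValueError("months must be positive")
    if annual_rate == 0:
        return principal / months
    monthly_rate = annual_rate / 12
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)

# Functional test: one happy-path scenario, exercised end to end.
def test_happy_path_scenario():
    payment = monthly_repayment(100_000, 0.06, 120)
    assert 1100 < payment < 1120  # a sane repayment for this loan

# Unit tests: the edge cases the scenario doesn't cover.
def test_zero_rate_splits_principal_evenly():
    assert monthly_repayment(1200, 0.0, 12) == 100.0

def test_rejects_non_positive_term():
    try:
        monthly_repayment(1000, 0.05, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

The scenario proves the feature works as specified; the unit tests document the boundaries without multiplying slow end-to-end cases.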

So why bother about TDD?

Because TDD is not about unit testing. It’s about applying the scientific method.

I’m not sure when this idea popped into my mind, but at some point I found this pdf that explains everything very clearly.

For those of you too lazy to read it, I’ll quote a few sentences.

“The scientific method provides a rationale for TDD. […] Tests in TDD take the role of experiments, while design takes the role of theory. Experimental reproducibility is managed through continued use of automated tests to ensure that the theory has not been broken. The program theory is driven through experimentation with tests. The theory is refined to fit the tests, with refactoring to ensure suitable generality.” (Rick Mugridge)

We start with a red test because we have to prove our theory can fail. I’m talking about falsifiability. That way, we get to know our code better and get a measure of its quality.
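A minimal red-green-refactor cycle, as a sketch (the `slugify` function is invented for illustration):

```python
# Step 1 (red): the test is the experiment. Run it before the code
# exists and it must fail -- that failure is what makes it falsifiable.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the simplest theory that survives the experiment.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): improve the design while the test keeps guarding it.
```

If the test never went red, you never proved it could fail, and a green bar tells you nothing.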

There are plenty of good books and blog posts about TDD in general, so in the rest of this post I’d like instead to go through some of the common pitfalls I’ve run into with TDD and those to which I’ve seen others fall victim.

Interface proliferation

An interface in the OOP world is the best way to implement polymorphism — it keeps our code more flexible.

A class depends upon behaviours, without knowing how they have been, or will be, implemented. Most of the time, that’s a good thing: we’re striving for flexible code. And it’s even better if you practice TDD, because you can keep the focus on the system under test, deferring all the choices about the concrete implementation of the collaborators.

But that’s not always what you need, and often you don’t know at the outset which parts need to be flexible.

That’s where TDD helps you. Refactoring is an integral part of the cycle, and you can easily apply the YAGNI principle this way. So instead of polluting your code with a lot of interfaces (ending up, in the most extreme case, with one interface for each class), you just create interfaces for those parts of your code that you really want to make flexible.
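A sketch of what that looks like in Python, using a `Protocol` as the interface (all the names here are invented for illustration): one collaborator genuinely needs to vary, so it gets an interface; the other has exactly one implementation, so it stays concrete.

```python
from typing import Protocol

# Interface kept: we genuinely need to swap implementations
# (email, SMS, a test double...).
class Notifier(Protocol):
    def send(self, message: str) -> None: ...

# No interface: there is exactly one way we format references, so YAGNI applies.
class ReferenceFormatter:
    def format(self, loan_id: int) -> str:
        return f"LOAN-{loan_id:06d}"

class ArrearsAlert:
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier
        self._formatter = ReferenceFormatter()  # concrete collaborator, no interface

    def raise_alert(self, loan_id: int) -> None:
        self._notifier.send(f"{self._formatter.format(loan_id)} is in arrears")

# A hand-rolled test double satisfying the Notifier protocol.
class FakeNotifier:
    def __init__(self) -> None:
        self.messages: list[str] = []

    def send(self, message: str) -> None:
        self.messages.append(message)
```

The test only needs a double for `Notifier`; `ReferenceFormatter` is fast and deterministic, so the real thing is used everywhere.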

This process can also be done after the fact: start by defining a lot of interfaces, then remove the ones you don’t need. And you can do the same with tests. Dead code, after all, just adds to the tech debt.

On the other hand, if you choose to mock a concrete class, you can always replace it with the actual class once it has been implemented. A good example is when the class is a Value Object.
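A sketch of that Value Object case (the `Money` class is invented for illustration): once it exists, it’s cheap to construct and side-effect free, so there’s no reason to keep a mock around.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """A value object: cheap to construct, no side effects,
    safe to use directly in any collaborator's tests."""
    amount: int  # minor units, e.g. pence
    currency: str = "GBP"

    def add(self, other: "Money") -> "Money":
        if other.currency != self.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)

# No mock needed: the real class is used in the test.
def test_fee_is_added_to_balance():
    balance = Money(10_000).add(Money(250))
    assert balance == Money(10_250)
```

Mocking a value object buys you nothing and couples the test to an interaction instead of a value.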

Dependency injection dogma

DI is a powerful pattern, especially when you’re defining services in a container. But it’s not a silver bullet.

There isn’t any real need to inject every object, and sometimes you want to do exactly the opposite.

Sometimes you really want to create an object as the result of another behaviour, because that makes the behaviour explicit. If you write behavioural code this happens quite often, especially when you’re adding an element to a collection that you want to keep private.
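A sketch of that idea (the classes are invented for illustration): the `Repayment` is created by the behaviour itself, never injected, and the collection stays private.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Repayment:
    amount: int

class Loan:
    def __init__(self, principal: int) -> None:
        self._principal = principal
        self._repayments: list[Repayment] = []  # private collection, never exposed

    # The Repayment is created *by* the behaviour, not injected from outside:
    # recording a repayment is the only way one can come into existence.
    def record_repayment(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._repayments.append(Repayment(amount))

    @property
    def outstanding(self) -> int:
        return self._principal - sum(r.amount for r in self._repayments)
```

Injecting ready-made `Repayment` objects here would let callers bypass the invariant and would leak the internal collection into the API.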

Another mistake, not directly related to testing, that I’ve seen quite often in classic MVC applications is the container being used as a service locator. These are two different patterns. The latter, if abused, is like using a singleton and, in any case, it’s a clear violation of the Hollywood principle (IoC).
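A sketch of the difference (the controller and mailer are invented for illustration): with a locator the dependencies are hidden inside the method; with injection they are pushed in and visible in the constructor.

```python
class Mailer:
    def __init__(self) -> None:
        self.outbox: list[str] = []

    def send(self, body: str) -> None:
        self.outbox.append(body)

# Anti-pattern: the controller pulls from the container (service locator).
# Its real dependencies are hidden, and every test must build a container.
class LocatorController:
    def __init__(self, container: dict) -> None:
        self._container = container

    def handle(self) -> None:
        self._container["mailer"].send("report ready")

# Hollywood principle: the container calls you, you don't call the container.
# Dependencies are explicit, and the test just passes a double in.
class InjectedController:
    def __init__(self, mailer: Mailer) -> None:
        self._mailer = mailer

    def handle(self) -> None:
        self._mailer.send("report ready")
```

Both “work”, but only the injected version tells you at a glance what the class needs and lets the test construct it without any container at all.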

Class with only public methods

So you’re doing TDD, testing everything directly, and then you make all your methods public.

That’s wrong, really. You shouldn’t change your class API by adding public methods that you then have to maintain, because you should assume that there’s always at least one client using them. Indeed, there is always at least one: your test.
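A sketch of keeping the API small (the policy class is invented for illustration): the private helpers are exercised through the one public method, and the test only ever calls that method.

```python
class PasswordPolicy:
    MIN_LENGTH = 8

    # Private helpers: tested indirectly, never promoted to public
    # just so a test can reach them.
    def _long_enough(self, password: str) -> bool:
        return len(password) >= self.MIN_LENGTH

    def _has_digit(self, password: str) -> bool:
        return any(c.isdigit() for c in password)

    # The only public method, and the only thing the tests call.
    def is_valid(self, password: str) -> bool:
        return self._long_enough(password) and self._has_digit(password)

def test_policy_through_public_api_only():
    policy = PasswordPolicy()
    assert policy.is_valid("s3curepass")
    assert not policy.is_valid("short1")
```

If a private helper feels impossible to test through the public API, that’s usually a hint it wants to be its own class, not a hint to make it public.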

Furthermore, if your model is anemic (only getters and setters), don’t bother testing it; you’re already breaking the Tell Don’t Ask principle (and thus encapsulation), so there isn’t any test that can save you!

Finally, TDD doesn’t have dogmas, it’s just another way to write clean code that works. So, do your experiments. Be brave. And be driven by the tests.