Don't write tests - The Hidden Cost of TDD (tomblomfield.com)
24 points by tomblomfield on April 1, 2012 | 9 comments


Once you’ve found traction, you’ll likely need to re-write your codebase...

The problem with this approach is that once you've got traction, you will be in the worst possible position to re-write your codebase. I understand the allure of just getting features out when you're still experimenting and don't have many customers to worry about. But the old adage that there's 'never time to do it right, but always time to do it over' applies just as well early as it does later. Resist the temptation to throw away all discipline because 'it's just a prototype and we'll re-write it later.' The reality is, if your product catches on, you're going to be living with that code for a long time. Move fast, yes, but don't move sloppy.


To paraphrase: either testing is useful in your situation or it isn’t; when it is useful, use it, and otherwise don’t.

Really, the “debate” comes from the fact that some folks have embraced testing and TDD philosophically, and thus feel the need to evangelise it. The same thing happens with OOP, FP, you name it. We cleave tenaciously to our magic bullets. But I have found that the subjects of the most fervent evangelism tend also to be the least philosophically sound. Then again, maybe I’m just being contrarian (read: a hipster).

Like the author, my take is that if testing makes your development life easier, great. I use it too sometimes. But no, I don’t want to hear the Good News.


Testing is an infinitely complicated subject involving very difficult tradeoffs, but there are two axes along which I like to break it down:

1) Coverage vs. speed/understandability: you want as much coverage as is compatible with relatively fast feedback and a suite where, when something goes wrong, it's sufficiently obvious what broke. As a system grows (non-linearly) in complexity, this tradeoff gets harder and harder, and you should expect to feel increasingly bad about the state of affairs as your code base grows: either stuff is gonna be slow and incomprehensible, or you're not gonna have the coverage you want. Probably both.

2) High-level vs. unit tests: as your system matures, the high-level tests will be the most valuable over time, since they are the abstract specification of the system's behavior. They will also be the hardest to understand when they break, precisely because they are abstract. Your unit tests will keep running quickly and breaking in obvious ways, but they become increasingly useless for verifying system validity once second-order effects start dominating your bugs, and they introduce a lot of friction when you refactor (which might be good: many a refactor has gone bad in my career). The sketch below makes the contrast concrete.
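
To make that concrete, here's a toy sketch in Python/pytest. The functions and numbers are invented for illustration, not taken from the article:

    # Unit test: pinned to one function's contract. Fast and obvious
    # when it breaks, but silent about how components interact.
    def apply_discount(price, rate):
        return round(price * (1 - rate), 2)

    def test_apply_discount():
        assert apply_discount(100.0, 0.1) == 90.0

    # High-level test: exercises the composed behavior -- the abstract
    # spec. When it fails, the cause could be anywhere underneath.
    def checkout(prices, rate):
        return sum(apply_discount(p, rate) for p in prices)

    def test_checkout_total():
        assert checkout([100.0, 50.0], 0.1) == 135.0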

So, as usual, I come to a pessimistic conclusion: tests are necessary, and as your code base grows in size and complexity they will come to dominate your development process. Holding test quality constant, the high-level tests will run the slowest and break in the most incomprehensible ways, but they will be the most valuable tests you have, and you will develop a love-hate relationship with them.

So, my takeaways are:

1) Write unit tests, but be prepared to nuke them in favor of higher-level tests once your implementation settles. Make sure they run very fast so they don't get in the way of your development.

2) Try to write very clean high-level tests. This is hard, and you have to trade off how much time you put into it against other priorities. Strive to make these fast, but accept that they will be slower than you'd like and will cause you a lot of pain. Every once in a while, they are gonna save you. One way to manage the fast/slow split is sketched after this list.
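
One workable way to manage that split is to tag the slow, high-level tests and keep them out of the inner development loop. A minimal sketch with pytest; the marker name and tests are hypothetical (register the marker in pytest.ini to avoid warnings):

    import pytest

    def test_parse_amount():
        # Unit test: pure logic, effectively instant, disposable once
        # the implementation settles.
        assert int("42") == 42

    @pytest.mark.slow
    def test_signup_flow():
        # High-level test: in a real suite this would drive the whole
        # stack. Excluded from the fast loop, always run in CI.
        pass

During development, "pytest -m 'not slow'" keeps feedback near-instant; CI runs everything.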


High-level tests and unit tests go hand in hand. Ideally you strive for 100% coverage at both levels. The payoff is that when a high-level test fails in some inscrutable way, in theory a corresponding unit test should also fail, which makes the actual point of breakage clear.
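
A toy sketch of that pairing, with functions invented for illustration: the unit test names the broken piece, while the high-level test only tells you that something broke:

    def normalize(s):
        return s.strip().lower()

    def test_normalize_unit():
        # If normalize() regresses, this failure points at the exact
        # function.
        assert normalize("  Foo ") == "foo"

    def test_search_high_level():
        # If only this failed, the cause could be anywhere in the
        # pipeline; a failing test_normalize_unit alongside it
        # localizes the breakage.
        index = {normalize(" Foo "): 1}
        assert index["foo"] == 1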


"declare bankruptcy on that technical debt"

I like that metaphor. Every shop I've worked in that had any awareness of "technical debt" has treated it as something that must eventually be confronted, even to the point of budgeting time in each iteration to pay it down.

But sometimes, you just carry it until you realize that what you bought with it is not worth anything. Then you just wipe it from the books (codebase) and move on.


I thought that was a cool turn of phrase as well. It’s something many developers don’t realise they can do. And maybe that’s why we find it so hard to justify throwing away code: we see the value, but miss the cost.


What about the cost of your clients finding bugs that could have been caught if you were using TDD?


What about the cost of not having any clients at all because your competitor lapped you while you were busy writing tests?


It's better to deliver a well-written application later than to deliver something fast but full of bugs. Quality brings in more clients than it loses.



