Still tepid about TDD

I recently did a 2-day TDD workshop for work. It covered the basics, which were helpful to review, and we did team and individual exercises to apply them. My only real complaint is that these exercises all started from nothing. To be convinced of the merits of TDD, I need to see a case study. I need to see commit logs. I need to see a "holy shit" moment where unit tests really saved someone's butt.
Why don't I just go look those things up myself? I'm not particularly incentivized to learn about TDD on my own. That's because I know of the Linux kernel, which has no tests, and Clojure, which has tests now but none of them written by the core author and creator of the language. Tests are no substitute for careful programming, and if you have careful programming, I don't think you really need any automated tests, though you may find some useful. (Careful programming is not rewarded in a corporate setting, so that bar is a little high most of the time for your average salaried programmer.)

In one of the creator of Clojure's famous talks, he brings up two interesting facts about bugs: they all passed the test suite, and they all passed the type checker. How much value do those two things really give us, then, when even in the most powerful statically typed languages with extensive tests we still find an annoyingly large number of bugs? What else could we trade off against those two things to get a lower bug count? Could we get anything for free if we looked elsewhere? Incentives for careful programming are always going to be a human problem, but perhaps the language can provide features that make careful programming easier? Hence Clojure is garbage collected and immutable by default. Dynamic typing is a tradeoff.
In the class, the final day's assignment was "somewhat large". No one in our group of 20 finished it entirely, including the instructor. This was sort of glossed over, but what I take from it is: TDD slows me down. Some will argue that as you do it more and more, TDD will eventually speed you up, or at least let you continue at a constant velocity and not hit a mountain, but without a case study I can't agree with that. I just know that for a new project from nothing, it's much faster for me, and I assume everyone else in the class, to Just Do It, and write useful or semi-useful tests afterward if I want. (Often just to prove to oneself that the thing works as expected, without having to prove it manually all the time when the output is complicated.)

The time savings of not doing TDD are banked, and you can move on to other things. If a bug shows up (one that, for the sake of argument, would have been caught by TDD), that's okay: you just draw from your banked time to fix it. My argument is that if you do in fact spend all of your banked time, and even exceed it (subtlety! I'm talking about Expected Time Savings), you're spending the same total amount of time as you would have with TDD, just spread out (and going over your banked Expected Time only means your Expected Time was an underestimate of the real time TDD would have taken). By spreading it out, you can move faster. :) At least, you can ship buggy things that mostly work and bring in money, then fix the bugs later, instead of shipping late with no bugs and losing all that potential money. Most customers in most domains don't care if you're 100% bug-free; they care if you solve their problem.
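The banked-time argument above is really just expected-value arithmetic. Here's a minimal sketch of it in Python, with entirely made-up numbers (the function, the probabilities, and the day counts are all hypothetical illustrations, not measurements):

```python
# Toy model of the "banked time" argument. All numbers are invented
# for illustration; nothing here is empirical.

def expected_total_time(dev_days, bug_probability, fix_days):
    """Expected total cost: up-front development plus expected bug-fixing later."""
    return dev_days + bug_probability * fix_days

# Hypothetical scenario: TDD costs more up front but leaves fewer,
# cheaper escaped bugs; skipping TDD banks 4 days now but bugs are
# likelier and costlier to fix later.
with_tdd = expected_total_time(dev_days=10, bug_probability=0.2, fix_days=2)
without_tdd = expected_total_time(dev_days=6, bug_probability=0.8, fix_days=5)

print(f"with TDD:    {with_tdd} expected days")     # 10.4
print(f"without TDD: {without_tdd} expected days")  # 10.0

# The totals come out nearly identical; the difference is *when* the time
# is spent. The non-TDD schedule front-loads the shipping and defers the
# cost, which is the whole point of "banking" the savings.
```

If the totals are roughly equal, the argument says the spread-out version wins, because shipping early has value of its own.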