TDD: the war on errors

In the past two years, there has been an emphasis on Test-Driven Development in Design Studio.

There seems to be a misconception about the past, though: it’s sometimes spoken of as if testing were not part of DS previously. This is untrue – no team fired up Notepad, wrote some code, and turned it over. Instead, “testing” has come to mean “having a robust automated test suite.” That’s great, and it is amazing that so many more teams are building those suites now. But we also have to remember that an automated suite isn’t the only part of how we should test software.

Below is my first draft of a process-style diagram of how things “should be.” It’s far too big to fit here at 100% size, so click through to see it in all its glory. I’m looking for any thoughts or corrections before I make a final infographic-style rendition.

A few things to note:

  • The axes. Horizontally, the client-team responsibility split: the further an item is to the right, the less directly the client is involved. Vertically, the “language spectrum”: the further toward the bottom, the simpler it usually is to verify a typical statement in the language involved. I will have to enumerate the various levels later, but “Vision” would sit at the top as the most project-specific, and something like machine code would sit at the bottom as the least project-specific.

Let me know what you think!

  • EmacsUser2

    Looks good as a high-level picture. A few points of (hopefully) constructive criticism:

    1. Where is the step to verify the tests, e.g. coverage measurement, mutation testing, etc.?
    2. Where is the step to correct wrong tests or update old ones (maybe something just needs a clearer label)?
    3. The term “semantically correct” is abused in the diamond on warnings; semantics are jointly verified by static analyses (such as those that drive compiler warnings) and the test suite, but if correctness could be established here the tests would be unnecessary. Probably you want this diamond to convey something about good coding practices, in which case this is also the point where code reviews and code inspections come into the mix.
    4. The answer to this question may not belong in the diagram, but in your explanation: what static analyses? Are you formally specifying correctness properties (like “on exit from a public method x must be an integer in [1..10] that is greater than y”) or just taking advantage of the properties that all programs should have (like “no variable is used without a definition”)?

  • Kiel

    Awesome points.

    1/2 – I couldn’t come up with a formalized verification for tests, so I’ll have to research that area some more. (The first sketch at the end of this reply is my rough understanding of what mutation testing checks.)
    3 – I hesitated on the word “semantically”; I just didn’t know what other word to use and I was in a hurry. Code reviews would be good to put in this step.
    4 – both would be good for that step, it seems – particularly if code review is in the prior step. Enumerating which tools would be used is probably a good clarification, since this doesn’t seem to be a common or well-known practice. (The second sketch below contrasts the two kinds of checks.)
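
    To make 1/2 more concrete, here is a minimal, hand-rolled sketch of the idea behind mutation testing. The clamp function, the tiny “test suite,” and the mutant are all hypothetical stand-ins; a real project would lean on a dedicated mutation-testing tool rather than mutating code by hand.

        # Hypothetical production function: clamp a value into [low, high].
        def clamp(value, low, high):
            return max(low, min(value, high))

        # A "mutant": the same function with one deliberate defect (min -> max),
        # standing in for the mutations a mutation-testing tool would generate.
        def clamp_mutant(value, low, high):
            return max(low, max(value, high))

        def suite_passes(fn):
            # Run the (tiny) test suite against an implementation.
            return (fn(5, 0, 10) == 5 and
                    fn(-3, 0, 10) == 0 and
                    fn(42, 0, 10) == 10)

        # The real implementation should pass the suite...
        assert suite_passes(clamp)
        # ...and the suite only "kills" the mutant if at least one test fails on it.
        # A surviving mutant would mean the tests need strengthening.
        assert not suite_passes(clamp_mutant), "mutant survived"

    Coverage measurement answers a related but weaker question – whether a line was executed at all – so the two verify the tests in complementary ways.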
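
    And for 4, a sketch of the difference between a formally specified, project-specific property and a universal one, written as a plain runtime assertion so it isn’t tied to any particular analysis tool. The Counter class and its numbers are made up; the postcondition is the example property from the comment above.

        class Counter:
            # Hypothetical class used only to illustrate the two kinds of properties.
            def __init__(self, y):
                self.y = y

            def step(self):
                # Project-specific, formally specified property: on exit from this
                # public method, x must be an integer in [1..10] that is greater than y.
                x = self.y + 1
                assert isinstance(x, int) and 1 <= x <= 10 and x > self.y, \
                    "postcondition violated"
                return x

        # The universal kind of property ("no variable is used without a definition")
        # needs no project-specific specification; a linter or compiler warning
        # would flag it on its own, e.g.:
        #
        #     def broken():
        #         return total + 1   # undefined name 'total'
        #
        c = Counter(y=3)
        print(c.step())  # 4

    The first kind has to be written down somewhere (a contract, an annotation, a spec); the second comes for free from the tooling, which is part of why enumerating the tools seems worth doing.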