Cleanroom has some very loud followers and some very notorious detractors. Recently some people from a CMM-4 company visited mine to tell us about their development methodology. Although they didn't say so, it sounded awfully like Cleanroom to me. This prompted me to write down what I know about Cleanroom.
A nice overview is here. Cleanroom is a software methodology invented by Harlan Mills and based on two ideas: preventing defects through formal verification instead of removing them through debugging, and certifying reliability through statistical testing.

The main practices are:

- Incremental development
- Formal methods
- Intensive review
- Statistical testing as a method of reliability measurement

I'll go through each of these practices in turn:
Incremental development. Not much discussion on this point. It is now considered a "best practice": agile methods recommend it, RUP recommends it. You have a working system from the start, so users can get early releases and the feedback loop shortens. I don't remember who said it, but the current metaphor for software development seems to be growing programs like a garden, instead of building them like a bridge. This, like Boehm's Spiral model, enables much better risk management.
Formal methods. More room for discussion here. Formal methods have made their way into the mainstream through efforts like Meyer's DBC (Design by Contract), where the contracts for a class are explicitly stated as preconditions, postconditions and invariants. Other efforts like design for test or test-driven development put the emphasis on automatic test code as a way of documenting the specification that classes must honor. The main advantage of the latter is that most formal methods are still based on some type of non-executable specification language, while test code can be executed.
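To make the DBC idea concrete, here is a minimal sketch of contract-style checks using plain assertions. The `Account` class and its numbers are hypothetical; Eiffel supports contracts in the language itself, so in Python assertions have to stand in for them:

```python
class Account:
    """Toy example: a bank account whose contract is enforced with asserts."""

    def __init__(self, balance=0):
        assert balance >= 0  # invariant: balance is never negative
        self.balance = balance

    def withdraw(self, amount):
        # precondition: the requested amount must be valid and covered
        assert 0 < amount <= self.balance
        old = self.balance
        self.balance -= amount
        # postcondition: exactly `amount` was removed
        assert self.balance == old - amount
        # invariant re-checked after the operation
        assert self.balance >= 0
        return amount


a = Account(100)
a.withdraw(30)
print(a.balance)  # 70
```

Unlike a specification written on paper, these contracts execute on every call, so a violation shows up immediately as a failed assertion.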
Intensive review. Again, no discussion here. As Robert L. Glass pointed out, this is the only best practice that has actually shown its usefulness through enough serious studies to be undisputed. XP favors pair programming, which makes sure any code is reviewed at least once.
Statistical testing as method of reliability measurement. This is the practice that makes Cleanroom so hard to accept. Quoting Boris Beizer:
Cleanroom advocates many things with which I agree. There is, however, one fundamental tenet of the Cleanroom doctrine that moves me to criticism: its continuing attack on all forms of testing other than the specific stochastic testing it advocates.
In a chip factory, QA personnel test one chip out of every 1000 and use the results to build reliability models that tell them the probability of a particular chip being defective. Mills tried to do the same thing for software: testing is only used to measure the defect rate of the software and to decide whether the quality level is acceptable or not. If I understood correctly, developers don't even need to know which defects were found. They are just told: too buggy, do it again. Why did Mills think this was a good idea? I think Beizer hit the nail on the head when he said: "[a few decades ago]...there was no distinction between testing and debugging". I've recently (2002) heard some Cleanroom advocates repeat the same mantra. I also believe that automated unit testing at the class level has changed this and their approach is not acceptable (if it ever was...).
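The chip-factory analogy can be sketched in a few lines. This is only an illustration of the sampling idea, not Cleanroom's actual certification models; the population size, true defect rate and acceptance threshold are all made up:

```python
import random

random.seed(42)

# Hypothetical population of 100,000 units; True marks a defective one.
# The true defect rate (2%) is unknown to the tester.
population = [random.random() < 0.02 for _ in range(100_000)]

# Test one unit in every 1000, chosen at random.
sample = random.sample(population, len(population) // 1000)

# Estimate the population defect rate from the sample alone.
estimated_rate = sum(sample) / len(sample)

# Accept or reject the whole batch based on the estimate.
threshold = 0.05
verdict = "acceptable" if estimated_rate < threshold else "too buggy, do it again"
print(f"estimated defect rate: {estimated_rate:.3f} -> {verdict}")
```

Note what the sampler never learns: *which* units are defective or *why*. That is exactly the information a developer needs to fix anything, which is what makes the pure measure-and-reject stance so hard to accept for software.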
Summary: Cleanroom ideas that made sense are already part of the mainstream (perhaps with the exception of formal methods, which I expect to grow in importance over time) and the parts that aren't mainstream weren't such a good idea anyway.
SEI has a page on Cleanroom.
Another short intro
This guy seems to know about it.