Thanks for your input. Both points are good, but not much of it (sans Scoble's "Is it being used?") is measurable. I wholeheartedly agree quality should be built into the process, fewer defects, etc...
But how does one measure "quality" in a contract? Everyone who has built software as a consultant (internally or externally to their company) has seen requirements that say something like "it should be high quality" or "quality should come first",
etc. My problem is: if I get assigned a task with a contract in which quality is of primary concern, how do I measure that? Some thoughts:
* I can't measure bugs, per se. I have to find them either during or after the software is built. I can't have a "99% defect free" clause in a contract given to me. People could erroneously log defects. And what, exactly, is a defect? A bug found?
Or a bug not found? (If a tree falls in the woods and no one hears it... etc.)
* Is quality really just a broad category in which reliability, scalability, usability, etc., reside? If I achieve everything under it, is it "high quality"?
* Use case completion? How many use cases can I work through with the software? (Ideally it's 100%, but sometimes they need to be curtailed or limited due to time / cost constraints.)
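For what it's worth, the last two ideas are at least mechanically computable, unlike "quality comes first". A minimal sketch of what the numbers in a contract could look like (all names and figures here are hypothetical, not from any real contract):

```python
# Hypothetical sketch of two contract-measurable metrics discussed above.
# The function names, inputs, and example numbers are made up for illustration.

def use_case_completion(passed: int, total: int) -> float:
    """Percentage of the agreed use cases the software can work through."""
    if total <= 0:
        raise ValueError("no use cases defined")
    return 100.0 * passed / total

def defect_density(confirmed_defects: int, kloc: float) -> float:
    """Confirmed (triaged, not merely logged) defects per thousand lines of code."""
    if kloc <= 0:
        raise ValueError("kloc must be positive")
    return confirmed_defects / kloc

# e.g. 47 of 50 agreed use cases pass, 12 confirmed defects in 30 KLOC
print(use_case_completion(47, 50))  # 94.0
print(defect_density(12, 30.0))     # 0.4
```

Note that even this dodges the hard part: someone still has to decide what counts as a "confirmed" defect, which is exactly the tree-falls-in-the-woods problem above.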
Has anyone else come up with measurable metrics? (That might be redundant... but I wanted to stress the importance of measurement.)
Robert, how do you and the C9 team measure "Do you like it?"? If you go to Slashdot, I'm sure you'll find plenty of negative press on it (I seem to remember a few links from /. to C9). People who routinely use C9 probably like it (otherwise, why would you use
it? We don't *need* to be here... in fact, we all should be working...). What does C9 do to measure how much we like it?