"Practices and Principles" to deathby ack (Deacon)
|on Feb 29, 2008 at 06:50 UTC||Need Help??|
I was just reading BrowserUk's Meditation Node "Testing methodology, best practices and a pig in a hut" (Meditation Node #670478) and the various replies, and it got me thinking about a growing concern of mine relative to my industry, building experimental satellite systems.
The problem is this: we have, over the roughly 30 years that I've been in this industry, evolved such a plethora of "best practices" and "policies" and "processes" to try to improve the reliability of our satellites that we are being crushed under the cost and effort required to build any given satellite these days.
But the most disconcerting and frustrating part of it is that we have a small group of people who have invested roughly the last 15 years in trying to wean ourselves off all of those overburdening issues and have found that it is possible to produce our systems with adequate reliability by just 'THINKING' and only using those things that are absolutely necessary.
That includes only producing the documentation that is truly needed to get the job done...and then only producing it in the form that is directly useful to those who need it (including, usually, handwritten in engineering notebooks).
Surprisingly, we have been developing systems at 1/3 the cost of everyone else and have maintained reliability at levels at or above those who still cling to the "evolved ways" of the rest of the community.
I especially like the replies to BrowserUk from amarquis who noted:
"(speaking of the value of testing amarquis writes) Preventing 90% of issues is fairly easy. Preventing 99% is hard. 99.9% is incredibly hard, and so on and so on. Obviously, you have to stop somewhere. And to decide where exactly to stop, you have to sit and think what the real cost of failure is. Will a small fraction of customers be driven away by the bug? Will embedded systems need to be recalled? I think that everybody goes through this "How good does it have to be, how much effort will it take to get there" evaluation when thinking about a project."
I also like the reply from an Anonymous Monk who wrote:
"You can lead a monk to knowledge, but you can't make one think. Some will think. Some will not. I personally think that a discussion of best practices/principles for testing or anything else should begin with encouragement to think. For those who have not the inclination to think, the capacity to think, or the experience to think clearly, a list of rules is better than no rules at all."
What I see in my industry (and in most any endeavor to create new things...e.g., in almost every aspect of programming) is that it is not just that people don't think (or know how to think)...it is an almost paranoid fear of failure: if they happen to fail, they'd rather have endured an ever-increasing load of "practices" and "processes" than to have taken the chance to "think" and gone against the community's evolved "best practices."
And as I look at our evolved "best practices," I see that they have all evolved from a long series of failures (some minor...some not so minor). Each failure prompted all of the leadership to say "What went wrong? Let's form a new policy (or practice or process) to make sure that particular failure isn't repeated." And so another layer of policies, practices, and/or processes is added to the list. Like silt settling over the carcasses of dead dinosaurs at the bottom of the lake, over thousands of failures we end up with such a crushing weight of "processes" and "practices" that we get a lump of coal.
And the very worst of it is that the entire community bands together in their shared fear and, like the reborn bodies in "Invasion of the Body Snatchers," screams out horrible accusations, obscenities, and calls for public stonings whenever anyone tries to do things differently...tries to "think" (as noted by the Anonymous Monk in his reply to BrowserUk).
At the core of the dilemma seems to perpetually be "testing". And the rest of the replies to BrowserUk's node focused on that very topic...I guess that's what got me to thinking.
Has anyone else seen the growth of "common practices," "best practices," and "policies" to ridiculous levels in their jobs? Hopefully not to the level that I have experienced it. But I am curious whether others (especially those who have meditated on BrowserUk's node) have seen and had to deal with it and, in your opinion, why so many people collectively would rather move toward out-of-control policies, practices, and processes than think.
ack Albuquerque, NM