Making Skepticism a Design Criterion
by Yohimbe (Pilgrim)
on Feb 18, 2001 at 10:43 UTC
In this article, Jay Thorne and Sharon Allsup discuss coding and design mindset. Jay has 15 years of experience in the software systems industry; Sharon has 20. This article came about as the result of an IRC conversation, and, as is common in these cases, Jay and Sharon have never met in person.
Good Coders Never Trust Anything
Sharon: Good coders may trust themselves, a bit. It's the untrained users, incoming data, other connected systems, outgoing data, business practices, hardware, operating systems, and especially their own code that they don't trust. If they don't control it absolutely, they don't trust it. Even their own code, once on a production machine, is essentially out of their control.
The coders I respect the most are terminally unsure, but work to establish reasonable safeguards.
Jay: Most of the good ones I've worked with never even trust requirements. Initial requirements have a way of expanding. A good design involves some wiggle room in the capabilities of components, especially in the component interfaces. Interface design is, in a lot of ways, more important than algorithm design. If you have a well-defined interface, you can fix / optimize / recode / buy pieces / add in new business rules / get a piece written by consultants, with very little impact on the system as a whole.
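Jay's point about swappable pieces behind a stable interface can be sketched in a few lines of Perl. The package and method names here are hypothetical, invented for illustration: two implementations satisfy the same `lookup_rate` interface, so callers never need to change when one is swapped for the other.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Two interchangeable implementations of the same (hypothetical)
# rate-lookup interface: callers depend only on new() + lookup_rate().
package RateLookup::Flat;
sub new         { my ($class, %args) = @_; return bless { rate => $args{rate} }, $class }
sub lookup_rate { my ($self, $item) = @_; return $self->{rate} }

package RateLookup::ByCategory;
sub new         { my ($class, %args) = @_; return bless { table => $args{table} }, $class }
sub lookup_rate {
    my ($self, $item) = @_;
    # Unknown category? Fall back to a safe default rather than dying.
    return exists $self->{table}{ $item->{category} }
         ? $self->{table}{ $item->{category} }
         : 0;
}

package main;
# Swap one line here -- RateLookup::Flat->new(rate => 0.05) would work
# too -- and nothing downstream has to know.
my $lookup = RateLookup::ByCategory->new(
    table => { book => 0.05, food => 0.00 },
);
printf "rate: %.2f\n", $lookup->lookup_rate({ category => 'book' });
```

Recoding, optimizing, or buying a replacement for one of these packages touches nothing outside it, which is the whole point.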
Sharon: "Trust but verify," to quote an old politician. Requirements are like trying to hold Jell-O(tm) in a napkin: there is only so much you can do with paper clips.
One good definition of a "system" is that it is the sum of its interfaces, not the sum of its algorithms.
Jay: A lot of coders get quite a rude awakening when software design gets beyond code and data to business rules. At that point they no longer hold all the cards. Experience shows that you don't hold any cards. You can write the most beautiful algorithm in the world, but if the business rules communicated to you were mistaken, or miscommunicated, your beautiful gem-like code is useless, or worse than useless. Because of the temptation to use it anyway, and hack in the re-communicated business rules, the design of your great algorithm is shot to hell, since new and different results are now expected.
On running design and brainstorming sessions:
Sharon: Designs are SO important, but so few coders do collaborative design well. During design sessions, you just have to expect to be wrong. Many coders act as if poking holes in their suggestions is personal, as opposed to just brainstorming. If you think your initial stab at an answer is invulnerable and perfect and will fit all scenarios, then either you are fooling yourself, or everyone else. That being said, I hate it when brainstorming becomes a game of one-upmanship.
Jay: When I run a design session, I use a method that has worked fairly well for me in the past. I get two coders to talk. I shut up. Then when they start arguing, as they almost always do, and not conceding points, I choose one way of doing it, based on my own criteria -- usually ease of coding and robustness. Then the task continues. The fear of being arbitrarily overruled keeps them mostly on track. It's fiendishly unfair, of course, but it's terribly effective. I've used the question "Do you want me to choose?" to put them back on track. I've only been told "yes" once.
Sharon: I end up getting accused a lot of not having any faith in what I do, or in my suggestions, because I tend to throw them up in the air and then jump right into taking potshots at them rather than defending them. "You give up too easily." "Well, you brought up a good point, and my ideas did not cover that."
Making Pessimism Work for You
Jay: "Work To Do, Work Done" logs are great. Because software and hardware have a way of NOT doing what you want them to. So you need to be able to recover lost work. Hence the work to do queue, and work done. This is like a transactional database, but with human readable checkpoints. Human readable is machine readable as well. If I had to count the number of times I've written one-off code to fix databases that were broken because of code going haywire, I'd probably become a ditch digger. Oh, the effort.... At least a ditch stays dug.
By the way, never delete a work-to-do log. Always checkpoint it in the work-done log. If you delete a work-to-do entry, and an entire db goes missing, you can't re-create it, except from backups. And backups are not backups until you've actually recovered from them. Until you get data from them, backups are just these tapes sitting in the safety deposit box. They are NOT guarantees.
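The discipline Jay describes can be sketched in a handful of lines. This is a hypothetical shape, not his actual code: record the task in the to-do log *before* doing the work, then checkpoint it into the done log afterward -- never delete. The filenames and task ids are invented for the example.

```perl
use strict;
use warnings;

# Append a timestamped line to a log file (hypothetical file names).
sub log_line {
    my ($file, $line) = @_;
    open my $fh, '>>', $file or die "can't append to $file: $!";
    print {$fh} scalar(localtime), " $line\n";
    close $fh or die "can't close $file: $!";
}

# Log TODO before the work, DONE after -- the to-do entry is never
# deleted, only checkpointed into the done log.
sub run_task {
    my ($id, $work) = @_;
    log_line('work-todo.log', "TODO $id");
    $work->();                              # the actual unit of work
    log_line('work-done.log', "DONE $id");  # checkpoint, don't delete
}

run_task('update-price-1234', sub { print "updating price...\n" });
```

If the process dies mid-task, the to-do log tells you exactly which work was in flight; diffing the two logs tells you what to replay.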
Here's a nightmare scenario that actually happened to me: Hard disk crash. Backup tapes unusable because the head of the tape drive was out of alignment during the writing of the backups. The tape drive was replaced, with a new unit, which was in alignment, while the misaligned one was returned to the manufacturer. It took us two more spares to figure out that the first one was out of alignment. Couldn't read the tapes. What to do? Other than run around in circles, scream and shout?
With comprehensive work-to-do logs, the entire database was recreated (laboriously) from scratch. And from personal experience, the work to create and manage this was worth it to be able to say to the company comptroller, "Oh yes, the hard drive crashed and is unrecoverable. The backups are unusable, but we can re-create the databases from the to-do logs in about 10 hours on a new hard disk." One line I heard about this topic: "There are two kinds of systems designers-- ones that have had an important disk die with no backup, and ones that have not."
I like straight ASCII logs. Log databases have a way of getting lost when the db breaks. ASCII + perl == my favorite recovery tool. And keep the logs on another physical disk, please.
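"ASCII + perl" as a recovery tool might look something like the following sketch. The log format here is invented for illustration (`DONE <id> key=value ...`); the idea is simply that replaying the done log in order, letting later entries overwrite earlier ones, rebuilds the final state of each record without the database.

```perl
use strict;
use warnings;

# Replay a plain-ASCII work-done log and rebuild the records that
# would otherwise live only in the broken database. Assumed format:
#   DONE <id> <field>=<value> ...
my %records;
while (my $line = <DATA>) {
    chomp $line;
    next unless my ($id, $rest) = $line =~ /^DONE\s+(\S+)\s+(.*)$/;
    while ($rest =~ /(\w+)=(\S+)/g) {
        $records{$id}{$1} = $2;   # later entries overwrite earlier ones
    }
}
for my $id (sort keys %records) {
    print "$id: price=$records{$id}{price} qty=$records{$id}{qty}\n";
}

__DATA__
DONE item-1 price=9.99 qty=3
DONE item-2 price=4.50 qty=1
DONE item-1 price=8.99 qty=3
```

Because the log is plain text on a separate disk, this recovery pass needs nothing but perl and the file itself -- no working database, no special tools.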
Sharon: Other points-- Did you remember to flush your buffers? Don't forget to record where you checkpointed, either. Checkpoints aren't much good if you don't hold onto the bookmark they give you.
That's also from personal experience. Very embarrassing. Most defensive coding practices come from embarrassing mistakes, and the determination to never do something so publicly jackassed-stupid again. Even if it never got past the dev box's second compile, it was too public for my tastes.
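Sharon's two reminders -- flush your buffers, and hold onto the bookmark a checkpoint gives you -- can be sketched like this. The filename is hypothetical; the techniques (the `select`/`$|` autoflush idiom, `tell` to record the offset, `truncate` to roll back to it) are standard Perl.

```perl
use strict;
use warnings;

open my $log, '>', 'recovery.log' or die "open: $!";
select((select($log), $| = 1)[0]);   # autoflush: a crash can't eat buffered lines

print {$log} "record 1\n";
my $bookmark = tell $log;            # the bookmark: byte offset of the checkpoint
print {$log} "record 2 (unconfirmed)\n";
close $log or die "close: $!";

# On recovery, roll the log back to the last confirmed checkpoint.
open my $fix, '+<', 'recovery.log' or die "open: $!";
truncate $fix, $bookmark or die "truncate: $!";
close $fix or die "close: $!";
```

Without the `tell` bookmark, you know you checkpointed *somewhere* but not where -- which is exactly the "not much good" case Sharon describes.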
Defensive coding is an exercise in cynicism. And in more industries than people would care to admit, there are users who actively try to break the system for their benefit.
You can expect data to not be as specified. Wrong format, wrong line width. Always have a default that does nothing dangerous, or better yet, nothing at all. Have an ELSE to catch the "otherwise" case.
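A minimal sketch of the "always have an ELSE" rule, with a made-up message dispatcher: every branch the spec mentions is handled, and the otherwise-case does nothing dangerous -- it just logs and declines.

```perl
use strict;
use warnings;

# Dispatch on a message type; the ELSE catches anything the spec
# didn't anticipate and does nothing at all, safely.
sub handle_message {
    my ($type, $payload) = @_;
    if    ($type eq 'create') { return "created $payload" }
    elsif ($type eq 'delete') { return "deleted $payload" }
    else {
        warn "unknown message type '$type', ignoring\n";
        return undef;   # safe default: refuse, don't guess
    }
}

print handle_message('create', 'order-42'), "\n";
handle_message('frobnicate', 'order-42');   # falls into the safe default
```

The default branch costs three lines now; the alternative is discovering, in production, what your code does with input nobody specified.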
Test data for null / empty / zero values before doing something in which those values could cause a problem. If you don't control the source of the data, range check all field types / values before inserting them into the database. Software tends to trust database queries, so to make it worthy of that trust, the data going in needs to be checked.
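The null/empty/zero and range checks above might be gathered into a single validation pass that runs before any row reaches the database. The field names and ranges here are hypothetical; the shape -- collect every violation rather than dying on the first -- is the point.

```perl
use strict;
use warnings;

# Range-check a row before it is ever inserted: reject undef, empty,
# zero, and out-of-range values so downstream queries can trust the db.
sub validate_row {
    my ($row) = @_;
    my @errors;
    push @errors, 'name is empty'
        unless defined $row->{name} && length $row->{name};
    push @errors, 'qty must be a positive integer'
        unless defined $row->{qty} && $row->{qty} =~ /^\d+$/ && $row->{qty} > 0;
    push @errors, 'price out of range'
        unless defined $row->{price} && $row->{price} =~ /^\d+(\.\d+)?$/
            && $row->{price} > 0 && $row->{price} < 100_000;
    return @errors;   # empty list means the row is safe to insert
}

my @bad = validate_row({ name => '', qty => 0, price => -5 });
print "rejected: $_\n" for @bad;
```

Only rows that come back with an empty error list get anywhere near an INSERT; everything else is logged and refused, which is what makes the later queries worthy of trust.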
So, in summary, our points are pretty much "Be skeptical, make logs, before and after, expect to be wrong, and be skeptical... did we mention being skeptical?"