A little discussion between a peer and me that I thought others might be able to provide some insight on. (The peer develops in Java, not Perl, but I doubt that language factors into this.)
We were having a short discussion about how best to provide the user with a certain type of functionality, one we've tried to provide in the past, although not entirely successfully. I say not successfully because we've had multiple "critical"-level issues with it. My coworker thought we had solved all the problems; I wasn't comfortable with that. And then, today, I found that another teammate had (just last week) discovered yet another problem with this functionality that, admittedly, had been there since the beginning, just waiting for someone to find it.
This is when I said of my design (a radical departure from past functionality):
All I'm trying to do is make it impossible to have bugs. Well, these types of bugs. Quality isn't about fixing bugs, it's about preventing them. It's not about fixing the edge cases. It's about having no edge cases.

It was about this point in the conversation that he started to become convinced of why I wanted to disrupt the existing functionality from the core to completely change the way we looked at the problem.
Then I realised that this was quite concise, and thought it rather catchy. It got me wondering whether there are any concise, potentially catchy phrases that can prove useful in aligning everyone's thoughts around the core ideas of producing quality output. This may have nothing to do with producing idiomatic code, or clean code (although I would think that clean, idiomatic code is much more likely to be of high quality). This may be just quality of design, which doesn't even allow for code problems because the design is so thoroughly bulletproof.
PS: he writes Java, I write Perl, and we're both talking about the C++ code ;-) As I said, this is a language-agnostic observation.
Re: On Quality
by adrianh (Chancellor) on May 10, 2005 at 10:24 UTC
It got me wondering if there were any concise, potentially catchy phrases that can prove useful in aligning everyone's thoughts around the core ideas of producing quality output.
None of them original to me, but I agree with all of them:
My own personal mantra on software development has changed many times over the years. Currently it goes something like:
Unfortunately - that's hardly pithy :-)
Interesting. I can see where some of these may actually contradict each other. For example, "Do The Simplest Thing That Could Possibly Work" often means "cut and paste", which is the exact opposite of "Don't Repeat Yourself". Refactoring is not the simplest way to get things to work. Which, after following some of your links, seems to be part of their description. To be honest, they started sounding like complex ideas all over again ;-) You need enough experience to tell when something is simple yet still abides by all the other rules.
Further, I really have to disagree with your first one: "You Arent Gonna Need It". I've spent years maintaining a project that was developed this way. And over the next couple of months we're just finishing up the complete, ground-up rewrite (which started back in 2001). The rewrite has flexibilities that we'll never use. Or at least, we think we'll probably never use. But more than one flexibility has caught us by surprise, proving useful when a last-minute design change came in that we could handle with a small tweak in a data file, or minor code changes/additions (or both). We have a marketing department that likes to make changes to our product lineup and functionality after we ship the golden master CDs to manufacturing. And I'm not exaggerating here one bit. You don't get this type of flexibility by writing code when you need it; you get it by writing a framework that does it already.
For the curious, we had to tell manufacturing to ignore the CDs that were on their way, and reburn CDs immediately, and then courier those. But we could add our changes in minutes. Waiting until the requirements came in would have caused a delay of hours or days, which likely would have meant we couldn't meet the new requirements and the schedule at the same time, which, presuming a competent marketing department, would mean we couldn't meet end-customer needs.
I can see where some of these may actually contradict each other. For example, "Do The Simplest Thing That Could Possibly Work" often means "cut and paste" which is the exact opposite of "Don't Repeat Yourself"
Rather than looking at them as contradictory - look at them as working together. Doing the simplest thing that can possibly work might be to copy and paste something. That gives us a code smell of duplication, since we should do things once and only once. However, since we refactor mercilessly, we'll quickly remove that duplication into some kind of common abstraction. So we have clean code. Problem solved.
These are not things you do in isolation - you do them all together all of the time. Doing the simplest thing that can possibly work is a starting point not an end. Synergy is a wonderful thing :-)
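As a hypothetical sketch of that interplay (invented names; Python stand-in, since the thread calls the point language-agnostic): first the simplest thing that could possibly work, duplication and all, then a merciless refactor that folds the duplication into one abstraction.

```python
# Step 1: the simplest thing that could possibly work -- with duplication.
def format_user(name, email):
    return f"{name.strip().title()} <{email.strip().lower()}>"

def format_admin(name, email):
    return f"{name.strip().title()} <{email.strip().lower()}> [admin]"

# Step 2: refactor mercilessly -- the duplication collapses into one function.
def format_contact(name, email, role=None):
    base = f"{name.strip().title()} <{email.strip().lower()}>"
    return f"{base} [{role}]" if role else base
```

The copy-paste version ships first; the abstraction appears only once the duplication is actually there to smell.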
Refactoring is not the simplest way to get things to work
I used to think that. I don't anymore. I've found incrementally growing and refactoring a framework to be an enormously effective way of developing flexible high quality applications.
Further, I really have to disagree with your first one: "You Arent Gonna Need It" … You don't get this type of flexibility by writing code when you need it, you get this type of flexibility by writing a framework that does it already.
Colour me slightly suspicious with your diagnosis of the fault with your first system :-) Why was the original project so hard to change? Was there duplication? Was there scope for refactoring? How did you know what flexibility you needed to add to the second system? Were there requirements that weren't made explicit in the first system? Etc.
The reason I'm suspicious is that the flexible framework that you describe is what I'd expect to produce by following YAGNI and the other practices I briefly outlined.
Writing good code is a trivial exercise: any half-decent coder can learn to do it. Writing good code on a budget, without a decent testing environment, and with sharp real-time constraints is brutally hard. Unfortunately, that's largely today's business climate...
The idea is that if you can build something smaller and faster while keeping the code clean enough to add to later, you can deliver it months earlier. I think we've all seen programmers waste time on pet abstractions in pursuit of a cool architectural idea.
This is one of those comfortable academic ideas that doesn't make much business sense: doing a lot of work on a program that, if done correctly, will result in a product that does exactly the same thing as it did before, and if done wrong might ruin the product.
If you let things get to the point where refactoring is "a lot of work", you've already screwed up. You're supposed to do it as you build the program, so that you get the benefits of cleaner code while you are building it.
In order to test, you need to know exactly what you're testing, and how. To properly test a section of the program, you need to define what that section does and how it does it (all preconditions, postconditions, side effects, etc.). You can't do that until you write the code.
At some point, you define the interface. It could be while coding, or it could be while writing tests. You will probably make some changes over time, but the work of defining the interface has to happen anyway, so it's not wasted time.
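One way to picture "defining the interface while writing tests" (a hypothetical sketch; `parse_version` is an invented example): the test states the contract before any implementation exists, and the implementation is then written to satisfy it.

```python
# The test pins down the interface first: inputs, outputs, shape of the result.
def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("10.0") == (10, 0)

# Only then is the implementation written to satisfy that contract.
def parse_version(s):
    return tuple(int(part) for part in s.split("."))

test_parse_version()
```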
You Aren't Going to Need It: Unless, of course, you are.
Well then you do need it don't you and it isn't a situation where YAGNI applies :-)
The whole point of coding it beforehand is that later you won't have time to fiddle with coding and testing; whereas right now, you do.
Right now I have a set of features to implement. Some the customer needs now. Some the customer needs later. YAGNI is all about doing stuff the customer needs now first. That way I end up spending time coding and testing features the customer actually needs, rather than spending time coding and testing features that we don't need until some indeterminate date in the future - by which time the requirements may have changed anyway.
Once And Only Once: Is rarely the simplest thing to do, because it requires abstraction. Abstracting away from the business requirements is one more thing that can go wrong.
Removing duplication abstracts away from business requirements? Usually the opposite in my experience.
Having, say, a global variable or class default that has to be tracked down is often harder to follow than just calling the functions explicitly, without implicit values being passed around.
Sorry I just don't follow what you're getting at here. Removing duplication invariably makes things clearer in my experience so I think we must be talking about different things.
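For what it's worth, the contrast being drawn above seems to be between implicit state and explicit parameters; a hypothetical sketch (invented names):

```python
# Implicit: behaviour depends on state set somewhere else in the file.
DEFAULT_RATE = 0.05

def total_implicit(amount):
    return amount * (1 + DEFAULT_RATE)   # where did the rate come from?

# Explicit: every input is visible at the call site.
def total_explicit(amount, rate):
    return amount * (1 + rate)
```

Both compute the same thing; the difference is only in how far a reader must look to know what influences the result.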
Worse still, once you move your code around, from say a nested if cascade to a hash table lookup with coderefs, in a month, the business requirements will require that you move to a different model of abstraction to handle a new problem. Now you've got to go back to the old code again... which is needless repetition of coder time.
I don't see how moving from if/then/else statements to a table look up is removing duplication?
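The move being discussed, a nested if cascade versus a table lookup with coderefs, might look like this (a hypothetical Python stand-in for the Perl hash-of-coderefs; names invented):

```python
# Nested if/elif cascade: each new event means another branch.
def handle_v1(event):
    if event == "open":
        return "opening"
    elif event == "close":
        return "closing"
    elif event == "save":
        return "saving"
    return "unknown"

# Dispatch table: each new event is one table entry, not more control flow.
HANDLERS = {
    "open": lambda: "opening",
    "close": lambda: "closing",
    "save": lambda: "saving",
}

def handle_v2(event):
    return HANDLERS.get(event, lambda: "unknown")()
```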
A decent search and replace, applied intelligently (or better still, a language-specific refactoring browser) can make mass changes at a single stroke: without the added confusion of implicit values lurking in the background.
In my experience people waste far more time dealing with bugs related to duplication in code than they would save by avoiding refactoring the duplication out.
Refactor Mercilessly: This is a pipe dream. Coder time is very, very expensive. This is one of those comfortable academic ideas that doesn't make much business sense: doing a lot of work on a program that, if done correctly, will result in a product that does exactly the same thing as it did before, and if done wrong might ruin the product.
It's not a pipe dream since lots of people do it with a fair amount of success.
Short term expense, long term profit. If you are refactoring mercilessly it's a very short term expense since it's a background task that you're doing all of the time. Refactoring only gets expensive if you let the code get messy. Clean the kitchen after every meal, not once a month.
(which reminds me - I need to do the washing up :-)
Test First: This is a great concept. It doesn't work in most business settings, though.
I know lots of business coders, myself included, who'll disagree with you there :-)
In order to test, you need to know exactly what you're testing, and how. To properly test a section of the program, you need to define what that section does, and how it does it (all preconditions, post conditions, and side effects, etc.)
That is, as it were, the point. You use the tests you write to define the requirements for the code that you write. It works really well.
Tight Feedback: is costly. If you have the money to burn, it may or may not be profitable.
Is it more expensive to find your code fails tests now, or next week when QA gets it? Is it more expensive to know the users hate a feature now, or after the manuals are printed? Is it more expensive to know that the code you're writing duplicates a feature Bob wrote last week as you start, or two weeks later in the code review?
Tight feedback loops save money. In spades.
Introspection: is good when it works; and a complete waste of time when it doesn't pay off. Best reserved for people with the actual power to change things. Usually, the real problem is: "we don't have enough money/resources/manpower to solve this problem correctly"
Often resources aren't the problem. It's resources being badly applied, usually because of foolish project management practices and overcomplicated development methodologies. Give me a well organised group of 12 developers over a badly organised 120 any day of the week.
Transparency: again, this typically generates better code, at a cost of time (and money). Worth it in most cases: but may be hard to persuade management.
I have to admit I've got to that stubborn age when I'm going to dig out the people in charge and shake them until they listen :-)
Writing good code is a trivial exercise: any half-decent coder can learn to do it.
I have to disagree. Writing good robust code is damn difficult. I've been a professional programmer for more than half my life now and I'm still finding new and better ways to do it.
If it's so damn trivial why do so many people bugger it up on such a regular basis?
Re: On Quality
by gaal (Parson) on May 10, 2005 at 07:07 UTC
Note that I don't say "untested", I say code that cannot be tested. As projects grow it becomes harder to introduce tests if you didn't think about testability as you went along. And there goes your quality.
But for me, the strongest takeaway from the bits I've read about XP, TDD, and so on is the principle that you should always be free to rework what you have. I think you can get it right the first time, but then time passes and it is no longer very right. If you have tests and enough support from your development culture, you are not afraid to refactor, redesign, rewrite. All in the appropriate degree, of course. Don't throw everything away as change comes. Only what you need to.
Re: On Quality
by tlm (Prior) on May 10, 2005 at 10:48 UTC
I try to follow the guideline of "writing code to make life easier for the poor fellow who may need to debug it". That poor fellow is often me; after a few weeks, my own code begins to look to me as unfamiliar as anyone else's.
That's easier said than done, though. It takes practice to identify what will make debugging easier. For example, it took me some time to understand that the reason for avoiding global variables was to facilitate debugging, since to fully grok the state of such a variable one must, in principle, examine the code for the entire package, which can be anywhere. Once I realized this, it became clear that a lexical whose scope is a huge file is scarcely better than a package variable (aside from not being able to modify it from code in a different file), and I understood that the goal of shrinking the scope of variables is to make their fates obvious "at a glance", a goal that makes sense only to someone who knows firsthand the joys of debugging.
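A hypothetical sketch of that "fate obvious at a glance" point (invented names): when a variable's whole lifetime fits inside one small function, there is nothing to track down.

```python
# Wide scope: if `count` were a file-level variable, knowing its state here
# would require reading every line of the file that might touch it.

# Narrow scope: the variable's entire life is visible in one screenful.
def count_matches(items, predicate):
    count = 0                      # born here
    for item in items:
        if predicate(item):
            count += 1             # every mutation is in plain sight
    return count                   # dies here
```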
One implication of the "code for debugging" dictum that I find difficult to follow is to resist the golfish temptation to subject my code to multiple rounds of compression. Yes, succinct code can be clearer than long-winded code (largely because long-winded code is often a symptom of not fully understanding the problem in the first place), but it can also slide into obfuscation. Unfortunately, it is very difficult to know where to draw the line, because at the time of coding, one is so familiar with the problem that it is almost impossible to put oneself in the shoes of someone seeing one's wonderfully concise gems for the first time. As I already remarked, that "someone" is often me a few weeks later.
the lowliest monk
Re: On Quality
by mstone (Deacon) on May 10, 2005 at 20:23 UTC
Broadly speaking, it sounds like you're talking about making sure your designs are logically/algorithmically correct. That's an honorable goal, but as Alan Perlis once said, "it's easier to write an incorrect program than understand a correct one."
Correctness is still something of a black art. Most of the research in actual machine-assisted verification seems to be happening in the area of Formal Methods, but nobody seems to have come up with a real killer app in that field, yet. Too many of the tools and notations out there use characters that don't exist on a normal keyboard. In that sense, they're kind of like APL.. they say you can write any program in the world as a one-liner in APL, and in six months nobody, including you, will be able to read it. ;-)
Joking aside, though, you'd probably do well to dig through the literature on correctness and add as many of the general techniques as possible to your bag of tricks. Three which I happen to like are:
Can you explain this in more detail? I think I have a rough intuition about what that statement means, but it's too vague to be very revealing.
Makeshifts last the longest.
It means, "avoid arbitrary limits", but the concreteness of "0, 1, or unlimited" is a useful benchmark.
There are sometimes fundamental reasons for imposing limitations like "there can be at most one X per Y" (0 or 1), "there must always be exactly one X for each Y" (1), or "no Y can ever have an X" (0), while limitations like "a Y can never have more than 5 Xs" are most often just arbitrary.
Arbitrary limits very often come back to bite you. So it is good to have at least some motivation to do the extra (or perhaps just "different") work up-front to eliminate such arbitrary restrictions.
So being at least suspicious of any "at most N" (for N > 1) restrictions is a good habit.
Of course, sometimes 0 or 1 limits are actually arbitrary and (rarely) specific finite limits above 1 are not arbitrary. But the "0, 1, or unlimited" test is very useful.
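In code terms, the "0, 1, or unlimited" benchmark might look like this (a hypothetical sketch; names invented): a baked-in "at most 5" versus no arbitrary cap at all.

```python
# Arbitrary limit: "at most 5 tags" is baked into the code and will
# eventually bite whoever needs a sixth.
def add_tag_limited(tags, tag):
    if len(tags) >= 5:
        raise ValueError("at most 5 tags")
    tags.append(tag)

# Zero-one-unlimited: no arbitrary cap; any limit that exists should be
# fundamental to the problem, not a number someone picked.
def add_tag(tags, tag):
    tags.append(tag)
```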
Re: On Quality
by kgraff (Monk) on May 10, 2005 at 13:09 UTC
Quality as attitude rather than procedures -- could be from Zen and the Art of Motorcycle Maintenance.
Re: On Quality
by Joost (Canon) on May 10, 2005 at 22:50 UTC
I doubt that language factors into this...
I wanted to disrupt the existing functionality from the core to completely change the way we looked at the problem.
The reason this is a language-agnostic observation is that it has *nothing* to do with the code itself. You're talking about changing specifications. Working on the specs until they're simple and easy to understand is the most important (and usually hardest) part of designing software. Once the specs are straightforward and easy to understand, the rest of the design (including writing the code) is much easier.
When I'm designing software I try to describe the problem in the simplest possible terms (and usually that includes explaining/working out a completely different description of the original problem to/with the client), then think of the simplest solution, and implement that. The reason is simple: software and specifications are only useful to humans, and humans don't actually get along well with complexity. This includes the end-user and the programmer.
It still takes experience and taste to make it work, though - I've made plenty of compromises to this ideal. I don't know of any sound-bites to explain software design :-)
Regardless of the specification, the actual implementation of a solution for a given specification can vary tremendously. Often the person giving the specification does not need to know anything about how it is implemented, but the implementor needs to be able to anticipate how the specifier may change the specification. The OP's observation has very much to do with the code itself: how you implement things and how tightly you couple things together.
Re: On Quality
by Scott7477 (Chaplain) on Jun 02, 2006 at 06:39 UTC
You might consider that statement to be a blinding flash of the obvious, but I'm positive that many monks can give examples from their careers where their bosses decided to skip certain processes such as completing a spec before starting coding. The decision to skip a process implies that there is a cost to that process that can be avoided without penalty later on. Another snippet from the Amazon review sums up the penalty: "What costs, and costs dearly in terms of rework, test, warranty, inspection, and service after service, is doing things wrong."