Re (tilly) 1: (OT) Rewriting, from scratch, a huge code base

by tilly (Archbishop)
on Sep 29, 2001 at 01:24 UTC

in reply to (OT) Rewriting, from scratch, a huge code base

I believe that this paper is the one that may have motivated Understanding *IS* Better by chromatic. He certainly agreed with it.

That said, I disagree with the thesis. I do not believe that old code is good code. I do not believe that all code is worth rescuing. There are times a rewrite is necessary. Here are a few reasons that I have done rewrites in the past and would do them again:

  1. The code is scattered across several languages, and rewriting in just one would allow more consistent argument processing and error handling. For instance, I once replaced a lot of old Expect scripts with Perl for this reason.
  2. The code relies on interfaces that are fundamentally broken. For instance, code built on the old Text::CSV is unable to handle embedded newlines. Given a system which processes CSV files and needs to handle embedded newlines, the code has to be fixed and the broken module replaced.
  3. The code is of sufficiently low quality that fixing it is harder than replacing it. IBM found in the 1980s that when they tracked bugs, something like 10% of the components accounted for most of the bugs. Rewriting those components (once identified) from scratch significantly reduced overall bug counts.
  4. The code is full of hard-coded information (e.g. paths) that you need to track down and replace with something more flexible. A particularly good opportunity for this is when you need to move it from one machine to another. Choices, choices: recreate the environment it needs, along with dependencies that are not documented, or replace its functionality with a more portable version?
  5. The system depends on a component that you are trying to eliminate. Fairly often a system will have two parts that do pretty much the same thing, and life would be easier if you were only using one of them. (Less to remember, easier to teach people how things work, etc.) In the process of consolidating, parts that use the losing component get replaced as opportunity permits.
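The embedded-newline problem in point 2 is easy to demonstrate with nothing but core Perl (a sketch with made-up data; you don't need the old Text::CSV itself to see why any line-by-line parser breaks):

```perl
use strict;
use warnings;

# CSV data with an embedded newline inside a quoted field.
my $data = qq{1,"first line\nsecond line",3\n4,plain,6\n};

# Naive approach: treat every "\n" as a record boundary.
my @naive = split /\n/, $data;           # 3 "records" -- wrong

# Quote-aware approach: a newline only ends a record when we are
# outside of double quotes.
my (@records, $buf);
$buf = '';
my $in_quotes = 0;
for my $ch (split //, $data) {
    $in_quotes = !$in_quotes if $ch eq '"';
    if ($ch eq "\n" && !$in_quotes) {
        push @records, $buf;
        $buf = '';
        next;
    }
    $buf .= $ch;
}
push @records, $buf if length $buf;      # 2 records -- right

printf "naive: %d records, quote-aware: %d records\n",
    scalar @naive, scalar @records;
```

A module that reads a file line by line before parsing is stuck with the naive behaviour no matter how clever its field-splitting is, which is why the fix has to be a replacement rather than a patch.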
An excellent example of code that needed a major rewrite was Netscape 4's rendering engine. People say that it was rewritten because of performance issues. That isn't what I remember. What I remember is that it was rewritten because, to support what people were trying to do, it needed to be able to do incremental renders and incremental re-renders. This was not optional; it was required to implement large parts of the HTML spec. The hack used in the old engine was to re-render the whole page from scratch. The problem was that while this kind of worked in the easy cases, it was fundamentally broken. What people noticed is that it was slow. However, it also meant that a page refused to render in Netscape until the engine could place all of the pieces. A single slow image will not block an IE render, but it would block an old Netscape render of the page until enough of the image had arrived to figure out how big it was. It was also fragile. For instance, you could take Netscape 4, render a dynamically created page, resize the window, and no longer have a rendered page! Tracking this kind of thing down is no fun at all.

In other words, the primary reason for rewriting the renderer was not performance; it was that the old rendering engine tied them to an inherently buggy API that was biting them over and over again. The performance was a visible second issue. Of course rewriting the rest of Netscape went beyond that...

However he is right that the worst way to do a rewrite is to sit down and start writing something completely new from scratch. Instead I like to work as I suggested in Re (tilly) 1: Best way to fix a broken but functional program?. Decide on an overall flow for the new design. Pull some of the existing mess out and make it a shell around the new design. For instance, if you need a new rendering engine, then release something with the old one while you spec out the new engine. Then start incrementally writing the new, scooping out from the old as you go. It may take longer, but you don't lose the existing knowledge, and you don't stop yourself from delivering the product. And if you do it right, at some point the old becomes a small shell that you can kill at the right moment.
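The shell approach can be sketched in a few lines of Perl (hypothetical function names; only the shape of the dispatch matters):

```perl
use strict;
use warnings;

# Old and new implementations live side by side; the shell routes
# each request to the new engine once that feature has been ported.
my %ported = map { $_ => 1 } qw(render_text render_table);

sub render_old { my ($what) = @_; return "old:$what" }   # legacy path
sub render_new { my ($what) = @_; return "new:$what" }   # growing replacement

# The shell the callers see -- its interface never changes, so the
# rest of the system doesn't care which engine did the work.
sub render {
    my ($what) = @_;
    return $ported{$what} ? render_new($what) : render_old($what);
}

print render('render_text'),  "\n";   # handled by the new engine
print render('render_image'), "\n";   # still falls through to the old one
```

As features move into the new engine, entries are added to the ported set; when every caller goes through the new path, the shell and the legacy code can be deleted together.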

Now some may tell me that I just invented refactoring. I disagree. The fundamental principle of refactoring is to rewrite code incrementally through a series of small, behavior-preserving transformations. What I describe is a technique for writing a new project from scratch, while analyzing the old code very carefully for the important things it had, and with the plan of throwing the old code away when you can. Conceptually, refactoring is the process of transforming cr*p into soil. This is a process of incremental replacement laying the groundwork for an apparently catastrophic replacement once the new foundation is there.

Now this doesn't mean that I don't think that refactoring is a great idea. It is. But trying to preserve something just because it is already written is a mistake in my book.


Re: Re (tilly) 1: (OT) Rewriting, from scratch, a huge code base
by chromatic (Archbishop) on Sep 29, 2001 at 10:51 UTC
    Good catch. This is precisely the article I had in mind. Yes, I unashamedly think that rewriting from scratch is the wrong kind of laziness.

    Take a look at File::Find sometime. What a mess. My refactored version passes all of the tests (at least on a Unixy system) and is *half* the code of the original. How long would it have taken to rewrite that from scratch? Far longer than it took to make incremental changes and get it into better working order. (If anyone's interested and can debug on a different platform, let me know.)

    Mozilla's a great example. Also consider Perl 6. It's been a year and a couple of months, and with all of the brilliant ideas and hard work and smart people, we've got a handful of design documents, a virtual machine that does some math and can print things out, and (admittedly) saner bytecode. This, while we're still fixing bugs and trying to finish the test suite for Perl 5! Do the internals need to be improved? Yeah. Will Perl 6 deliver? Undoubtedly. Why throw away 35 megabytes of source code if Perl 6 will act 95% the same as Perl 5?

    Realistically speaking, if you don't have design documents or tests or even a good idea of what the software is doing, how confident can you be that your rewrite will do what it's supposed to do, at the same level? (If it's not working period, that's a different story.)

    And, yeah, your process sure smells a lot like "improving the design of existing code." I don't know of any Refactoring gurus who'd claim that you should have x% of the original code left when you're done refactoring.

    My point (and maybe Joel would agree) is that though maintenance isn't the fun part of programming, you very rarely have (or should take) the luxury of skipping it.

    Ovid, you'll have to write tests sometime. My recommendation is to do it now, based on the old code. You'll get a handle on what it's really supposed to be doing, you'll immediately see how to fix it, and you'll grow as a programmer very quickly.


      Colour me unconvinced.

      First take the Netscape/Mozilla project. The article addressed that one and said that the decision to rewrite was an unmitigated disaster, and it implied strongly that if they had decided to work with the existing code-base, they would have had better results. Well, the way that I remember it, they were getting eaten alive by IE, and development was crippled by having to deal with and work around layers of bug fixes on bug fixes. The fact that one set of nightmares came true doesn't mean that the other would not have. Hindsight isn't 20/20; rather, it is speculation with the comfort of knowing you will never find out if you are wrong.

      Take next Perl 6. Perl 6 had many major goals. The most important was to reinvigorate the Perl community. Others included making it easier to get into the internals and easier to port to different platforms (e.g. the JVM, C#, or a more aggressively optimized binary). Note that Perl 5 was not doing too well at these tasks despite much energy and interest. Well, Perl 6 has done quite well at the first, and I am fairly confident that it will be able to succeed in the others. It just won't do it on an aggressive schedule.

      But I said I don't like to argue from failure. How about a success? Take a look at perl. Scan for the words "version 5". Perl 5 is a complete rewrite of Perl 4. According to Joel that was a horrible mistake, and Perl 5 was bound to fail. It didn't fail, that I can see. In fact, when I look at what it resulted in, I don't think that features like lexical scoping, removing 2/3 of the reserved keywords, adding references, etc., would have happened in the same timeframe through incremental refactoring. Furthermore, I give Larry Wall due credit; he has probably been writing influential free software longer than both of us have been writing software combined. Given his trail of successes, when he thinks a rewrite is doable, it probably is. And if he thinks it is a good idea for getting Perl where he wants it to go, well, he is the one whose vision got it where it is.

      Now this is not to say that refactoring is a bad idea. When it works, it works well. It is a useful tool. I am glad that it helped you on File::Find. But I think that most big projects can profitably use multiple modes. For instance the Linux project does both. Most of the time you do incremental ongoing development. But I think that ESR made the right choice when he made CML 2 a complete rewrite. Sometimes you incrementally adapt a component. Sometimes you replace it. You are doing something wrong if you need to replace big components very often.

      And one final thing. Ovid is dealing with a system, one of whose problems is that it had a bunch of features added without much rhyme or reason because an ex-employee thought they were "cool". It does not have a large user base. I don't think, therefore, that he should build tests based on the current behaviour, enshrining the misfeatures in tests. Rather, he should do some research about how the system is actually used, and only test for what people use from it. Whether or not he rewrites from scratch, blindly refactoring based on the current behaviour will not solve one of the problems that he wants to solve. And, whether or not he rewrites, he should think about how to solve the business problem. Perl 4 did not stop development just because Larry was working on Perl 5. Perl 5 is not stopping active development just because Perl 6 is being worked on. Creating great software is one thing. But you need to survive to actually do it...

        You're right, we're all speculating with the benefit of hindsight. Probably no one at Mozilla thought they'd spend three years just to get something that could honestly replace Netscape 4. (This has suddenly become a Software Quality Rant.)

        I would like to see programmers give up the idea that the only way to understand code is to rewrite it. It's nice that new programmers have the energy and inclination to start things from scratch, but we all know where the elephants go to die.

        Maybe I'm going too far, but I doubt we'll see quality software until programmers learn how to improve *existing* software. Part of that means writing maintainable software, yes. Ovid's got it exactly right, above.

        But if they only know how to rewrite, not maintain, there's little hope of that.
