Nobody Expects the Agile Imposition (Part VI): Architecture

by eyepopslikeamosquito (Archbishop)
on Jan 23, 2011 at 10:15 UTC

The Joy of Legacy Code

Have you ever played a game called Jenga? The idea behind Jenga is that you start by making a tower of blocks. Each player removes a block from somewhere in the tower, and moves it to the top of the tower. The top of the tower looks tidy, but it's very heavy and the bottom of the tower is growing more and more unstable. Eventually, someone's going to take away a block from the bottom and it'll all fall down.

I came into Perl development quite late, and I saw a very intricate, delicate interplay of ideas inside the Perl sources. It amazed me how people could create a structure so complex and so clever, but which worked so well. It was only much later that I realised that what I was seeing was not a delicate and intricate structure but the bottom end of a tower of Jenga. For example, fields in structures that ostensibly meant one thing were reused for completely unrelated purposes, the equivalent of taking blocks from the bottom and putting them on the top.

-- The Tower of Perl by Simon Cozens

The perl5 internals are a complete mess. It's like Jenga - to get the perl5 tower taller and do something new you select a block somewhere in the middle, with trepidation pull it out slowly, and then carefully balance it somewhere new, hoping the whole edifice won't collapse as a result.

-- Nicholas Clark

The Joy of Legacy Code. I'm sure you've all got your own war stories of legacy code that has grown and grown until it resembles the delicate and fragile Jenga tower lightheartedly described by Cozens and Clark above. Not even Perl Monks has been spared:

The problem isn't an infrastructure issue, however -- and speaking as one of the handful of people who've had a hand in developing the site's software: It's our own gosh-darn fault. Perlmonks is WAY more complex than when it originally launched. It does a crapload of perl evals and sql queries per page. It's vulnerable to resource hogs. Searching can cripple the database. And right now, I don't think we're gonna fix these problems any time soon. ... It's not a matter of computer resources, as much as human engineering resources.

-- Re: perlmonks too slow by nate (original co-author of the Everything Engine)

Will Rewriting Help?

Netscape 6.0 is finally going into its first public beta. There never was a version 5.0. The last major release, version 4.0, was released almost three years ago. Three years is an awfully long time in the Internet world. During this time, Netscape sat by, helplessly, as their market share plummeted. It's a bit smarmy of me to criticize them for waiting so long between releases. They didn't do it on purpose, now, did they? Well, yes. They did. They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.

It's important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time. First of all, you probably don't even have the same programming team that worked on version one, so you don't actually have "more experience". You're just going to make most of the old mistakes again, and introduce some new problems that weren't in the original version.

-- Joel Spolsky on not Rewriting

Now the two teams are in a race. The tiger team must build a new system that does everything that the old system does. Not only that, they have to keep up with the changes that are continuously being made to the old system. Management will not replace the old system until the new system can do everything that the old system does. This race can go on for a very long time. I've seen it take 10 years. And by the time it's done, the original members of the tiger team are long gone, and the current members are demanding that the new system be redesigned because it's such a mess.

-- Robert C Martin in Clean Code (p.5)

"It's harder to read code than to write it" (Joel Spolsky) - writing something new is cognitively less demanding (and more fun) than the hard work of understanding an existing codebase ... which might explain the typical exchange below :)

Developer: The project I inherited has weak code, I need to rewrite it from scratch
Boss: Will there ever be an engineer who says, the last guy did a great job, let's keep all of it?
Developer: I'm hoping the idiot you hire to replace me says that

-- Green Vs Brown Programming Languages

As indicated above, a grand rewrite is not necessarily the answer. Indeed, I've seen nothing but disaster whenever companies attempt complete rewrites of large working systems.

Apart from the daunting technical difficulties of performing the large rewrite, there's often substantial cultural resistance to replacing established software, even when the rewrite goes smoothly and introduces significant improvements. Examples that spring to mind here are: GNU Hurd replacing Linux; CPANPLUS replacing CPAN; Module::Build replacing ExtUtils::MakeMaker; Python 3 replacing Python 2; and Perl 6 replacing Perl 5 ... though, admittedly, Subversion and git seem to have faced little resistance from diehard CVS users.

That's not to say it can't be done though. The great Netscape rewrite (ridiculed by Spolsky above) -- though a commercial disaster -- metamorphosed into an open source success story. Another example of a successful rewrite, pointed out by tilly below, is the Perl 5 rewrite of Perl 4.

What About Refactoring?

Well, if rewriting won't help, what are we supposed to do? We surely need to provide a glimmer of hope for those poor souls condemned, day after day, to anxiously poking at a terrifying Jenga tower. Maintaining such a tangled mess is cruel, inefficient, and ultimately unsustainable for any business.

The only humane option left that I can see is to relentlessly refactor legacy code, subsystem by subsystem, continuously and forever. To always keep it clean. To prevent it from becoming a tangled tower in the first place. Though such an approach seems sensible to me, it can be politically difficult to secure funding for such an endeavor. Apart from the difficulty of justifying the return on investment of such work, you also incur an opportunity cost: time spent refactoring old code is time not spent developing new products and new features.

Habitability is the characteristic of source code that enables programmers, coders, bug-fixers, and people coming to the code later in its life to understand its construction and intentions and to change it comfortably and confidently. Habitability makes a place livable, like home. And this is what we want in software -- that developers feel at home, can place their hands on any item without having to think deeply about where it is. It's something like clarity, but clarity is too hard to come by.

-- Richard Gabriel's Patterns of Software

Like Richard Gabriel, I prefer to aim for the more pragmatic "habitable code" rather than some perfectly abstracted ideal. And I admire Robert C Martin's homespun advice of "follow the boy scout rule and always leave the campground cleaner than you found it" because this simple rule gives hope to the maintenance programmer that things will improve in the future. I'm interested to hear of any tips you may have to motivate and make life more enjoyable for the maintainer of awful old legacy code.

Unit Testing Legacy Code

For many years, I've argued passionately for the many benefits of Test Driven Development:

  • Improved interfaces and design. Writing a test first forces you to focus on the interface. Hard-to-test code is often hard to use. Simpler interfaces are easier to test. Functions that are encapsulated and easy to test are easy to reuse. Components that are easy to mock are usually more flexible/extensible. Testing components in isolation ensures they can be understood in isolation and promotes low coupling/high cohesion.
  • Easier Maintenance. Regression tests are a safety net when making bug fixes. Tested components are far less likely to break accidentally, and fixed bugs are far less likely to recur. Essential when refactoring.
  • Improved Technical Documentation. Well-written tests are a precise, up-to-date form of technical documentation.
  • Debugging. Spend less time in crack-pipe debugging sessions.
  • Automation. Easy-to-test code is easy to script.
  • Improved Reliability and Security. How does the code handle bad input?
  • Easier to verify the component with memory checking and other tools (e.g. valgrind).
  • Improved Estimation. You've finished when all your tests pass. Your true rate of progress is more visible to others.
  • Improved Bug Reports. When a bug comes in, write a new test for it and refer to the test from the bug report.
  • Reduce time spent in System Testing.
  • Improved test coverage. If tests aren't written early, they tend never to get written. Without the discipline of TDD, developers tend to move on to the next task before completing the tests for the current one.
  • Psychological. Instant and positive feedback; especially important during long development projects.
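
To make the "regression tests are a safety net" and "tests as documentation" points above concrete, here is a minimal test-first sketch using the core Test::More module. The My::Slugify module and its slugify() function are hypothetical names invented for this illustration; the test is written before the module exists and fails until the code is brought into line with it:

    # t/slugify.t -- written *before* My::Slugify exists (hypothetical module)
    use strict;
    use warnings;
    use Test::More tests => 4;

    use_ok('My::Slugify');    # fails until the module is actually written

    is( My::Slugify::slugify('Hello, World!'), 'hello-world',
        'punctuation stripped, words lower-cased and joined with dashes' );
    is( My::Slugify::slugify('  leading and trailing  '), 'leading-and-trailing',
        'surrounding whitespace ignored' );
    is( My::Slugify::slugify(''), '',
        'empty string passes through unchanged' );

Each test doubles as precise, executable documentation of the intended interface, and once it passes it stays in the suite as a regression safety net.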

So I was at first enthusiastic about the approach recommended by Michael Feathers in Working Effectively with Legacy Code, namely to (carefully) break dependencies and write a unit test each time you need to change legacy code, thus gradually improving the code quality while organically growing a valuable set of regression tests.

The Legacy Code Dilemma: When we change code, we should have tests in place. To put tests in place, we often have to change code.

-- Michael Feathers (p.16)

Feathers further catalogues a variety of dependency-breaking techniques to minimize the risk of making the initial legacy code changes required to get a unit test in place.
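
To give a flavour of what that looks like in practice, here's a minimal sketch of one such technique -- parameterizing a dependency to create a test seam -- with invented names throughout (send_report, My::Mailer and Stub::Mailer are hypothetical, not taken from Feathers' book):

    use strict;
    use warnings;

    # A hypothetical legacy routine, originally hard-wired to a concrete
    # mailer class, something like:
    #
    #   sub send_report {
    #       my ($report) = @_;
    #       My::Mailer->new('mail.example.com')->send($report);
    #   }
    #
    # The same routine with the dependency parameterized. Existing callers
    # are untouched (the default preserves the old behaviour), but a unit
    # test can now inject a stub instead of a real mailer.
    sub send_report {
        my ($report, $mailer) = @_;
        $mailer ||= My::Mailer->new('mail.example.com');
        $mailer->send($report);
    }

    # A tiny hand-rolled stub for the test: it just records what was "sent".
    package Stub::Mailer;
    sub new  { return bless { sent => [] }, shift }
    sub send { my ($self, $msg) = @_; push @{ $self->{sent} }, $msg; return 1 }
    package main;

    use Test::More tests => 2;
    my $stub = Stub::Mailer->new;
    send_report('weekly totals', $stub);
    is( scalar @{ $stub->{sent} }, 1,               'exactly one report sent' );
    is( $stub->{sent}[0],          'weekly totals', 'report body passed through unchanged' );

The change to the legacy routine is deliberately tiny and behaviour-preserving; that is the whole point of Feathers' catalogue.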

Though I've had modest success with this approach, there's one glaring omission in Feathers' book: how to deal with concurrency-related bugs in large, complex event-driven or multi-threaded legacy systems. Unit testing, by its nature, is not helpful in this all-too-common scenario. Overcoming this well-known limitation of unit testing ain't easy.

Unit Testing Concurrent Code

Test-driven development, a practice enabling developers to detect bugs early by incorporating unit testing into the development process, has become wide-spread, but it has only been effective for programs with a single thread of control. The order of operations in different threads is essentially non-deterministic, making it more complicated to reason about program properties in concurrent programs than in single-threaded programs.

-- from a recent PhD proposal to develop a concurrent testing framework

See the "Testing Concurrent Software References" section below for more references in this active area of research. Though I haven't used any of these tools yet, I'd be interested to hear from folks who have or who have general advice and tips on how to troubleshoot and fix complex concurrency-related bugs. In particular, I'm not aware of any Perl-based concurrent testing frameworks.

In practice, the most effective, if crude, method I've found for dealing with nasty concurrency bugs is good tracing code at just the right places combined with understanding and reasoning about the legacy code, performing experiments, and "thinking like a detective".

One especially useful experiment (mentioned in Clean Code) is to add "jiggle points" at critical places in your concurrent code and have the jiggle point either do nothing, yield, or sleep for a short interval. There are more sophisticated tools available, for example IBM's ConTest, that use this approach to flush out bugs in concurrent code.
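
To show the idea, here is a minimal self-contained sketch of a jiggle point using Perl ithreads (the jiggle() helper and the deliberately buggy counter are invented for this illustration); the jiggle randomly does nothing, yields, or sleeps briefly, widening the timing windows so that latent races fail often instead of once in a blue moon:

    use strict;
    use warnings;
    use threads;
    use threads::shared;
    use Time::HiRes qw(usleep);

    my $counter : shared = 0;

    # Jiggle point: randomly do nothing, yield, or sleep a little.
    sub jiggle {
        my $r = int rand 3;
        if    ($r == 1) { threads->yield }
        elsif ($r == 2) { usleep( 1 + int rand 1000 ) }
        return;
    }

    # Deliberately unsynchronized read-modify-write (no lock), so updates
    # can be lost; the jiggle makes the loss show up almost every run.
    sub buggy_increment {
        my $seen = $counter;     # read ...
        jiggle();                # ... stretch the window ...
        $counter = $seen + 1;    # ... write back: lost updates likely
    }

    my @workers = map { threads->create( sub { buggy_increment() for 1 .. 1000 } ) } 1 .. 4;
    $_->join for @workers;

    print "expected 4000, got $counter\n";    # usually prints something smaller

Guarding the jiggle behind a flag or environment variable so it does nothing in production is straightforward, and is essentially what tools like ConTest automate on a much larger scale.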

Agile Design

In our ongoing "debate" on TDD, Bob and I have discovered that we agree that software architecture has an important place in development, though we likely have different visions of exactly what that means. Such quibbles are relatively unimportant, however, because we can take for granted that responsible professionals give some time to thinking and planning at the outset of a project. The late-1990s notions of design driven only by the tests and the code are long gone.

-- James Coplien in foreword of Clean Code

While Kent Beck's four rules of simple design, namely:

  • Runs all the tests.
  • Contains no duplication.
  • Expresses all the design ideas that are in the system.
  • Minimizes the number of entities such as classes, methods, functions, and the like.
are helpful in crafting well-designed software, I don't agree with some extremists who claim that's all there is to design. Software design is an art requiring experience, talent, good taste, and deep domain, computer science and software usability knowledge. I feel there's a bit more to it than the four simple rules above. So, to supplement Beck's four simple rules, I present my twenty tortuous rules of non-simple design. :-)
  • Learn from prior art. Use models and design patterns. Most designs should not be done from scratch. It's usually better to find an existing working system and use it as a starting model for a new design.
  • Define sound conceptual models and domain abstractions. Unearth the key concepts/classes and their most fundamental relationships.
  • Aim for balance. Avoid over-simplistic, brittle and inflexible designs. Avoid over-complicated bloated designs with too much flexibility and unneeded features. Be sufficient, not complete; it is easier to add a new feature than to remove a mis-feature.
  • Plan to evolve the design over time.
  • Design iteratively. Some experimentation is essential. Look for ways to eliminate ungainly parts of the design.
  • Use a combination of bottom-up and top-down approaches.
  • Apply Separation of Concerns and the Law of Demeter.
  • Systems should be designed as a set of cohesive modules as loosely coupled as is reasonably feasible.
  • Minimize the exposure of implementation details; provide stable interfaces to protect the remainder of the program from the details of the implementation (which are likely to change). Don't just provide full access to the data used in the implementation. Minimize global data.
  • Systems should be designed so that each component can be easily tested in isolation.
  • When in doubt, or when the choice is arbitrary, follow the common standard practice or idiom.
  • Avoid duplication (DRY).
  • Declarative trumps imperative.
  • Use descriptive, explanatory, consistent and regular names.
  • Reflect the user mental model, not the implementation model.
  • Reserve the best shortcuts for commonly used features (Huffman coding).
  • Establish a rational error handling policy and follow it strictly. Document all errors in the user's dialect.
  • Interfaces matter. Once an interface becomes widely used, changing it becomes practically impossible (just about anything else can be fixed in a later release).
  • Design interfaces that are: consistent; easy to use correctly; hard to use incorrectly; easy to read, maintain and extend; clearly documented; appropriate to your audience.
  • Apply the principle of least astonishment.
  • Consider the design from the perspectives of: usability, simplicity, declarativeness, expressiveness, regularity, learnability, extensibility, customizability, testability, supportability, portability, efficiency, scalability, maintainability, interoperability, robustness, concurrency, error handling, security. Resolve any conflicts between perspectives based on requirements.
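
To make at least one of these rules tangible, here is a minimal sketch of "minimize the exposure of implementation details": a hypothetical Counter class (all names invented for this example) that exposes a small, stable interface while keeping its storage private, so the hash-based implementation can later be swapped out without touching any caller:

    package Counter;    # hypothetical example class
    use strict;
    use warnings;

    # Callers see only new(), add() and total(); the underlying storage
    # (currently a plain hash) is an implementation detail, free to change.
    sub new   { my ($class) = @_; return bless { _tally => {} }, $class }
    sub add   { my ($self, $key) = @_; $self->{_tally}{$key}++; return $self }
    sub total { my ($self, $key) = @_; return $self->{_tally}{$key} || 0 }

    package main;
    my $c = Counter->new;
    $c->add('apple')->add('apple')->add('pear');
    printf "apples: %d, pears: %d\n", $c->total('apple'), $c->total('pear');

The same shape -- a narrow published interface over private data -- is what makes it possible to refactor the inside of a module without rippling changes through the rest of the system.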

Agile Architecture

A project has many stakeholders, each making an investment (time, money, effort) into the project. Each will have different goals for the solution, and they may measure value differently. The Agile Architect's goal is to deliver a solution which best meets the needs and aspirations of all the stakeholders, recognising that this may sometimes mean a trade-off. The Agile Architect must work in a way that makes the best use of the various resources invested in the project.

The solution must be seen as part of a whole, which includes other systems and projects. It must be robust enough to be changed and extended over time. You must support further work, whether it is to change the solution or simply to operate it efficiently.

The cost of change is significant in any major real-world system, so the Agile Architect must balance planning for change against other goals. The Agile Architect must also seek to manage and minimise complexity, which helps to maximise stakeholder value. The aim is a solution which is neither simplistic and brittle, nor over-complicated by over-building for flexibility.

-- from Principles for the Agile Architect

Schwaber's Legacy Core/Infrastructure Catastrophe

In a 2006 Google Tech Talk, Ken Schwaber stated that a chronic legacy core or infrastructure problem existed at every single organisation he had helped to implement Scrum.

Unfortunately as I've been helping organisations implement Scrum, I've run into a very common problem with every organisation. What these organizations have is a problem called Core or Infrastructure software. This core functionality has three characteristics:

  1. Fragile; if I changed one thing in that core piece of functionality, it tended to break other things.
  2. No good test harnesses around it. So if you went in and broke something, you tended not to know about it until it was up on all the servers and then your customers would let you know about it. That's not good.
  3. Only a few engineers know how to work on it. There were only a few suckers left in the entire company who still know how to and were willing to work on the infrastructure. Everyone else had fled to newer stuff.

-- Ken Schwaber, Google tech talk on Scrum, Sep 5, 2006 (35:50)

Ken continued with a specific anecdote highlighting the strain this core architecture constraint puts on a Scrum cross-functional team:

I remember one company that has about 120 engineers, developers of all kinds of whom 10 are still able to work on the core functionality. The other 110 are working on new stuff. We brought all the engineers into the room. We said, okay, the product manager for the first area and the lead engineer for the first area come on up here. Now select the people you need to do this work over the next month, including, of course, the core engineers. And they did and we said, okay, now leave, get out of here and start working. ... when we got to the fifth product manager and the lead engineer and they said we can't do anything. There's no core engineers left. We looked around the room and there were 60 engineers left. They were thoroughly constrained by the core piece of functionality.

If you have enough money, you rebuild your core. If you don't have enough money and the competition is breathing down your neck you shift into another market or you sell your company. Venture capitalists are into this now, buying dead companies. Design-dead software.

-- Ken Schwaber, Google tech talk on Scrum, Sep 5, 2006 (38:40)

This anecdote rings true with my experience; I've worked at many companies where the original authors of critical core software had long since left the company, few folks understood it, and no one dared touch it.

How Does it Happen?

Say you've got a velocity of 20. But product management want more stuff. And so, that's going to require, because that's more stuff, that's going to require that you have a velocity of 22 to do it. Well, gees, how are you going to get a velocity of 22? Are you going to be smarter when you wake up? Are you going to put in new engineering tools? No, none of that will work. So, what you'll actually do to get the increased velocity is of course cut quality, because if you remove quality, you can do more crap, right?

Now if you do this and that release goes out on time, some grumbles from the customers you know, whatever. But customers always grumble and the product manager is promoted, you know, drives a new BMW, parks in one of the fancy spots.

The next release that you start because you're working from a slightly worse code base with clever tricks in it, unrefactored code, no tests -- the best velocity you can really do is 18. Well, that's no good and no one's going to get promoted for that. So the product management team comes down and says, guys you just gotta do it. So you cut quality again but this time when you cut quality, the best you can do is 20 because you're starting from a worse code base. Now it takes about five years, release by release, for you right here to build your own design-dead product.

It's got two aspects to it. One is, when we are told to do more, we cut quality without telling a soul. It's just second nature. I have trained over 5500 people and put them through an exercise like this, but very subtle, very sneaky, where push comes to shove and they have a choice of saying, well, we can't do it, or saying we'll do it and cutting quality. Only 120 of the 5500 said no. All of the others just cut quality automatically. It's in our bones. The other part of this habit is product management, them believing in magic, that all they have to do is tell us to do something and, this is the illusion we support, by cutting quality, it'll get done.

And these are what's called good short-term tactics. These are horrible long-term strategies because it's a back-your-company-into-a-corner strategy.

-- Ken Schwaber, Google tech talk on Scrum, Sep 5, 2006 (41:50)

While Ken's plausible explanation of how this happens spookily reminds me of some of my commercial experiences, there are doubtless other ways it can happen. After all, to the best of my knowledge, no Perl 5 pumpking has ever been offered a BMW as an inducement to get a release out early.

A Mythical Perl-based Commercial Company

For fun, and to better understand why this sort of thing happens, let's consider what might transpire if Perl 5 or Perl 6 formed the crucial core software of a commercial closed-source company writing customer-facing software in cross-functional Scrum teams. In this scenario, Perl is an internal tool; the customer doesn't know or care about it, they just want a system that satisfies their needs.

I speculate that most developers and product managers in such a mythical Perl 5-based company would go for the BMW by working on new pure Perl 5 products because their velocity would likely be an order of magnitude higher when writing new Perl 5 components than when changing the underlying Perl 5 C core. Not only that, but hiring expert C programmers with sufficient skill, intelligence, and tenacity to change the Perl core would likely prove to be a significant constraint. So I predict that in such a mythical commercial company, development of the Perl 5 C core would slow down, with only critical bug fixes applied.

Despite Ken Schwaber's dire predictions of "design-dead companies" rapidly going out of business, I see this company as commercially viable for quite a few years (though not indefinitely) because the Perl 5 C core is stable and proven, with very few critical bugs, and, most importantly, is well decoupled. That is, you can write new Perl 5 code without needing to understand anything of the Perl 5 implementation. And teams writing in Perl 5 are likely to be very competitive in the commercial marketplace when competing against companies writing in C, for instance. Such an approach, however, is not sustainable in the long term, and sooner or later you'll need to untangle your legacy code or rewrite it.

Because Perl 6 is less mature and still evolving, the velocity of teams using it to deliver customer-focused software is likely to be much lower than for Perl 5 teams. That is, the team may be happily and productively writing new Perl 6 code ... then hit an impediment that requires them to switch context and add a new feature or make a bug fix to the Perl 6 core. Team context switches like this are very harmful to team velocity in my experience. This Perl 6 scenario is much closer to the situation in most commercial organizations today, because their core software is typically incomplete and still evolving. Indeed, agile proponents encourage you to avoid the waste of writing customer software that is never used, with slogans like "Do the simplest thing that can possibly work" and YAGNI.

In summary then, to circumvent Spolsky's "Netscape Rewrite Disaster" and sidestep Schwaber's "Legacy Core/Infrastructure Catastrophe", companies must continuously refactor to keep their core software in a clean and maintainable state. Such unrelenting and diligent work requires formidable discipline, however, and few companies have the long-term perspective and the will to do it.

Other Articles in This Series

References

Agile Architecture References

Legacy Code References

Testing Concurrent Software References

Updated 23-jan-2011: Removed reference to Windows NT rewrite plus minor wording improvements.

Replies are listed 'Best First'.
Re: Nobody Expects the Agile Imposition (Part VI): Architecture
by tilly (Archbishop) on Jan 23, 2011 at 10:37 UTC
    Corrections and useful facts for you.

    Windows NT was not a rewrite of Windows 95. In fact it was released in 1993, well before Windows 95 got released.

    It is unfair to say that Python 3 has adoption problems. In fact the rate of adoption is slightly ahead of what was initially expected.

    There are a lot more successful rewrites you can add to the list. For instance Perl 5 is a rewrite of Perl 4, vim is a rewrite of vi, and less is a rewrite of more.

      Re Windows 95, thanks for the correction, I'll update the root node. It seems that Windows NT was a rewrite of Windows 3 and that Windows 95 was derived from the Windows 3 code base.

      Python 3: I only claimed it was meeting "substantial resistance". Maybe that's unfair, depending on your interpretation of "substantial", but it's certainly meeting some resistance based on random web chatter on the subject. Well, I'm a Python user and I'm resisting it. ;-) My personal opinion is that breaking backward compatibility was unwarranted for a release with relatively modest improvements. Many businesses with large investments in Python 2.x code will resist Python 3 indefinitely because upgrading will prove too risky and/or too expensive.

      Update (2017): Even a company as wealthy as Google, according to this Hacker News item, is still heavily using Python 2. This is hardly surprising. Where is the ROI on spending millions of dollars rewriting millions of lines of already working code, without adding any customer value, while being almost guaranteed to suffer numerous breakages to critical business systems? You also pay an opportunity cost. Curiously, I see some of Google's legacy Python 2 systems are being rewritten in Go, perhaps because at least there is some perceived customer value (faster performance) in a Go rewrite. For smaller, less wealthy companies, rewriting millions of lines of working Python 2 code in Python 3 could well put them out of business. Of course, if you don't have much Python 2 code, switching to Python 3 is a no-brainer.

      Update (2020): I see Jython is still Python 2 and IronPython3 is unfinished. At least Perl doesn't have to worry about updating Java and CLR versions of the language. :)

      Update (2023): Despite (or perhaps because of ;-) abandoning its many Python 2 users, Python won the language adoption war; at least, it's now No. 1 in the TIOBE index. This topic is analysed in more detail at Organizational Culture (Part VI): Sociology.

      See Also

        It seems that Windows NT was a rewrite of Windows 3

        Absolutely not. Windows NT was an entirely separate, new development of 32-bit code, i.e. the Win32 API.

        Win32s was a thunked win32 emulation retrofitted to the 16-bit Windows 3.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Nobody Expects the Agile Imposition (Part VI): Architecture
by ELISHEVA (Prior) on Jan 25, 2011 at 11:47 UTC

    I've been hesitant to respond because I know you've put a lot of work and thought into this essay, but there is something here that just isn't ringing true for me. I'm struggling to put it into words, but I think it might be this.

    The entire essay relies very heavily on three assumptions:

    • There is a clearly recognizable distinction between refactoring and rewriting that is so obvious that it doesn't need to be explained. Rewriting is bad and loses information. Refactoring is good, but only if done continually and with the help of tests.
    • Satisfying the customer is good and over-design is bad. The difference between "design what the customer needs and no more" and over-design is clear and also doesn't need explaining.
    • Human beings, and particularly crack programmers, are largely incentivized by external rewards - customer satisfaction, money, BMW, etc.

      ELISHEVA:

      Rewriting vs. Refactoring

      If you start from a blank source file, I'd consider that rewriting, even if you start transplanting subroutines from the original back into the new file. I feel that taking a working system and transforming it without breaking it is refactoring, while building a new thing (even when borrowing heavily from the original) is rewriting. Of course, you can do both in the same project, as you may refactor some bits and rewrite other bits.

      The problem I generally see with rewriting is that there's quite a bit of knowledge that's encoded into the system that's not immediately obvious. Things like:
      • Order dependencies: Systems consuming the output expect a particular sequence of records or operations and the order isn't documented.
      • Workarounds for problems in other systems: Sometimes bugs become part of the standard interface, and the documentation isn't updated. So writing fresh from the spec causes you to rediscover those issues.
      Refactoring addresses these (somewhat) by making a change at a time. By writing tests for all the changes you'll hopefully capture some of this hidden knowledge in your tests. They won't necessarily be documented any better, but when you put it into production and other systems break, you should be able to trace back to the original test and document accordingly.

      Of course, refactoring has its own issues. If you can imagine a cleaner structure for your program and try to refactor towards it, you'll find that "you can't get there from here", or you have to first go to Timbuktu before you can get back home.

      With my experience (quite a bit), I find that the less you know about the domain or the application, the more you should lean toward refactoring. Similarly, the closer you are to being a domain expert, the more sense rewriting can make. The problem is that it's often difficult to objectively judge just how much knowledge you have about the domain. My approach is normally to find some dividing lines in the system where I can break it apart with the fewest changes possible. Then I can choose to refactor some chunks and rewrite others.

      Overdesign and customer needs

      You've pretty much hit the nail on the head: if I'm paraphrasing your arguments correctly, whether or not something is overengineering largely comes down to communication. If you're going off and doing anything beyond what you've discussed with the customer, you're overengineering things. If you think the system needs to do something specific, or that the architecture needs to go in a certain direction, you need to have a talk with the customer about anticipated future changes so you can shape things correctly. So if you discuss things with the customer and get buy-in, then you're doing your job correctly. If you have discussed things with the customer, and they're adamant about a particular direction, then you need to do what they want, or you're throwing their money away.

      (Gasp - they have a business need for a framework architecture?)

      I've heard it mentioned that "functionality is an asset, while code is a liability". Too often, programmers know they need some functionality, but build their own rather than buying it. (I, unfortunately, succumb to this temptation a bit too often, myself.) I've been trying to periodically take a break from design and/or coding so I can sit back and review the requirements so I can stop myself from going off into the rabbit holes.

      ...roboticus

      When your only tool is a hammer, all problems look like your thumb.

        To be quite frank, I believe that the essential difference between “refactoring” and “rewriting” is that one term is politically expedient, while the other term is not.   In both cases, you are doing the exact same thing in terms of the code:   you’re replacing the existing code with something altogether new, which renders the code inoperable (un-compileable) for an extended period of time and which must, in the end, be re-validated to verify that the new code works the same as the old.   The term, “refactoring,” is currently sexy and implies improvement ... “making an already-good thing better” ... whereas “rewriting” (wrongly...) implies previous failure.

        I do admit to the reality that, sometimes, in order to get approval and funding to do what badly needs to be done, you are obliged to resort to “necessary euphemisms.”

        Like it or not, computer software is very fragile (and therefore, costly) stuff, simply because it is riddled with functional and data dependencies.   It is, so to speak, “a house of cards,” which can only stand up to a very limited amount of “remodeling.”   I simply think that this is ... the essential and unavoidable nature of the beast.   It obligates us to try to do the best that we can, knowing that there are serious limits to that.   I submit that there is no silver-bullet language or technique to be had.   (He would rightly be a gadzillionaire who discovered it.)

        With regard to the point of “overdesign and customer needs,” there is the consideration that (a) the customer does not always know just where his business will take him; and (b) in any case, he is not a professional software builder and does not profess to be.   Sometimes you do need to “go beyond what you discussed with the customer,” because in your professional judgment as a software engineer, those additional elements (for example...) create the foundations for future characteristics of the system that are reasonably foreseeable as well as engineering-practical.   But, you need to be sure that you get all points about what the customer requests, and of what you have in turn decided to do, and every single subsequent change to the foregoing, in writing and signed-off and filed away for all eternity.

        Part of the (successful) argument for “frameworks” is that the cost of developing and maintaining them can be cost-amortized (or simply “unpaid-effort amortized”) among many projects that employ them ... thus allowing all of those projects to enjoy the full benefits without incurring the full costs.   The use of frameworks imposes a certain specific “world view” upon the project, however ... namely, the world-view of that particular framework’s designers, quirks and oddities and all.   Choose your project’s spouse very carefully.   The project’s entire future direction is necessarily molded around that of the framework, and in a very rigid way, except to the extent that the project’s actual implementation might be, by deliberate choice, architecturally divided into (framework-based) “client” and (non framework-based) “server” portions.   The cost/benefit analysis of using frameworks usually prevails in spite of this consideration, because so much of the constituent code in so many projects isn’t unique at all.

      Are all people driven by this velocity you talk about? By the opportunity to have a BMW in the right spot in the parking lot? Some are, but others aren't. If not, how does that affect the way you manage a project? Will focusing so much on velocity promote incentive or unintentionally kill it? I think it depends very much on the team. There is no formula and no way around tuning management practices to the individuals involved in the work. That is what makes good management hard work.
      *applause*. I couldn't agree more. I get intensely irritated when I see people being rewarded for writing unclean code at high velocity, being promoted to a new job, while I'm left to clean up their mess. If I can find the motivation (no one is offering me a BMW ;-), I'll discuss intrinsic versus extrinsic motivators and other management issues at length in a future installment of this series.

      I think 'scrum' has its place, but not as a model for all project management every where and anywhere.
      I agree. I hope this whole series of articles (especially the first one) has made that clear. If it's appropriate and the team wants to use it, knock yourself out with Scrum, but do not impose it on the team from outside. For the record, while I generally support agile and lean principles, I prefer to think for myself rather than blindly follow a "branded" methodology. If forced to choose a "branded" methodology, I'd choose Kanban.

      This is partly due to it being open source and partly due to its small footprint and stability. Code bases come and go, but the core architecture hasn't changed in literally decades.
      I guess that depends how you define "core architecture". :) I'd say there are at least three competing core architectures for implementing Unix: monolithic kernels, microkernels, and hybrid kernels. The infamous 1992 "Linux is obsolete" debate between old hand and respected operating system researcher Andrew Tanenbaum and young upstart Linus Torvalds makes interesting reading. I guess it shows that "theoretical (academic) superiority" does not necessarily translate to success in the marketplace.

      I think it's fair to say that monolithic kernels still dominate the Unix arena, though Tru64 UNIX is built on top of the Mach microkernel and Mac OS X is built on the XNU Mach/BSD-based hybrid kernel. Though I'd like to see the microkernel-based GNU Hurd succeed, sadly that now looks doubtful after more than twenty years of development -- yet another example of the perils associated with "writing new systems from scratch".

        It is worth noting that I personally know the original GNU architect, and he claims that he thought at the time that the easiest way forward was to build on top of BSD. But RMS chose Mach, in part because academia was very fond of microkernels at that point.

        He says that RMS has acknowledged that this decision was a mistake.

      IBM who saved itself by making the transition from type-writers to computers and business IT design in the late 80's and 90's

      Oh dear!

      Truth be told, a large part of NT is actually borrowing from *nix

      Oh dear, oh dear.

      If you are going to write authoritatively about history, it would really be better if you actually knew something about it.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        Actually, IBM restructured itself (and thereby I guess saved itself) by moving from a hardware+OS vendor (OS360) to a consulting company (also see its purchase of PWC), so that part isn't that far-fetched.

        The comment about NT and *nix - that was based on my memory of press reports at the time it was being developed. If I recall correctly, they originally wanted to do a greenfield system and then found that they had to borrow certain parts of the *nix architecture - what exactly I don't remember. I know many of the developers came from DEC, but the few things I'm finding on the web focus on the VMS influence. Business press reports on technology often get it wrong, so I might be remembering someone reporting the DEC hirings and just assuming it was DEC UNIX rather than VMS that ended up in NT.

      The difference between "design what the customer needs and no more" and over-design is clear and also doesn't need explaining.
      To clarify, I'm not an unthinking follower of "design what the customer needs and no more"; I feel that's a dangerous over-simplification. While writing code that is never used is certainly waste, and one has to beware of over-engineering, I don't view this as a black and white issue. I touched on this in the "Agile Design" section where I stated:
      Software design is an art requiring experience, talent, good taste, and deep domain, computer science and software usability knowledge. I feel there's a bit more to it than the four simple rules above.
      and then continued on to present my twenty tortuous rules. :)

      This post is proof-positive that, when you see More... at the bottom of what seems to be a very short posting, it pays to click on it.   Too bad I can only vote it up “once.”

Re: Nobody Expects the Agile Imposition (Part VI): Architecture
by JavaFan (Canon) on Jan 23, 2011 at 14:05 UTC
    A Mythical Perl-based Commercial Company

    For fun, and to better understand why this sort of thing happens, let's consider what might transpire if Perl 5 or Perl 6 formed the crucial core software of a commercial closed-source company writing customer-facing software in cross-functional Scrum teams.

    Such a company exists. It's the same company that hosts the official repository of the Perl sources, which donated a large amount of money to TPF a few years ago, and has been sponsoring YAPCs in both North America and Europe. The company is called Bookings.

      Out of curiosity, do Bookings staff members actively work on the Perl 5 C sources? Or do they just fund Perl development?

      To further clarify, the main point of my scenario was to ponder whether employees of such a mythical company, working in Scrum cross-functional teams with a goal of producing "customer value", would be eager to work on the Perl 5 C code or whether they would try to avoid doing that and instead focus on writing new Perl 5 systems to provide "better customer value at a higher velocity" (and so get to drive a new BMW and park in one of the fancy spots :-). In this mythical scenario, the customer does not know or care about Perl, they just want their systems delivered on time that satisfy their needs. Perl is mimicking the closed-source "infrastructure or core component" that caused so many headaches for Schwaber when implementing Scrum in cross-functional teams that are meant to be self-sufficient; that is, each team is meant to be capable of maintaining the Perl C sources.

        A cynical person could say that Booking.com is actively hindering Perl5 development by hiring so many (ex-)pumpkings and other people knowledgeable in Perl :-)

        As far as I'm aware, demerphq, BooK, Abigail and Rafaël (and likely many others whose names I just currently don't have in mind) work there.

Re: Nobody Expects the Agile Imposition (Part VI): Architecture
by sundialsvc4 (Abbot) on Jan 23, 2011 at 22:54 UTC

    I find myself pondering, more and more and more these days, precisely how much “really new” code there now needs to be in this world... and, how long we are going to continue to run that code on “our” machines.   I am beginning to suspect that we might well see core business functionality becoming a “software service” that is “hosted in the cloud,” such that the role of traditional software development – scrum or otherwise or what-have-you – just might change quite radically.   We might soon find ourselves being referred to as “assemblers,” tho’ not in the traditional computer sense at all.   Having built more-or-less the exact same things so many times, we ought to be getting very good at being able to buy them, instead.   We say that we build, applications.   But, is that definition changing before our eyes?   If the only thing that you need to do anything is a web browser . . .

      We say that we build, applications. But, is that definition changing before our eyes? If the only thing that you need to do anything is a web browser . . .

      Well, speaking as someone who for a living writes things that happen in a web browser, the definition sure seems about the same. I have a pre-written cross-platform UI toolkit with a weak but usually sufficient set of control primitives... but the hard part of most applications isn't assembling the UI anyway.

      I tend to believe that if the set of pre-written inter-pluggable primitives ever actually becomes rich enough to do all the stuff we "program" to achieve, all we'll have really done is just made a new programming language; it's not qualitatively different, and it's still going to require people with the same skill set as "paleoprogramming".

        I tend to believe that if the set of pre-written inter-pluggable primitives ever actually becomes rich enough to do all the stuff we "program" to achieve, all we'll have really done is just made a new programming language;

        I think that was one of the lessons of the 4GL (fourth-generation language) movement. Any toolkit sufficiently expressive to cover all of the client's business cases inevitably needed the full set of control flow statements. It quickly ceased being something just anyone could use and turned into something that required a programmer.

        What makes programming programming is not the units we work with - bits and bytes vs. complex objects. Rather it is the logic that binds them together into something useful. Once that logic begins to include conditionals, loops and the need to organize collections of data and functionality into discrete, loosely coupled sub-systems or objects, it requires, as you say, "the same skill set as 'paleoprogramming'".

Re: Nobody Expects the Agile Imposition (Part VI): Architecture
by Jenda (Abbot) on Jan 25, 2011 at 18:31 UTC
    But as we progress in our development and add shiny new things to the top of Perl’s tower, we’re making the bottom more unwieldy. One of these days, at least some part - if not all - of the tower is going to collapse.

    This is why we need Perl 6. We now know what our tower should look like, and we need to build it from that design right from the start.

    These are the last two paragraphs of "The Tower of Perl" article you link to, published August 9, 2001 ... it looks like the tower still has not collapsed, and the rewrite is taking ... quite a lot of time.

    Jenda
    Enoch was right!
    Enjoy the last years of Rome.

Re: Nobody Expects the Agile Imposition (Part VI): Architecture
by mr_mischief (Monsignor) on Feb 01, 2011 at 00:07 UTC

    Your thoughts about rewrites seem unorthodox to me. Let me clarify what I think of new projects, rewrites, and refactoring.

    Subversion, git, and Mercurial are not rewrites of CVS. They are new projects with similar goals. If they had the exact same feature sets they'd be called clones. There's no rewriting at all. There's just a fresh writing.

    A total rewrite is when you start writing the same project over from scratch. You throw your existing code base in the bin and plan to eventually ship a new version that started from different empty files. It probably won't pass the same external tests, and unit tests likely won't resemble the old ones. It likely uses an improved framework or a completely different one based on different concepts.

    A partial rewrite is when you rewrite some portion -- a module, a source file, a few functions -- over from scratch. Most of the external tests will still work so long as you don't change too many features at the same time. Unit tests for the rewritten portions will likely need to change unless you carefully stick to the same API and internal interfaces as before.

    Refactoring is when you clean up existing code and don't remove any code until you've got the replacement ready so it passes the same unit tests. You don't violate separation of concerns at all while refactoring. You just clean up what's there between change orders. The APIs between modules don't change. The internal interfaces stay the same except among very closely related functions or methods, and you end up with basically the same program. All external tests of the program pass without change. Most unit tests don't change, and the very few that do are just minor tweaks. The implementation is just clearer and maybe the execution path is shorter for the most common cases. Bugs probably don't even get fixed, although they are likely to be easier to notice by reasoning about the code. You're just cleaning the code, and you can generate a new ticket for the newly found bugs.

    A change order is executed from any feature requests or bug tickets. This is when functionality changes without a rewrite. Let's talk about bugs first. Generally just enough lines are changed to fix behavior for a bug, and the code around it is only cleaned up at this point if necessary to make the bug fix manageable. The test changes for the bug are to test the fixed behavior and to test for the buggy behavior as well to see if it returns. This often means boundary checking or a little fuzzing.

    The feature request might be to add, change, or remove a feature. The amount of code change can vary. The only tests that should need to change are those relating to the feature itself in the external tests. The unit tests should change for any new or removed APIs and internal interfaces adjusted for the feature.

    What I like to do with a project is to take all the bug-fix change orders and implement them. Then I validate against my tests. Then I refactor the whole program. Then I take the feature requests and apply those. Then I refactor the whole program again. Then, if necessary, I optimize. Then, if I can refactor the optimized code without killing the performance, I refactor again. Then the process starts over with new change orders. Does it always happen this way? Of course not. I'd like that, though.

    If I took the project over from another team, I'd try to refactor it all up front before making any changes in functionality. Then I'd start with the above process.

    This seems quite a bit different from the terminology you're using. I understand not throwing away an important code base. Saying that's what someone writing a new alternative to an unrelated project is doing doesn't seem quite accurate to me, though. Git and subversion are based on different ideas for accomplishing different but similar tasks compared to CVS for example. People wanting to rewrite CVS would be trying to end up with something that is CVS but with none of the original code. The other change tracking systems were written with something better than CVS in mind and didn't have any code already bugfixed and tested for their something better.

      Thank you for a well thought out response. While the newer word "refactoring" seems to be pretty well-defined, I feel that the older word "rewriting" is not. From Martin Fowler's original Refactoring book:

      Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. ... In essence when you refactor you are improving the design of the code after it has been written.
      From refactoring.com:
      Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. Its heart is a series of small behavior preserving transformations. Each transformation (called a 'refactoring') does little, but a sequence of transformations can produce a significant restructuring. Since each refactoring is small, it's less likely to go wrong. The system is also kept fully working after each small refactoring, reducing the chances that a system can get seriously broken during the restructuring.
      Hopefully, most folks will agree with those definitions. Now it gets much harder. For example, your opinion:
      Subversion, git, and Mercurial are not rewrites of CVS.
      does not agree with mine. My personal view is that Subversion was a "rewrite" of CVS, while the other two were not. I don't feel strongly though. I may well be "unorthodox", as you claim, yet I was pleasantly surprised to discover that many others, including Joel Spolsky, share my opinion. From Joel Spolsky:
      You may also want to look into Subversion, a ground-up rewrite of CVS with many advantages.
      From Open Source Software Development (wikipedia):
      A good example of a complete rewrite was the Subversion version control system, whose developers started from scratch: they believed the codebase of CVS (an older attempt at creating a version control system), was useless and needed to be completely scrapped.
      From Concurrent Versions System (c2.com)
      SubVersion is a project to rewrite CVS from scratch, in a more flexible and extendible way - and then to extend it.
      Finally, a probing (and relevant to this thread) question from Shlomi Fish interviews Ben Collins-Sussman:
      Subversion was a re-write from the grounds up done by many of the original CVS workers. Do you think it could have been faster to replace CVS (or CVSNT) component by component, thus yielding Subversion?

      To take another example, while I view Perl 6 as a "rewrite" of Perl 5, I suspect many monks would disagree with that view; a couple of them have already made that plain in this thread. Note however that Larry Wall at least seems to view Perl 6 as a "rewrite" of Perl:

      Perl 5 was my rewrite of Perl. I want Perl 6 to be the community's rewrite of Perl and of the community.
      Admittedly, that quote was taken from State of the Onion, TPC4, and the direction of Perl 6 has changed a bit since then. I'd be interested to know if Larry still views Perl 6 as a "rewrite" of Perl 5.

      Open Source Software Development (wikipedia) neatly summarizes the available rewrite/refactor options:

      Often open source developers feel that their code requires a revamp. This can be either because the code was written or maintained without proper refactoring (as is often the case if the code was inherited from a previous developer), or because a proposed enhancement or extension of it cannot be cleanly implemented with the existing codebase. A final reason for wishing to revamp the code is that the code "smells bad" (to quote Martin Fowler's Refactoring book) and does not meet the developer's standards. There are several kinds of revamps:
      1. Refactoring implies that the code is moved from one place to another, methods, functions or classes are extracted, duplicate code is eliminated and so forth - all while maintaining an integrity of the code. Such refactoring can be done in small amounts (so-called "continuous refactoring") to justify a certain change, or one can decide on large amounts of refactoring to an existing code that last for several days or weeks.
      2. "Partial rewrites" involve rewriting a certain part of the code from scratch, while keeping the rest of the code. Such partial rewrites have been common in the Linux kernel development, where several subsystems were rewritten or re-implemented from scratch, while keeping the rest of the code intact.
      3. Complete rewrites involve starting the project from scratch, while possibly still making use of some old code. A good example of a complete rewrite was the Subversion version control system, whose developers started from scratch: they believed the codebase of CVS (an older attempt at creating a version control system), was useless and needed to be completely scrapped. Another good example of such a rewrite was the Apache web server, which was almost completely re-written between version 1.3.x and version 2.0.x.

      Apart from arguing over semantics, the interesting strategic decision we face is whether to extend an existing legacy code base or throw it away and start from scratch. There is no one "right" answer to that question: it depends on the project, the team, the quality of the existing code base, and many other factors. Perhaps the most important thing is striving to prevent legacy code degenerating into a tangled mess in the first place.

      Update (2023):

        To take another example, while I view Perl 6 as a "rewrite" of Perl 5, I suspect many monks would disagree with that view; a couple of them have already made that plain in this thread. Note however that Larry Wall at least seems to view Perl 6 as a "rewrite" of Perl:

        Perl 5 was my rewrite of Perl. I want Perl 6 to be the community's rewrite of Perl and of the community.

        Sorry to be pedantic--it's not usually my thing--but I think you are subtly reinterpreting Mr Wall's words in support of your argument.

        The man himself will set me straight if it is of interest to him, but I think that "Perl 6 to be the ... rewrite of Perl" is considerably different from "Perl 6 as a "rewrite" of Perl 5".

        'Perl', unadorned by a version number, is neither an implementation that can be re-written, nor a design evolution that can be reimplemented. It is only--and precisely, completely--a concept; an ethos; an idea.

        As such, Perl 5 wasn't a rewrite of the Perl 4 implementation; but rather a rewrite of the Perl design that was then implemented as Perl 5. Ditto for Perl 6 relative to Perl 5.

        One definition (but a good one) of 'rewrite' in the context of software is:

        A rewrite in computer programming is the act or result of re-implementing a large portion of existing functionality without re-use of its source code. When the rewrite is not using existing code at all, it is common to speak of a rewrite from scratch. ...

        On the basis of both that definition and my limited experience of both, calling the feature-rich Subversion a rewrite of CVS is like calling the Ford Focus a rewrite of the Ford Model T. They serve a similar niche and target audience, but the way they go about it is utterly different.

        The goal of re-implementing the same basic functionality is present; but the provision of so much additional functionality makes the term 'rewrite' an inadequate description of the reality.


        My intent was not to start an argument over semantics nor over anything else. I merely intended to clarify where I think some imprecision and unnecessary disagreement has entered the thread. If we keep using words we define differently as a basis, then we at least need to know how those words are being used by each party. Otherwise we'll talk past one another and nobody really knows where we would agree and disagree no matter how civil or friendly the discussion.

        I also think it helps to remember that intentions toward a project can change over time. What one thinks will be a straightforward rewrite at the beginning can change in focus and gain features before the rewrite is done (or even really started). The new design can be a totally different sort of beast from the old, but since it's still in the same lineage, the distinction gets blurred. In fact, I suspect the svn folks intended to rewrite CVS but, looking back, would only loosely use that term for what they finally did. I think Larry would say Perl 5 is a rewrite of Perl 4 from the point of view of both the language and the perl tool; I would probably say that, anyway. I think he originally intended Perl 6 to be a rewrite of some sort, but the language is the only thing being rewritten, IMO. I think Rakudo and Parrot are definitely not rewrites of perl 5.6 or 5.8, although the language they implement is still in the Perl family. How Larry actually views things is, of course, for Larry to say, no matter what I think he might say.

Re: Nobody Expects the Agile Imposition (Part VI): Architecture
by sundialsvc4 (Abbot) on Feb 15, 2011 at 22:49 UTC

    Over the course of a great many years, I’ve noticed that the software industry seems to be locked in an ersatz science-fiction movie.   The scene opens in a graveyard, beside an open grave, with shovels and picks all around ... and the grave is surrounded by cribs, and in each crib there is a happy young baby.   Everyone in the scene wants desperately to grab a shovel and fill-in the grave, but no one can do so, because the life-force that is still sustaining everything is within the erstwhile “corpse” that has been consigned to the grave.   (In fact, a baleful-looking old man is standing upright in that grave, and he ... the oh-by-the-way source of all that business sustaining life-force ... is far busier than all the rest of them combined, with nary a shovelful of dirt upon his head.)

    As the science-fiction movie progresses, an amazing thing happens.   The bouncing, happy babies almost instantly turn into old men, and graves are promptly dug for them in which they calmly stand, busily doing the jobs for which they were intended, even as a brand new set of bouncing babies appear.   (The engineers promptly turn their attention to the new set of babies, as a new crop of clever young publishers write and sell a new crop of books.)

    Perhaps... we should give more serious consideration to the fact that none of the “crappy, old” legacy code in any of our shops ever started out that way, and also to the fact that every bit of the “new and improved,” “Agile™, Scrum™, insert-silver-bullet-buzzword-here™” code that we are now writing will soon turn out that way.

    Let me say it again.   The new-and-improved systems that we are writing today will become the legacy-code of next week, regardless of what we do.

    (Sux to be the bearer of bad news, but this old phart doth stand his ground, and who among ye will stand with me?   Who shall stand to show me wrong?)

    If our methods (“It will be so much better this time!   I promise!!”) really were new and improved -- actually better -- then “the legacy code problem” would cease to exist altogether, would it not? ...

    Perhaps... we should stop trying so hard to bury Caesar, and spend a lot more time figuring out how to give the old boy a facelift and a shave.   The “convoluted, incomprehensible” logic of a legacy system consists of two -- no, three -- no, four (unfortunately, inseparable) parts (a small sketch of how the first two get welded together follows the list):

    1. The code that is specific to the exact representations of code-and-data that were chosen at some particular time (the “Y2K Problem™” being the most-obvious example of this) ... and ...
    2. The (representation independent) business logic that is buried in all of that rigid concrete ... but which actually represents the business, as it actually is.
    3. It is effing huge, consisting of not one but perhaps hundreds or thousands of individual parts.   All of them are moving ominously.
    4. I t     W o r k s .
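
    Here, by way of illustration, is a purely hypothetical Perl sketch of how parts 1 and 2 get welded together (the routine, the record layout, and the 30-day payment window are all invented for this sketch, not taken from any real system): the two-digit year is a representation decision of its era, the payment window is the business rule, and one routine hard-wires both.

        use strict;
        use warnings;
        use Time::Local qw(timelocal);

        # $rec->{yy} holds a two-digit year ('97', '02', ...) -- a pure
        # representation choice (part 1).  The 30-day payment window is
        # the business rule (part 2).  Both are fused into one routine,
        # which is exactly what makes the facelift so hard.
        sub invoice_is_overdue {
            my ($rec, $now) = @_;
            my $year = $rec->{yy} < 70 ? 2000 + $rec->{yy}     # representation
                                       : 1900 + $rec->{yy};    # detail (Y2K-ish)
            my $due  = timelocal(0, 0, 0, $rec->{day}, $rec->{month} - 1, $year)
                     + 30 * 24 * 60 * 60;                      # business rule
            return $now > $due;
        }

        # An invoice dated 15 March 1997, checked against the current time:
        my $rec = { yy => 97, month => 3, day => 15 };
        print "overdue\n" if invoice_is_overdue($rec, time);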

    The “so what?” take-away that I would offer is that ... every piece of computer software that we have ever designed, and that we ever will design, is a similarly “concrete” structure.   Oh, we can cast it in many different languages and dress it all up in many ways (calling every single one of those ways “a silver bullet™” if it suits our marketing purposes), but our essential modus operandi, from the point-of-view of the physical hardware, really has not evolved at all.   The “new and improved™” working methods that we now use, are (I would suggest...) really not materially different from the “new and improved™” working methods that our parents used.   Or (gasp!   I am dating myself here!) ... we, ourselves used ... to create the “crufty, old, legacy” systems that we now decry.

    As an example of what I am saying ... consider the “New and Improved™ System” that your “New and Improved™ Agile™ Scrum™ New-Buzzword™ team just developed.   The realities of Business are upon ye, even as one-third of your development team just went to greener pastures while another two-thirds of your team just had their visas revoked due to some unforeseen technicality.   Your company just swallowed or got swallowed-up by another company in what was a truly excellent business deal, and their 1,650,000 paying customers must be none the wiser when the deal is consummated eight weeks hence.   Can your “methodology” cope with that?   I doubt it.   But is it pragmatic business reality?   Yes.

    Perhaps we should all be focusing our collective attention on things like ... change control, or the merging of development teams, or the assimilation of totally-unrelated code bases that (while well-designed by their own teams at their own time) are now in “a Brady Bunch moment.”   Perhaps we are staring too earnestly at the Eastern sky, waiting for a savior who will never come.   (I cordially request “religious indulgence,” and promise that I mean no “religious slight” or disrespect for the sake of metaphor.)   Maybe we are earnestly pursuing the wrong solution to the wrong problem, just as our predecessors did.   Maybe we should take full ownership of “legacy code.”   Both our predecessors’, and, soon enough, our own.

    /me straps on his hopefully flame-proof bunny suit and waits for the circus to begin...

      Please rewrite that without using italics, underline, bold or  <font color="lightblue">

        There.   Is that better?

        “HTML.   Professional drivers on a closed course.   Do Not Attempt.”
