When I first heard the term "best practice" at London Business School in 1999, I fell in love with it. It seemed to epitomize everything I loved about good management: a desire to cull the best; thoughtful consideration of what works and what does not; a business's willingness to reinvent itself on the basis of an ever-growing understanding.
The term "best practice" grew out of and was popularized via the corporate learning literature of the mid and late ninties. As organizations began applying techniques like TQM (total quality management) and Six Sigma throughout the 1980's and early 1990's it became increasingly clear that some work groups outperformed others - even in the same organizations. These sub-divisions were said to have "best practices" and management experts began discussing how these well performing divisions could teach their methods for success to other parts of their company.
In the last 10 years the term "best practice" has lost much of its association with the process of learning. Instead, "best practice" has become a buzzword increasingly associated with a laundry list of rules and procedures. Perhaps it is our innate need to measure ourselves against a standard. Or perhaps it is the word "best": there can only be one best, even if it takes a process to find it. Why reinvent the wheel once the best has been found?
Nowhere is this clearer than in the way many organizations and some monks seem to use Damian Conway's book on Perl best practices. The "best practice" in Conway's book refers (or should refer) to the process he went through while developing his own coding practice. He wrote the book in part because, over the years, his coding style had come to resemble an archeological dig through his own coding history (see Brian d foy's interview with Damian Conway). However, few people talk about his process, whereas many preach (or complain about) his rule list.
It may be human nature to turn best practices into best rules, but it isn't good management:
1. Best practice by the rulebook oversimplifies the knowledge transfer process. Knowledge consists of several components: facts, recipes, thinking processes, information-gathering skills, and methods of evaluation. Rules are only effective in transferring the first two of these. The rest, however, are essential: without them, rules go out of date or get applied in counterproductive ways.
Facts, recipes, and coding standards are like wheels and brakes. But they do not drive the car. If the driver doesn't know the difference between the brake and the accelerator, the car will crash no matter how wonderful the wheels. Hard-to-communicate skills like information gathering and methods of evaluation are what drive the coding car, not the rules capturing layout and syntax.
If we focus only on rules, it is natural to assume that knowledge will be transferred simply by giving people enough motivation to follow rules. But this doesn't turn out to be the case.
In 1996, Gabriel Szulanski (The Wharton School) published a study in the Strategic Management Journal analyzing the impediments to knowledge transfer (see "Exploring internal stickiness: Impediments to the transfer of best practice within the firm"). He considered many factors that might get in the way. The study concluded that motivation was overshadowed by three other issues: "lack of absorptive capacity", "causal ambiguity", and "arduousness of the relationship".
If rules alone were enough none of these would matter.
"Lack of absorptive capacity" means that the necessary background knowledge to understand and value the rule is missing. Causal ambiguity means insufficient knowledge of how the rules relate to outcomes. Put in plain English: we aren't very good at applying rules without reasons or context.
However, explaining rules also means transferring judgment - something that cannot be captured purely in the rules themselves. And this brings us to the last barrier to knowledge transfer: "arduousness of the relationship". This awkward term refers to how well the knowledge provider and receiver work together. Do they have a mentoring relationship that can answer questions and provide background information? Or are they simply conduits for authority, insisting on the value of the rules without helping show how the knowledge can be adapted to exceptional situations?
2. An overemphasis on rules is a short-term investment in a long-term illusion. Software is full of symbols, and a great deal of code is boilerplate. It is easy to imagine that rules play a large role in software and that the right set of rules will have a large payback.
This might be true if writing software were merely a transformation process. But if it were, we would have developed software to automatically translate business processes, math books, and motion studies into software long ago. To be sure, some of the coding done today could probably be done by software, but not all of it. In every human endeavor there is a certain amount of boilerplate activity that passes for intellectual labour. This can be automated. But there is also a certain amount of genuine creativity and analysis. It takes a human being to know which is which. It takes a human being to do the latter.
If we want superior development teams, we need to spend our energy nurturing what only we humans can do. This is where our investment needs to sit. As for the things we can do with rules: if we focus our skills on the creative portions, we will figure out a way to write software that makes the boilerplate things go away. It is only a matter of time.
3. Rules that free us from thinking do not provide for change. Rules that free us from thinking are, by their very nature, static. In 1994 the management book "Built to Last" took the management world by storm and remained a knockout bestseller for several years thereafter. Ten years later, the magazine Fast Company published an article, "Was Built to Last Built to Last?", reviewing the impact of the book and the companies featured in it: by 2004, about half of the companies described would no longer qualify as built to last. When interviewed for the article, one of the book's authors, James C. Collins, argued that these companies had lost sight of what had made them great. He emphasized: "Theeee most important part of the book is chapter four! ... Preserve the core! And! Stimulate progress! To be built to last, you have to be built for change!"
4. If it isn't abnormal it can't produce abnormal returns. The things that can be reduced to judgment-free rules offer no competitive advantage because they can be easily reproduced. No matter how hard we try, we cannot build the best coding shop by following everybody else's rules. To excel, our practices need to be closely matched to our team's strengths and weaknesses.
Some of the more recent management literature has begun stressing the concept of "signature practices". Signature practices are practices unique to an organization. They capture its special ethos and talents and serve as a focal point around which the company (or coding team) can develop its competitive edge. (See, for example, "Beyond Best Practice", by Lynda Gratton and Sumantra Ghoshal, Sloan Management Review, April 15, 2005.)
I don't mean to knock rules. They have their place. But if we want an outstanding development team, our definition of best practice needs to expand beyond rules. We need to think about what makes our teams thrive. What helps them be at their most creative? What gets them into flow? When are they best at sharing knowledge with each other? At understanding each other's code? At incorporating new team members? At meeting customers' needs? And then we have to be prepared to be ruthless in getting rid of anything that gets in the way of that. Even if it is the rules themselves.
Best, beth
Update: Clarification of point #4, in response to mzedeler below.
Re: Best practices, revisited
by CountZero (Bishop) on Jul 05, 2009 at 19:04 UTC
"No battle plan survives contact with the enemy." (Helmuth Karl Bernhard Graf von Moltke) Why would it be different with "Best Practices"? The value of Damian's book lies in the fact that he explains why he considers some practices "best". You may agree or not agree, but it is not presented as a fixed list never to be changed nor to be doubted. If you find that a rule or a practice doesn't fit your style, why shouldn't you change it? But your argument for changing it should be at least as good as the argument that imposes it. For instance, I'm a terrible typist (as all who read my CB postings will readily agree), so I wouldn't think of writing even small scripts without "use strict;". On the other hand I rarely if ever "use warnings;".
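A minimal sketch of the kind of typo strict guards against (the variable names here are purely illustrative):

    use strict;

    # Under strict, a mistyped variable name is a compile-time error
    # instead of a silently created package global:
    my $total = 42;
    print "total is $totla\n";   # perl aborts: Global symbol "$totla" requires explicit package name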
CountZero "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James
For instance, I'm a terrible typist (as all who read my CB postings will readily agree), so I wouldn't think of writing even small scripts without "use strict;". On the other hand I rarely if ever "use warnings;".
I'd think you'd love the uninitialized warning, then. Maybe you never use constant hash keys?
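For what it's worth, a minimal sketch of what that warning looks like on a mistyped hash key (the hash and key names are made up for illustration):

    use strict;
    use warnings;

    # With warnings on, reading a mistyped key surfaces as an
    # "uninitialized value" warning at the offending line, instead of
    # quietly interpolating an empty string.
    my %config = ( timeout => 30 );
    print "timeout: $config{timout}\n";   # warns: Use of uninitialized value in concatenation (.) or string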
Or maybe not. Undef is too often a valid value or an expected return value. A warning for read access to a nonexistent hash key might make sense, and would point to the line of the possible typo. Warning about the use of an undef, even if it's called an "uninitialized value", is mostly pointless: it forces you to write more code, more code means more bugs, and it often points to the wrong place in the code.
If you are afraid of mistyping the hash key, lock the hash via Hash::Util.
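Roughly what that looks like, as a sketch (the hash contents are only for illustration):

    use strict;
    use Hash::Util qw(lock_keys);

    my %config = ( timeout => 30, retries => 3 );

    # After lock_keys, the set of allowed keys is frozen; touching any
    # other key is a hard runtime error rather than a silent undef.
    lock_keys(%config);

    print $config{timeout}, "\n";   # fine
    print $config{timout},  "\n";   # dies: Attempt to access disallowed key 'timout' in a restricted hash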
Jenda
Enoch was right!
Enjoy the last years of Rome.
For some strange reason I seem not to have a problem with hash keys. They rarely get mistyped.
CountZero "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James
Re: Best practices, revisited
by ack (Deacon) on Jul 06, 2009 at 05:52 UTC
Every now and then, it seems that the Monastery just aligns with my daily struggles at the office and I get to see the wonderful depth of thought and experience in the Monastery. All of ELISHEVA's writing and all of the wonderful responses to her posting are such a joy to read and to think about. As the central "grey beard" Senior Systems Engineer at work, I have, over the last 12-14 months, been asked by senior management to put together our "Small Satellite Systems Engineering 'Best Practices'". In that endeavor I have experienced (far more times than I care to remember) all of what all of you, especially ELISHEVA, are talking about. My single biggest struggles are to (1) get everyone to understand that these are NOT rules, they are mentoring "lessons learned" from 3 decades of successfully building satellites, (2) make clear that they are not, and must never be, "static"...they have to evolve and change and shift as technology and new challenges emerge, and (3) show that they are there to make everyone think rather than just do "checklist engineering" (i.e., engineering by checklist/rules/cookbooks) as it is not-so-affectionately known at work. I am so in debt to all who have posted on this topic; you have crystallized and succinctly captured everything that has been swirling around in my head (and on paper) for over a year now. Thank you all, fellow Monks.
Re: Best practices, revisited
by mzedeler (Pilgrim) on Jul 05, 2009 at 22:12 UTC
4. If it isn't abnormal it can't produce abnormal returns.
While I agree on 1-3, I disagree on this one. You can't claim that any piece of great code is great only if some best practice rule was broken along the way.
Second, you're well on your way to writing a set of meta-best-practice rules - this particular one saying "Any good software must break at least one best practice rule". Stating (meta) rules kind of defeats the whole purpose, doesn't it? By the way, Douglas Hofstadter has some wonderful comments on such self-referencing rules.
I agree with the spirit behind the words - that independent thinking is the primary drive behind programming and since it is a creative process, there is very little chance that any one best practice rule will stand unbroken.
The intent was not to say that you have at least one rule that breaks the rules. I'm sorry if I wasn't clear. There are many different ways to be abnormal - having an idiosyncratic rule is only one possibility. Some might find an extra rule. Some might apply old rules in new creative ways. And some might throw out one or two rules and replace them with an organic process.
There is no one right balance between structure and creativity. A lot depends on the management talent of the organization and its inherent culture. Culture can change over time, but not by fiat and only rarely by radical breaks.
An organization that has spent its entire life running development by a strict waterfall model isn't likely to do well if it goes to a few seminars and suddenly adopts extreme programming (XP) in all of its departments. XP is as much a culture as it is a technique and cultures do not change all that quickly.
An organization with managers that are themselves good programmers and teachers will have different options than an organization where managers have been cultivated to apply rules and follow orders. Both can find ways to excel, but only if they really understand what they are good at and where they need to grow. Even then, they will only excel if they set out a realistic staircase towards their goals. What makes the resulting practices "signature" is that they are closely matched to the organization rather than some external standard of "best".
That being said, you might be right about the meta rules. It seems to be a fundamental problem of any discourse on how to improve things. Focusing on process and matching process to people is hard and uncertain work, and many people want shortcuts and certainty. The best practice discussion of the late 1990s wasn't supposed to be a fancy name for the methods-and-procedures manuals of the 1970s, but that is what it seems to be becoming. My read on Damian is also that he values the reasons behind the rules more than the rules themselves, but that isn't how many managers have treated his book. Perhaps you have some ideas about how to break this cycle?
Best, beth
That being said, you might be right about the meta rules. It seems to be a fundamental problem of any discourse on how to improve things.
It's the kind of paradox that makes my head spin.
My read on Damian is also that he values the reasons behind the rules more than the rules themselves, but that isn't how many managers have treated his book. Perhaps you have some ideas about how to break this cycle?
My conclusion is that the only way to improve overall quality is by empowering people to make their own informed choices, but then again - I think that approach only applies to the particular segment I belong to (professional services in software development and refactoring). I wouldn't claim that I have found any general solution.
That said, I have to mention that my experience with very authoritarian management is very limited. First off, my productivity takes a hit so serious that I can't defend the solutions I deliver (so I leave). Second, I live in a country where the work culture is very anti-authoritarian (Denmark).
The experience I have is from organizations on the brink of total chaos, where such basic things as having one common sccs (source code control system) are a very rare sight. Such places don't have any standardized procedures for any part of the development cycle, and there is nothing apart from the individual developer's own conscience and skills that keeps the code free of dirty hacks.
Seen from that perspective, rules are good. Especially if you get fired for breaking them repeatedly. I'm talking about rules like "Check code into sccs", "Don't write scripts with over 30 global variables" and "Don't write your own XML parser" (and yes - I can come up with examples of developers who have broken this rule repeatedly!).
Re: Best practices, revisited
by gwadej (Chaplain) on Jul 06, 2009 at 03:19 UTC
When I used to train developers, I spent quite a bit of time on our standard practices. The argument that I finally settled on was paraphrased from a mentor of mine.
There are an infinite number of "solutions" to any programming problem. Most of them are wrong. Of the remainder, there is still a huge (probably infinite) number of "right" answers. The purpose of the rules is to reduce the number of minor, useless variations in the solutions, so you can focus on the good solutions.
(Or, something to that effect. I don't think I've taught that class in about a decade.)
In most of the problems I have had to solve, any given set of best practices would not have altered the important part of the solution.
Re: Best practices, revisited
by Jenda (Abbot) on Jul 06, 2009 at 01:44 UTC
The biggest impediment to knowledge transfer is the open space office. In a noisy hangar in which you hear every single word anyone utters you learn to stay silent. As silent as you possibly can.
Jenda
Enoch was right!
Enjoy the last years of Rome.
Re: Best practices, revisited
by fullermd (Vicar) on Jul 06, 2009 at 08:23 UTC
If rules alone were enough none of these would matter. "Lack of absorptive capacity" means that the necessary background knowledge to understand and value the rule is missing. Causal ambiguity means insufficient knowledge of how the rules relate to outcomes.
The point of having and following rules is to be able to do things without thinking. It's really good to be able to do things without having to think about them. It's really bad to not be able to know when you have to switch gears and think about them, though.
Or, less long-windedly, "having rules for how to do things doesn't free you from having to know what you're doing."
Re: Best practices, revisited
by mr_mischief (Monsignor) on Jul 07, 2009 at 19:06 UTC
I've never managed more than a couple of people as programmers nor more than half a dozen on any sort of project except over a very short term. However, I've often wondered if the US medical training model would work well with programmers. I understand the basics of this method are similar in much of the world, too. That makes sense, because parts of it come from as long ago as Roman times. Most of those reading probably have an idea how this works, but allow me to recap it anyway for clarity and comparison.
First, you have students, who spend all their time in lectures and practical labs. Then students spend some time shadowing doctors. Then, you have interns. These are called such because it seems they are interred in the hospital -- they almost never leave. Interns do a rotation with every medical team in the hospital before they decide which specialty they want. Then, they become residents. This places a small group of resident physicians under a mentor or two within a particular medical field. A breakdown of orthopedic surgery, internal medicine, emergency medicine/trauma, anesthesia, infectious diseases, diagnostic medicine, critical care, reconstructive surgery, cardiology/vascular medicine and dermatology (possibly with others like psychiatry or obstetrics/gynecology) seems fairly typical. Only after working a short time with all of these does an intern get to become a resident in one of them. Then, after residency, some become attending physicians, some go elsewhere like small clinics, and some go on to a sub-specialty that may require even more school and training.
I know most programmers don't start with three to four years of programming school, an internship, and then a residency on top of their undergraduate degree. However, would it really hurt a company to have a programmer complete an assignment in every type of programming the company does before going into just one team? If I have a regular need for UI programming, mathematical programming, business rules programming, accounting programming, scripting/macro engine implementation, automated unit tests, automated scaffold tests of the whole system, template systems, build systems, and configuration handling, I think maybe I'd like every new programmer to have worked with each part of that. Maybe they only need to write a subroutine or refactor a handful of them in each area, but as a pair programmer with a strong mentor. Then, they can have a mentor on the team which they eventually join. When a problem crosses team lines, perhaps an open and frank consultation from a member of the other team or a temporary joint subteam could look at the problem rather than shooting memos back and forth.
The best practices from one medical field often get boiled down into bits comprehensible by doctors with the same basic skills but in a different field of medicine. That way, a doctor can go to a conference or a lecture and get a better idea of how to refer a case. Updating the fringes of one's knowledge with the capabilities the other teams offer makes one better able to work with that other team rather than duplicating effort (and often in a less effective way). There's no reason not to have workshops once in a while (internally or externally taught) that keep your programmers on top of what their peers can do, even if it doesn't teach everyone how to do it themselves. Lots of issues are going to cross team boundaries.
Even if you have a team per product or per project instead of teams for different types of tasks within your library of functions, there's room for cross-transfer of ideas. However, I have always had the notion that most of a programming shop's work should be creating and maintaining the libraries that will be plugged together to make applications. That way, you can have your teams of specialized types of programming, and you can have a small team that ties every application together based on its unique needs with some help from the maintainers of each subsystem.
Medicine is a much more poorly defined problem space than software design. We don't know all the specs for the human body yet. We don't know all the effects of the inputs from the environment, and they seem to be different for different people. Yet medicine works well enough to trust our lives to it. With specialized teams, training rotations, referrals, and consultations, much of the creep of features across boundaries is eliminated. Sometimes you'll find a general practitioner who is slower to consult with another doctor or to refer a patient, but the general rule is to stick to your specialty. Either you manage the patient's whole care, or you work especially deep in one area.
If we can trust such a clear separation of systems with one small overall management team for each patient, then perhaps it would work in the better-defined problem space of software design. Each system within an application could be maintained by deeply trained specialists, and the application-specific code that ties it all together could be managed by a small team for that application. As an added bonus, the tools maintained by the deep specialists can be reusable across applications in software (which is much more difficult across patients). The rotating training schedule, the mentoring of residents within their specialty, and the continuing cross-training of basic concepts among the different fields are keys to separation of concerns in modern medical practice.
As programmers, software designers, project managers, engineers, etc., we often continue to complain about a lack of modularity in an application. Often we hear others insisting this is because a top-down designer didn't make the breaks between modules in a well-considered way. It is my contention that the problem is having an application designed from scratch from the top down in the first place.
Very few new inventions have off-the-shelf parts, but then very few well-trodden product areas have products designed from scratch. Ford doesn't design a new engine for every car model. They design a handful of engine sizes, and pick which ones are options for a particular model. Every few years there's a new standard for graphics card interfaces to the motherboard in a PC, but you seldom see a workstation with completely custom graphics infrastructure designed into it outside especially customized markets. Having building blocks one can plug together off the shelf in an intelligent way is a big time-saver. It also allows those blocks to be improved across several applications at once. Having a team dedicated to maintaining and improving each kind of building block just makes sense. Having a smaller team in charge of plugging those together and putting some custom stuff on top makes sense, too. Any changes or improvements needed by one application might make sense to add to a particular subsystem, and all the better if it can be made an update across the reusable library.
Communication of some sort among the various teams is one of the biggest hurdles, then, because the delineation of the design is based on the lines along which you very carefully considered breaking your personnel into teams. If you're designing a stable of applications that have similar needs, then it'd be crazy to not at least consider dedicating a portion of your development staff to the common infrastructure. Once you've done that, make sure the application team, the subsystem teams, and the quality control people are all talking with one another rather than just to one another. Constant collaboration across the boundaries is, after all, how your doctor would do it.
Re: Best practices, revisited
by Win (Novice) on Jul 07, 2009 at 08:10 UTC
Beautifully written meditation! Thanks for putting that together.
I think that if one were to see software development as an engineering discipline (which it mostly is), then there is a tendency to want to pass on good practice through the laying out of rules. I do agree, however, that quite often rules are probably not the best way of passing on knowledge. Perhaps when passing on knowledge a more scientific (rather than engineering) approach is required. For example, I have elucidated a number of generalisations from software developments (not Perl based). I haven't presented these as rules. I first laid out a series of reasons for creating distinct modules and then a series of general aims and requirements to meet those aims. No rules in sight. I think I had in mind what you wrote. I was concerned that people should produce software that was easily evolvable, and to do that with rules felt too restrictive.
Re: Best practices, revisited
by Xiong (Hermit) on Jan 30, 2010 at 00:37 UTC
Thinking is, for most people, more difficult than following rules blindly. For those who find it easier to think, thinking while following rules is still more difficult. Few indeed are those who think deeply, and follow, break, and make rules with art, style, and discretion.
Many things can be taught, perhaps even thinking; but it's tough. I still often wonder whether I know how to think myself.
My personal meta-rules are:
- Study and learn rules.
- Try stuff and see what works.
- Ask, "What if?"
- Pretend my project and my role are much more important and public than they really are, then ask, "Does that expedient still look okay?"
- Try other stuff. Be wrong.
- Ask for help. Pay particular attention to help that annoys or upsets me.
- Sleep on it.
- Study more. Study more. Study more. Study more. Study more.
- Try more other stuff. Be right.
- Make a rule but don't expect anyone else to follow it. Be an example.
- Remember that some rules are stupid but so widely accepted that following them may be smart.
The anecdote goes that a stonemason building a medieval cathedral was questioned. "Why are you dressing that block so carefully? That side will be cemented into the wall and hidden." The mason said, "God sees that side, too."
I agree that the best part of PBP is that Conway explains why for each rule. I'm free to accept or reject each rule but I can't follow the book blindly, email him, and complain: "But you said Do This!"