Re^2: Swallowing an elephant in 10 easy steps by ELISHEVA (Prior)
on Aug 17, 2009 at 07:05 UTC
The amount of time spent talking with and listening to people varies with the project and the availability of people. I tend to have two very different kinds of conversations: (a) requirement-oriented conversations focused on goals and priorities, and (b) deeply technical conversations focused on understanding the system and finding the least disruptive way to meet those goals.
Getting a sense of goals and priorities before one starts such a large analysis project is an absolute must! Even if I and my team are the end users, it is still a good idea to explicitly clarify our goals and expectations to one another. When we share a work environment and culture it is all too easy to make assumptions.
The goal-setting conversations can strongly affect the choices made during the formal technical analysis. The process described above gives one an overview. I generally know what data there is and how it flows, but I have only studied a few of the subsystems in depth. Having a sense of goals is especially helpful in step 10, where I choose a subsystem (and sometimes more than one) to focus on. In earlier phases, knowing goals helps me keep an eye out for (a) hidden resources in both code and data that could support goals but aren't being used, (b) missing resources that must be added, and (c) possible bugs or extension difficulties in existing features. I usually try to keep notes as I go.
Requirements conversations tend to happen primarily before even beginning the process and then again at the end, when "what next" decisions need to be made. They can also happen midway, if the analysis process seems to be taking more time than originally expected. I always like to give people a chance to reevaluate priorities. I'm more likely to get paid for my time if the client is the one calculating the trade-off between goals, time, and money.
Deep technical conversations tend to happen during the process described above, but in some cases there are no people to have them with. If I'm reviewing an open source package for its range of features and customization support, then documentation, code, and database schemas are all I have to go on. If that material is poor or sketchy, the process takes longer because I have to reconstruct more.
Even when there are people, I need to keep two important constraints in mind. First, many people, including technical people, are much better at doing their job than explaining it. Second, even if they are good at explaining, they have limited time to assist in the more detailed phases of analysis. The more I can get on my own, the happier they are.
During step 1, the initial documentation phase, I look for "oral documentation" and try to assess the level of team knowledge about their own system. Knowledge of one's own system can deteriorate over time. In a complex system, most people are only asked to work on a small part of it. Team culture may not encourage them to look outside of their own box. It may be easier to hard code data in a function than go through the bureaucracy involved in adding a new field to a database table. This eventually results in patches and enhancements that don't quite fit with the original design and even the original authors of the system may no longer fully understand how the system works. But when the oral documentation is good, that is wonderful and it can indeed save hours of time.
I also like to understand how the development and maintenance teams function. Is there an overall philosophy? Has it changed over time? Do they have a formal structure to enforce that philosophy? Or do they rely on organic methods? Does it work? Why or why not? Do they read their own documentation? If not, why? When they do, does it help? What would make it more helpful? This kind of information gives me context when I see a design with two competing architectures for the same aspect (e.g. two different permissions management infrastructures).
At the end of each step, I ask the existing team to review my work. This helps catch misunderstandings, especially in the earlier steps where I haven't yet connected up the code and data. I'm constantly trying to control the level of detail and sometimes I make incorrect snap judgments based on the way things are named. If that can be caught early on by an experienced team member, so much the better. As you have said, understanding data flow is a crucial tool in correcting mistakes. How data is used is just as important as what it is called and how it is arranged (though sometimes these are in conflict). In the early stages, data flow knowledge isn't there yet, though sometimes I will try a quick grep through the code to see if it sheds any light on an oddly named field.
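That quick grep is nothing fancy. A minimal sketch of the idea, with an entirely hypothetical field name (`acct_flg2`) and a throwaway fixture file so the command is runnable as-is; on a real codebase you would point it at the project's `lib/` and `bin/` directories instead:

```shell
# Set up a tiny throwaway Perl source tree to search (fixture only --
# in real use, skip this and grep the actual project directories).
mkdir -p /tmp/elephant-demo/lib
cat > /tmp/elephant-demo/lib/Billing.pm <<'EOF'
sub close_account {
    my ($row) = @_;
    $row->{acct_flg2} = 1;   # what does this flag actually mean?
}
EOF

# Search every Perl module for uses of the oddly named field:
# -r recurse, -n show line numbers, -w match the whole word only,
# --include restricts the search to *.pm files.
grep -rnw --include='*.pm' 'acct_flg2' /tmp/elephant-demo/lib
```

Seeing where a field is read versus written, and in which subsystems, often says more about its meaning than its name ever will.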
While going through each step, I keep a list of unresolved questions. I pick the most important questions and ask the existing team how they might answer them. People have a much easier time talking about how a system works when they are asked about a specific part of it. Even design philosophy discussions tend to be more informative and exact if I can point to two specific examples that challenge the assumption of a single philosophy.