PerlMonks
Estimating Task Complexity VS Loudmouth

by Eyck (Priest)
on Nov 23, 2004 at 14:57 UTC ( #409908=perlquestion )
Eyck has asked for the wisdom of the Perl Monks concerning the following question:

Hi,

how can one estimate the complexity of the task at hand?

There is a whole Computer Science way of doing that, O-notation etc., but it is largely theoretical and concerned with algorithms, not tasks.

A task like 'create user' is numerically trivial, but may be extremely time-consuming when you're talking to Active Directory.

How can I estimate if I'm at all able to accomplish something like 'Create Interface Module For Loudmouth Library'?

That would require using Inline::C or h2xs. Following on from that question, how do they compare in terms of required knowledge and free time?

UPDATE Loudmouth is a C Jabber library; it works where Net::XMPP doesn't. I can either work to fix Net::XMPP or expose Loudmouth to Perl. That is what triggered this train of thought on the subject.

To clear things up a bit - solving the Travelling Salesman Problem is trivial as a task, because there are easy-to-use libraries (like Graph::Kruskal) that solve it, and it's just a matter of massaging your data to feed them.

Re: Estimating Task Complexity VS Loudmouth
by PreferredUserName (Pilgrim) on Nov 23, 2004 at 15:30 UTC
    You should create a Perl interface module to the C library localtime() function. If you can't do that successfully, you should probably think of a smaller project.

    If you are successful, then create a Perl interface to a single function from the Loudmouth Library (whatever that is). Multiply the effort that it required by the number of functions you want to expose to Perl. That's about how much effort will be required.
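    For a concrete sense of the "wrap one C function first" exercise, here is a minimal sketch using Inline::C (assuming the Inline::C module from CPAN is installed; the function name is invented for illustration):

```perl
use strict;
use warnings;

# Inline::C compiles the C code below at first run and binds
# any recognizable C functions into the current Perl package.
use Inline C => <<'END_C';
#include <time.h>

/* Return the current year by calling the C library's localtime(). */
int c_year() {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    return t->tm_year + 1900;
}
END_C

print "C says the year is ", c_year(), "\n";
```

    If a single-function wrapper like this takes you an afternoon, that per-function cost is your multiplier for the whole library.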

Re: Estimating Task Complexity VS Loudmouth
by diotalevi (Canon) on Nov 23, 2004 at 15:45 UTC

    You are going to count the number of instances of actions. So: create three agents, two LEI activities, six user-visible views, two forms, etc. Decide how long it takes to do an average instance of each category, multiply by the quantity required, and then, if your estimate feels wrong, ask yourself where you weren't accounting for the extra time. These components have relationships, so perhaps the interactions should be counted as well.

    If you don't already know how long it takes to do something, try to make a guess based on other things you've done that are similar. Eventually your guesses will improve.

Re: Estimating Task Complexity VS Loudmouth
by BrowserUk (Pope) on Nov 23, 2004 at 15:57 UTC

    There is only one, even vaguely accurate, method that I am aware of: Experience.

    No other method I've seen comes even close to the guesstimate of an experienced, hands-on developer (read: programmer, coder, hacker), but don't be fooled. The experience has to be relevant.

    You can have many years, or decades, of experience of coding in one language, one OS, or one field of work, and yet be a total noob when it comes to any other particular language/OS/task.

    Longevity can help, especially if it includes a wide range of projects in a leader or manager role, but unless you've done something, or been a part of something, really quite similar, there are no guarantees that your experience will be of any help at all.

    If you're tackling a task for which you have no experience, and no one with relevant experience to reference, then the best you can do is:

    1. Break the task down into lots of very small items of work.
    2. Do your very best to apply realistic estimates (in hours) to each of those items.
    3. Total those estimates and add at least 50% and often 100% to account for:
      • integration;
      • unforeseen additional effort required;
      • spec changes;
      • delays through one source or another:
        • holidays.
        • short weeks.
        • sick leave.
        • company bingos.
        • irrelevant meetings.
    4. Multiply by the "Scotty factor".

      Start with 5 and let them beat you back to 2.
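    The arithmetic of those four steps can be sketched in a few lines of Perl (the per-item hours are invented for illustration):

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Step 1 and 2: small items of work with realistic estimates, in hours.
my @items = (4, 8, 2, 16, 6);

my $base   = sum(@items);    # step 3: total the estimates...
my $padded = $base * 1.5;    # ...and add at least 50% (use 2.0 for 100%)
my $scotty = 2;              # step 4: the negotiated "Scotty factor"
my $quoted = $padded * $scotty;

printf "base: %d  padded: %d  quoted: %d hours\n", $base, $padded, $quoted;
```

    Note how quickly a plausible-sounding 36-hour total becomes a three-figure quote; that gap is exactly the integration, spec-change, and interruption time the list above accounts for.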

    Never under-estimate, either to secure the work or to impress the boss. If you manage to meet your under-estimate through hard work, overtime (paid or not) or sheer brilliance, you've simply made a rod for your own back. It will become the norm and expected.

    If the project is new to you and you are going to be held accountable to your estimate: overestimate the time. Grossly. And then request a reasonable fraction of that time to produce a prototype.

    If you're successful, write the prototype as if it will become the real thing but:

    1. Write it top down.
    2. Dummy-up the lower levels as you go.
    3. Go back and fill in as many of them as you can before the deadline.
    4. Stick to the deadline! Doing as little at each level as allows you to move on to the next.

      This includes:

      • Skipping pretty UIs;
      • Omitting error recovery--just die if things are wrong.
      • Ignore edge cases--use carefully selected "middle-of-the-road" data for basic functionality testing (and mid-project demos--if you cannot avoid them).
      • Ignore integrity--replicate a clean dataset for basic testing (& demos).
      • Ignore security--but bear it in mind.

        Add isAuthorised() checks wherever seems appropriate, but dummy that to sub isAuthorised { return 1 } to get the prototype going.

      • Go back and expand as many dummied parts as you can in the time.

    When you demo the prototype (at the deadline):

    1. Start by demoing clean; middle of the road data.

      Do this quickly and trivially, reserving as much time as possible for the rest of the demo.

    2. Re-run the demo, retaining the dataset from the first run, but demonstrating the flaws: the lack of integrity, security, error recovery, etc.

      Place an estimate on the time required to fix each of these things. By this point you will have a much better picture of the overall problem, including what is easy and what is hard, and what you hadn't thought of in your first estimate.

      With luck and a good tailwind, your prototype will show enough promise to encourage management/customer to accept realistic estimates for its completion/conversion to production-grade code.

    Above all, be realistic.


    Examine what is said, not who speaks.
    "But you should never overestimate the ingenuity of the sceptics to come up with a counter-argument." -Myles Allen
    "Think for yourself!" - Abigail        "Time is a poor substitute for thought"--theorbtwo         "Efficiency is intelligent laziness." -David Dunham
    "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
      Heh - Brilliant! But I would add that, against your better judgement (and no matter how much you warn against it), the prototype *will* become the production code.

      Factor in some time to redesign the ugly bits once you more fully grok the problem domain.

        ... the prototype *will* become the production code.

        That is the intent. But only once the omissions of the prototype are demonstrated and acknowledged. That way the refactoring of the prototype has budget for, or an explicit waiver covering, those omissions.


Re: Estimating Task Complexity VS Loudmouth
by terra incognita (Pilgrim) on Nov 23, 2004 at 15:58 UTC
    There is no silver bullet that will tell you how much time a task will take, especially if you are new to the technology you are coding with. To do estimating well you need a constant feedback loop and accurate records of how long a task actually took. Each estimate you make needs to be fed into the next estimate you make. Even if you use an algorithm for this, you will still need to go through iterations before you will have an accurate estimate.

    Here is how I provide estimates on my work.

    1. If I have not worked with the technology before, I try to strip out everything I am familiar with and provide an estimate on that first.
    2. Then I do a quick assessment of how difficult the new stuff will be for me to learn. Part of that assessment is what resources I can call on if I get stuck, whether I can create something simple right away, and whether something already exists that I can use. I also try to create something using the new technology. I classify this as learning time.
    3. I then provide an estimate (guess) of how long it would take to code the task in the new technology if I had experience in it.
    4. Add some padding time, usually 15-20%.

    You will be off; the chances of getting it right the first time are remote. However, if you estimate a larger time than you need, it will be easier to manage the project and/or customer expectations. Studies make it clear that most projects will be underestimated, so go longer than you feel you will need. As well, if you are enhancing or maintaining code, your productivity will be lower than if you are writing new code. Typically, programmers doing enhancement work are 70% as effective, and maintenance programming is 25% as effective, as new coding, so you will need to add padding for that.
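    Those effectiveness figures translate into padding like this (the 40-hour base estimate is invented for illustration):

```perl
use strict;
use warnings;

# Hypothetical estimate for the work if it were all new code, in hours.
my $new_code_hours = 40;

# Rough effectiveness factors quoted above: enhancement work runs at
# about 70% of new-code productivity, maintenance at about 25%.
my %effectiveness = (
    new         => 1.00,
    enhancement => 0.70,
    maintenance => 0.25,
);

for my $kind (qw(new enhancement maintenance)) {
    my $hours = $new_code_hours / $effectiveness{$kind};
    printf "%-11s %.0f hours\n", $kind, $hours;
}
```

    So the same "40 hours" of work is closer to 57 hours as an enhancement and 160 hours as pure maintenance, which is why the padding matters.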

    No matter what number you come up with, keep your boss or client in the loop with the current estimate. They will be much more open to changes if you are open and honest with them. As well, keep really accurate records of the time the task actually took. That way, next time you will be that much closer on your estimate.

Re: Estimating Task Complexity VS Loudmouth
by exussum0 (Vicar) on Nov 23, 2004 at 16:17 UTC
    There is a whole Computer Science way of doing that, O-notation etc., but it is largely theoretical and concerned with algorithms, not tasks.
    It's not completely theoretical. It's based on finding the worst-case count of "stuff" done, and it doesn't have to relate to computer science; it just gets paired with computer science because that field is heavily concerned with processes. If you think about it, the average case, which is theta, is applied in real life all the time, like miles per gallon for a car. Big-Omega for the mpg of an H2, I think, is 4 mpg. I wouldn't want to imagine what big-O is for it.

    ----
    Then B.I. said, "Hov' remind yourself nobody built like you, you designed yourself"

Re: Estimating Task Complexity VS Loudmouth
by samtregar (Abbot) on Nov 23, 2004 at 16:49 UTC
    Let me answer the only question you asked that I can make sense of:

    using Inline::C or h2xs, following on that question, how do they compare in terms of required knowledge and free time?

    In my opinion Inline::C is easier to learn and requires less time overall to use. The main advantage of h2xs is a greater body of sample code in the form of modules on CPAN. Start with the Inline::C cookbook and hit the docs when something you need to do isn't there. Also, there are chapters on h2xs and Inline::C in my book.

    -sam

Re: Estimating Task Complexity VS Loudmouth
by rrwo (Friar) on Nov 23, 2004 at 18:19 UTC

    There is a whole Computer Science way of doing that, O-notation etc., but it is largely theoretical and concerned with algorithms, not tasks.

    What do you mean by "estimate complexity"? It's one thing to look at complexity in the abstract sense (e.g. it takes N^2 operations on N items).

    Task like 'create user' is numerically trivial, but may be extremely time consuming when you're talking to Active Directory.

    It sounds like you're asking about how to analyze how much time a task takes (or many tasks take).

    "Tasks" are not trivial. Tasks such as creating a user make use of algorithms (such as searching or sorting), so the traditional analysis applies. If anything, they are more complex, because you'll have to consider CPU and memory load on the machines running the task, as well as network usage. Factor in the time of day. Even the degree of disk usage and fragmentation may affect how well a certain task runs.

    How can I estimate if I'm at all able to accomplish something like 'Create Interface Module For Loudmouth Library'?

    that would require using Inline::C or h2xs, following on that question, how do they compare in terms of required knowledge and free time?

    I'm not sure what you mean by this. You want to create a module that does what?

      I'm talking about programmer/sysadmin - machine interface, not about CPU time or system load.

Re: Estimating Task Complexity VS Loudmouth
by tachyon (Chancellor) on Nov 24, 2004 at 03:04 UTC
Re: Estimating Task Complexity VS Loudmouth
by artist (Parson) on Nov 24, 2004 at 17:14 UTC
    The first thing to do is to set time aside for estimation whenever it is required. It serves an important purpose: it tells you that you are concentrating on the task of estimation. With practice, and the other advice given here, you can become a better estimator. Also, just as the ability to do a manual task depends upon the individual, so does setting an estimate for that task.
Re: Estimating Task Complexity VS Loudmouth
by toma (Vicar) on Nov 25, 2004 at 00:00 UTC
    Here is another xs tutorial.

    I have had good luck with Inline::C.

    Usually, if I need C, I code in C, or a mixture of C and C++.

    It should work perfectly the first time! - toma

Node Type: perlquestion [id://409908]
Approved by BrowserUk
Front-paged by dragonchild