http://www.perlmonks.org?node_id=227954


in reply to An informal introduction to O(N) notation

I think it's useful to look at a more formal way of saying what Big O means after reading this informal introduction. In the end, it's not that hard to understand.

You can express the complexity of an algorithm as a complete function that includes constants (startup cost) and constant factors (for example, if you iterate over a list thrice), based on the assumption that all primitive actions (+, -, /, *, assignment, ...) cost 1 "step" of time. Let's call this function f(n,m,...), where n,m,... are all the variable factors that influence the problem. How can we now find a function g(n,m,...) so that f(n,m,...) = O(g(n,m,...))? Simple:

f(n,m,...) = O(g(n,m,...)) means that there exists a constant "c", for which c * g(n,m,...) >= f(n,m,...) is true for large enough n,m,...

If you think about this relation, it's clear that you can forget all constants and constant factors in your program, because a large enough c will always absorb them. And because n^2 = O(n^3) is also true, you should always find the smallest g() for which f() = O(g()) holds, in order to have a representative function g().
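To make this concrete, here is a small sketch (in Python; the thread itself contains no code, so the language choice is mine): a routine that scans its input three times costs f(n) = 3n + 1 steps under the "every primitive action costs 1 step" model above, and a single constant c = 4 witnesses f(n) = O(n):

```python
def triple_scan(items):
    """Sum the input three times, counting primitive 'steps' as we go."""
    steps = 1              # initialising the total counts as one step
    total = 0
    for _ in range(3):     # iterate over the list thrice
        for x in items:
            total += x
            steps += 1     # each addition costs one step
    return total, steps

# Step count is f(n) = 3n + 1; take g(n) = n and c = 4:
f = lambda n: triple_scan(list(range(n)))[1]
assert f(10) == 3 * 10 + 1

# c * g(n) >= f(n) for all large enough n (here: all n >= 1), so f(n) = O(n).
assert all(4 * n >= f(n) for n in range(1, 200))
```

The constants 3 and 1 disappear into c, which is exactly why they can be dropped when writing O().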

f(n,m,...) = Ω(g(n,m,...)) means that there exists a constant "c", for which c * g(n,m,...) <= f(n,m,...) is true for large enough n,m,...

This is the lower bound. And finally, Θ marks a tight bound, pinning f() down from both sides:

f(n,m,...) = Θ(g(n,m,...)) means that f(n,m,...) = O( g(n,m,...) ) and f(n,m,...) = Ω( g(n,m,...) )
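A quick numeric illustration (my sketch, not from the post): f(n) = 2n + 3 is Θ(n), because it can be sandwiched between two constant multiples of g(n) = n:

```python
f = lambda n: 2 * n + 3

# Upper bound: 3 * n >= f(n) once n >= 3, so f(n) = O(n).
# Lower bound: 2 * n <= f(n) for all n,   so f(n) = Ω(n).
# Both together give f(n) = Θ(n).
assert all(2 * n <= f(n) <= 3 * n for n in range(3, 10_000))
```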
--
http://fruiture.de

Replies are listed 'Best First'.
Re: Re: An informal introduction to O(N) notation
by dakkar (Hermit) on Mar 25, 2003 at 14:26 UTC

    Nitpicking:

    'Ο' is used for an upper estimate (it's supposed to be a capital omicron, not an 'oh')

    'Ω' is used for a lower estimate (o-mega "is bigger" than o-micron)

    'Θ' is used for the actual growth.

    For example: mergesort is Ω(1), Ο(n²), Θ(n log n)

    This is important since it's quite common to know bounds on the complexity, but not the exact complexity. For example, matrix multiplication is Ω(n²) (you have to generate all the elements), and surely Ο(n³) (there is a naive algorithm with that complexity), but we don't know the exact function.
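    To make the Ω(n²)/Ο(n³) gap concrete, here is a sketch (in Python — an editorial choice, since the thread contains no code) of the naive cubic algorithm mentioned above: three nested loops over n × n inputs perform n³ multiplications.

```python
def matmul(a, b):
    """Naive matrix product: three nested loops, hence Θ(n^3) for n x n inputs."""
    n, m, p = len(a), len(b), len(b[0])
    result = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):            # inner loop: n multiplications per cell
                result[i][j] += a[i][k] * b[k][j]
    return result

assert matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```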

    -- 
            dakkar - Mobilis in mobile
    
      While we're nitpicking: O() defines an upper bound (typically, but not exclusively, on worst-case behaviour). Ω() defines a lower bound (again typically, but not exclusively, on best-case behaviour). An algorithm is said to run in Θ(f(n)) if it runs in O(f(n)) and in Ω(f(n)), i.e. the upper and lower bounds are no more than a constant multiple of each other. It is a tighter definition than average-case execution complexity, which depends on first defining the properties of an "average" input. I think what you're trying to define above is average-case execution complexity — by definition of the Θ() notation your statement above cannot be true. BTW mergesort is O(n lg n). - Menahem
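As a footnote to the last point above, here is a sketch (in Python, my choice — the thread has no code) of mergesort whose measured comparison count stays below n * log2(n), consistent with the O(n lg n) bound:

```python
import random

def merge_sort(xs, counter):
    """Top-down mergesort; counter[0] accumulates element comparisons."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], counter)
    right = merge_sort(xs[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1                    # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

counter = [0]
data = random.sample(range(10_000), 1024)
assert merge_sort(data, counter) == sorted(data)
# For n = 1024, mergesort makes at most n * log2(n) = 10240 comparisons:
assert counter[0] <= 1024 * 10
```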
Re: Re: An informal introduction to O(N) notation
by theorbtwo (Prior) on Jan 18, 2003 at 21:54 UTC

    Thanks. This was the "more formal, but still readable" definition of O() that I was looking for.

    Also, BTW, while Θ() is defined as the average case, I think the normal case is more often actually used -- the average case can be impossible (or at least, impractical) to compute. (Then again, I don't really know what I'm talking about.)


    Warning: Unless otherwise stated, code is untested. Do not use without understanding. Code is posted in the hopes it is useful, but without warranty. All copyrights are relinquished into the public domain unless otherwise stated. I am not an angel. I am capable of error, and err on a fairly regular basis. If I made a mistake, please let me know (such as by replying to this node).