PerlMonks

Big-O Notation

on Jul 05, 2001 at 07:48 UTC (#94000)

in reply to What??? You wanna learn math?

Just a quick note on the O(n) or so-called "Big-O Notation" used by computer science folks. It's not really "math"[1] so much as it is an approximation. You can't get an exact number out of it, per se, and it isn't really something you can do algebra on in a direct sense. Quoting from that link:
Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items. Informally, saying some equation f(n) = O(g(n)) means f(n) is less than some constant multiple of g(n). More formally, it means there are positive constants c and k such that 0 <= f(n) <= cg(n) for all n >= k. The values of c and k must be fixed for the function f and must not depend on n.
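To make those constants concrete, here's a tiny check (a Python sketch; the particular function f(n) = 3n + 5 and the witnesses c = 4, k = 5 are my own example, not part of the definition):

```python
# The definition says f(n) = O(g(n)) iff there are positive constants
# c and k with 0 <= f(n) <= c*g(n) for all n >= k.
# Example: f(n) = 3n + 5 is O(n), witnessed by c = 4 and k = 5,
# since 3n + 5 <= 4n exactly when n >= 5.
f = lambda n: 3 * n + 5
g = lambda n: n
c, k = 4, 5

# Spot-check the inequality over a large range of n >= k.
assert all(0 <= f(n) <= c * g(n) for n in range(k, 10_000))
print("f(n) = 3n + 5 is O(n) with c = 4, k = 5")
```

Note that the inequality fails below k (f(4) = 17 > 16 = 4·4), which is exactly why the definition only demands it hold for n >= k.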
A first-year computer science textbook is certainly in order if you want to discover some of the theory behind the way algorithms work. The classic "White Book", Introduction to Algorithms, is a popular choice of professors and students alike. One of the authors is Ronald L. Rivest, the "R" in RSA encryption. It introduces you to a variety of different algorithms used to find substrings, sort lists, and navigate networks. Some of this is redundant considering Perl does a lot of this for you, but understanding the theory behind it, and the work you are making Perl do, can help you write better code.

The other tool that would be handy is a graphing calculator program, or even Excel, where you could familiarize yourself with the shape of various functions such as log(n). In computer science terms, these shapes are used to approximate how a function will scale, or how long it will take to run given a data-set of size n.
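If you don't have a grapher or Excel handy, a few lines of code will tabulate the same shapes (a quick Python sketch; the sizes chosen are arbitrary):

```python
import math

# Tabulate the growth of some common complexity functions so their
# "shapes" are visible even without a graphing calculator.
print(f"{'n':>6} {'log n':>8} {'n^2':>12} {'n^3':>16}")
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} {math.log2(n):>8.2f} {n**2:>12} {n**3:>16}")
```

Reading down each column shows the point of the notation: as n grows by a factor of 1000, log n barely moves, while n³ grows by a factor of a billion.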

Here's a quick rundown on some popular types:
• O(n) - As the data doubles in size, the algorithm takes twice as long to run.
• O(n²) - As the data doubles in size, the algorithm takes four times as long to run.
• O(n³) - As the data doubles in size, the algorithm takes eight times as long to run.
• O(log n) - As the data grows, the running time still increases, but ever more slowly; doubling the data adds only a constant amount of time.
• O(eⁿ) - As the data grows, the running time increases exponentially, becoming very long very quickly.
For example, a sort function that runs in O(n) time is fairly efficient: sorting 20 things takes only about twice as long as sorting 10. A sort function that runs in O(n³) time would take eight times as long to sort 20 things as to sort 10.
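You can watch those doubling rules happen by counting steps directly (an illustrative Python sketch with stand-in loops, not real sort code):

```python
# Count the "steps" an O(n) and an O(n^3) routine take, then double n
# and compare the ratios: the linear routine roughly doubles, the
# cubic routine roughly multiplies by eight.

def linear_steps(n):
    # one pass over the data: n steps
    return sum(1 for _ in range(n))

def cubic_steps(n):
    # three nested passes over the data: n^3 steps
    return sum(1 for i in range(n) for j in range(n) for k in range(n))

for n in (10, 20):
    print(f"n={n:>2}  linear={linear_steps(n):>5}  cubic={cubic_steps(n):>6}")

print("linear ratio:", linear_steps(20) / linear_steps(10))  # 2.0
print("cubic ratio: ", cubic_steps(20) / cubic_steps(10))    # 8.0
```

With counted steps the ratios come out exactly 2 and 8; with wall-clock timings of real code they would only be approximate, which is the "approximation" caveat above.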
[1] Okay, so maybe adamsj is correct in saying that it is math, but I was merely trying to suggest that it's not, well, mathy math, which is another way of saying that understanding O(n) is not quite as hard as, say, learning multivariable calculus. I would have to agree that Big-O notation is math inasmuch as it expresses, not surprisingly, mathematically how these functions are expected to scale.

Re: Big-O Notation
by adamsj (Hermit) on Jul 05, 2001 at 22:10 UTC
As a former bookseller and math student, I want to second your nomination of Introduction to Algorithms--I wish my copy weren't hundreds of miles away--and also want to quibble with what you said about O(f(n)).

It is math.[1]

A lot of interesting math--especially math of interest to computing science--is done through approximation rather than turning out exact answers. I, too, used to look down on that as not real math, but I was wrong.

Now that that's out of my system, here's a little more good advice from my bookseller self:

Don't buy a current used textbook. Instead, look for an old edition. Trust me on this--little has changed in calculus and trig in the last century or so--and old texts are worthless to the college bookstore. If they have any, they probably have them out on the dollar a sack sale table.

If they don't have them out there, ask the textbook manager or try a used non-textbook store, and don't pay over five bucks--ten at the most--or ask around for a freebie at the math department at the local colleges.

There is a very good chance that your math teacher can't help you--mine couldn't (but he was a good track coach, I'm told) when I took on calculus in tenth grade. What he did do for me (since he couldn't explain how the Chain Rule was derived) was tell me that if I'd sit in class and quietly study calculus, he'd grade me just on my test scores (i.e., no algebra homework). That I wasn't supposed to bug him with calculus questions went unsaid.

This doesn't mean that your teacher won't be able to help you directly with the material--more likely she can than can't, but it's not a slam dunk--but it does mean that she can surely help you in some manner. However, the degree of help you get in this optional exercise is more than directly proportional to the politeness and respect with which you approach her. Only a saint (and I've benefited from a couple) will help a hateful student.

Let's see, what else? Oh, yeah--for an introductory text on discrete math, I like Discrete Mathematics by Richard Johnsonbaugh. It's in its fourth edition, so look for an old edition, and again, don't go over ten bucks.

Good luck, and have fun!

They laughed at Joan of Arc, but she went right ahead and built it. --Gracie Allen

[1] I'm jealous of tadman, since, while I would agree with him that the ideas in the various O-notations are simpler than the ideas in multi-variable calculus, I found them harder to learn and prove.
Thank You! I find myself in the exact same category as the poster: I'm in high school, and have trouble with all this calculus. I looked at Discrete Mathematics, and it's perfect, designed as an introductory book. I'm just finishing up Knuth's The Art of Computer Programming, and once I read Discrete Mathematics I'll go back and reread Knuth's books; maybe I'll finally understand the math sections! Again, thanks.

The 15 year old, freshman programmer,
Stephen Rawls

You might check out Concrete Mathematics: A Foundation for Computer Science by Graham, Knuth and Patashnik.

From the fatbrain page:

Concrete mathematics is a blending of continuous and discrete mathematics. "More concretely," the authors explain, "it is the controlled manipulation of mathematical formulas, using a collection of techniques for solving problems." The subject matter is primarily an expansion of the Mathematical Preliminaries section in Knuth's classic Art of Computer Programming, but the style of presentation is more leisurely, and individual topics are covered more deeply.
Re: Big-O Notation
by Nitsuj (Hermit) on Jul 06, 2001 at 07:49 UTC
Eh, I'll add my $0.02 (hehe).
I think that the best math in computer science is the REALLY "outlandish" stuff that you'll NEVER see in other fields. I love toying with Automata and Computation Theory problems. Nothing like a Turing Machine to make your day complete. And that is as MATHY as it gets, though it certainly doesn't feel like math did in high school. I didn't love math so much in HS... not because of the teachers and all, but because I was taking the area of triangles just a bit more often than I care to think of.

Just Another Perl Backpacker
