I can't really tell. But first, make it clear so you can debug it. Then, once it's running, see if it's too slow. If so, *then* figure out how to speed it up. To speed things up, you'll normally have to do one or more of:
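Before reaching for any of those, it's worth confirming that the slow part is where you think it is. A minimal sketch using Perl's core Benchmark module (the snippets being compared are placeholders of my own, not code from this thread):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my ( $x, $y ) = ( 3.14159, 2.71828 );

# Run each snippet for at least 1 CPU second (-1) and
# print a table comparing their rates.
cmpthese( -1, {
    add   => sub { my $r = $x + $y },    # addition
    mul   => sub { my $r = $x * $y },    # multiplication
    trans => sub { my $r = sin($x) },    # transcendental
} );
```

On modern hardware you may well find the three rows closer together than the old folklore suggests, with subroutine-call overhead dominating all of them; either way, measuring on your own box beats guessing.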
Putting my "old man" hat on: When I started with computers, additions were cheap (only a handful of cycles), multiplications were expensive (hundreds of cycles), and transcendental functions were *really slow*. Today, however, multiplications are nearly as cheap as additions, and transcendental functions aren't terribly slow, either.

I have a piece of paper around here somewhere on which I documented the range of cycles several operations took every time I upgraded computers. It was really kind of interesting to watch the cycle counts drop generation after generation. CPUs are *much* more efficient than when I started: even if today's computers ran at the same clock speed as my first one, they would still be 100 times faster.

The reason I'm rambling on like this: for serious optimization, you need to look over the current literature. Each decade seems to bring new issues while others go away. I don't work nearly as hard to remove multiplications as I used to, but back then I didn't worry about memory speed, either. With the ever-increasing mismatch between CPU cycle speeds and memory I/O speeds, you now have to pay more attention to how you're using your cache.

I haven't seriously tried to optimize a program in nearly ten years. I'm sure my assembly-language tuning techniques from back then wouldn't be nearly as useful today; I'd have to learn new ones for the current generation of computers.

</old-man-hat>

...roboticus

When your only tool is a hammer, all problems look like your thumb.

In reply to Re^3: Simulating uni-dimensional Hawkes processes
by roboticus