
Re^2: How to generate test data?

by abdullah.yildiz (Novice)
on Nov 25, 2012 at 00:48 UTC ( #1005444=note )

in reply to Re: How to generate test data?
in thread How to generate test data?

Thank you for your answer. I have another problem. I'm trying to find the data size which causes the algorithm to take at least 15 minutes. I do the following:
use Benchmark;

open (DATASET_RANDOM_INTEGER, '<', 'DATASET_RANDOM_INTEGER.dat');
@numbers = <DATASET_RANDOM_INTEGER>;
close (DATASET_RANDOM_INTEGER);

# MEASURE THE TIME DURING WHICH THE ALGORITHM IS PERFORMED
# START
$start = Benchmark->new;

# RUN THE ALGORITHM
@sorted_numbers = sort { $a <=> $b } @numbers;

# FINISH
$end = Benchmark->new;
$diff = timediff( $end, $start );
However, this is too time-consuming: when I increased the input size to 100 million, for example, I couldn't foresee how long it would take to finish (as I write this message, it has been running for two hours). How can I accelerate the execution of my code so that it uses more CPU per unit of time?

Replies are listed 'Best First'.
Re^3: How to generate test data?
by roboticus (Chancellor) on Nov 25, 2012 at 02:33 UTC


    Regarding how to choose the size of a dataset to make it take 15 minutes: If I wanted to do that, I'd start out by using progressively larger datasets to see how the time changes with dataset size. For example, look at these three datasets:

    Dataset size   Subroutine A   Subroutine B   Subroutine C
            1000              6              1             30
            2000             11              4             40
            3000             17              8             48
            4000             22             16             55

    Once I had a few samples, I'd try to predict the next dataset size. If you look at the values for subroutine A, it looks like a simple linear progression: it handles about 160-ish items per second at all four dataset sizes. So if I wanted to make it run for 15 minutes, I'd expect it to take about 15*60*160 = 144,000 data items. Subroutine B, however, isn't linear: it gets slower and slower as the dataset grows--in this case it takes roughly T = (X/1000)^2 seconds for a dataset of X items. Solve for X when T = 15*60 seconds and that would be a reasonable prediction. The third subroutine starts out pretty slow, but you can see that the time it consumes changes less and less as you add data samples. (I was shooting for a logarithmic progression, but I don't feel like doing the math, so that one's left as an exercise for the reader!)
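    The extrapolation described above can be sketched in a few lines of Perl. This is a minimal sketch using the illustrative figures from the table (the ~160 items/second rate and the T = (X/1000)^2 fit), not measured values:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $target = 15 * 60;    # 900 seconds

# Subroutine A: linear at ~160 items/second, so size = T * rate
my $size_a = $target * 160;

# Subroutine B: quadratic, T = (X/1000)**2  =>  X = 1000 * sqrt(T)
my $size_b = 1000 * sqrt($target);

printf "A (linear):    %d items\n", $size_a;    # 144000
printf "B (quadratic): %d items\n", $size_b;    # 30000
```

    The same procedure works for any fitted model: invert the time formula, plug in the target, and treat the result as a first guess to verify.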

    *HOWEVER*, these predictions assume that everything else will remain the same as the dataset grows. You may find that at a certain dataset size an algorithm's running time suddenly and drastically increases (for example, you might exhaust your main memory and the OS may start swapping). So rather than immediately going for 15 minutes, you might try to predict a dataset size that would take less time, like one or two minutes, and see how far off you are. I frequently approach a final value by doubling each time (unless I'm using something like subroutine B).
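    The doubling approach might be sketched like this, reusing the Benchmark timing from the original post. The 1-second threshold and the 1,000,000-item safety cap are illustrative values for the sketch; for real measurements you would use a threshold of one or two minutes and then extrapolate toward 15:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(timediff);

# Time a numeric sort of $size random integers, returning CPU seconds.
sub time_sort {
    my ($size) = @_;
    my @numbers = map { int rand 1_000_000 } 1 .. $size;
    my $start  = Benchmark->new;
    my @sorted = sort { $a <=> $b } @numbers;
    my $end    = Benchmark->new;
    return timediff($end, $start)->cpu_a;    # total CPU seconds used
}

my $threshold = 1;        # illustrative; use 60-120 s for real runs
my $size      = 1_000;
while ($size <= 1_000_000) {                 # safety cap for the sketch
    my $elapsed = time_sort($size);
    printf "%8d items: %.2f s\n", $size, $elapsed;
    last if $elapsed >= $threshold;
    $size *= 2;                              # double and try again
}
```

    Printing each size alongside its time also gives you the samples needed to spot which of the progressions above your algorithm follows.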

    I hope this is somewhat helpful...

    Modern computers are so fast, though, that I expect it'll take a pretty large dataset to consume 15 minutes. (That, or a sufficiently horrible sort algorithm.)


    When your only tool is a hammer, all problems look like your thumb.

      Thank you for your suggestions.
