PerlMonks |
And blam-o, you're set up to choose your next file; it's as if file_d.txt never existed, and you can repeat ad infinitum until you've selected enough files.

Depending on the number of files, what about just repeatedly picking additional files until you get something different from the first? Your "pick" algorithm is fast, so why splice out a file and recompute offsets each time? If you're picking a high percentage of the total files, you'll waste a lot of picks on files already chosen, but if you're picking 2 of 300 files, it should work pretty well.

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

In reply to Re: Efficiently selecting a random, weighted element by xdg
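The rejection-sampling idea xdg describes can be sketched as below. This is a minimal illustration, not xdg's actual code: the file names, weights, and helper names (`pick_weighted`, `pick_distinct`) are invented for the example. The weighted pick scans a cumulative-weight array, and instead of splicing a chosen file out and recomputing offsets, we simply re-pick until we have enough distinct files.

```perl
use strict;
use warnings;

# Hypothetical example data: 10 files, the first weighted more heavily.
my @files   = map { "file_$_.txt" } 'a' .. 'j';
my @weights = (5, 1, 1, 1, 1, 1, 1, 1, 1, 1);

# Build cumulative offsets once; they are never recomputed.
my (@cum, $total);
$total = 0;
for my $w (@weights) {
    $total += $w;
    push @cum, $total;
}

# Weighted pick: draw in [0, total) and find the first offset past it.
sub pick_weighted {
    my $r = rand($total);
    for my $i (0 .. $#cum) {
        return $i if $r < $cum[$i];
    }
    return $#cum;    # guard against floating-point edge cases
}

# Rejection sampling: re-pick until $n distinct indices are collected.
# Wasteful if $n is a large fraction of @files, cheap if it is small.
sub pick_distinct {
    my ($n) = @_;
    my %seen;
    $seen{ pick_weighted() } = 1 while keys %seen < $n;
    return map { $files[$_] } keys %seen;
}

my @chosen = pick_distinct(2);
print "$_\n" for @chosen;
```

As xdg notes, the trade-off is that duplicate picks are discarded, so this only pays off when the number of selections is small relative to the pool.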