Re: Efficiently selecting a random, weighted element

by xdg (Monsignor)
on Oct 10, 2006 at 18:11 UTC


in reply to Efficiently selecting a random, weighted element

"And blam-o, you're set up to choose your next file, it's as if file_d.txt never existed, and you can repeat ad infinitum until you've selected enough files out"

Depending on the number of files, what about just repeatedly picking additional files until you get something different from the first? Your "pick" algorithm is fast, so why splice out a file and recompute offsets each time?

If you're picking a high percentage of the total files, then you'll be doing lots of useless picks of files already chosen, but if you're picking 2 of 300 files, it should work pretty well.
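A minimal sketch of that retry approach, assuming a list of [ filename, weight ] pairs and using a linear walk instead of precomputed offsets for brevity (all names and weights here are made up):

    use strict;
    use warnings;

    # Hypothetical weighted list: [ filename, weight ] pairs.
    my @files = (
        [ 'file_a.txt', 10 ],
        [ 'file_b.txt', 30 ],
        [ 'file_c.txt', 60 ],
    );
    my $total = 0;
    $total += $_->[1] for @files;

    # One weighted pick: throw a dart into [0, $total) and walk the
    # weights until we pass it.  Nothing is spliced, nothing recomputed.
    sub pick_weighted {
        my $point = rand($total);
        for my $i (0 .. $#files) {
            $point -= $files[$i][1];
            return $i if $point < 0;
        }
        return $#files;    # floating-point guard
    }

    # Pick $k distinct files by simply re-picking on collision.
    sub pick_distinct {
        my ($k) = @_;
        my %seen;
        $seen{ pick_weighted() } = 1 while keys %seen < $k;
        return map { $files[$_][0] } keys %seen;
    }

    my @chosen = pick_distinct(2);
    print "@chosen\n";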

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.


Re^2: Efficiently selecting a random, weighted element
by jimt (Chaplain) on Oct 15, 2006 at 12:41 UTC

    I like this approach, but I'll need to think it over. I may even get off my butt and write some code to actually benchmark it.

    My concern is that the chance of collisions varies not only with the number of files, but with how they're weighted. To use a contrived example, say we are picking 2 of 300 files, but one of those files is weighted to cover 98% of the hitspace. You'll probably pick it the first time, and then re-pick it quite a bit before you actually get something else (on average about 1/(1 - 0.98) = 50 draws, since each retry succeeds with probability 0.02).

    But for general use, this could definitely be an improvement. Maybe a hybrid approach: pick a file, and if it contains less than a certain percentage of the hitspace, just leave it alone and continue; if it is above that percentage, splice it out (keeping in mind that you'd need to re-calculate previously saved indexes: if you keep index 4 flagged as one to skip over and then remove index 3, you need to change your flag to ignore index 3 instead of index 4, and so on).
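    A rough sketch of that hybrid, sidestepping the index-shifting bookkeeping by tracking already-picked light files by name rather than by index (the 10% cutoff and all names here are hypothetical):

        use strict;
        use warnings;

        my @files = (
            [ 'file_a.txt', 980 ],    # the 98% monster
            [ 'file_b.txt',  10 ],
            [ 'file_c.txt',  10 ],
        );
        my $threshold = 0.10;    # splice out anything over 10% of the hitspace

        sub pick_hybrid {
            my ($k) = @_;
            my %picked;
            while (keys %picked < $k) {
                my $total = 0;
                $total += $_->[1] for @files;

                # Weighted pick by linear walk.
                my $point = rand($total);
                my $i = 0;
                $i++ while $i < $#files && ($point -= $files[$i][1]) >= 0;

                my ($name, $weight) = @{ $files[$i] };
                next if $picked{$name};    # collided with a light file: re-pick
                $picked{$name} = 1;

                # Heavy file: drop it so it can never collide again.
                splice @files, $i, 1 if $weight / $total > $threshold;
            }
            return keys %picked;
        }

        my @chosen = pick_hybrid(2);
        print "@chosen\n";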

      Like almost any algorithm, it all depends on the exact nature of the problem space.

      One refinement, if you really want to consider splicing out high-weight elements, is to create your array in sorted order so that all your highest weight files are at the end of the array. That will decrease the amount of recalculation necessary if you choose to drop them.

      In the extreme case, if the highest-weight word is chosen, you just pop the last element of the array, decrease the total word count, and you're done. If the second-highest-weight word is chosen, you splice out that word and have only one offset to recalculate. Etc.
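      A sketch of that refinement, assuming cumulative offsets precomputed over an array sorted by ascending weight (all names here are made up):

          use strict;
          use warnings;

          # Sort ascending by weight so the heaviest elements sit at the end.
          my @files = sort { $a->[1] <=> $b->[1] }
              ( [ 'file_a.txt', 50 ], [ 'file_b.txt', 30 ], [ 'file_c.txt', 920 ] );

          # $offset[$i] holds the total weight of elements 0 .. $i.
          my @offset;
          my $total = 0;
          push @offset, ($total += $_->[1]) for @files;

          # Weighted pick: first index whose offset exceeds the dart.
          sub pick {
              my $point = rand($total);
              for my $i (0 .. $#offset) {
                  return $i if $point < $offset[$i];
              }
              return $#offset;
          }

          # Remove element $i.  If it's the last (heaviest), a pop suffices;
          # otherwise splice and fix up only the offsets after $i.
          sub remove {
              my ($i) = @_;
              my $w = $files[$i][1];
              $total -= $w;
              if ($i == $#files) {
                  pop @files;
                  pop @offset;
              }
              else {
                  splice @files,  $i, 1;
                  splice @offset, $i, 1;
                  $_ -= $w for @offset[ $i .. $#offset ];
              }
          }

          # e.g. pick one, then drop it before picking the next:
          my $i    = pick();
          my $name = $files[$i][0];
          remove($i);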

      -xdg

      Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re^2: Efficiently selecting a random, weighted element
by an0 (Initiate) on Oct 15, 2006 at 08:54 UTC
    I think xdg's suggestion is reasonable.
