Depends on just how many dupes you produce to get there. If you have to throw away more dupes than you generated valid neighbours in the first place, it seems much better to invest a fraction of the effort in redoing the combinatorics over and over. You get to save all the memory too.
The first versions of the approach I went with were not directly designed to avoid duplicates, and produced nearly 4× as many results as there were unique results, for a Hamming distance of 2 on a string of length 6. I assume that as numbers go up, any approach that does not avoid dupes to begin with will waste humongous amounts of time on them. Of course this is relatively off-the-cuff; I haven't reasoned it deeply, so it might not be as bad as I think.
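For the curious, here's a minimal sketch of what duplicate-free generation can look like. This isn't my actual code; it assumes lowercase string neighbours and the function/parameter names are made up. The idea is to do the combinatorics directly: pick which d positions change, then substitute only characters that differ from the original at each chosen position, so every neighbour at exactly Hamming distance d comes out exactly once and no dedup set is needed.

    from itertools import combinations, product

    def neighbours(s: str, d: int, alphabet: str = "abcdefghijklmnopqrstuvwxyz"):
        # Choose the d positions that will change.
        for positions in combinations(range(len(s)), d):
            # At each chosen position, allow only characters that differ
            # from the original, which guarantees distance exactly d.
            choices = [[c for c in alphabet if c != s[i]] for i in positions]
            for subs in product(*choices):
                out = list(s)
                for i, c in zip(positions, subs):
                    out[i] = c
                yield "".join(out)

    # Length 6, distance 2, alphabet of 26: C(6,2) * 25^2 = 9375 unique
    # neighbours, zero wasted work on duplicates.
    print(sum(1 for _ in neighbours("banana", 2)))  # 9375

Since each neighbour is uniquely determined by its set of changed positions plus its substitutions, and this enumerates each such pair once, duplicates can't occur by construction.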
Makeshifts last the longest.