I had taken bart's word for the merge sort in the CB. I later told him the math was wrong (in the CB), but I didn't change it because I knew the math was also wrong for his desired method of finishing the sort. What neither of us considered is that the problem space shrinks with each pass. In any case, regardless of the accuracy of the math, the merge sort is still the most efficient approach given the data structure.
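To make the point concrete: assuming the data structure in question is a set of already-sorted piles (as produced by a patience sort), a minimal sketch of finishing the job by repeatedly merging pairs of piles might look like this. Function names here are illustrative, not anyone's actual implementation.

```javascript
// Merge two already-sorted arrays into one sorted array.
function mergeTwo(a, b) {
  const out = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    out.push(a[i] <= b[j] ? a[i++] : b[j++]);
  }
  // One of the two slices is empty at this point.
  return out.concat(a.slice(i), b.slice(j));
}

// Repeatedly merge piles pairwise; the number of piles halves each pass,
// which is the shrinking problem space mentioned above.
function mergePiles(piles) {
  while (piles.length > 1) {
    const next = [];
    for (let k = 0; k < piles.length; k += 2) {
      next.push(k + 1 < piles.length ? mergeTwo(piles[k], piles[k + 1]) : piles[k]);
    }
    piles = next;
  }
  return piles[0] ?? [];
}
```

Each pass halves the number of piles, so with p piles of n total elements the work is O(n log p), which is where the merge approach gets its efficiency over restarting a sort from scratch.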
Actually, the paper by Bespamyatnikh & Segal contains a proof that you can do it in O(N log log N) time.
What is the "it" that takes O(N log log N) time, though? The partial sort needed to obtain the LIS, extracting the LIS itself, or completing the patience sort? From what I understood of the paper (which, admittedly, I only read far enough to know it was over my head), the O(N log log N) bound is not for a complete sort, and Wikipedia agrees.
As far as the binary search is concerned, I have provided implementations that get to the partial sort using both methods, so benchmarking shouldn't be hard. Additionally, implementing a binary-search-and-splice approach to benchmark against the merge sort is also straightforward.
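For reference, the binary-search-and-splice alternative mentioned above could be sketched as follows: each element is placed into a single sorted output array via a leftmost-insertion binary search plus `splice`. This is a generic sketch for benchmarking purposes, not the implementation referred to in the thread.

```javascript
// Insert each element into `out` at the position found by binary search.
// The search is O(log n), but splice shifts elements, so each insert is
// O(n) in the worst case; total worst case is O(n^2), versus O(n log n)
// for the merge sort, which is what a benchmark would expose.
function binaryInsertionSort(arr) {
  const out = [];
  for (const x of arr) {
    let lo = 0, hi = out.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (out[mid] < x) lo = mid + 1;
      else hi = mid;
    }
    out.splice(lo, 0, x); // shift-heavy, but a low constant factor in practice
  }
  return out;
}
```

Despite the worse asymptotic bound, `splice` on small-to-medium arrays is fast in practice, so the benchmark comparison against the merge sort is genuinely worth running rather than assuming the outcome.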