Your code doesn't do the complete patience sort: it more or less sorts, but you're left with several sorted piles of cards when it finishes — typically around 11 piles for a deck of 52 cards, I've read. You still need the second step: repeatedly pick the lowest top card among the piles, but you don't know in advance which pile holds it — except at the very start, when it's the leftmost one. That step amounts to a k-way merge.
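To make the "left with piles" point concrete, here is a minimal Python sketch of the dealing phase only (not your code; it assumes plain comparable values and a linear scan for the target pile, like your version seems to use):

```python
def build_piles(cards):
    """First phase of patience sorting: deal cards onto piles.

    Each card goes on the leftmost pile whose top card is greater
    than or equal to it; if no pile qualifies, it starts a new pile.
    Piles are stored bottom-to-top, so the top card is pile[-1].
    """
    piles = []
    for card in cards:
        for pile in piles:
            if card <= pile[-1]:     # leftmost pile we may place on
                pile.append(card)
                break
        else:
            piles.append([card])     # no pile fits: start a new one
    return piles

print(build_piles([3, 1, 4, 1, 5, 9, 2, 6]))
# each pile is sorted (descending, top last), but the whole
# sequence is not sorted yet — that's the missing second step
```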
I found one of the original scientific papers in PDF format (30 pages, 277k), from 1999, item 2.
On to your questions.
Do the algorithms still work if the original list sorted in ascending order is non-contiguous (i.e., not 1..N)?
Of course they do. All the sorting ever does with two items is compare them; apart from that, their actual values are never used.
Do the algorithms still work if the original list contains duplicates?
Yes, but depending on what you do when two cards compare equal, your sort may or may not be stable. You get a stable sort if you treat the current card as the larger one whenever they compare equal.
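A hedged Python sketch of that tie-handling (my own illustration, not your code — the function name and `key` parameter are mine): placing the current card only on a pile whose top is *strictly* greater pushes later duplicates further right, and breaking heap ties toward earlier piles in the merge then preserves input order.

```python
import heapq

def patience_sort_stable(items, key=lambda x: x):
    """Full patience sort that stays stable on duplicate keys."""
    # Dealing phase: strict comparison, so on a tie the current
    # card counts as the larger one and moves further right.
    piles = []                          # bottom-to-top, top card last
    for item in items:
        for pile in piles:
            if key(item) < key(pile[-1]):
                pile.append(item)
                break
        else:
            piles.append([item])

    # Merge phase: heap of (key, pile index, item); equal keys
    # break ties toward the earlier pile, preserving input order.
    heap = [(key(p[-1]), i, p.pop()) for i, p in enumerate(piles)]
    heapq.heapify(heap)
    out = []
    while heap:
        _, i, item = heapq.heappop(heap)
        out.append(item)
        if piles[i]:
            nxt = piles[i].pop()
            heapq.heappush(heap, (key(nxt), i, nxt))
    return out
```

With duplicates tagged by letters, `patience_sort_stable([(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')], key=lambda p: p[0])` keeps `'b'` before `'d'` and `'a'` before `'c'`, matching Python's (stable) built-in `sorted`.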
Are there ways to maintain the same speed and decrease the memory consumption?
If so, I haven't found one.
Would there be a benefit in replacing the linear card placement search with a binary one, or would the overhead outweigh the benefit?
No idea... In practice, I've often found binary search to be slower than linear search for few items — and sometimes not even that few: I've seen linear search beat it when searching 500 items.
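The only way to settle that is to measure on your own data. A rough Python sketch of such a measurement (the pile-top values and sizes here are made up; `timeit` numbers will vary by machine):

```python
from bisect import bisect_left
import timeit

def linear_search(tops, card):
    """Leftmost index whose value is >= card (len(tops) if none)."""
    for i, top in enumerate(tops):
        if top >= card:
            return i
    return len(tops)

tops = list(range(0, 100, 2))   # hypothetical sorted pile tops

# Both searches must agree before the timing means anything.
assert all(linear_search(tops, c) == bisect_left(tops, c)
           for c in range(101))

for n in (4, 16, 64):
    t = tops[:n]
    lin = timeit.timeit(lambda: linear_search(t, t[n // 2]), number=10_000)
    bin_ = timeit.timeit(lambda: bisect_left(t, t[n // 2]), number=10_000)
    print(f"n={n:3}: linear {lin:.4f}s  binary {bin_:.4f}s")
```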
But we're still stuck with an incomplete sorting.
Perhaps binary search, or rather, a binary tree, could be beneficial to complete the sorting: when you have constructed the piles, put (just) the top cards in the binary tree, remove the lowest value from it and from its pile, and insert the next card from the pile it came from. Repeat until all piles are exhausted and the tree is empty.