in reply to Scaling Hash Limits
Building on CountZero’s comment in particular: 55 million records definitely need to be indexed to be useful, but an in-memory hash table might not be the best way to do it, if only because it obliges the entire data structure to be resident “in memory,” with all of the potential (and rather unpredictable) impact of virtual-memory paging should the system become memory-constrained. (Also, and mostly for this reason, it does not scale well ...)
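To put a rough number on that memory pressure, here is a back-of-envelope sketch in Python. The per-entry figure is an assumption (CPython dict overhead plus a short key plus a small record object varies widely), not a measurement:

```python
# Back-of-envelope footprint for an in-memory hash table of 55M records.
# BYTES_PER_ENTRY is an assumed midpoint; real overhead depends on key
# size, record size, and the runtime's allocator.
RECORDS = 55_000_000
BYTES_PER_ENTRY = 150  # assumed average per dict slot + key + record

total_gb = RECORDS * BYTES_PER_ENTRY / 1e9
print(f"~{total_gb:.1f} GB just to hold the table")  # ~8.2 GB
```

Several gigabytes resident at all times is exactly the kind of footprint that invites paging once the rest of the system wants memory too.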
Yes, a hash table, or tables, probably is the best “in-memory” choice. The real design question is ... is “in-memory” the best choice at all? I suggest that it might not be.
If you put these data into any sort of database, the records are, first of all, stored on disk, which has effectively unlimited capacity. And yet the data are indexed ... also on disk. The database can locate the records of interest and bring just those into memory for further processing as needed. If the database grows to 100 or 1,000 times this number of records ... well, queries will take a little longer, but they won’t fail. You want to design systems that do not degrade severely (and unexpectedly) as volume increases. “In-memory” approaches are notorious for “hit the wall and go splat” behavior when memory runs short and paging kicks in. So you either need to establish confidence that this won’t happen to you in production at worst-case load, or design in some other way to mitigate that business risk.
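As a minimal sketch of the “database with an on-disk index” approach, here is SQLite via Python’s standard library. The table and column names are illustrative assumptions, and the demo inserts only a small batch; the point is that both the rows and the index live on disk, and a lookup pulls only the matching row into memory:

```python
# Sketch: records and their index live on disk; a query materializes only
# the rows it matches, regardless of how large the table grows.
import sqlite3

conn = sqlite3.connect("records.db")  # on-disk database file
conn.execute(
    "CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT)"
)
# The index is what replaces the in-memory hash table's O(1) lookup.
conn.execute("CREATE INDEX IF NOT EXISTS idx_payload ON records(payload)")

conn.executemany(
    "INSERT OR REPLACE INTO records (id, payload) VALUES (?, ?)",
    [(i, f"row-{i}") for i in range(1000)],  # small demo batch
)
conn.commit()

# SQLite walks the B-tree index rather than scanning the table.
row = conn.execute(
    "SELECT id FROM records WHERE payload = ?", ("row-42",)
).fetchone()
print(row)  # (42,)
conn.close()
```

The same lookup stays an index walk whether the table holds a thousand rows or 55 million; it slows gracefully with log-scale depth rather than failing when RAM runs out.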