Assuming a unique index (i.e. one row returned for any matched key), the database should be highly scalable. The speed is almost entirely dependent on the number of physical IO operations (reads) required to find and return the data. For example, in a quick test with Sybase, a query that looks up a row via a unique index on a 4.5 million row table finds it with only 2 physical disk reads; on a 15 million row table a similar query takes 3 physical IOs. The server finds the row in 10 milliseconds if the data isn't cached, and in no measurable time (according to its internal measuring tools) if it is cached.
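To make the shape of that test concrete, here's a small sketch using Python's sqlite3 module instead of Sybase (the table, column, and index names are invented for illustration). The point is the same in any B-tree-indexed RDBMS: a unique-index lookup only touches a handful of index pages, so the number of reads grows very slowly with table size.

```python
import sqlite3

# Build a toy table with a unique index (hypothetical names, not from the
# original Sybase test).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_id INTEGER, balance REAL)")
conn.execute("CREATE UNIQUE INDEX idx_accounts_id ON accounts (account_id)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(10000)])

# The lookup descends a few B-tree pages regardless of row count, which is
# why the physical reads above grow so slowly (2 IOs at 4.5M rows,
# 3 IOs at 15M rows).
row = conn.execute("SELECT balance FROM accounts WHERE account_id = ?",
                   (4242,)).fetchone()
print(row)  # (6363.0,)

# EXPLAIN QUERY PLAN confirms the unique index is driving the lookup
# rather than a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT balance FROM accounts WHERE account_id = 4242").fetchall()
print(plan[0][-1])
```

The plan output shows something like `SEARCH accounts USING INDEX idx_accounts_id (account_id=?)`, i.e. a keyed index search rather than a scan.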
Obviously there is extra overhead (network IO, etc.) compared with fetching data from an in-memory hash, but a modern RDBMS server will scale to hundreds of millions of rows for simple one-to-one lookups with no problems at all. (BTW, these timings were done on a dual Xeon Linux server, hardly heavy-duty hardware as far as database servers are concerned...)