in reply to
Memory usage with DBI
The "list of massive results" is stored locally in the driver and is fetched through whatever method you prefer. This means that if you have a large table, you will need a large buffer. This bit me too, in DBI + 'SELECT *' - Memory Use Galore?. The two solutions were as follows:
- Use MySQL's 'LIMIT offset, row_count' clause to retrieve the results incrementally, in blocks of a manageable size. For example, 'LIMIT 0,10', then 'LIMIT 10,10', and so on. There is a risk, though, of duplicated or missing rows if the table is modified between fetches, and paging without an ORDER BY on a unique key is not even deterministic.
- Use the low-level MySQL 'mysql_use_result' feature on a separate DBI handle, which switches to an unbuffered method: rows are streamed from the server as you fetch them. There is no real risk of duplicated or missing results, but that connection is tied up until every row has been fetched.
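The LIMIT approach might look like the sketch below. The table and column names ('massive_table', 'id', 'name') and the connection details are placeholders; note that I interpolate the (validated) integers directly rather than binding them, because some DBD::mysql versions quote bound placeholders, which MySQL rejects inside LIMIT.

```perl
use strict;
use warnings;
use DBI;

# Placeholder DSN and credentials -- adjust for your server.
my $dbh = DBI->connect( 'DBI:mysql:database=test', 'user', 'password',
    { RaiseError => 1 } );

my $block  = 10_000;    # rows per fetch -- tune to taste
my $offset = 0;

# Walk the table in blocks; ORDER BY a unique key keeps paging deterministic.
while (1) {
    my $rows = $dbh->selectall_arrayref(
        "SELECT id, name FROM massive_table ORDER BY id LIMIT $offset, $block"
    );
    last unless @$rows;    # no more rows -- done
    for my $row (@$rows) {
        # ... process $row->[0] (id), $row->[1] (name) ...
    }
    $offset += $block;
}
$dbh->disconnect;
```

Only $block rows are ever held in client memory at once, at the cost of one query per block.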
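The streaming approach uses the DBD::mysql 'mysql_use_result' statement-handle attribute. A sketch, again with placeholder table/column names and connection details:

```perl
use strict;
use warnings;
use DBI;

# Placeholder DSN and credentials -- adjust for your server.
my $dbh = DBI->connect( 'DBI:mysql:database=test', 'user', 'password',
    { RaiseError => 1 } );

my $sth = $dbh->prepare('SELECT id, name FROM massive_table');

# Ask DBD::mysql to stream rows from the server (mysql_use_result)
# instead of buffering the whole result set in the client
# (mysql_store_result, the default).
$sth->{mysql_use_result} = 1;

$sth->execute;
while ( my @row = $sth->fetchrow_array ) {
    # ... process one row at a time; client memory stays flat ...
}
$dbh->disconnect;
```

Two caveats: $sth->rows is not known until every row has been fetched, and you cannot issue another query on this connection until the result set is drained, which is why a separate handle is recommended.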
You might expect that fetching the data row-by-row would make the client do the same, but this is not the case. Every time you call "execute", whether explicitly or implicitly (such as through "do"), the entire result set is buffered in the client. This can consume an awful lot of memory, especially on tables with many rows.
You can also reduce memory usage by selecting only a few key columns. 'SELECT *' will download everything, and if you have several long text fields, this can be very expensive. 'SELECT some_key_field', by comparison, is much more compact.
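For example, given a connected $dbh (the 'massive_table' and 'id' names are hypothetical), you can pull only the keys and then fetch full rows individually as you need them:

```perl
# Expensive: buffers every column of every row, including long text fields.
my $all = $dbh->selectall_arrayref('SELECT * FROM massive_table');

# Cheaper: buffer only the key column, then fetch full rows one at a time.
my $keys = $dbh->selectcol_arrayref('SELECT id FROM massive_table');
my $get  = $dbh->prepare('SELECT * FROM massive_table WHERE id = ?');
for my $id (@$keys) {
    $get->execute($id);
    my $row = $get->fetchrow_hashref;
    # ... process $row ...
}
```

This trades memory for extra round trips, so it pays off when the non-key columns are large relative to the key.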