You've already gotten a good answer, so I'll just digress a bit.
I don't often see a task to process *every* row in a table where using SQL isn't the better choice. It's pretty expensive to serialize each row, ship it to another machine that does some processing, and then repeat the process to get the data back into the database. Since you haven't mentioned your level of experience with databases, I thought I'd mention a couple of things, just in case.
- Can your task be done with SQL? It's a pretty simple language, and since the database server is optimized to use it, it's frequently faster than anything you'll be able to do locally (especially since you already have the overhead of shipping the data across a pipe). There's a small sketch of this after the list.
- Even if your modifications are too complicated for a single UPDATE, it can be better to make a temporary table and then do a series of transformations in SQL before updating the original table (second sketch below).
- If you find that you really need to get the data to the local desktop, be sure to fetch only the data you need to work on (via the WHERE clause, appropriate joins, etc.) so you can minimize the impact on your application and all the other users of the database (third sketch below).
- If you need to process all the rows and it's going to be a frequent operation, you might consider using the bulk-loader capabilities of your database to export the table into a flat file, which you can process easily with perl, and then bulk load the results back into the database. The bulk-loading tools for the databases I use (Sybase, MS SQL Server, Oracle) are pretty efficient for whacking large amounts of data around (and 30GB certainly qualifies). The last sketch below shows the reload half.
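For the first point, here's a minimal sketch, assuming a made-up customers table with a phone column that needs cleaning up (the names are hypothetical, not from your schema):

```sql
-- Normalize a phone column in place, entirely on the server.
-- REPLACE works on Sybase, MS SQL Server, and Oracle alike.
UPDATE customers
SET    phone = REPLACE(REPLACE(phone, '-', ''), ' ', '')
WHERE  phone LIKE '%-%'
   OR  phone LIKE '% %';
```

No rows ever leave the server, so there's nothing to serialize or ship.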
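For the staged temp-table approach, a T-SQL-flavored sketch (again with invented names; Oracle would use a global temporary table instead of the # syntax):

```sql
-- Copy just the rows that need work into a temp table...
SELECT id, raw_value
INTO   #work
FROM   big_table
WHERE  needs_fixing = 1;

-- ...apply the transformations one understandable step at a time...
UPDATE #work SET raw_value = LTRIM(RTRIM(raw_value));  -- trim whitespace
UPDATE #work SET raw_value = UPPER(raw_value);         -- fold case

-- ...then fold the results back into the real table in one pass.
UPDATE b
SET    b.raw_value = w.raw_value
FROM   big_table b
JOIN   #work     w ON w.id = b.id;

DROP TABLE #work;
```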
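For trimming what you pull across the wire, the idea is just to be explicit about both rows and columns (table and column names invented):

```sql
-- Fetch only the columns you'll use and only the rows you'll change,
-- instead of SELECT * against the whole 30GB table.
SELECT o.id, o.status, o.updated_at
FROM   orders    o
JOIN   customers c ON c.id = o.customer_id
WHERE  o.status = 'pending'
  AND  c.region = 'EU';
```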
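And for the bulk-load round trip, here's the reload half in MS SQL Server's dialect (the export half would be bcp or your database's unload tool; the path and table name are made up):

```sql
-- Reload the flat file your perl script produced.
BULK INSERT big_table_staging
FROM 'c:\work\big_table_fixed.dat'
WITH (
    FIELDTERMINATOR = '\t',  -- tab-separated, matching the export
    ROWTERMINATOR   = '\n',
    TABLOCK                  -- lock the table for a faster load
);
```

Sybase and Oracle have their own equivalents (bcp in, SQL*Loader), but the shape of the job is the same: export, munge the file, reload.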
If you want, feel free to share the transformations you're trying to do and I'll try to help with the SQL.
Update: Typo fix (s/to/too/).
When your only tool is a hammer, all problems look like your thumb.