continuously?! surely not?!!... perhaps you should post the code?
if divide-and-conquer is not applicable, then i have one other suggestion:
- open a local UNIX socket/pipe
- fork a predefined set of children (say, 8), and have each child open a DBI handle to mysql
- set up the parent process to loop over your insert data, spooling it to the socket the children are all listening on; meanwhile each of the children blocks in the $server->accept() call, waiting for the parent to send it data
- upon reading a chunk of data, a child enters the insert routine, then loops back to the accept
- continue until the parent runs out of data
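i don't have the exact recipe handy, but the shape of it looks roughly like this -- sketched in Python rather than Perl for brevity, and with the DBI insert replaced by a made-up `insert_rows` stub (the socket path, the `__DONE__` end-of-data marker, and the log file are all inventions for the sketch; in the real thing each child would hold an open mysql handle and INSERT there):

```python
import os
import socket
import tempfile

NUM_CHILDREN = 4                      # the predefined set of children
tmpdir = tempfile.mkdtemp()
sock_path = os.path.join(tmpdir, "spool.sock")
log_path = os.path.join(tmpdir, "inserted.log")

def insert_rows(chunk):
    """Stand-in for the real insert routine -- in the Perl version each
    child would hold an open DBI handle and INSERT here.  We just append
    the chunk to a log file (O_APPEND keeps concurrent writes whole)."""
    fd = os.open(log_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
    os.write(fd, chunk + b"\n")
    os.close(fd)

# parent opens the listening UNIX socket *before* forking, so every
# child inherits it and can block in accept() on the same socket
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(NUM_CHILDREN)

pids = []
for _ in range(NUM_CHILDREN):
    pid = os.fork()
    if pid == 0:                      # child: accept/insert loop
        while True:
            conn, _ = server.accept()
            data = b"".join(iter(lambda: conn.recv(4096), b""))
            conn.close()
            if data == b"__DONE__":   # parent ran out of data
                os._exit(0)
            insert_rows(data)
    pids.append(pid)

def spool(payload):
    """Parent side: one connection per chunk; whichever idle child
    wins the accept() race picks up the work."""
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(sock_path)
    c.sendall(payload)
    c.close()

rows = [b"row-%d" % i for i in range(10)]
for chunk in rows:                    # spool the insert data
    spool(chunk)
for _ in range(NUM_CHILDREN):         # one shutdown marker per child
    spool(b"__DONE__")
for pid in pids:                      # reap the workers
    os.waitpid(pid, 0)

with open(log_path, "rb") as f:
    inserted = sorted(f.read().split())
print(len(inserted))
print(inserted[0].decode())
```

the win, such as it is, is that each child keeps one long-lived database handle and the inserts proceed in parallel, while the parent stays a dumb spooler.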
an example of this approach (a pre-forking server, like apache) is in the Perl Cookbook, in the chapter on sockets or IPC; i don't have the book in front of me at the moment, though, so i can't give a specific recipe.
but you're probably right; unless you're loading >1GB of data (whether or not it needs transformation prior to insert), the straightforward approach is probably fast enough. ;-)