More importantly, multiprocessing large-volume inserts into a single DB will quite likely slow things down. Regardless of whether you are using forks or threads!
Think about what is happening at the DB server when you have multiple clients doing concurrent inserts or updates to the same tables and indexes. Regardless of what mechanisms the DB uses for locking or synchronisation, there is bound to be contention between the work being done by the server threads on behalf of those concurrent clients.
And if you have indexes, foreign keys, etc., then that contention compounds rapidly: every insert also has to update each index, and every foreign key has to be checked against its parent table. Add transactions into the mix and the locks are held until commit, so things get much slower very fast.
For mass updates, using the vendor's bulk insertion tool from a single process (preferably on the same box as the DB, and via named pipes rather than sockets if that is available) will always win hands down over trying to multiprocess the same updates. Always.
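If you do end up driving the load programmatically rather than shelling out to the vendor's loader, the same principle applies: one process, one connection, streaming everything through the server's bulk path instead of issuing row-by-row INSERTs from several clients. A minimal sketch, assuming PostgreSQL and the psycopg2 driver; the table, columns and DSN here are all hypothetical:

```python
import io
import psycopg2

rows = [(1, "alpha"), (2, "beta")]           # data to load (illustrative)

# Build a tab-separated buffer that COPY can consume in a single stream.
buf = io.StringIO()
for pk, name in rows:
    buf.write(f"{pk}\t{name}\n")
buf.seek(0)

conn = psycopg2.connect("dbname=mydb")       # hypothetical DSN
with conn, conn.cursor() as cur:
    # One COPY statement loads every row in one pass: no per-row round trips,
    # and no contention between multiple writers from the same job.
    cur.copy_expert("COPY items (id, name) FROM STDIN", buf)
conn.close()
```

The win comes from pushing all the rows through a single bulk stream rather than thousands of individual INSERT statements competing for the same locks and index pages.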
For best speed, lock the table for the duration of the insert. If possible, drop all the indexes, perform the insertion and then re-build them.
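To make that ordering concrete, here is a minimal sketch of the lock / drop-index / load / rebuild sequence, again assuming PostgreSQL and psycopg2; the table and index names are hypothetical:

```python
import psycopg2

def bulk_load(conn, rows):
    # Runs as one transaction: lock, drop the index, load, rebuild.
    with conn, conn.cursor() as cur:
        # Take an exclusive lock so no other writer interleaves with the load.
        cur.execute("LOCK TABLE items IN ACCESS EXCLUSIVE MODE")
        # Maintaining the index row-by-row is the expensive part; drop it first.
        cur.execute("DROP INDEX IF EXISTS items_name_idx")
        cur.executemany("INSERT INTO items (id, name) VALUES (%s, %s)", rows)
        # Rebuild the index once, in a single pass over the finished table.
        cur.execute("CREATE INDEX items_name_idx ON items (name)")

# usage (hypothetical DSN): bulk_load(psycopg2.connect("dbname=mydb"), rows)
```

Rebuilding the index at the end is one sorted pass over the finished table; updating it for every inserted row is not.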
If dropping the indexes is not possible (as you've mentioned elsewhere), then consider inserting the data into a non-indexed auxiliary table first, and then using a single SQL statement that runs entirely inside the DB to apply the updates to the main table from that auxiliary table. Again, lock both tables for the duration.
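A minimal sketch of that staging-table approach, under the same PostgreSQL/psycopg2 assumptions and with hypothetical table and column names:

```python
import psycopg2

def update_via_staging(conn, rows):
    with conn, conn.cursor() as cur:
        # Unindexed temporary staging table: cheap to fill, private to this session.
        cur.execute("CREATE TEMP TABLE staging (id integer, qty integer)")
        cur.executemany("INSERT INTO staging (id, qty) VALUES (%s, %s)", rows)
        # Lock the target, then let the server do the work in one set-based
        # statement instead of many client round trips.
        cur.execute("LOCK TABLE main IN ACCESS EXCLUSIVE MODE")
        cur.execute(
            "UPDATE main SET qty = staging.qty "
            "FROM staging WHERE main.id = staging.id"
        )

# usage (hypothetical DSN): update_via_staging(psycopg2.connect("dbname=mydb"), rows)
```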
Finally, bend the ear of, or employ, a good (read: expensive) DBA to set up your bulk insertion and update processing for you. A few days of a good DBA's time spent setting you up properly can save you money over and over and over. There is no substitute for the skills of a good DBA. Pick one with at least 5 years of experience with the specific RDBMS you are using. More than in most other programming fields, vendor-specific knowledge is of prime importance for a DBA.