I think the general consensus is that the problem is that the database becomes “one thing,” and a very big one-thing at that. The difficulty is a management issue, not a technical performance issue.
The traditional solution seems to be to store the data as separate files, in some kind of subdirectory structure, using a database table as the reference. However, this can cause other issues ... because it presupposes a shared network filesystem among the various servers. Sometimes you want to keep everything under the auspices of the database server.
One strategy that I have seen used is to maintain multiple tables of images. Each image is uniquely identified (e.g. by a UUID), and a master directory-table gives the (database name and) table-name where that image can be found. The application queries this table to find the image: it is an error for the key not to be found there. (Notice how the master directory-table can be rebuilt at any time if necessary, because of the use of globally-unique identifiers.)

This hybrid strategy is intended strictly to allow the image-data tables to be maintained at a more convenient size, while preserving “the database server” as the means of getting to the data. All of the “smarts” of doing this should, of course, be encapsulated into an opaque Perl object that knows how to Do The Right Thing.™
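The directory-table idea can be sketched as follows. (Again Python with sqlite3 rather than Perl, purely for illustration; the table names `image_directory`, `images_a`, `images_b` and the function names are hypothetical. In real use this logic would live behind the opaque object mentioned above.)

```python
import sqlite3


def find_image(conn, image_id):
    """Route a lookup through the master directory-table."""
    row = conn.execute(
        "SELECT table_name FROM image_directory WHERE id = ?", (image_id,)
    ).fetchone()
    if row is None:
        # It is an error for the key not to be found in the directory.
        raise KeyError(image_id)
    (table,) = row
    (data,) = conn.execute(
        f"SELECT data FROM {table} WHERE id = ?", (image_id,)
    ).fetchone()
    return data


def rebuild_directory(conn, image_tables):
    """Because ids are globally unique, the directory is disposable:
    it can be reconstructed from scratch by scanning the data tables."""
    conn.execute("DELETE FROM image_directory")
    for table in image_tables:
        conn.execute(
            f"INSERT INTO image_directory (id, table_name) SELECT id, ? FROM {table}",
            (table,),
        )
```

Everything still flows through the database server, but each `images_*` table stays at a convenient size, and `rebuild_directory` demonstrates why the master table is safe to lose.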