If you have “millions of rows,” the last thing you want to do is to send those millions of rows down a pipe to another computer, or even to another program on the same machine. You will want to do this in a stored procedure that is executed by the server, on the server. Furthermore, you will probably want this stored procedure to loop through the records returned by a query such as SELECT class, ip_address, hostname, MAX(timestamp) .. GROUP BY .., and, for each record found (each one representing a single group and providing the timestamp that should be kept), start a transaction, delete the qualifying records with a single DELETE, and then commit.
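A rough sketch of that procedure, in MySQL syntax (the table name `log_table` and the column types are assumptions; adjust them to your schema):

```sql
DELIMITER //
CREATE PROCEDURE purge_duplicates()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_class VARCHAR(64);
  DECLARE v_ip    VARCHAR(45);
  DECLARE v_host  VARCHAR(255);
  DECLARE v_keep  TIMESTAMP;
  -- one row per group, carrying the timestamp that should survive
  DECLARE grp CURSOR FOR
    SELECT class, ip_address, hostname, MAX(timestamp)
      FROM log_table
     GROUP BY class, ip_address, hostname;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN grp;
  read_loop: LOOP
    FETCH grp INTO v_class, v_ip, v_host, v_keep;
    IF done THEN LEAVE read_loop; END IF;
    START TRANSACTION;
    DELETE FROM log_table             -- one small transaction per group
     WHERE class      = v_class
       AND ip_address = v_ip
       AND hostname   = v_host
       AND timestamp  < v_keep;       -- everything older than the keeper
    COMMIT;
  END LOOP;
  CLOSE grp;
END //
DELIMITER ;
```

The one-transaction-per-group shape is the point: each DELETE touches only a handful of rows, so you never hold locks (or grow the undo/redo log) across the whole table.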
However, to do that efficiently, you will need to create an index on (class, ip_address, hostname), and I would do that first. You may well find that, once you have done this, a pure-SQL solution using a NOT IN query will yield acceptable performance. (Although it may be too fat to run as a single transaction.) The absence of such an index is probably why your existing approach runs too slowly ... and why every other one would, too.
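For instance, something like the following (again assuming the table is called `log_table`, and using MySQL, where the inner derived table is needed because you can't otherwise reference the target table in the subquery of a DELETE):

```sql
-- The index serves both the GROUP BY and the per-group lookups.
CREATE INDEX idx_log_group ON log_table (class, ip_address, hostname);

-- Pure-SQL alternative: remove every row whose timestamp is not
-- the maximum for its (class, ip_address, hostname) group.
DELETE FROM log_table
 WHERE (class, ip_address, hostname, timestamp) NOT IN (
         SELECT * FROM (
           SELECT class, ip_address, hostname, MAX(timestamp)
             FROM log_table
            GROUP BY class, ip_address, hostname
         ) AS keepers
       );
```

Just remember that this runs as one giant transaction, which is exactly the "too fat" risk mentioned above; the stored-procedure loop trades a little complexity for much smaller transactions.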