I'd build a hash keyed on the log entry date, with the single field value (presumably numeric) as the hash value.
As each line of the source log file is read, test for the existence of the key; if it's found, increment its value by the current line's field value, otherwise create a new key/value pair in the hash.
Sorting the hash after the file has been processed should be significantly less memory intensive, since you'll be sorting the summary records rather than the detail records.
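A minimal sketch of the idea in Python (the file layout here is an assumption — whitespace-separated lines with the date in the first field and the numeric value in the second; adjust the split/indexing to your actual log format):

```python
from collections import defaultdict

def summarize(path):
    """Aggregate one numeric field per date, then sort the summary."""
    totals = defaultdict(float)  # date -> running total
    with open(path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 2:
                continue  # skip malformed lines
            date, value = fields[0], fields[1]
            # Existing key: increment; missing key: defaultdict creates it at 0.
            totals[date] += float(value)
    # Sort the (small) summary records, not the detail records.
    return sorted(totals.items())
```

Because `totals` only ever holds one entry per distinct date, the final sort touches at most a few hundred or thousand items no matter how many millions of detail lines the log contains.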
That's my 2 cents for what it's worth.
in reply to Working with a very large log file (parsing data out)