Your sample data (provided elsewhere in this thread) shows lines of approximately 390 bytes, and your file is 500MB, so you have roughly 1.3 million lines to process. On my system, calling strftime "%M,%Y,%m,%d,%H,%j,%W,%u,%A", gmtime $Y 1.3 million times takes over 30 seconds.
If the first ten digits that make up "$Y" (the epoch seconds) repeat many times, you could cache on that value and avoid making 1.3 million calls to gmtime and another 1.3 million to strftime. But you're still calling split 1.3 million times, substr 1.3 million times, and so on.
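A minimal sketch of that caching idea, assuming $Y holds epoch seconds (the sub name date_fields and the cache hash are my inventions, not from your script): repeated timestamps cost only a hash lookup instead of a gmtime/strftime round trip.

```perl
use strict;
use warnings;
use POSIX qw(strftime);

my %cache;    # epoch seconds => formatted date-field string

sub date_fields {
    my ($epoch) = @_;

    # Compute the formatted string once per distinct timestamp;
    # subsequent calls with the same value hit the cache.
    return $cache{$epoch} //= strftime "%M,%Y,%m,%d,%H,%j,%W,%u,%A",
                                       gmtime $epoch;
}
```

How much this buys you depends entirely on how often consecutive lines share a timestamp; if every line has a distinct $Y, the cache only adds overhead.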
You may find it better to chunk the input file and process it with four workers, each writing its own output file, then cat the output files together. It's possible (though not certain) that with a sane number of workers, each doing its share of the work, this would run faster.
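One way to sketch that worker scheme with plain fork: split the file into byte ranges, nudge each worker onto a line boundary, and have each write part-N.out for a later cat. The file name, part-file naming, and process_line() (here just uppercasing) are placeholders for your real per-line work, not anything from your script.

```perl
use strict;
use warnings;

# Placeholder for the real per-line transformation.
sub process_line { my ($line) = @_; return uc $line }

sub process_in_chunks {
    my ($file, $workers) = @_;
    my $size = -s $file;
    my @pids;

    for my $i (0 .. $workers - 1) {
        my $start = int( $size *  $i      / $workers );
        my $end   = int( $size * ($i + 1) / $workers );

        defined( my $pid = fork ) or die "fork failed: $!";
        if ($pid == 0) {                          # child
            open my $in,  '<', $file         or die "open $file: $!";
            open my $out, '>', "part-$i.out" or die "open part-$i.out: $!";

            if ($start > 0) {                     # land on a line boundary
                seek $in, $start - 1, 0;
                read $in, my $c, 1;
                <$in> if $c ne "\n";              # mid-line: previous worker owns it
            }
            while (tell($in) < $end) {            # lines starting in [start, end)
                defined( my $line = <$in> ) or last;
                print {$out} process_line($line);
            }
            exit 0;
        }
        push @pids, $pid;
    }
    waitpid $_, 0 for @pids;
    return map { "part-$_.out" } 0 .. $workers - 1;
}
```

Afterwards a simple `cat part-*.out > combined.out` reassembles the output in order. The boundary test (peeking at the byte before $start) ensures each line is processed by exactly one worker even when a chunk boundary falls exactly on a newline.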