Problems? Is your data what you think it is?
To remove duplicates, store the second and third fields in a hash (as a key) when you read each line. Then print the line to file 1 or to file 2 depending on whether the key already exists in the hash.
Something like this (untested):
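For illustration, a sketch along those lines (the file names file1.txt and file2.txt and the sample data in @lines are placeholders for your actual files):

```perl
use strict;
use warnings;

# Hypothetical sample input; in the real script, read from your input file instead
my @lines = (
    "id1 x y rest\n",
    "id2 x y rest\n",   # same 2nd and 3rd fields as the line above: a duplicate
    "id3 p q rest\n",
);

open my $uniq, '>', 'file1.txt' or die "file1.txt: $!";
open my $dup,  '>', 'file2.txt' or die "file2.txt: $!";

my %seen;
for my $line (@lines) {
    my @temp = split / /, $line;     # split the line on single spaces
    my $key  = "$temp[1] $temp[2]";  # 2nd and 3rd fields form the key
    if ( $seen{$key}++ ) {
        print {$dup} $line;          # key already seen: duplicate goes to file 2
    }
    else {
        print {$uniq} $line;         # first occurrence goes to file 1
    }
}
close $uniq;
close $dup;
```

With this sample data, file1.txt ends up with two lines (the first occurrences) and file2.txt with one (the duplicate).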
The code could be made more concise, and the @temp array could be avoided, for example with syntax such as: $key = join " ", (split / /, $_)[1,2];
but I preferred to keep it simpler to understand.
Note that this will not work if your input file is so huge that the hash no longer fits in memory. The limit will depend on your system, but in general, anything below a million lines should work without problem on most current systems. If your file is larger (especially if it is much larger), you might need a different approach.
Similarly, for the order of the 6th column, store its value in a variable when you read a line, and compare the 6th column of the current line with that variable (i.e., the 6th column of the previous line).
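That rule can be sketched as follows (a minimal example assuming numeric values and an expected ascending order; the sample data in @lines is hypothetical):

```perl
use strict;
use warnings;

# Hypothetical sample input; in the real script, read from your input file instead
my @lines = (
    "a b c d e 1\n",
    "a b c d e 3\n",
    "a b c d e 2\n",   # 2 < 3: this line breaks the ascending order
);

my $prev;              # 6th column of the previous line
my @bad;               # line numbers where the order breaks
my $lineno = 0;
for my $line (@lines) {
    $lineno++;
    my @temp = split / /, $line;
    my $curr = $temp[5];                   # 6th column (index 5)
    if ( defined $prev && $curr < $prev ) {
        push @bad, $lineno;
        print "Line $lineno breaks the ascending order ($curr < $prev)\n";
    }
    $prev = $curr;                         # remember it for the next line
}
```

The first line has no predecessor, hence the defined test on $prev.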
I leave it to you to put the two rules together.