http://www.perlmonks.org?node_id=1011383


in reply to $/ usage

There's no need to slurp the file:

1) perl -nle '$result .= " $_"; END{print $result}' data.txt

2) perl -e "chomp(@lines=<>); print join ' ', @lines" data.txt

As for this:

Went with the following to do what I wanted:

perl -e 'undef $/; $text=<>; $text =~ tr/\n//; 1 while $text =~ s/\b( +\w+\d+\s*\d+\.\d+\.\d+\.\d+)\s*\1\b/$1/ig; print $text; $/="\n";' list

basically deletes the duplicate entries one after the other with "slurping"

1) As graff already pointed out:

If you were expecting the $/="\n"; at the end of your one-liner to do something, that's your problem. That step doesn't do anything.

Perl's global variables are reset to their defaults every time a perl program starts up, so assigning to a global variable on the last line of a program accomplishes nothing: once the program exits, every value assigned to a global variable during the run is gone.

2) Your regex doesn't work:

use strict;
use warnings;
use 5.010;

my $text = 'S55 1.1.1.1 S66 2.2.2.2 S55 1.1.1.1';
$text =~ s/\b(\w+\d+\s*\d+\.\d+\.\d+\.\d+)\s*\1\b/$1/ig;
say $text;

--output:--
S55 1.1.1.1 S66 2.2.2.2 S55 1.1.1.1

3) Why would you ever try to cram so much code into the command line when you can write a perl program in a text file that is easier to write, edit and maintain? In any case, see if this does what you want:

perl -nle '$results{$_}=undef; END{print join " ", keys %results}' data.txt

Note that the order of the IP addresses in the output will be random, because keys %results returns the hash keys in no particular order.