mdegges has asked for the wisdom of the Perl Monks concerning the following question:
Hi guys,
I'm a new Perl user, and I have been working on a Perl web crawler for a couple of weeks now. A few days ago I was really stuck on the normalize-url part, so I showed my friend my code and asked him if he had any ideas. He emailed me back with everything from the if (grep ...) part down to the bottom, but I have no idea what it's doing. I know grep takes the form grep BLOCK LIST and filters the list through the block, but I'm still confused about it and want to understand.
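To make the filtering behavior concrete, here is a minimal sketch (the @files list and the pattern are made up for illustration) of how grep BLOCK LIST works:

```perl
use strict;
use warnings;

# grep runs the block once per element (aliased to $_) and
# returns the elements for which the block evaluated to true.
my @files   = qw(index.html style.css index.htm logo.png);
my @matches = grep { /^index\./ } @files;
# @matches is now ('index.html', 'index.htm')

# In scalar context grep returns the number of matches,
# which is why it can be used directly as an if(...) condition.
my $count = grep { /^index\./ } @files;   # 2
```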
I think it might actually be better to just not use grep altogether, but is there a way of doing that?
Thanks for any help!

```perl
use File::Basename;

# list of filenames to normalize
my @index_file_names = qw(index.html index.htm index.php index.asp index.cgi);

sub normalize_url {
    my $old_url = $_[0];
    chomp($old_url);

    # saves name at the end
    my $filename = basename($old_url);

    if (grep { $_ eq $filename } @index_file_names) {
        # saves the directory part
        my $normalized_url = dirname($old_url);
        $normalized_url;
    } else {
        # don't need to normalize url
        $old_url;
    }
}
```
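One common way to avoid grep for a membership test (just a sketch, not taken from the replies below; the %is_index_file name and the example URLs are my own) is to build a hash once and check it with exists, which is a constant-time lookup instead of a scan of the list:

```perl
use strict;
use warnings;
use File::Basename;

# Hash whose keys are the index filenames; the values just need
# to be true. Built once, reused on every call.
my %is_index_file = map { $_ => 1 }
    qw(index.html index.htm index.php index.asp index.cgi);

sub normalize_url {
    my ($old_url) = @_;
    chomp $old_url;
    my $filename = basename($old_url);
    return exists $is_index_file{$filename}
        ? dirname($old_url)   # strip the index file, keep the directory
        : $old_url;           # nothing to normalize
}
```

For example, normalize_url('http://example.com/foo/index.html') would return 'http://example.com/foo', while a URL that does not end in one of the listed filenames comes back unchanged.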
Replies are listed 'Best First'.
Re: Is there a way around grep?
by toolic (Bishop) on Oct 11, 2011 at 01:20 UTC

Re: Is there a way around grep?
by BrowserUk (Patriarch) on Oct 11, 2011 at 03:01 UTC

Re: Is there a way around grep?
by ikegami (Patriarch) on Oct 11, 2011 at 02:35 UTC
  by suaveant (Parson) on Oct 11, 2011 at 15:53 UTC
  by AnomalousMonk (Archbishop) on Oct 11, 2011 at 22:42 UTC
  by ikegami (Patriarch) on Oct 15, 2011 at 08:26 UTC