Re: find common data in multiple files
by thanos1983 (Vicar) on Dec 28, 2017 at 10:18 UTC
Since you are not telling us what the problem is, e.g. whether the script does not run or simply does not produce the desired output, we cannot assist you at a quick glance.
A similar question, parse multiple text files keep unique lines only, was asked in the past, and you may find there a possible solution to your problem that many Monks have tackled elegantly.
Update: I just tried to execute your sample of code, and it does not run. It looks like you found the code somewhere, pasted it here, and asked someone to solve it for you. Can you show a minimum amount of effort, i.e. what you tried before asking, and make the script executable?
Update 2: I had some time to kill, so I put together a script that more or less does what you want. It reads all files from @ARGV and processes every line, then keeps only the lines that are common to all files. This assumes the lines are always identical and that there are no combinations; by combinations I mean cases where you only want to detect partially duplicated lines.
Sample of code:
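The original code block did not survive in this copy, so here is a minimal sketch of the approach just described: count how many of the files given in @ARGV each line appears in, and print only the lines seen in every file. It assumes no line is duplicated within a single file.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Count in how many of the input files each line appears.
# Assumption: a line never repeats inside the same file.
my %count;
my $files = @ARGV;

for my $file (@ARGV) {
    open my $fh, '<', $file or die "Cannot open '$file': $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        $count{$line}++;
    }
    close $fh;
}

# Keep only the lines that occurred in every file.
print "$_\n" for grep { $count{$_} == $files } sort keys %count;
```

Run it as e.g. `perl common_lines.pl file1.txt file2.txt file3.txt`; using a hash of counts keeps the whole job to a single pass over each file.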
Update 2 continued: In case you want to detect lines where only the $key or only the $value is duplicated, you can easily do it like this.
Sample of code:
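Again the original sample is missing here, so this is a hedged sketch of the variant described: it assumes each input line holds a key and a value separated by whitespace (the "key value" format is my assumption, not something stated in the thread), and uses duplicates() from List::MoreUtils to report keys or values that occur more than once.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::MoreUtils qw(duplicates);

# Assumption: each line looks like "key value", e.g. "hostname 10.0.0.1".
my ( @keys, @values );

while ( my $line = <> ) {
    chomp $line;
    my ( $key, $value ) = split ' ', $line, 2;
    push @keys,   $key;
    push @values, $value;
}

# duplicates() returns only the elements that occur at least twice.
print "Duplicate keys:   @{[ duplicates @keys ]}\n";
print "Duplicate values: @{[ duplicates @values ]}\n";
```

Feeding it the sample DATA files on the command line (`perl dup_fields.pl file1.txt file2.txt`) lets the diamond operator read them all in sequence.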
Update 2 continued: I used the module List::MoreUtils, and more specifically its function List::MoreUtils/duplicates, which "Returns a new list by stripping values in LIST occurring less than twice." The DATA I used comes from the sample DATA files that you provided us.
Hope this helps, BR.
Seeking for Perl wisdom...on the process of learning...not there...yet!