PerlMonks
I've been working on a CGI tool that takes a file, searches it for a particular pair of strings, and then optionally sorts the results, eliminating any duplicates. I thought I had everything working fine until someone ran an 80MB file through it, and his browser timed out before the search finished.
I'm looking for suggestions on the best way to approach this problem. This fellow isn't an isolated case: more people are going to need this tool to search files of this size, or even larger. I know I could read x lines at a time and display the results for just that section, but that breaks down if the user asks for sorted results with all duplicates removed. The tool also reports a count of the matches, which I couldn't easily provide with that method. Any suggestions?

In reply to Searching large files before browser timeout by aijin
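One way to keep memory flat on an 80MB file is to read it a line at a time and let a hash do the counting and duplicate removal in a single pass. The sketch below is only an illustration of that idea, not the OP's actual code: the pattern, the filehandle source, and the sub name are all made up for the example.

```perl
use strict;
use warnings;

# Hypothetical sketch: stream lines from a filehandle, count every match,
# and collect distinct matching lines in a hash. Memory grows with the
# number of *distinct* matches, not with the file size.
sub search_lines {
    my ($fh, $pattern) = @_;
    my %seen;            # matching line => occurrence count
    my $total = 0;       # total matches, duplicates included
    while ( my $line = <$fh> ) {
        if ( $line =~ $pattern ) {
            $seen{$line}++;
            $total++;
        }
    }
    # Sorted, duplicate-free results plus the overall match count.
    return ( $total, sort keys %seen );
}

# Usage (placeholder path and pattern):
# open my $fh, '<', 'big_input.log' or die "Can't open: $!";
# my ($count, @results) = search_lines($fh, qr/STRING_A.*STRING_B/);
```

This handles the memory side; the browser timeout itself is a separate issue (the search could still take longer than the client will wait), which would need something like a background job plus a results page the user revisits.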