PerlMonks |
One way to avoid the timeout is to send a results page to the browser immediately. We cheat a bit. First generate a unique temporary page, say temp12345.html, where we will write our results when they become available. Next send a 307 Temporary Redirect header back to the browser that points to this temp12345.html page. The temp page then appears in the user's browser window. Some text like "We are processing your job, please click refresh now and again to see if your job is complete!" will inform the user of what is happening. You are then free to process the job. All you need to do is write the result to your temp12345.html page, and when the user next presses refresh - voila, the result. No timeouts. Until the results are written, the user just gets the same "please wait" page with each refresh.

For elegance you could add a meta refresh so the page reloads every 10 seconds or whatever; then the user does not even need to worry about refreshing. When you write the result, drop the auto-refresh so the final page does not keep reloading.

merlyn explains the whole thing in detail (and with code) here

Hope this helps

tachyon

In reply to Re: Searching large files before browser timeout
by tachyon
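A minimal sketch of the temp-page half of the trick, for the curious. Everything here is illustrative rather than taken from merlyn's code: the sub name make_temp_page, the $TMP_DIR path, and the /tmp/ URL prefix are all assumptions you would adapt to your own server layout.

```perl
#!/usr/bin/perl
# Sketch of the "redirect to a temp results page" trick described above.
# make_temp_page, $TMP_DIR and the /tmp/ URL prefix are illustrative
# assumptions, not part of the original post.
use strict;
use warnings;
use File::Temp qw(tempfile);

my $TMP_DIR = '/var/www/htdocs/tmp';    # must be web-accessible (assumption)

# Create a unique "please wait" page (with a 10-second meta refresh)
# and return its filesystem path and web URL.
sub make_temp_page {
    my ($dir) = @_;
    my ($fh, $path) = tempfile( 'tempXXXXX', DIR => $dir, SUFFIX => '.html' );
    print $fh <<'HTML';
<html><head><meta http-equiv="refresh" content="10"></head>
<body>We are processing your job, please refresh now and again
to see if your job is complete!</body></html>
HTML
    close $fh;
    ( my $name = $path ) =~ s{.*/}{};    # strip directory, keep filename
    return ( $path, "/tmp/$name" );
}

# In the CGI handler you would then do something like:
#   my ($path, $url) = make_temp_page($TMP_DIR);
#   print "Status: 307 Temporary Redirect\r\nLocation: $url\r\n\r\n";
#   ... run the long job, then overwrite $path with the real results
#       (minus the meta refresh, so the final page stops reloading) ...
```

When the job finishes, overwriting the file at $path with the result page is all it takes; the user's next refresh picks it up.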