Re: Re: Re: Re: Optimising processing for large data files
by BrowserUk (Pope) on Apr 11, 2004 at 06:48 UTC
...nothing to do with whether you are using true garbage collection.
I never used the phrase "true garbage collection".
You also included a false assertion about when databases can give a performance improvement.
Wrong. To quote you: "Sure, databases would not help with this problem."
Consider the case where you have a very large table,...
No, I will not consider that case. That case has no relevance to this discussion, nor to any assertions I made.
My assertion, in the context of the post (re-read the title!), was:
If you have a large volume of data in a flat file, and you need to process that data in its entirety, then moving that data into a database will never allow you to process it faster.
That is the assertion I made. That is the only assertion I made with regard to databases.
Unless you can use some fairly simple criteria (simple enough to be encapsulated into an SQL query) to reduce the volume of data that the application needs to process, moving the data into a DB will not help.
No matter how you cut it, switch it around and mix it up. For any given volume of data that an application needs to process, reading that volume of data from a flat file will always be quicker than retrieving it from a DB. Full stop.
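To make the baseline concrete, here is a minimal sketch of the flat-file case being argued for: one sequential pass over every record, with nothing between the application and the bytes on disk. The file contents and record format are invented for illustration; the point is that any DB retrieving the same full volume adds per-row overhead (query parsing, row marshalling, client/server round-trips) on top of this same raw read.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Build a throwaway flat file of 100_000 records (illustrative data).
my ( $fh, $fname ) = tempfile( UNLINK => 1 );
print $fh "record $_\n" for 1 .. 100_000;
close $fh;

# The baseline: one sequential pass over the entire file.
open my $in, '<', $fname or die "open: $!";
my $count = 0;
while ( my $line = <$in> ) {
    # Per-record work goes here; a DB fetching the same rows
    # would incur its retrieval overhead before this point.
    $count++;
}
close $in;

print "$count\n";    # prints 100000
```

The same loop shape applies whatever the per-record work is; the argument is only about the cost of getting each record into the application in the first place.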
No amount of what-if scenarios will change that, nor correct any misassertion I didn't make.
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail