PerlMonks
Re: Optimising processing for large data files.

by Vautrin (Hermit)
on Apr 10, 2004 at 16:35 UTC ( #344131=note )


in reply to Optimising processing for large data files.

> Databases are never quicker unless you can use some fairly simplistic criteria to make wholesale reductions in the volume of the data that you need to process within your application program.

I recommend databases to people not because of any "database magic bullet" performance gain, but because they cut a lot of programming work out of the picture.

For instance, if you need a data structure that can persist outside of your program and be accessed and modified by multiple programs, all you have to do is create a table to represent it, and most of the work is done for you. SQL is all you need, and it comes easily to anyone familiar with it.
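To make that concrete, here is a minimal sketch of the idea in Perl using DBI. It assumes DBD::SQLite is installed; the database name, table name, and columns (counters, name, value) are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to an SQLite database. Using :memory: here so the sketch is
# self-contained; point dbname at a file to get real persistence that
# other programs can open too.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1, AutoCommit => 1 });

# One CREATE TABLE, and the data structure outlives any single run of
# your program -- no hand-rolled file format, locking, or parsing.
$dbh->do("CREATE TABLE IF NOT EXISTS counters (
              name  TEXT PRIMARY KEY,
              value INTEGER NOT NULL
          )");

# Any program with access to the database can write...
my $sth = $dbh->prepare(
    "INSERT OR REPLACE INTO counters (name, value) VALUES (?, ?)");
$sth->execute("hits", 42);

# ...and any other program can read the same structure back.
my ($value) = $dbh->selectrow_array(
    "SELECT value FROM counters WHERE name = ?", undef, "hits");
print "hits = $value\n";
```

All the persistence details (storage, concurrent access, atomic updates) are handled by the database engine rather than by code you have to write and debug yourself.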

However, if you roll your own persistence, it's going to take a lot of time, and you'll have to handle a lot of details that a database simply provides for you. Besides, if you're on a virtual host without root access, a database may be the only storage you have write access to anyway.
