RE: The (lack of) POWER of flat files

by gaggio (Friar)
on Jul 04, 2000 at 19:07 UTC



in reply to The (lack of) POWER of flat files
in thread DBI vs MLDBM/GDBM_File, etc.

You are right, BBQ, when you say that there are different tools for different jobs. But don't put words in my mouth: what I said is that flat files are fast depending on the use you make of them.

I should also have added that I am still a student, and I am not the administrator of a University website. In my case, I think that flat files are the solution, compared to huge-pain-in-the-ass-to-install DBMS systems. I never said that MySQL was not fast, and you are right that caching makes the overall performance acceptable.

But again, I am saying that the easiest solution for ZZamboni might be to keep the flat file format. *Might*, because he did not tell us everything about what kind of data it is and what use he wants to make of it.
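
For the simple kind of use I have in mind, a lookup is nothing more than a scan of a delimited text file. Here is a minimal sketch (the file name items.txt and the colon-delimited key:value layout are made up for illustration):

    #!/usr/bin/perl
    # Scan a colon-delimited flat file for one key and print its value.
    use strict;

    my $wanted = shift @ARGV;
    open(FH, '<', 'items.txt') or die "Can't open items.txt: $!";
    while (my $line = <FH>) {
        chomp $line;
        my ($key, $value) = split /:/, $line, 2;
        if ($key eq $wanted) {
            print "$value\n";
            last;                     # stop at the first hit
        }
    }
    close(FH);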

Father Gaggio

RE: RE: The (lack of) POWER of flat files
by BBQ (Curate) on Jul 04, 2000 at 20:38 UTC
    I am sorry if I misunderstood what you wrote. It is not my place, nor my intention, to distort what you would most sincerely recommend to a fellow monk! Just because I don't agree with you doesn't make me better or more correct. I just disagree with your views on flatfiles, that is all... What you did say, and what I disagree with, is:

    > 10,000 items to store. Well, why not having 10 files to store them?
    > The first file for the first 1000 items, and so forth. Speedwise, I
    > am telling you, you will end up with something a LOT faster than any
    > other big DB package or wrapper like all the DBI stuff. Because those
    > packages are, in fact, also using some big files I guess...

    My general DB and flatfile experience tells me that once a file exceeds 2500 records of about 400 chars each and takes more than one query per second, you are better off with a real DBMS. Yes, ZZamboni's easiest way out is probably going through flatfiles, but even in that case I would try doing something DBIsh.

    And while we are on that topic, splitting a large file into several smaller ones will not help at all (it will actually make matters worse) if you don't have some sort of clever indexing system. Splitting the data across different files does not increase lookup speed by itself, and you pay a penalty for having to open each one of those files to do a full search! Again, I would split the files only, and only!, if you have a good indexing mechanism and can't afford (money- or machine-wise) a DBMS. Most DBMSes already have clever indexing systems, so you don't have to reinvent the wheel.
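
    To illustrate what I mean by an indexing mechanism, here is a minimal sketch (the file name items.txt and the key:value layout are assumptions; a tied DB_File hash would give you the same index persistently):

        # Build the index once: key => byte offset of the record.
        use strict;

        my %index;
        open(FH, '<', 'items.txt') or die "Can't open items.txt: $!";
        my $offset = 0;
        while (my $line = <FH>) {
            my ($key) = split /:/, $line, 2;
            $index{$key} = $offset;   # where this record starts
            $offset = tell(FH);       # where the next record starts
        }

        # Every later lookup is a single seek instead of a full scan.
        sub lookup {
            my $key = shift;
            return undef unless exists $index{$key};
            seek(FH, $index{$key}, 0) or die "Can't seek: $!";
            my $record = <FH>;
            chomp $record;
            return (split /:/, $record, 2)[1];
        }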

    On a side note, caching won't make the performance merely acceptable, it will make it go through the roof!! There is simply no comparing disk access with RAM access.
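
    To make that concrete: even the most naive cache, a hash in front of the slow lookup, means each key touches the disk at most once (slow_lookup() below is just a stand-in for whatever flat-file scan or DBI query is being cached):

        use strict;

        my %cache;

        # Stand-in for the real disk or DBI work.
        sub slow_lookup {
            my $key = shift;
            return "value for $key";
        }

        sub cached_lookup {
            my $key = shift;
            $cache{$key} = slow_lookup($key) unless exists $cache{$key};
            return $cache{$key};      # repeat hits never touch the disk
        }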

    #!/home/bbq/bin/perl
    # Trust no1!
