
Re: Efficient processing of large directory

by MidLifeXis (Monsignor)
on Oct 02, 2003 at 17:25 UTC

in reply to Efficient processing of large directory

In addition to the while suggestion above, your suggestion to hash the files into subdirectories can be a good one, depending on the filesystem you use, especially if the system would otherwise have to read multiple disk blocks to find the file you are looking for.
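As a rough sketch of what "hashing the directories" might look like, here is a small Perl helper (hypothetical, not from the original post) that maps a filename to a two-level subdirectory based on its MD5 digest, so that no single directory grows large enough to spill across many disk blocks:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Map a filename into a two-level hashed subdirectory tree.
# Using the first four hex digits of the MD5 digest gives
# 256 * 256 buckets, keeping each directory small.
sub hashed_path {
    my ($base, $name) = @_;
    my $digest = md5_hex($name);
    my $d1     = substr($digest, 0, 2);
    my $d2     = substr($digest, 2, 2);
    return "$base/$d1/$d2/$name";
}

print hashed_path("/var/spool/data", "report-2003.txt"), "\n";
```

The mapping is deterministic, so a later open() can recompute the same path from the filename alone, without any lookup table.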

The same concept has been applied to mail spools (qmail), and it has also been suggested as a way to speed up access to home directories on hosts with large numbers of "users".

With the number of files you are considering, you probably want something along the lines of NUMFILES < SPLIT ** DEPTH, where SPLIT is the number of subdirectory entries that fit in one disk block, and DEPTH is how deep your directory structure goes. Once NUMFILES grows beyond that bound, the OS starts needing multiple directory-block reads to find the file you need to open.
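That inequality can be turned into a small Perl calculation (my own illustration, under the assumptions above) that estimates how deep the tree must be for a given file count and per-block fanout:

```perl
use strict;
use warnings;

# Smallest DEPTH such that NUMFILES < SPLIT ** DEPTH, i.e. the
# shallowest tree where each directory lookup stays within one block.
sub depth_needed {
    my ($numfiles, $split) = @_;
    my $depth = 1;
    $depth++ while $split ** $depth <= $numfiles;
    return $depth;
}

# e.g. a million files with 64 entries per block needs a 4-level tree
printf "%d\n", depth_needed(1_000_000, 64);   # prints 4
```

The SPLIT value itself depends on the filesystem's block size and directory-entry size, so in practice you would measure or look it up rather than guess.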

Add this to the while suggestion above, and you should be able to access each individual file (such as an open() call) as quickly (Update: adj -> adv) as the OS can handle it.

Of course, this is all IIRC, and it has been a while since I have applied this in my studies.

Now, this is all based on older filesystem designs (inode/UFS style, chained directory blocks, etc.). Newer filesystems with B-tree directory indexes (ReiserFS, for example) may not have this "problem" anymore.

Update: Fixed spelling mistakes
