
Re: Efficient processing of large directory

by MidLifeXis (Monsignor)
on Oct 02, 2003 at 17:25 UTC ( #295978 )

in reply to Efficient processing of large directory

In addition to the while suggestion above, and depending on the filesystem you use, your suggestion to hash files into subdirectories can be a good one, especially if the system otherwise has to read multiple disk blocks to find the file you are looking for.
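As a rough sketch of that idea (the helper name, digest choice, and two-level depth here are my own illustration, not anything from the original post): hash the filename and use the first few hex digits to pick a small, bounded subdirectory, so no single directory ever holds all the files.

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
use File::Path  qw(make_path);
use File::Spec;

# Hypothetical helper: map a filename to a two-level hashed
# subdirectory (e.g. "ab/cd/filename").  Each level uses two hex
# digits of the MD5, so each directory holds at most 256 entries.
sub hashed_path {
    my ($root, $name) = @_;
    my $h      = md5_hex($name);
    my @levels = (substr($h, 0, 2), substr($h, 2, 2));
    my $dir    = File::Spec->catdir($root, @levels);
    make_path($dir) unless -d $dir;
    return File::Spec->catfile($dir, $name);
}
```

The mapping is deterministic, so any process that knows the filename can recompute the path without a lookup table.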

The same concept has been applied to mail spools (qmail), and suggested to help speed up access to home directories on hosts with large numbers of "users".

With the number of files you are considering, you probably want to size the tree so that NUMFILES < SPLIT ** DEPTH, where SPLIT is the number of subdirectory entries that fit in one disk block, and DEPTH is how deep your directory structure goes. Once NUMFILES grows past that bound, each lookup starts needing multiple directory-block reads to find the file you want to open.
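Rearranging that inequality gives a quick sizing rule: DEPTH is the smallest integer with SPLIT ** DEPTH > NUMFILES, i.e. roughly ceil(log(NUMFILES) / log(SPLIT)). A small sketch (the function name and example numbers are mine, for illustration):

```perl
use strict;
use warnings;
use POSIX qw(ceil);

# Given an estimated file count and a fan-out (subdirectory
# entries per disk block), return the smallest depth satisfying
# NUMFILES < SPLIT ** DEPTH (up to the exact-power edge case).
sub needed_depth {
    my ($numfiles, $split) = @_;
    return ceil(log($numfiles) / log($split));
}

# e.g. one million files with a fan-out of 256:
# 256**2 = 65536 < 1_000_000 <= 256**3, so a depth of 3 suffices.
```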

Add this to the while suggestion above, and you should be able to access each individual file (such as an open() call) as quickly (Update: adj -> adv) as the OS can handle it.

Of course, this is all IIRC, and it has been a while since I have applied this in my studies.

Now, this is all based on older filesystems (inode/UFS-style designs with chained directory blocks, etc.). Newer filesystems with indexed directories (B-tree based designs such as ReiserFS) may not have this "problem" anymore.

Update: Fixed spelling mistakes
