XXX:/export/samfs-XXX01 /auto/XXX-01 nfs rw,nosuid,noatime,rsize=32768
Interesting; that mount should be reading in 32KB blocks. You would still see 4K reads with strace, though, since strace shows the application's read() calls rather than the NFS requests on the wire, and that might be throwing off your analysis. Try checking whether the output of nfsstat matches what you'd expect from strace. If you find that it actually is reading in larger blocks, your sysadmins can try increasing rsize.
Also, I seem to recall that you need NFSv3 to read blocks larger than 16K, so if you're not getting the full 32K you are asking for, you might want to look at that.
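If it helps, here's a quick way to summarize the read() sizes your application is actually issuing, by parsing strace output. This is just a sketch I threw together (the regex and the sample lines are fabricated for illustration, not anything standard):

```python
import re
from collections import Counter

# Histogram the request sizes in strace output, e.g. captured with:
#   strace -f -e trace=read yourapp 2> reads.log
# SAMPLE below is made-up data standing in for a real log.
SAMPLE = """\
read(3, "..."..., 4096) = 4096
read(3, "..."..., 4096) = 4096
read(3, "..."..., 4096) = 4096
read(4, "..."..., 32768) = 32768
"""

READ_RE = re.compile(r'read\(\d+, .*, (\d+)\)\s*=\s*(-?\d+)')

def read_size_histogram(lines):
    """Count how often each requested read() size appears."""
    hist = Counter()
    for line in lines:
        m = READ_RE.search(line)
        if m:
            hist[int(m.group(1))] += 1
    return hist

print(read_size_histogram(SAMPLE.splitlines()))
# Counter({4096: 3, 32768: 1})
```

Keep in mind this only tells you what the application asks for in each syscall; `nfsstat -c` is what shows you the READ ops that actually went over the wire.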
The readahead sounds intriguing. How would it work, if 200 clients tried to read the same file, though slightly offset in start time? Wouldn't read-ahead aggravate the server load in this case?
I'm not familiar with the internals of the Linux NFS code, but generally readahead reads into the page cache, and subsequent client requests are then served from there. As long as the server doesn't run out of memory, it should do the right thing in the scenario you describe.
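To make the caching argument concrete, here is a toy model, entirely made up and assuming an ideal cache that never evicts: 200 clients stream the same 100-block file with staggered start times, and only the first access to each block touches the disk.

```python
# Toy model: NUM_CLIENTS clients stream the same file, each starting
# one "tick" after the previous one. A shared server-side cache keeps
# every block it has seen (assumption: enough memory, no eviction).
NUM_CLIENTS = 200
NUM_BLOCKS = 100

cache = set()
disk_reads = 0

# At each tick, client c (which started at tick c) reads block tick - c.
for tick in range(NUM_CLIENTS + NUM_BLOCKS):
    for c in range(NUM_CLIENTS):
        block = tick - c
        if 0 <= block < NUM_BLOCKS:
            if block not in cache:
                disk_reads += 1   # first reader pays the disk cost
                cache.add(block)  # cached for everyone behind it

print(disk_reads)  # 100: one disk read per block, not 200 * 100
```

Of course the real question is whether the server's memory can hold the working set; if blocks get evicted between the first and the 200th reader, you fall back toward the worst case.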