Or do you see any serious use cases for it?
Dunno, I think that for random access to small files (say, under a megabyte) in situations where performance is not critical, the ease of implementation can still outweigh the cost. On the other hand, at least in my experience such files are rare. For example, when inserting lines somewhere, one usually has to scan the file to locate the insertion point anyway, so in such cases a while(<>) loop would still feel more natural to me than a linear search in an array. It might be a nice module to show off some of the power (Update: as in expressiveness / TIMTOWTDI, not speed) of Perl to newcomers, although then one risks the problem of "if all you have is a hammer, everything looks like a nail".
Of course ikegami has a good point. There is a huge difference between reading even a ~1MB file into an array and reading it with Tie::File:
$ cp -L /usr/share/dict/words /tmp/test.txt
$ wc -l /tmp/test.txt
$ du -sh /tmp/test.txt
$ time perl -MTie::File -e 'open F, "/tmp/test.txt" or die;
print `ps -orss $$`; my @x = <F>; print `ps -orss $$`'
$ time perl -MTie::File -e 'tie my @a, "Tie::File", "/tmp/test.txt";
print `ps -orss $$`; $a=$_ for @a; print `ps -orss $$`'