Unless it is reasonably possible that “the single thing that you are looking for” is ≥ 2 GB by itself, you will, one way or the other, be reading the file in more conveniently sized sections and dealing in some suitable way with the “fragments” left over at the end of each read. (You move the unused tail to the start of your buffer, read more data to fill the buffer up again, and keep going.) If you can identify a record separator to Perl via the $/ variable (it doesn’t have to be \n), Perl will even do a lot of the leg-work for you, using its own buffering scheme.
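A minimal sketch of that chunk-and-carry loop, assuming a hypothetical file name and newline-terminated records (substitute your own separator):

```perl
use strict;
use warnings;

my $file = 'big.log';    # hypothetical
open my $fh, '<:raw', $file or die "open $file: $!";

my $buffer = '';
while (read($fh, my $chunk, 8 * 1024 * 1024)) {   # 8 MB per read
    $buffer .= $chunk;
    # Peel off every complete record; whatever trails the last
    # separator stays in $buffer for the next pass.
    while ($buffer =~ s/^(.*?)\n//s) {
        my $record = $1;
        # ... handle $record here ...
    }
}
# $buffer now holds any final, unterminated fragment.
close $fh;
```

If your records end with a fixed string, letting Perl buffer for you is simpler still: set `local $/ = "your-separator";` and read with the ordinary `while (my $rec = <$fh>) { ... }` loop.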
One approach that is sometimes useful with very large static files is to memory-map them, e.g. with PerlIO::mmap (or any of the roughly 64 other packages a search for “mmap” on http://search.cpan.org turns up). This technique uses the operating system’s virtual-memory subsystem to do some of the dirty work for you, mapping a portion of the file (a movable “window” into it, of some chosen, well-under-2GB size) into the process’s virtual address space, which avoids copying. But you still can’t map “all of” a very large file at once.
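A sketch using the `:mmap` PerlIO layer (provided by PerlIO::mmap, and only available when perl was built with mmap support); the file name is hypothetical. Reads go through pages the OS maps in on demand, but the interface is just the ordinary filehandle one:

```perl
use strict;
use warnings;

my $file = 'big.data';   # hypothetical
open my $fh, '<:mmap', $file or die "open $file: $!";

# Read as usual; the layer handles the windowing underneath.
while (my $line = <$fh>) {
    # ... scan $line for whatever you are looking for ...
}
close $fh;
```

Because the mapping is managed by the layer, you never have to size or slide the window yourself, which is the main attraction over calling mmap directly.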