Personally, I try to avoid IPC. If IPC is necessary, I use either files in a directory, a database, or TCP, roughly in that order. Files in a directory have the nice advantage that I only need to worry about the server or the client at a time, and I can test things manually by moving files into the watched directory. A database is good for serializing access when more than one watcher wants to grab the same resource or task. I use TCP servers only if I am already doing some other TCP interaction, like HTTP requests or talking to a mail server. All these approaches are (fairly) network transparent, in the sense that as long as the directory is mounted everywhere, the processes can even run on different machines.
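To illustrate the database variant: the point is that claiming a task is a single atomic UPDATE, so two competing watchers can never grab the same row. A minimal sketch with SQLite (the `tasks` schema and worker names are made up for the example; in real use the database would live in a shared file, not in memory):

```python
import sqlite3

# Hypothetical task table; several watcher processes compete to claim
# pending tasks, and the database serializes access for us.
conn = sqlite3.connect(":memory:")  # a shared file path in real use
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, payload TEXT, claimed_by TEXT)")
conn.executemany(
    "INSERT INTO tasks (payload, claimed_by) VALUES (?, NULL)",
    [("task-a",), ("task-b",)])

def claim_task(conn, worker):
    # Atomically mark one unclaimed task as ours. The "AND claimed_by
    # IS NULL" guard means only one worker's UPDATE can win a given row.
    with conn:  # commits (or rolls back) the transaction
        row = conn.execute(
            "SELECT id FROM tasks WHERE claimed_by IS NULL LIMIT 1").fetchone()
        if row is None:
            return None  # nothing left to do
        cur = conn.execute(
            "UPDATE tasks SET claimed_by = ? WHERE id = ? AND claimed_by IS NULL",
            (worker, row[0]))
        return row[0] if cur.rowcount == 1 else None
```

A worker that loses the race simply gets `None` back and polls again, which is usually all the coordination such a setup needs.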
A Win32::Pipe has the advantage of being manually testable, because you can copy files to \\.\pipe\yourpipename, and your server can easily find out whether it has already been started. It only allows communication in the direction from the client to the server (I think), but you don't have many buffering problems. Win32::Pipe isn't as network transparent, though, unless you have administrator privileges and can set up the users so that they are allowed to access \\machine\pipe\yourpipename from other machines.
I haven't done much with pipes, and I find the file-based approach to be the simplest, so I would start with that.
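A minimal sketch of that file-based approach (the directory name and the `.task`/`.part` suffixes are made up for the example): the client writes to a temporary file and then renames it into the watched directory, so the watcher never sees a half-written task.

```python
import os
import tempfile

# Hypothetical watched directory; in real use both client and server
# would agree on a fixed path, possibly on a shared network mount.
QUEUE_DIR = tempfile.mkdtemp(prefix="ipc_queue_")

def submit(name, payload):
    # Write to a ".part" temp file first, then rename into place.
    # rename() is atomic on the same filesystem, so the watcher only
    # ever sees complete ".task" files.
    fd, tmp = tempfile.mkstemp(dir=QUEUE_DIR, suffix=".part")
    with os.fdopen(fd, "w") as f:
        f.write(payload)
    os.rename(tmp, os.path.join(QUEUE_DIR, name + ".task"))

def pending():
    # Server side: list the completed task files, in sorted order.
    return sorted(n for n in os.listdir(QUEUE_DIR) if n.endswith(".task"))
```

Manual testing then works exactly as described above: drop a file into the directory by hand and watch the server pick it up.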