good chemistry is complicated,
and a little bit messy -LW
We install Perl modules into repos that we deploy from. Rarely, we put pure-Perl modules into our main repo of source code, either because we expect to have to make progressive changes to the module or (even more rarely) because there is less overhead (and lead time) in deploying just the one repo.
Mostly, we install modules into a "Perl modules" repo which we deploy as part of setting up a system. Such repos have a separate directory for each platform we deploy to. We get the modules into the repo by checking out the appropriate subdirectory on an appropriate system and using whatever tools are appropriate to install the module into that subdirectory.
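To make that concrete, here is a rough sketch. The repo name, platform names, and module are all made up for illustration, and the cpanm line is shown as a comment because it needs the network and a real module (cpanminus is just one installer that can target a directory):

```shell
# Layout sketch only; repo and platform names are hypothetical.
REPO=perl-modules
mkdir -p "$REPO/linux-x86_64" "$REPO/freebsd-amd64"   # one subdirectory per platform

# On a build host for the platform, after checking out its subdirectory:
#   cpanm --local-lib-contained "$REPO/linux-x86_64" Some::Module
# which leaves files like:
#   perl-modules/linux-x86_64/lib/perl5/Some/Module.pm
# ready to test and check in.
ls "$REPO"
```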
Rarely, we fix bugs in a "Perl modules" repo. So it is good to have "deploy" just be "copy the files", not crazy things like "run this tool" or even "compile this code". Much more worrying would be a deploy that included "download this list of modules". It is also good that the work of getting a module installed only has to be done once per module per platform.
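In other words, deploy should be no smarter than this (paths and the stand-in file are hypothetical; rsync -a would do the same job where available):

```shell
# "Deploy" stays dumb on purpose: copy the checked-in platform subdirectory
# onto the target, nothing more.
mkdir -p perl-modules-repo/linux-x86_64/lib/perl5
echo 'package Stand::In; 1;' \
    > perl-modules-repo/linux-x86_64/lib/perl5/StandIn.pm   # stand-in file
mkdir -p /tmp/deploy-target                                  # hypothetical destination
cp -R perl-modules-repo/linux-x86_64/. /tmp/deploy-target/
```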
Unfortunately, the people maintaining these repos failed to keep track of which versions of which modules were included. So we have a backlog item to catalog that information (as a prerequisite for evaluating the impact of standardizing on a newer version of Perl yet again).
It is good that the repo tracks what files got changed when. They just don't directly track which module distribution(s) got installed that caused those files to be added or updated. But it is also true that not all changes are due to installing a new version of a module, so using a repo is important, IMHO.
"Perl module" repos tend to be specific to a particular product / team. We have a small number of approved builds of Perl that are in a "Perl binary" repo with a separate subdirectory per Perl build per platform. For example, we might have two approved builds of Perl 5.10.1, named perl-5.10.1-1 and perl-5.10.1-2, so we'll have a subdirectory for each supported platform for each of those builds.
Each team picks the build of Perl that they will use and so has "perl-5.10.1-2" in their project configuration files. So the team for the Fluff project ends up with fluff/bin/perl being a symbolic link to /site/perl/perl-5.10.1-2/bin/perl.
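The wiring is just one symbolic link per project. A sketch, using the paths from the example but created under /tmp so nothing real is touched:

```shell
# One symlink wires the project to its chosen approved build of Perl.
mkdir -p /tmp/site/perl/perl-5.10.1-2/bin
touch /tmp/site/perl/perl-5.10.1-2/bin/perl     # stand-in for the real perl binary
mkdir -p /tmp/site/fluff/bin
ln -sfn /tmp/site/perl/perl-5.10.1-2/bin/perl /tmp/site/fluff/bin/perl
```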
And that means that all the modules that get added to the fluff-perl-modules and fluff-perl-graphics and fluff-perl-mason module repos are installed using /site/fluff/bin/perl.
We will probably evaluate some tools for managing a stable of modules for the purpose of simplifying and/or automating: (1) keeping track of which versions of which modules are installed, and (2) populating the subdirectory for a new platform (or new build of Perl) with those same versions of those same modules.
I doubt we'll get to the point of automating the step of, when we have new version(s) of module(s) to add, "for each supported platform, go to an appropriate 'build host', check out the appropriate subdirectory, install the list of new module versions, test, check in".
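If we ever did automate it, it would look something like the loop below. The platform names, module list, and commands are all hypothetical, and this dry run only prints what would happen on each build host:

```shell
# Dry run of the manual procedure, one pass per supported platform.
MODULES='Some::Module@1.23 Other::Module@0.45'   # hypothetical new versions
for platform in linux-x86_64 freebsd-amd64; do
    echo "on the build host for $platform:"
    echo "  check out perl-modules/$platform"
    echo "  cpanm --local-lib-contained perl-modules/$platform $MODULES"
    echo "  run the tests, then check in"
done
```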
And I suspect that step (2) (as rarely as it happens) can require going to BackPAN to get the same version of the module as we're using on all of the existing platforms. We do that because upgrading a module version is much more likely to cause problems, IME, than upgrading Perl versions or switching platforms (now that I don't have to deal with AIX, HP-UX, or Solaris). And because we don't bother to archive the distribution tarballs that we downloaded (which is probably a mistake -- that's something that any tool attempting to address this problem space should be able to do automatically).
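If I understand its options right, cpanminus can already help with both halves of that: an exact release can be named by its author/distribution path (which CPAN serves, and BackPAN keeps even after deletion), and --save-dists archives every tarball it downloads. All names below are made up:

```shell
# The commented cpanm line is the interesting part; it isn't run here
# because it needs the network. SOMEAUTHOR and the distribution are fictional.
#
#   cpanm --save-dists ./dist-archive \
#         --local-lib-contained perl-modules/linux-x86_64 \
#         SOMEAUTHOR/Some-Module-1.23.tar.gz
#
# --save-dists keeps a copy of every tarball cpanm downloads, which is the
# archiving step we currently skip. Stand-in below for what it would leave:
mkdir -p dist-archive
touch dist-archive/Some-Module-1.23.tar.gz      # stand-in for the saved tarball
ls dist-archive
```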