I'm trying to implement parallelization in someone else's code that searches through a very large parameter space (i.e. all possible combinations of the possible values of the defined parameters).
Is it possible that you are solving an optimization problem? I.e., could you define a loss function that shows how far you are from your goal? If so, take a look at existing optimization libraries such as NLopt, which take that function and arrive at a solution much faster than an exhaustive search of the space. They do make assumptions about the "good behaviour" of said function (i.e. it should decrease reasonably as you approach the solution), and some of them require partial derivatives of this function with respect to all optimized variables (though you can sometimes use a numeric finite-difference approximation instead).
Unfortunately, there are no NLopt bindings for Perl right now (which may make most of my post moot). Twice unfortunately, most optimization algorithms are single-threaded by design (though there are exceptions). But if you rewrite your task in C or C++ (Fortran is unlikely to be the language of choice for bioinformatics), you could use OpenMP to launch several parallel optimizer runs with different settings.
NB: It is usually considered good practice to parallelize the outermost loops, not the innermost ones.