OK, I will try to describe the whole process:
So, I have a simple interface (a textarea) where the user enters a protein sequence (I work with biology data).
Step 1: read this sequence and run an external C program on it
Step 2: based on the results of the C program, run a Java program on them (neither the C nor the Java software can be altered; both must be used as-is)
Step 3: gather the results from both runs in a Perl script, do some calculations, and produce the final output
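If it helps, the three steps above could be sketched roughly like this in Perl — just a sketch of the flow, where `c_tool`, `JavaTool.jar`, and the file names are placeholders I made up, not the real binaries or paths:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical file holding the sequence taken from the textarea.
my $seq_file = 'input_sequence.txt';

# Step 1: run the external C program on the sequence (placeholder name).
system('./c_tool', $seq_file) == 0
    or die "C step failed: $?";

# Step 2: feed the C results to the Java program, used as-is (placeholder jar).
system('java', '-jar', 'JavaTool.jar', 'c_output.txt') == 0
    or die "Java step failed: $?";

# Step 3: gather both outputs, do the calculations, produce the final output.
open my $c_fh,    '<', 'c_output.txt'    or die "c_output: $!";
open my $java_fh, '<', 'java_output.txt' or die "java_output: $!";
# ... parse both files, calculate, and print the final result here ...
```

Checking the return value of `system` at each step matters here, since step 2 only makes sense if step 1 actually succeeded.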
The bottleneck in the whole process is that steps 2 & 3 can both take some time, and that time heavily depends on how many CPU cores I can dedicate to them. Since the server machine has 2 physical cores, I was thinking that I could (should?) devote both of them to a single user request at a time (after all, my web service is not Google, so I will not really get THAT many jobs at the same time...).
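One simple way to get that "one job at a time, with both cores" behavior — this is my suggestion, not something from the original setup — is to serialize the requests with an exclusive `flock` on a shared lock file, so each incoming request waits its turn:

```perl
use strict;
use warnings;
use Fcntl qw(LOCK_EX);

# Hypothetical lock file path; any path writable by the web server works.
open my $lock, '>', '/tmp/pipeline.lock' or die "lock file: $!";

# Blocks until no other request holds the lock, so only one
# pipeline run is active and it gets both cores to itself.
flock($lock, LOCK_EX) or die "flock failed: $!";

# ... run steps 1-3 here; concurrent requests queue up behind us ...

close $lock;    # releases the lock for the next queued request
```

The lock is also released automatically if the process dies mid-run, so a crashed job will not wedge the queue.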
Does it make more sense now maybe?