Re: Mixing Mysql and Perl
by diskcrash (Hermit) on Jan 12, 2004 at 04:29 UTC
In past projects I have weighed a few design considerations:
1. Where there is the luxury of separating parts of the engine (as in real-time applications), the first Perl task grabs data from a raw data source, munges it into useful fields, formats rows, and inserts the simplest possible rows into the MySQL table. This first stage often has nasty time-dependent characteristics, so in past apps I had to keep its processing light. The second Perl task comes along at its leisure, reads the rows, formats graphics and reports or sorts data, and builds up output for web pages or other MySQL tables. So here the timing requirements drove the architecture and divided the Perl/MySQL tasks.
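To make the "first task" concrete, here is a minimal sketch of the munge-and-insert stage. The field layout, row format, and table are assumptions for illustration; in the real task the collected rows would go to MySQL through a DBI prepared statement (shown in a comment) rather than into memory.

```perl
#!/usr/bin/perl
# Sketch of the lightweight ingest task: munge raw lines into the
# simplest possible rows. Field names and layout are illustrative.
use strict;
use warnings;

# Split one raw comma-separated line into a simple row arrayref.
sub munge_line {
    my ($raw) = @_;
    chomp $raw;
    my ($ts, $sensor, $value) = split /,/, $raw, 3;
    return [ $ts, $sensor, $value + 0 ];   # numify the reading
}

# In the real task each row would be inserted via DBI, e.g.:
#   my $sth = $dbh->prepare(
#       'INSERT INTO readings (ts, sensor, value) VALUES (?, ?, ?)');
#   $sth->execute(@$row);
# Here rows are collected in memory to keep the sketch self-contained.
sub load_rows {
    my (@raw_lines) = @_;
    return map { munge_line($_) } @raw_lines;
}

my @rows = load_rows("2004-01-12 04:29,probe1,3.5\n",
                     "2004-01-12 04:30,probe1,3.7\n");
printf "%s %s %.1f\n", @$_ for @rows;
```

The point of keeping this stage so thin is that it can keep up with the raw source; anything expensive (sorting, graphing, report formatting) belongs in the second, leisurely task.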
2. The second design driver was failure modes. If a task or system fails, can the rest of the system still carry forward to some degree and provide value until the failed part is restored? So one task might live on machine 1, write data to machine 2, and a task on machine 3 presents it. If 1 fails, users can at least get to existing data. If 3 fails, 1 is still recording data, even if it is not available for presentation. You might provide redundant "2" machines so there is no single point of failure. Professional systems use message-queueing tools to make this process even more reliable.
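One way to sketch that failover behavior: try each redundant "machine 2" writer in turn, and if none is reachable, spool the row to local disk so machine 1 never stops recording. The writer subs and the spool path here are stand-ins for real DBI handles on the redundant machines, not anyone's production setup.

```perl
#!/usr/bin/perl
# Sketch of failure-tolerant writes with a local spool fallback.
# Writers are coderefs standing in for DBI handles on redundant
# "machine 2" hosts; names and spool path are assumptions.
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($spool_fh, $spool_file) = tempfile();  # stand-in spool file

sub write_row {
    my ($row, @writers) = @_;
    for my $w (@writers) {
        # Trap a dead machine and move on to the next writer.
        my $ok = eval { $w->($row); 1 };
        return 'stored' if $ok;
    }
    # No writer reachable: spool locally so the recorder keeps
    # running while the storage machines are restored.
    open my $fh, '>>', $spool_file or die "spool: $!";
    print {$fh} join(',', @$row), "\n";
    close $fh;
    return 'spooled';
}

# Demo: the first writer "fails", the second succeeds.
my @stored;
my $down = sub { die "machine 2a unreachable\n" };
my $up   = sub { push @stored, $_[0] };
my $status = write_row([ 't0', 'probe1', 3.5 ], $down, $up);
print "$status\n";   # the row landed on the second writer
```

A real message queue does the same job more robustly (persistent spooling, replay, acknowledgements), which is why the professional systems reach for one.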
3. The final reason for task separation is scalability. You might be able to add additional back-end, middle, or front-end systems that divide and process the traffic as demand grows. If you start "divided," it's easier to grow like parts in parallel.
Let us know what else you discover - also code it and try it!