Design question: handling hundreds of state machines in a Web context
by Anonymous Monk
on Jan 02, 2013 at 16:24 UTC
Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
I am not allowed to describe the use case and it will sound weird, but we're trying to figure out the best way of handling thousands of small, event-driven state machines in a Web environment.
We have a standard Moose, Catalyst, DBIx::Class, TT, Postgres stack and the potential for hundreds or thousands of users using the site at the same time. Each user can have one or more "workflows" attached to a given profile. Users choose which workflows they need or don't need; they'll have maybe three to five workflows at a time. Each workflow is a small state machine that should transition when the user takes a given action (which we're calling "events", so the term "event-driven" above may be misleading), such as checking out a document, editing it, visiting a new section of the site, etc.
We're trying to figure out the best way of writing this. Our primary concerns are performance and accuracy. It's better that the Web site crash than to have inaccurate workflows. We've considered writing a small state-machine module, representing each state machine as JSON, and loading all of them into memory at once. With thousands of workflows, maybe this works, maybe it doesn't?
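To make the JSON idea concrete, here is a minimal sketch of what one such workflow might look like and how an event would drive it. The state names, event names, and JSON shape are all made up for illustration; the only deliberate design point is dying on an illegal transition, matching the "better to crash than be inaccurate" requirement above.

```perl
use strict;
use warnings;
use JSON::PP;   # core module since Perl 5.14

# Hypothetical workflow definition; in practice this would come
# from the database or a config file, one document per workflow.
my $workflow = decode_json(<<'JSON');
{
  "name"  : "document_review",
  "state" : "new",
  "transitions" : {
    "new"         : { "checkout" : "checked_out" },
    "checked_out" : { "edit" : "editing", "checkin" : "new" },
    "editing"     : { "save" : "checked_out" }
  }
}
JSON

# Apply an event to a workflow, dying rather than silently
# accepting a transition the table does not allow.
sub apply_event {
    my ($wf, $event) = @_;
    my $next = $wf->{transitions}{ $wf->{state} }{$event}
        or die "Illegal event '$event' in state '$wf->{state}' "
             . "for workflow '$wf->{name}'\n";
    $wf->{state} = $next;
    return $next;
}

apply_event($workflow, 'checkout');   # new -> checked_out
apply_event($workflow, 'edit');      # checked_out -> editing
```

A structure like this is cheap enough that thousands of instances in memory should not be a problem by itself; the harder question is keeping the in-memory state and the database in sync across requests.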
More importantly, what's the best way to trigger an "event"? Are we going to have to hardcode all of the actions everywhere? I suppose we could start applying roles at runtime to everything, but we're unsure of the best approach. We're not asking you to write the code for us, but any speculation would be helpful.
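One alternative to hardcoding actions everywhere is a broadcast model: each controller action fires a single named event once, and every workflow attached to the current profile decides for itself whether that event applies. A minimal sketch, with the engine and its method names entirely hypothetical:

```perl
package Workflow::Engine;
use strict;
use warnings;

sub new    { bless { workflows => [] }, shift }
sub attach { push @{ $_[0]{workflows} }, $_[1] }

# Broadcast a named event to every attached workflow.  Workflows
# with no transition for this event in their current state simply
# ignore it, so controllers never need to know which workflows care.
sub fire {
    my ($self, $event) = @_;
    for my $wf (@{ $self->{workflows} }) {
        my $next = $wf->{transitions}{ $wf->{state} }{$event};
        $wf->{state} = $next if defined $next;
    }
}

1;
```

With something like this, a Catalyst action only ever calls e.g. `$engine->fire('checkout')`, and new workflows can be added without touching the controllers; runtime role application (e.g. via Moose::Util) would be another way to get the same decoupling, at the cost of more metaclass machinery per request.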