PerlMonks  

Multithreaded (or similar) access to a complex data structure

by FloydATC (Chaplain)
on Nov 06, 2011 at 20:00 UTC ( #936331=perlquestion )
FloydATC has asked for the wisdom of the Perl Monks concerning the following question:

Esteemed monks,

I have a relatively large data structure made up of interconnected object "nodes" that represent all the physical devices in a network spanning many sites of various sizes.

These nodes are updated all the time by daemons collecting information such as ping status, response times, SNMP interface data and syslog data, as well as by operator actions that change the node structure by adding, moving and removing devices and connections.

The data structure allows spanning tree loops (because the physical network does) and allows me to do complex searches like determining the overall status of groups of sites, finding common root sources etc. very fast.

At the end of the pipeline sits an HTML/SVG engine that uses HTTP::Server::Simple::CGI to present everything as a web site with full drag/drop support, AJAX menus and the works. A typical request takes only a few milliseconds to serve.

So, what's the problem? As long as the requests are few and queries simple this is good enough but the current solution does not scale. At all. The problem is that my web server process is single threaded, because I can not for the life of me figure out a GOOD way to make it multithreaded:

Reason 1) DBI/MySQL does not work well with multithreading, so live updates would be tricky. The current design uses triggers on the large tables that get updated a lot; these triggers leave "clues" in a dedicated table, telling the server process which records have changed and need to be refreshed. This means updates get picked up without having to search for them, but it's not ideal. Some data, like syslog messages, is simply too large to keep in memory and must be fetched from the database every time it's needed.
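The polling side of that trigger/"clues" scheme could look something like this minimal sketch. Here fetch_clues() and refresh_node() are invented stand-ins for the real DBI calls, and the record ids are made up:

```perl
use strict;
use warnings;

# Sketch of the trigger/"clues" scheme. In the real system fetch_clues()
# would SELECT and then DELETE rows from the dedicated clues table via DBI,
# and refresh_node() would re-read the changed record; both are stand-ins
# here, and the ids are made up.
my @clue_queue = (17, 42);    # pretend the triggers left these record ids behind

sub fetch_clues {
    my @clues = @clue_queue;
    @clue_queue = ();         # a real implementation would DELETE the claimed rows
    return @clues;
}

sub refresh_node {
    my ($id) = @_;
    print "refreshing node $id\n";   # would re-fetch the record and update the object
}

refresh_node($_) for fetch_clues();  # one polling pass: only changed records are touched
```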

Reason 2) Copying the entire data structure is slow and difficult, because it's very deep and very recursive. Certainly too slow to do for each and every HTTP request. Besides, any changes made to the structure of a copy would have to somehow find their way back to the master and get distributed to the other copies.

I imagine there must be a way to have, say, a dozen copies of the same data structure, each running in its own process. Then, if an update comes in, somehow replicate that change to all of the others. If the HTTP bit scaled well enough, the ping/snmp/etc daemons could be rewritten to post their updates using HTTP and I could do away with the whole insane trigger/polling system.

Has anyone done something like this before? What tools or techniques did you use? Some kind of n-way realtime data replication, serializing stuff and sending it via sockets or whatever?
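For the "serializing stuff and sending it via sockets" option, the core Storable module can provide the wire format. A minimal sketch of the round trip each replica would perform; the message fields and node id are invented for illustration:

```perl
use strict;
use warnings;
use Storable qw(nfreeze thaw);   # nfreeze uses network byte order, safe across machines

# Hypothetical update message: the field names and node id are made up,
# just to show the serialize/deserialize round trip a replica would do.
my $update = {
    node_id => 'sw-oslo-01',
    field   => 'ping_status',
    value   => 'down',
};

my $wire = nfreeze($update);   # byte string: write this to the socket
my $copy = thaw($wire);        # each receiving replica rebuilds the same hash

print "$copy->{node_id}: $copy->{field} = $copy->{value}\n";
```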

This is a pet system I develop in my spare time and rely heavily on at work. The current system gets the job done so it doesn't matter if I have to spend a year rewriting version 2 from scratch to get it perfect.

Update: DBI, not CGI stupid.

-- Time flies when you don't know what you're doing

Comment on Multithreaded (or similar) access to a complex data structure
Re: Multithreaded (or similar) access to a complex data structure
by BrowserUk (Pope) on Nov 06, 2011 at 21:43 UTC

    Up front. I've done very little CGI/webserver stuff at all.

    That said, I think that you are breaking the cardinal rule of presentation-layer work by mixing your presentation-layer code and your 'application logic' code together in your CGI code.

    (IMO) your application should be split into two distinct parts:

    1. The Application Logic.

      This is a process that talks to your DB (and any other places it gets information from) to build and maintain your data structure.

      And periodically -- once a minute, or once every 10 seconds, or whatever frequency makes sense -- uses the information it holds to write a formatted web page to the web server's filesystem.

    2. The CGI script (URL responder).

      This is a standard CGI or mod_perl script that, when it receives a connection, just reads the latest formatted page from the file system and sends it to the browser.

    By disconnecting the production of the page from the connections to the webserver, you ensure that no matter how thick and fast the connections arrive, you can deal with them in a timely manner while imposing minimal load on your critical, public-facing front end, because it is in effect just responding with a static page: the latest updated version available. It also ensures that you don't hammer your DB requesting the same information over and over for every page request that arrives.

    Indeed, you could do away with the CGI completely and serve the web page directly from the file system, though I don't know how the webserver would handle it if the file was in the process of being written when a request came in.

    And by disconnecting the production of the page from inbound requests, you are free to choose the frequency with which you update that page -- whether that be every few seconds, when a significant event occurs, or whatever makes sense for your application. And you only need to maintain one copy of your complex data structure in memory.
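One conventional answer to the half-written-file concern is to write the page to a temporary file and rename() it into place: rename is atomic on POSIX filesystems, so readers only ever see a complete page. A sketch of the writer half, where the state hash and filename are invented:

```perl
use strict;
use warnings;

# Writer half of the scheme: render the current state to a temp file, then
# rename it over the live page. rename() is atomic on POSIX filesystems, so
# the webserver (or CGI script) never reads a half-written page.
# The state hash and the filename are invented for illustration.
sub write_page {
    my ($state, $file) = @_;
    my $tmp = "$file.tmp.$$";                       # per-process temp name
    open my $fh, '>', $tmp or die "open $tmp: $!";
    print {$fh} "<html><body>up: $state->{up}, down: $state->{down}</body></html>\n";
    close $fh or die "close $tmp: $!";
    rename $tmp, $file or die "rename $tmp -> $file: $!";
}

write_page({ up => 41, down => 2 }, 'status.html');
```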


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Up to a hundred changes can happen to the data structure every few seconds and there are so many ways to view it, this would mean literally hundreds of thousands of pages that would have to be cached every few seconds just in case some operator decided to have a look at it from just that perspective. (And that's just with the current data set, which is expected to expand tenfold over the next few years)

      Presentation logic is built into each object node because only "it" knows how to present itself based on information about the connected nodes and its own capabilities. For instance, if there is a problem somewhere in the network that affects "it" in one of many ways, "it" will show a warning color. This warning state propagates to other nearby nodes like site/area status pages, dynamic maps and so on. Except when an operator has told "it" not to, e.g. because a device is in maintenance mode.

      I try not to think of this in terms of web pages but rather as a visual representation/manipulation of a dynamic data structure using SVG glued together with HTML.

      It works beautifully. If only I could find a way to do it in parallel, it would be awesome.
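The propagation described above might look something like this toy sketch. The class, field and method names, and the loop guard are all invented for illustration; the real objects are certainly richer:

```perl
use strict;
use warnings;

package Node;
# Toy version of a node that presents itself based on its neighbours:
# a 'down' node shows up as 'warning' on connected nodes, unless an
# operator has put the faulty node in maintenance mode. All names here
# are invented for illustration.
sub new {
    my ($class, %args) = @_;
    return bless { %args, down => 0, maintenance => 0, neighbours => [] }, $class;
}
sub link {
    my ($self, $other) = @_;
    push @{ $self->{neighbours} },  $other;
    push @{ $other->{neighbours} }, $self;
}
sub status {
    my ($self, $seen) = @_;
    $seen //= {};
    return 'ok' if $seen->{$self}++;       # loop guard: the graph may contain cycles
    return 'ok' if $self->{maintenance};   # operator override masks any fault
    return 'down' if $self->{down};
    for my $n (@{ $self->{neighbours} }) {
        return 'warning' if $n->status($seen) ne 'ok';
    }
    return 'ok';
}

package main;
my $core = Node->new(name => 'core-switch');
my $edge = Node->new(name => 'edge-switch');
$core->link($edge);
$core->{down} = 1;
print $edge->status(), "\n";    # 'warning': a connected node is down
$core->{maintenance} = 1;
print $edge->status(), "\n";    # 'ok': the fault is masked by maintenance mode
```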

        (And that's just with the current data set, which is expected to expand tenfold over the next few years)

        And therein lies the root of your problem. It's not just your implementation that doesn't scale, but your design.

        You are running a very complex simulation representing many discrete, real-world entities that communicate asynchronously. And you are running that simulation on a single processor that has to iterate its way over a single, huge God-object in order to simulate all those asynchronous communications and concurrent state changes.

        If you attempt to then further complicate that by having multiple threads concurrently iterating their way through that God-object deriving their own particular views of it, they will need to apply locks either universally or to discrete chunks of the data structure in order to derive a consistent 'snap-shot in time' view of the entire entity.

        Each new viewer will therefore be impinging and impeding every other viewer, and as the number of viewers grows, so the time spent waiting for locks will grow exponentially. And as the size of the network grows, so the time required to perform a single traversal -- whether to update the status of the nodes as the effects of the state changes of individual nodes ripple through to their neighbours, and then to their neighbours, and then to theirs; or in order to gather a particular view of the instantaneous state of the network for a particular viewer -- grows exponentially as well.

        Sorry to be the bearer of bad tidings, given that you've almost certainly expended considerable effort on achieving what you have already, but your design is unsustainable. Adding further complexity to it -- in the form of concurrent viewers, with all the additional lock contention and the potential for cascading deadlocks they represent -- will only further hamper your ability to run the simulation in anything like real time.

        The only realistic way forward is to:

        1. Distribute your network representation across multiple processors.

          I.e. break your God-object into manageable chunks, each maintained by a different processor.

        2. Distribute the view building.

          By divorcing the presentation logic from the state maintenance.

          I encourage you to think about how Google et al. manage to respond to millions of viewers to their simulations of the entire web in real time. They distribute the state of their model (and the maintenance of that state) across a huge farm of processors. And then service view requests by having front-end processors request the relevant state from that back-office farm before gathering it all together to build their particular view.
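The first of those two steps might start as simply as hashing each node id to a shard, so that every worker process maintains only its own chunk of the graph. A back-of-envelope sketch, where the hash function, node ids and shard count are all invented:

```perl
use strict;
use warnings;

# Sketch of distributing the network representation: assign each node to
# one of N worker processes by hashing its id. The hash function, node
# ids and shard count are all invented for illustration.
sub shard_for {
    my ($node_id, $n_shards) = @_;
    my $h = 0;
    $h = ($h * 31 + ord $_) % 2_147_483_647 for split //, $node_id;
    return $h % $n_shards;
}

for my $id (qw(sw-oslo-01 rtr-bergen-02 ap-oslo-17)) {
    printf "%-14s -> worker %d\n", $id, shard_for($id, 4);
}
```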

        That is a bitter pill to deliver, and far worse to swallow, but I seriously believe it is your only way forward. Sorry. I wish you the very best of luck.


Re: Multithreaded (or similar) access to a complex data structure
by Sigil (Novice) on Nov 07, 2011 at 02:09 UTC

    That sounds cool

    What about storing the tree in a Berkeley DB? Serialize each object with Storable, then load a DB wrapper into each thread and use Berkeley DB's Concurrent Data Store feature. Maybe use a materialized path, like SNMP OIDs, for the data store keys, each key representing a device? Maybe a record to track the path to each change as events come in? Or a thread queue for each event?

    Just a thought.
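With the BerkeleyDB module, the suggestion above would mean tying a hash to a BerkeleyDB::Btree opened with the Concurrent Data Store flags. A sketch of the key/value shape, using a plain hash as a stand-in for the tied one; the path key and device fields are invented:

```perl
use strict;
use warnings;
use Storable qw(nfreeze thaw);

# Shape of the suggested store: one record per device, keyed by a
# materialized path (like an SNMP OID), with the value serialized by
# Storable. A plain hash stands in for the hash you would tie to a
# BerkeleyDB::Btree with the CDS flags; the key and fields are invented.
my %store;

my $device = { name => 'sw-oslo-01', ping => 'ok' };
$store{'1.3.2.7'} = nfreeze($device);    # store the node under its path key

my $back = thaw( $store{'1.3.2.7'} );    # any thread can rebuild the object
print "$back->{name}: ping $back->{ping}\n";
```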

      I'll have to look into this feature of Berkeley DB, I've never heard about it before. If this allows me to keep a "central database" in memory, accessible by "child threads" whose purpose is to access and invoke object methods in the data structure, it might just be a way to go. Sending back changes can be done in any number of ways, of course.

