PerlMonks  

Re: Saving and Loading of Variables

by idsfa (Vicar)
on Jul 18, 2006 at 16:00 UTC ( #562048=note )


in reply to Saving and Loading of Variables

I don't quite follow your phrase "hashes of hashes of combinations". If the order of testing matters, that structure would not retain the ordering; if it does not matter, the structure seems to generate redundant information.

It seems to me that you have a list of one or more rules, which a message either passes or does not:

+---------------+-----------+
| Combination   | Failures  |
+---------------+-----------+

where Combination is a (stringified) list of one or more rules which, when combined, cause a message to fail. A corresponding Perl structure is:

$data{"@combination"} = $failure_rate;
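As a runnable sketch of that structure (the rule names and the use of a counter rather than a precomputed rate are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical illustration of the structure above; the rule names are
# invented.  Sorting the combination normalizes the hash key, so the
# order in which the rules were tested cannot produce two different
# keys for the same set of rules.
my %data;
my @combination = sort qw(URIBL_SBL BAYES_99);
$data{"@combination"}++;    # one more message failed this combination

print "$_ => $data{$_}\n" for keys %data;    # BAYES_99 URIBL_SBL => 1
```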

The intelligent reader will judge for himself. Without examining the facts fully and fairly, there is no way of knowing whether vox populi is really vox dei, or merely vox asinorum. — Cyrus H. Gordon


Re^2: Saving and Loading of Variables
by madbombX (Hermit) on Jul 18, 2006 at 17:10 UTC
    When a message fails, I get the following line in my log file:

    Jul 18 00:36:32 mail amavis[26338]: (26338-01) SPAM, <beachmiro@autoxray.com> -> <postmaster@example.com>, Yes, score=17.675 tag=2 tag2=5.4 kill=13.5 tests=[BAYES_99=3.5, HTML_50_60=0.134, HTML_MESSAGE=0.001, SPF_HELO_SOFTFAIL=2.432, SPF_SOFTFAIL=1.384, URIBL_JP_SURBL=4.087, URIBL_SBL=1.639, URIBL_SC_SURBL=4.498], autolearn=no, quarantine A5QG0LjkvtcT (spam-quarantine)

    I pull out the tests, which start at [ and end at ]. The first thing I want to see is how many times each test was failed (meaning that URIBL_SBL was failed once, so increment that count, and so on). BAYES_XX (99 in this case) is tagged on EVERY message that gets tagged (BAYES_XX is a special tag in SpamAssassin). So my other combinations will be {BAYES_XX + URIBL_SBL}++ and the same for all other test failures.
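    A sketch of that extraction step, assuming the log format shown above (the line here is abbreviated from the example in the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Grab everything between "tests=[" and "]", then split it into
# individual TEST=score pairs and bump a per-test counter.
my $line = 'Jul 18 00:36:32 mail amavis[26338]: (26338-01) SPAM, '
         . 'tests=[BAYES_99=3.5, URIBL_SBL=1.639, URIBL_SC_SURBL=4.498]';

my %count;
if ($line =~ /tests=\[([^\]]+)\]/) {
    for my $pair (split /,\s*/, $1) {
        my ($test, $score) = split /=/, $pair;
        $count{$test}++;    # e.g. URIBL_SBL was failed once more
    }
}
print "$_: $count{$_}\n" for sort keys %count;
```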

    Eventually I will be moving to other combinations of tests that I see failed very frequently. This means that assuming URIBL_SBL failed 3 out of every 5 messages marked as SPAM, I would use that in place of the BAYES_XX for a while to test that. Now part of my data structure would look like:

    Note: In case you've already read this message, I changed the data structure to look like what is below:

    $tests{"BAYES_99"}{"Total"} = 540;
    $tests{"BAYES_99"}{"Value"} = 3.5;
    $tests{"URIBL_SBL"}{"Total"} = 24;
    $tests{"URIBL_SBL"}{"Value"} = 1.639;
    $tests{"SPF_HELO_SOFTFAIL"}{"Total"} = 3;
    $tests{"SPF_HELO_SOFTFAIL"}{"Value"} = 2.439;
    $tests{"BAYES_99+URIBL_SBL"}{"Total"} = 18;
    $tests{"BAYES_99+URIBL_SBL"}{"Value"} = 5.139;
    $tests{"URIBL_SBL+SPF_HELO_SOFTFAIL"}{"Total"} = 1;
    $tests{"URIBL_SBL+SPF_HELO_SOFTFAIL"}{"Value"} = 4.078;
    It may be that my concept of a proper data structure for managing this information is wrong. But I am not sure whether, if this were serialized with something other than Data::Dumper, it would still be functionally correct when reloaded.
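    For what it's worth, a hash of hashes like the one above comes back structurally identical from a Storable round trip; a minimal check (the file name is arbitrary):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(store retrieve);

# Round-trip a nested hash through Storable and read a value back out
# of the reloaded copy.
my %tests = (
    'BAYES_99'           => { Total => 540, Value => 3.5   },
    'BAYES_99+URIBL_SBL' => { Total =>  18, Value => 5.139 },
);

store \%tests, '/tmp/tests.sto';
my $reloaded = retrieve '/tmp/tests.sto';
print $reloaded->{'BAYES_99+URIBL_SBL'}{Total}, "\n";    # prints 18
```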

      me elides a long post which was made totally wrong by your clarification of the data

      The updated data is much more helpful, but still leaves some significant questions. For instance, the order in the hash keys of your combined tests does not seem to be related to the order in which they appear in the log file (compare "BAYES_99+URIBL_SBL" v. "URIBL_SBL+SPF_HELO_SOFTFAIL"). The "Totals" seem to imply that they are gathered over multiple runs, while "Value" is obviously the sum of the conditions matched by the current message only. And to be fair, this is the first time you mentioned wanting to record the combined score. I think your requirements need better definition: exactly what are you trying to measure?

      There is no way to guarantee that all of your processed data will be saved (think power failure). You need to decide what an acceptable level of data loss is. You will need to write out the current state to a checkpoint file (whether through a database, Storable, Data::Dumper or whatever) at least once before you exit. The more often you checkpoint, the less data you risk losing. The truly paranoid will note that you need some sort of transactional locking to protect against interruption in mid-update.
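      One common way to make the checkpoint write itself crash-safe (a sketch; the file names are arbitrary) is to write to a temporary file and then rename() it into place, since the rename is atomic on a POSIX filesystem:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(store);

# Write the state to a temp file first, then atomically rename it over
# the previous checkpoint, so an interruption mid-write never leaves a
# truncated checkpoint behind.
sub checkpoint {
    my ($state, $file) = @_;
    store $state, "$file.tmp";
    rename "$file.tmp", $file
        or die "cannot replace checkpoint $file: $!";
}

my %tests = ( URIBL_SBL => { Total => 24 } );
checkpoint(\%tests, '/tmp/state.sto');
```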

      You could install a signal handler to catch most of the things that could kill your program and have it checkpoint your current status. It won't work for non-maskable signals (or power cuts), but might help with your stated aversion.
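      A sketch of such a handler (the signal list and file name are just examples; SIGKILL and power loss still cannot be caught):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(store);

my %tests;    # the accumulated counts

# Checkpoint on the catchable terminating signals before exiting.
for my $sig (qw(INT TERM HUP)) {
    $SIG{$sig} = sub {
        store \%tests, '/tmp/state.sto';
        exit 1;
    };
}
```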

      For the task you describe, you might want to consider just rotating your logs nightly (or hourly, depending upon your volume) and processing them "offline" rather than tailing the live log file. It avoids most of these concerns.


        You're correct about the data loss. I am OK with a slight loss (about 100 messages or so); my plan was to write the information out to a file every 100 messages. I can deal with some data loss.

        It's not so much that multiple runs are made, but that the message tests will be iterated over a few times in order to get the combination values needed. That is, aside from the BAYES_XX+TEST_X combination (which will be counted every time), I will specify which test(s) I want to catch in combination, which will require multiple iterations over a message. For instance, if I find that URIBL_SBL hits frequently in SPAM messages, then I will want to see what else hits frequently in SPAM messages, and a URIBL_SBL+TEST_Y category will be created and "Totaled". The "Value" will always be the same and is more for reference purposes than anything else.

        This all again brings me back to my question of what is the best way to store all this information on disk? Data::Dumper, Storable, FreezeThaw (amongst others) have been suggested, but I am curious as to the most efficient method for large amounts of data considering that it isn't just hashes and arrays but semi-complex data structures as well.
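        For a nested hash like this, Storable is usually the practical choice: it is binary, fast, and rebuilds references exactly, whereas Data::Dumper output is text that must be eval'd back in. A sketch of the every-100-messages plan mentioned above (the function and file names are invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(nstore);

my %tests;
my $seen = 0;

# Hypothetical per-message hook: bump the counts for the failed tests
# and checkpoint every 100 messages.  nstore() writes a byte-order-
# independent file, useful if another machine may read it.
sub record_message {
    my @failed = @_;
    $tests{$_}{'Total'}++ for @failed;
    nstore \%tests, '/tmp/tests.sto' if ++$seen % 100 == 0;
}

record_message('BAYES_99', 'URIBL_SBL');
print $tests{'URIBL_SBL'}{'Total'}, "\n";    # prints 1
```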
