Re: Storing large data structures on disk

by trwww (Priest)
on Jun 02, 2010 at 02:26 UTC ( [id://842712] )


in reply to Storing large data structures on disk

For the last few hours I was trying different ways of storing (serializing) a large data structure to disk.

Just to make this perfectly clear, because you seem to be ignoring the advice: the tool that a competent engineer reaches for when presented with this task is a database.

Otherwise, all you are doing is (poorly) reinventing a database. Even if you pull off this particular task, the very next thing you will be asked to do with the data will be something that is simple with a database but extremely difficult or impossible to implement in your custom data format.
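
The original node doesn't show a schema, but here is a minimal sketch of that route in Perl using DBI with DBD::SQLite; the database file name and the "results" table and columns are invented for illustration:

    use strict;
    use warnings;
    use DBI;

    # Minimal sketch, assuming DBD::SQLite is installed; the file name
    # and the "results" table/columns are hypothetical.
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=results.db', '', '',
        { RaiseError => 1, AutoCommit => 1 } );

    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS results (
            result_id INTEGER PRIMARY KEY,
            payload   TEXT NOT NULL
        )
    });

    # Store one record and fetch it back by key -- the point being that
    # a lookup never requires deserializing the whole structure.
    $dbh->do( 'INSERT INTO results (payload) VALUES (?)',
        undef, 'some computed result' );

    my ($payload) = $dbh->selectrow_array(
        'SELECT payload FROM results WHERE result_id = ?', undef, 1 );
    print "$payload\n";

    $dbh->disconnect;

Once the data lives in tables, follow-up questions of the kind described above are usually a single SELECT rather than yet another custom file format.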

Replies are listed 'Best First'.
Re^2: Storing large data structures on disk
by roibrodo (Sexton) on Jun 02, 2010 at 08:01 UTC
    Yes, I have started digging into it. Any recommended pointers for database newbies?
      I think you are headed towards a commercial relational database. Re: "This is because many nucleotides may point to the same results (they are highly dependent)."

      Your dataset is huge, but databases are good at expressing things like "X" belongs to both "A" and "B".

      As a simple, learn-by-doing-with-MySQL primer, try Learning SQL by Alan Beaulieu. It is just basic introductory material, but it will give you the bare basics of how relational tables interact. Even a huge commercial database will use SQL for its queries.
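
      To make the "X" belongs to both "A" and "B" point concrete, here is a hedged sketch of a many-to-many layout, again with DBI and SQLite; the table and column names are invented for illustration, since the thread doesn't specify a schema:

          use strict;
          use warnings;
          use DBI;

          # Hypothetical many-to-many layout: many positions can point at
          # the same shared result row, so each result is stored only once.
          my $dbh = DBI->connect( 'dbi:SQLite:dbname=results.db', '', '',
              { RaiseError => 1 } );

          $dbh->do($_) for (
              q{CREATE TABLE IF NOT EXISTS results (
                    result_id INTEGER PRIMARY KEY,
                    payload   TEXT NOT NULL )},
              q{CREATE TABLE IF NOT EXISTS position_result (
                    position  INTEGER NOT NULL,
                    result_id INTEGER NOT NULL REFERENCES results(result_id),
                    PRIMARY KEY (position, result_id) )},
          );

          # "Which positions share result 42?" becomes a single query
          # instead of a walk over one giant in-memory hash.
          my $positions = $dbh->selectcol_arrayref(
              'SELECT position FROM position_result WHERE result_id = ?',
              undef, 42 );
          print "@$positions\n";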

      From what I've read above, your current representation of the dataset just isn't going to work, because no realistic machine, or even network of machines, can implement it. You are going to need multiple "serious" machines, and a lot more thought about how the data is organized, what you need to compute, and which algorithms will process it.

      From your original post, I see 10GB of actual data. Other estimates I saw above are vastly greater. One approach would be to design for 10x what you have now and get that working (100GB). Jumping 2, 3, or more orders of magnitude past the data you have now is unlikely to be successful (in my experience, 100x is often too large a technology leap for "one shot"). Do 10x, learn stuff, then do another 10x.
