
Re^3: Store a huge amount of data on disk

by erix (Parson)
on Oct 18, 2011 at 17:08 UTC (#932198=note)

in reply to Re^2: Store a huge amount of data on disk
in thread Store a huge amount of data on disk

Whether it is fast enough depends, I think, as much on the disks on your system as on the software that you'll use to write to them.

From what you mentioned I suppose the total size is something like 300 GB? It's probably useful, or even necessary (for Postgres, or any other RDBMS), to have some criterion (date, perhaps) by which to partition.
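As a rough illustration of date-based partitioning in the PostgreSQL of this era (pre-declarative-partitioning, i.e. inheritance plus CHECK constraints): the table and column names here (`events`, `created_on`) are hypothetical, just a sketch of the idea.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connection details are placeholders; adjust for your setup.
my $dbh = DBI->connect('dbi:Pg:dbname=mydb', '', '', { RaiseError => 1 });

# One child table per month, inheriting from a parent "events" table.
# The CHECK constraint lets the planner skip partitions that cannot
# match a date-qualified query (with constraint_exclusion enabled).
$dbh->do(q{
    CREATE TABLE events_2011_10 (
        CHECK ( created_on >= DATE '2011-10-01'
            AND created_on <  DATE '2011-11-01' )
    ) INHERITS (events)
});
```

Inserts would then be routed to the right child table, typically via a trigger on the parent or by the application itself.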

(FWIW, a 40 GB table that we use intensively, accessed by unique id, gives access times of under 100 ms. The system has 32 GB of RAM and an 8-disk RAID 10 array.)

Btw, PostgreSQL *does* have a limit on text column values: 1 GB, where you need 2 GB. But I suppose that could be worked around by splitting the value into pieces, or something like that.
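A minimal sketch of that splitting idea, in plain Perl: chop an oversized value into fixed-size chunks and store each as its own row, keyed by a sequence number. The chunk size and the `(doc_id, seq, part)` row layout are assumptions for illustration, not anything the original poster specified.

```perl
use strict;
use warnings;

# Stay well under PostgreSQL's 1 GB per-field limit.
my $CHUNK = 512 * 1024 * 1024;

# Split a string into chunks of at most $CHUNK bytes each.
sub split_value {
    my ($data) = @_;
    my @chunks;
    for (my $off = 0; $off < length $data; $off += $CHUNK) {
        push @chunks, substr($data, $off, $CHUNK);
    }
    return @chunks;
}

# Each chunk would then be inserted as a (doc_id, seq, part) row,
# and the full value reassembled by concatenating parts ORDER BY seq.
```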


Replies are listed 'Best First'.
Re^4: Store a huge amount of data on disk
by Sewi (Friar) on Oct 18, 2011 at 18:41 UTC
    Thank you for those numbers. A 1 GB upper limit would be OK too; we don't want to reach that limit, but it might happen. I expect I'll need to split at some high limit anyway, so whether it's 1 GB or 2 GB doesn't matter.
