Re^4: Best way to store/access large dataset?
by stonecolddevin (Parson) on Jun 22, 2018 at 19:44 UTC ( [id://1217252]=note )
I'd be curious to see what you read that said Postgres wasn't recommended for lots of read operations; I don't think I've ever heard that before. If you're deeply invested in MariaDB, it's probably fine. MySQL has a lot of pitfalls, but people use it in large-scale cases all the time.

Regardless, my personal preference is Postgres. I don't think there would be any issues using it for high read volume or for processing a large number of calculations, but it depends on what kind of traffic it's going to be taking. If it's a really specialized case, it's probably worth looking into some ETL (extract/transform/load) on AWS using EMR (Elastic MapReduce) and/or Athena.

The key things here are how much data you're dealing with, how many calculations you need to perform, and how resource-intensive those calculations are. I think Postgres will be just fine up to several million rows, but if you're doing a ton of joining it might get hairy, and it may be better to spread the work out a bit.

Three thousand years of beautiful tradition, from Moses to Sandy Koufax, you're god damn right I'm living in the fucking past
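If it helps to make that last point concrete, here is a minimal DBI/DBD::Pg sketch (the connection string, table, and column names are made up for illustration) that runs EXPLAIN ANALYZE on a join. That's a cheap way to see how Postgres plans and times a heavy read before deciding whether the work needs to be spread out:

<code>
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details -- substitute your own dbname/host/credentials.
my $dbh = DBI->connect(
    'dbi:Pg:dbname=mydb;host=localhost',
    'myuser', 'mypassword',
    { RaiseError => 1, AutoCommit => 1 },
);

# EXPLAIN ANALYZE actually executes the query and reports the planner's
# choices plus real timings, so you can see whether a join over millions
# of rows stays reasonable or starts to get hairy.
my $plan = $dbh->selectcol_arrayref(q{
    EXPLAIN ANALYZE
    SELECT m.id, SUM(d.value)
    FROM   measurements m
    JOIN   details d ON d.measurement_id = m.id
    GROUP  BY m.id
});

print "$_\n" for @$plan;

$dbh->disconnect;
</code>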
In Section: Seekers of Perl Wisdom