There's no simple answer - it depends on your application.
- What is the size of the data set? How fast is it growing?
- What is the access pattern? (reads vs inserts vs updates). How is usage volume going to scale over time?
- Is it transaction-oriented, batch-oriented, or both?
- What resources are available (e.g., dedicated DB hardware, a full-time DBA)?
- What are your backup, disaster recovery, and availability requirements?
- Do you need to support ad-hoc queries (versus having a set of pre-defined operations)?
- Do you need replication? If so, what kind, and how much latency is acceptable?
Postgres and MySQL are good for moderately sized data sets under moderate transactional load, including ad-hoc queries. Heavy transactional usage, larger data sets, complex multi-site replication, or high-availability requirements are best served by one of the commercial databases (Sybase, Oracle, etc.). Small to moderate transactional data sets with low utilization are well served by something like SQLite.
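To make the SQLite case concrete, here's a minimal sketch using Python's stdlib `sqlite3` module (table and column names are illustrative, not from any particular application): a small embedded database that still gives you ACID transactions and ad-hoc SQL.

```python
import sqlite3

# In-memory database for the example; pass a file path for durable storage.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)")

# Using the connection as a context manager wraps the inserts in a
# transaction: both rows commit together, or neither does.
with conn:
    conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", ("widget", 3))
    conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", ("gadget", 1))

# Ad-hoc queries work out of the box -- no server process to run.
rows = conn.execute("SELECT item, qty FROM orders ORDER BY id").fetchall()
conn.close()
```

No separate server, no DBA, no replication setup: exactly the low-utilization scenario where an embedded database shines.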
If you don't need ad-hoc queries or strict ACID transactions, a NoSQL-type solution might be better. For smaller data sets, a non-relational store like BerkeleyDB may be a better choice. If you need extreme scalability (multi-terabyte data sets), a Bigtable-style implementation (HBase, Cassandra, Voldemort, etc.) is probably something you should consider. For ad-hoc data analysis against a large set of data, a columnar database like Sybase IQ or Vertica might be the best choice.