Re: is Sqlite db good enough to handle large data.
by marto (Cardinal) on Jul 21, 2019 at 16:56 UTC
SQLite is not ideal if multiple consumers need to share your application's data. What I mean is, if you have several retail locations trying to share the database, that's difficult to do well. A serverless database is really intended for in-process, on-system use, not for multi-process, many-system use. SQLite is also difficult to use in situations where you need master/replica configurations for redundancy or load sharing, live backups, row locking, and so on.
When you ask whether SQLite is appropriate for big data, you are considering only one of the factors that go into deciding whether it is an appropriate choice. And you may have a vision of how it will fit into your big-data plan that differs from what we are thinking. It can certainly handle large amounts of data, and it may have a place in certain parts of a big-data system.
But with the very limited information you're providing, I would suggest that no, it's not ideal. Why? Because one retail location could become two, or five, or twenty five, or five hundred. Why should each location have a local database? It could make sense for each location to cache information, or to keep track of information ephemerally, and in such a case SQLite might be fine. There are thousands of uses of SQLite. But the moment you start thinking in terms of a central repository for multiple applications or multiple instances or installations of an application to access, SQLite runs out of gas.
On the other hand, perhaps you're standing up a service that multiple locations can access, and you're contemplating backing that service with SQLite. Again, maybe not ideal: how will you handle running multiple instances of your application as traffic from multiple locations grows beyond what a single application worker can handle? How will you deal with live backups in a 24/7 world? How will you handle hot promotion of a replica to master should the master ever go down? You wouldn't want mission-critical data deployed alongside a single application instance on a single piece of hardware; 1990 is 29 years behind us, and we've learned a lot about scalability and redundancy since then.
So while SQLite can handle tons of data, it may not provide flexibility and scalability in areas that have nothing to do with the size of the data set.
I love SQLite, and recently used it in a system comprising thousands of servers. In that case each server does some job and stores batches of information in a local SQLite database; those batches later get posted up to a service that receives from each of those thousands of servers. The SQLite databases are fine here as single-server instances: there's no need for them to become a shared resource, and there's no competition from other processes to write into SQLite simultaneously. So it was a good choice; minimal, nothing to maintain, almost nothing to install.
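That buffer-locally, post-later pattern can be sketched roughly like this (a sketch only: the table name, column names, and the post_to_service hook are hypothetical, and it assumes DBI and DBD::SQLite are installed):

```perl
use strict;
use warnings;
use DBI;

# Local per-server buffer: each host writes only to its own file,
# so there is no cross-process contention on the SQLite database.
my $dbh = DBI->connect('dbi:SQLite:dbname=batch.db', '', '',
    { RaiseError => 1, AutoCommit => 0 });
$dbh->do('CREATE TABLE IF NOT EXISTS outbox '
       . '(id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)');

sub record {                  # called as the server does its normal work
    my ($payload) = @_;
    $dbh->do('INSERT INTO outbox (payload) VALUES (?)', undef, $payload);
    $dbh->commit;
}

sub flush_batch {             # called periodically to post a batch upstream
    my $rows = $dbh->selectall_arrayref(
        'SELECT id, payload FROM outbox WHERE sent = 0');
    # post_to_service($_->[1]) for @$rows;   # hypothetical upload client
    $dbh->do('UPDATE outbox SET sent = 1 WHERE id = ?', undef, $_->[0])
        for @$rows;
    $dbh->commit;
    return scalar @$rows;
}

record("event-$_") for 1 .. 3;
printf "flushed %d rows\n", flush_batch();
```

Marking rows as sent (rather than deleting them) also gives you a local audit trail until you choose to prune.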
It sounds like the application you are contemplating has no current expectation of growing beyond a single instance. But in a retail world I can't imagine that would be a good plan for any business that may someday grow. Admittedly I know nothing about the application you're considering. But I do have some retail experience: the first half of my career was as a retail buyer for a chain of stores that was at one time a dozen locations, later 80, and later hundreds. Solutions that would have worked for one store would never have worked for a dozen. And solutions that worked for a dozen would never have been able to scale to 80, let alone hundreds. Small mom-and-pop businesses sometimes do grow beyond their single location, and if an application cannot support that, it is targeting the narrow niche of businesses that are content not to grow.
Re: is Sqlite db good enough to handle large data.
by haukex (Archbishop) on Jul 21, 2019 at 16:57 UTC
large data
What do you mean by this? Many rows, many columns, or both? What kinds of columns (e.g. large BLOB columns), and so on? When in doubt, benchmark: use a Perl script to generate data that is representative of the data you'll be working with, and test the speeds. See also https://www.sqlite.org/whentouse.html.
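To make that suggestion concrete, here is a rough benchmark sketch (assuming DBI and DBD::SQLite are installed; the schema, row count, and query are made up and should be replaced with something representative of your real data):

```perl
use strict;
use warnings;
use DBI;
use Time::HiRes qw(time);

unlink 'bench.db';            # start from a clean file each run
my $dbh = DBI->connect('dbi:SQLite:dbname=bench.db', '', '',
    { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE sales '
       . '(id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER, price REAL)');

my $rows = 100_000;           # scale this toward your expected data size
my $t0   = time;
$dbh->begin_work;             # one big transaction: crucial for insert speed
my $sth = $dbh->prepare('INSERT INTO sales (sku, qty, price) VALUES (?, ?, ?)');
$sth->execute(sprintf('SKU%06d', int rand 1e6), 1 + int rand 10, rand 100)
    for 1 .. $rows;
$dbh->commit;
printf "insert: %d rows in %.2fs\n", $rows, time - $t0;

$t0 = time;
my ($n) = $dbh->selectrow_array('SELECT COUNT(*) FROM sales WHERE qty > 5');
printf "query:  counted %d rows in %.2fs\n", $n, time - $t0;
$dbh->disconnect;
```

Run it with the column types and row counts you actually expect (including any large BLOBs) before committing to a design.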
Re: is Sqlite db good enough to handle large data.
by erix (Prior) on Jul 22, 2019 at 20:33 UTC
SQLite is wonderful, even for not-so-small data, as long as it is read-only / write-rarely.
The SQLite people say (from whentouse.html):
"SQLite is not directly comparable to client/server SQL database engines such as MySQL or PostgreSQL. [...] SQLite does not compete with client/server databases. SQLite competes with fopen()."
If your "desktop application for retailers" just reads/writes to the local computer, one user at a time, SQLite might be all right.
If it reads/writes to some central database, possibly with many users concurrently, SQLite is probably not so good. In that case I'd use PostgreSQL.
2019.07.23 - (edited slightly)
Re: is Sqlite db good enough to handle large data.
by holli (Abbot) on Jul 23, 2019 at 11:22 UTC
Re: is Sqlite db good enough to handle large data.
by Anonymous Monk on Jul 22, 2019 at 06:18 UTC
|
Thank you all for your inputs and suggestions. I have gone through the links provided. I would like to go with SQLite, mainly because my data would reside on the local machine and not on any server. Also, this would be for one client and one application.
Let me see how it goes.
Thank you,
Re: is Sqlite db good enough to handle large data.
by Anonymous Monk on Jul 23, 2019 at 10:04 UTC
This application is still in progress, but if anyone happens to be interested in it, please let me know at my email "pawark86@gmail.com".
Please keep "MUNIMJEE Enquiry" in the subject line. I am not sure if this is the right platform for this, so sorry in advance if not.
Thank you
Re: is Sqlite db good enough to handle large data.
by sundialsvc4 (Abbot) on Jul 23, 2019 at 01:30 UTC
The question as posted is whether or not SQLite is good enough to handle large amounts of data. The superficial answer is of course yes. But quite some time ago now, we moved away from "desktop database" topologies based on shared files (an offshoot of files that existed only on a single, non-shared computer) and adopted the client/server model instead. Since every computer you'll actually encounter fully supports multiprogramming (as earlier systems did not), it is now a trivial matter to run a client/server-capable database on your own computer. The advantage is that the topology requires no changes to become multi-user: "it already is."
SQLite just might be the ultimate refinement (IMHO) of "a filesystem-based database." It really is a remarkable, best-of-breed tool for its appointed, narrow niche. But the limitations inherent in its design introduce issues that you might come to regret if your use case grows beyond that niche, as it very reasonably might. Hence my recommendation to instead use a local, but multi-user-capable, daemon. Best to get off on the right foot, no matter what programming language or tool you are using.
Fair points. The defining advantage of SQLite, for me, is that it does not require all the CREATE USER and GRANT PRIVILEGES setup, nor the debugging of localhost vs. 127.0.0.1 and dozens of other small problems (I'm talking about MySQL here, which is my main experience) that arise when users try to install your application on different OSes and with different privileges. SQLite runs as the current user with no networking, which requires zero admin rights. I think?
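To illustrate the zero-configuration point (the file name and the commented-out MySQL credentials below are hypothetical), connecting to SQLite through DBI needs nothing but a file path, while the client/server DSN presumes a running daemon and a pre-provisioned account:

```perl
use strict;
use warnings;
use DBI;

# SQLite: no server process, no CREATE USER/GRANT, no hostname to
# resolve -- the DSN is just a file path, created on first connect.
my $lite = DBI->connect('dbi:SQLite:dbname=app.db', '', '',
    { RaiseError => 1 });
print "connected: ", $lite->{Driver}{Name}, "\n";

# The MySQL equivalent needs a running daemon plus an account that
# someone granted access beforehand (credentials hypothetical):
# my $mysql = DBI->connect('dbi:mysql:database=app;host=127.0.0.1',
#     'appuser', 's3cret', { RaiseError => 1 });
```

The empty user/password arguments in the SQLite connect are not placeholders to fill in; file-system permissions on app.db are the whole access-control story.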