How many transactions per second can happen in MongoDB?

I am writing a feature that might lead to us executing a few hundred or even a thousand MongoDB transactions for a particular endpoint. I want to know whether there is a maximum limit to the number of transactions that can occur in MongoDB.
I read this old answer about SQL Server, "Can SQL server 2008 handle 300 transactions a second?", but couldn't find anything for MongoDB.

It's really hard to find an unbiased benchmark, let alone a benchmark that objectively reflects your projected workload.
Here is one, by the makers of Cassandra (obviously, Cassandra wins here): Cassandra vs. MongoDB vs. Couchbase vs. HBase
It shows a few thousand operations per second as a starting point, and the numbers only go up as the cluster size grows.
Once again, the numbers here are just a baseline and cannot be used to accurately estimate the performance of your application on your data. Not all transactions are created equal.
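If you need a number you can trust, measure it against your own cluster and document shapes. Below is a minimal, hypothetical sketch using PyMongo that times a batch of small multi-document transactions; the connection string, database, and collection names are placeholders, and a replica set is assumed since MongoDB transactions require one.

    # Rough throughput probe for multi-document transactions (PyMongo).
    # URI, database and collection names below are illustrative only;
    # transactions require a replica set (or sharded cluster).
    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    coll = client["testdb"]["orders"]

    N = 500  # number of transactions to time
    start = time.time()
    for i in range(N):
        with client.start_session() as session:
            with session.start_transaction():
                coll.insert_one({"seq": i, "payload": "x" * 100}, session=session)
    elapsed = time.time() - start
    print("%d transactions in %.2fs -> %.0f tx/sec" % (N, elapsed, N / elapsed))

A loop of single-document writes, each wrapped in its own transaction, is close to the worst case; batching several writes into one transaction, or skipping transactions entirely where a single-document atomic update is enough, usually raises the number considerably.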

Well, this isn't a direct answer to your question, but since you have quoted a comparison, I would like to share an experience with Couchbase. A Couchbase cluster's performance is usually limited by network bandwidth (assuming you have given it SSD/NVMe storage, which improves storage latency). I have achieved in excess of 400k TPS on a 4-node cluster running Couchbase 4.x and 5.x in a K/V use case.
Node specs below:
2 x 12-core Xeon on HP BL460c blades
SAS SSDs (NVMe would generally be a lot better)
10 Gbps network within the blade chassis
Before we arrived here, we moved away from MongoDB, which was limiting system throughput to a few tens of thousands of operations per second at most.
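For reference, the K/V workload above boils down to plain upserts and gets. A minimal sketch with the Couchbase Python SDK might look like the following; the connection string, credentials, and bucket name are placeholders, and the exact import paths differ slightly between SDK 3.x and 4.x.

    # Simple K/V upsert/get loop against a Couchbase bucket.
    # Connection string, credentials and bucket name are placeholders.
    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions

    cluster = Cluster("couchbase://127.0.0.1",
                      ClusterOptions(PasswordAuthenticator("user", "password")))
    collection = cluster.bucket("default").default_collection()

    for i in range(100000):
        key = "doc::%d" % i
        collection.upsert(key, {"n": i, "payload": "x" * 200})
        collection.get(key)

A single synchronous loop like this will not get anywhere near 400k TPS on its own; that figure came from many concurrent client processes pushing the 10 Gbps network, not from one connection.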

Related

What is the maximum replication rate of Couchbase XDCR

We are currently using Couchbase for data caching, and there is talk of doing cross-datacenter replication with it. However, we will need up to 1000 documents replicated to multiple locations every second. Documents will be between 2 KB and 64 KB each.
Is there anyone out there with XDCR experience who can tell me whether this is even feasible, or whether we will have to use other means to replicate this data at that speed? The only "benchmark" in the Couchbase documentation implies that the XDCR rate is only about 100 TPS (149 ms to replicate 11 documents).
The replication rate of XDCR is limited by network bandwidth and latency first, then by CPU and disk IO. Assuming you have enough bandwidth between the datacenters and your clusters are provisioned properly, Couchbase will replicate hundreds of thousands of documents per second, or more. It's a pretty simple experiment to run: just set up XDCR between two single-node clusters and use one of the load generator tools that come with Couchbase to create some traffic (cbworkloadgen in the Couchbase bin folder, or cbc-pillowfight, which comes with libcouchbase).
There are several config settings you can play with to optimize throughput, such as increasing batch size, changing the optimistic replication threshold, etc.
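A rough version of that experiment, sketched in Python with the Couchbase SDK (hosts, credentials, and bucket names are placeholders, and since XDCR does not replicate strictly in order this only gives a ballpark rate):

    # Ballpark XDCR throughput check: write N docs to the source cluster,
    # then poll the XDCR destination until the last key shows up.
    import time
    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions
    from couchbase.exceptions import DocumentNotFoundException

    def open_collection(host):
        cluster = Cluster("couchbase://" + host,
                          ClusterOptions(PasswordAuthenticator("user", "password")))
        return cluster.bucket("default").default_collection()

    src = open_collection("dc1-node")  # source cluster
    dst = open_collection("dc2-node")  # XDCR destination cluster

    N = 10000
    start = time.time()
    for i in range(N):
        src.upsert("xdcr::%d" % i, {"n": i, "payload": "x" * 2048})

    while True:  # crude: wait for the last key to appear on the destination
        try:
            dst.get("xdcr::%d" % (N - 1))
            break
        except DocumentNotFoundException:
            time.sleep(0.1)
    print("~%d docs replicated in %.1fs" % (N, time.time() - start))

As a sanity check on the bandwidth side: 1000 docs/sec at 64 KB each is at most about 64 MB/s of payload, so the inter-datacenter link is usually the first thing to size.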

Why Odoo is slow when there is a huge amount of data in the database

We have observed a problem with PostgreSQL: it doesn't use multiple CPU cores for a single query. For example, I have 8 cores and 40 million entries in the stock.move table. When we run a massive reporting query over a single database connection and watch the backend, we see only one core at 100% while the other 7 are idle. Because of that, query execution takes much longer and our Odoo system is slow. The problem is inside the PostgreSQL core: if we could somehow spread a query across two or more cores, we would get a boost in PostgreSQL query execution.
I am sure that with parallel query execution we could make Odoo even faster. Does anyone have any suggestions regarding this?
Edit: updating this question to show the answer I received from the PostgreSQL core team.
Here is the answer I got from one of the top contributors to PostgreSQL (I hope this information is useful):
Hello Hiren,
This is expected behaviour. PostgreSQL doesn't support using multiple CPUs for a single query. This topic is under heavy development, and the feature will probably be in the planned 9.6 release (~September 2016). But a table with 40M rows isn't too big, so more CPUs would probably not help you much (there is some overhead in starting and coordinating a multi-CPU query). You have to use the usual tricks, like materialized views and pre-aggregations; the main idea of these tricks is to avoid repeating the same calculation over and over. Check the health of PostgreSQL: indexes, vacuum processing, statistics. Check the hardware: IO speed. Check the PostgreSQL configuration: shared_buffers, work_mem. Some queries can be slow due to bad estimates, so check the EXPLAIN output of slow queries. There are some tools that can break a query into several queries and run them in parallel, but I haven't used them. https://launchpad.net/stado http://www.pgpool.net/docs/latest/tutorial-en.html#parallel
Regards, Pavel Stehule
Well, I think you have your answer there: PostgreSQL does not currently support parallel query execution. The general performance advice is very apt, and you might also consider partitioning, which can let you truncate partitions instead of deleting parts of a table, or increasing the memory allocation. It's impossible to give good advice on that without knowing more about the query.
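To make the "don't repeat the same calculation" advice concrete, here is a hypothetical sketch using psycopg2; the DSN, table, and column names are assumptions loosely modelled on Odoo's stock.move, not something taken from your schema.

    # Pre-aggregate the big table into a materialized view so reports read
    # a small summary instead of scanning 40M rows on every run.
    # DSN, table and column names are illustrative assumptions.
    import psycopg2

    conn = psycopg2.connect("dbname=odoo user=odoo")
    cur = conn.cursor()

    cur.execute("""
        CREATE MATERIALIZED VIEW stock_move_daily AS
        SELECT product_id, date_trunc('day', date) AS day, sum(product_qty) AS qty
        FROM stock_move
        GROUP BY product_id, date_trunc('day', date)
    """)
    conn.commit()

    # Refresh from cron or a scheduled action, not once per report.
    cur.execute("REFRESH MATERIALIZED VIEW stock_move_daily")
    conn.commit()

    # Check the plan of a slow report query to spot bad estimates.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM stock_move_daily WHERE product_id = 42")
    for (line,) in cur.fetchall():
        print(line)

Partitioning the table by date range works along the same lines: reports touch only the recent partitions, and old ones can be truncated instead of deleted row by row.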
Having had experience with this sort of issue on non-parallel query Oracle systems, I suggest that you also consider what hardware you're using.
The modern trend towards CPUs with very many cores is a great help for web servers or other multi-process systems with many short-lived transactions, but you have a data processing system with few, large transactions. You need the correct hardware to support that. CPUs with fewer, more powerful cores are a better choice, and you have to pay attention to bandwidth to memory and storage.
This is why engineered systems have been popular with big data and data warehousing.

Improving database record retrieval throughput with AppEngine

Using AppEngine with Python and the HRD, retrieving records sequentially (via an indexed field that is an incrementing integer timestamp), we get 15,000 records returned in 30-45 seconds. (Batching and limiting are used.) I experimented with running queries on two instances in parallel but still achieved the same overall throughput.
Is there a way to improve this overall number without changing any code? I'm hoping we can just pay some more and get better database throughput. (You can pay more for bigger frontends but that didn't affect database throughput.)
We will be changing our code to store multiple underlying data items in one database record, but hopefully there is a short term workaround.
Edit: These are log records being downloaded to another system. We will fix it in the future and know how to do so, but I'd rather work on more important things first.
Try splitting the records across different entity groups. That might force them onto different physical servers. Read the entity groups in parallel from multiple threads or instances.
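A rough sketch of that idea with the old ndb client follows; the model, property, and shard-key names are invented for illustration, and it assumes the log records were written under a handful of ancestor keys.

    # Read several entity-group shards in parallel with ndb async queries.
    # Model, property and shard-key names are illustrative assumptions.
    from google.appengine.ext import ndb

    NUM_SHARDS = 8

    class LogRecord(ndb.Model):
        ts = ndb.IntegerProperty()   # incrementing integer timestamp
        payload = ndb.TextProperty()

    def fetch_all(batch_size=1000):
        futures = []
        for shard in range(NUM_SHARDS):
            ancestor = ndb.Key("LogShard", shard + 1)
            query = LogRecord.query(ancestor=ancestor).order(LogRecord.ts)
            futures.append(query.fetch_async(batch_size))
        records = []
        for future in futures:          # queries run concurrently; gather results
            records.extend(future.get_result())
        return records

Writers then have to pick a shard (for example ts % NUM_SHARDS) when storing each record, which also keeps any single entity group from becoming a write hotspot.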
Using a cache might not work well for large tables.
Maybe you can cache your records, for example using Memcache:
https://developers.google.com/appengine/docs/python/memcache/
This could definitely speed up your application's access. I don't think the App Engine Datastore is designed for speed, but for scalability. Memcache, however, is.
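A minimal read-through cache with the App Engine memcache API could look like this (the key naming and chunking scheme are assumptions):

    # Read-through cache for chunks of records using App Engine memcache.
    from google.appengine.api import memcache

    def get_chunk(chunk_id, loader):
        key = "log-chunk:%d" % chunk_id
        data = memcache.get(key)
        if data is None:
            data = loader(chunk_id)              # fall back to the datastore
            memcache.set(key, data, time=3600)   # cache for an hour
        return data

Keep in mind that a single memcache value is limited to about 1 MB, so a large result set has to be split across several keys.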
BTW, if you are conscious of the performance GAE gives for what you pay, then maybe you can try setting up your own App Engine cloud with:
AppScale
JBoss CapeDwarf
Both have active community support. I'm using CapeDwarf in my local environment; it is still in beta, but it works.
Move to one of the in-memory databases. If you have Oracle Database, using TimesTen will improve throughput many times over.

Very Large Mnesia Tables in Production

We are using Mnesia as the primary database for a very large system. Mnesia's fragmented tables have behaved very well over the testing period. The system has about 15 tables, each replicated across 2 sites (nodes), and each table is highly fragmented. During the testing phase (which focused on availability, efficiency, and load tests), we accepted that Mnesia, with its many advantages for complex structures, will do for us, given that all our applications running on top of the service are Erlang/OTP apps. We are running Yaws 1.91 as the main web server.
To configure the fragmented tables efficiently, we consulted a number of references from people who have used Mnesia in large systems.
These are: the Mnesia One Year Later blog, Part 2 of the blog, its follow-up, and About Hashing. These blog posts have helped us fine-tune things here and there for better performance.
Now, the problem. Mnesia has table size limits, yes, we agree. However, limits on the number of fragments have not been mentioned anywhere. For performance reasons, and to cater for large data, roughly how many fragments will keep Mnesia "okay"?
In some of our tables we have 64 fragments, with n_disc_only_copies set to the number of nodes in the cluster so that each node has a copy of every fragment. This has helped us solve the issue of Mnesia write failures when a given node is unreachable at a given instant. Also, in the blog above, the author suggests that the number of fragments should be a power of 2; this statement (he says) was derived from the way Mnesia hashes records. We, however, need more explanation of this, and of which powers of two are being talked about here: 2, 4, 16, 32, 64, 128, ...?
The system is intended to run on HP ProLiant G6 servers with Intel processors (2 processors, each with 4 cores at 2.4 GHz and an 8 MB cache), 20 GB of RAM, and 1.5 TB of disk space. Two of these high-powered machines are at our disposal, and the system database should be replicated across the two. Each server runs 64-bit Solaris 10.
At what number of fragments may Mnesia's performance start to degrade? Is it okay if we increase the number of fragments from 64 to 128 for a given table? What about 65536 fragments (2^16)? How do we scale out our Mnesia installation to make use of the terabytes of space by using fragmentation?
Please provide answers to these questions, and feel free to advise on any other parameters that may enhance the system.
NOTE: All tables that are to hold millions of records are created with the disc_only_copies type, so there are no RAM problems; the RAM will be enough for the few RAM tables we run. Other DBMSs like MySQL Cluster and CouchDB will also hold data and are using the same hardware as our Mnesia DBMS. MySQL Cluster is replicated across the two servers (each holding two NDB nodes and a MySQL server), with the management node on a different host.
The hint to use a power-of-two number of fragments is simply related to the fact that the default fragmentation module, mnesia_frag, uses linear hashing, so using 2^n fragments ensures that records are (more or less, obviously) equally distributed between fragments.
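To see why, here is a small standalone simulation, not Mnesia's actual code but the standard linear-hashing bucket rule, showing that a non-power-of-two fragment count leaves part of the key space with roughly double the records:

    # Toy linear hashing: a key hashed to h lands in h mod 2^k (where
    # 2^k >= n fragments); if that bucket doesn't exist yet, it falls
    # back to h mod 2^(k-1). Not Mnesia code, just the distribution idea.
    from collections import Counter

    def bucket(h, n_fragments):
        k = (n_fragments - 1).bit_length()
        b = h % (2 ** k)
        return b if b < n_fragments else h % (2 ** (k - 1))

    for n in (64, 96, 128):
        counts = Counter(bucket(hash(("record", i)), n) for i in range(200000))
        print(n, "fragments: min per fragment =", min(counts.values()),
              "max per fragment =", max(counts.values()))

With 64 or 128 fragments every fragment gets roughly the same share, while with 96 some fragments carry about twice as many records as others; so the "power of two" in question is simply whichever of 64, 128, 256, ... gives you the per-fragment size you are comfortable with.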
Regarding the hardware at your disposal, it's more a matter of performance testing.
The factors that can reduce performance are many, and configuring a database like Mnesia is just one part of the general problem.
I simply advise you to stress test one server and then test the algorithm on both servers to understand whether it scales correctly.
As for scaling the number of Mnesia fragments, remember that when using disc_only_copies most of the time is spent in two operations:
decide which fragment holds which record
retrieve the record from corresponding dets table (Mnesia backend)
The first is not really dependent on the number of fragments, considering that by default Mnesia uses linear hashing.
The second one is more related to hard disk latency than to other factors.
In the end, a good solution could be to have more fragments with fewer records per fragment, while trying at the same time to find a middle ground and not lose the advantages of hard disk performance boosts like buffers and caches.

Need a storage solution that is scalable, distributed, can read data extremely fast, and works with .NET

I currently have a data solution in an RDBMS. The load on the server will grow by 10x, and I do not believe it will scale.
I believe what I need is a data store that is fault tolerant, scalable, and can retrieve data extremely fast.
The Stats
Records: 200 million
Total Data Size (not including indexes): 381 GB
New records per day: 200,000
Queries per Sec: 5,000
Query Result: 1 - 2000 records
Requirements
Very fast reads
Scalable
Fault tolerant
Able to execute complex queries (conditions across many columns)
Range Queries
Distributed
Partition – Is this required for 381 GB of data?
Able to Reload from file
In-Memory (not sure)
Not Required
ACID - Transactions
The primary purpose of the data store is to retrieve data very fast. The queries that will access this data will have conditions across many different columns (30 columns, and probably many more). I hope this is enough info.
I have read about many different types of data stores that include NoSQL, In-Memory, Distributed Hashed, Key-Value, Information Retrieval Library, Document Store, Structured Storage, Distributed Database, Tabular and others. And then there are over 2 dozen products that implement these database types. This is a lot of stuff to digest and figure out which would provide the best solution.
It would be preferred that the solution run on Windows and is compatible with Microsoft .NET.
Based on the information above, does anyone have any suggestions, and why?
Thanks
So, what is your problem? I do not really see anything nontrivial here.
Fast and scalable: grab a database (sorry, but complex queries across many columns = database) and get a nice SAN; an HP EVA is great. I have seen one deliver 800 MB of random IO reads per second to a database, using 190 SAS discs. Fast enough for you? Sorry, but THIS is scalability.
A 400 GB database is not remarkable by any means.
Grab a decent server; Supermicro has one with space for 24 discs in 2 rack units.
Grab a higher-end SAS RAID controller, e.g. an Adaptec.
Plug in SSD drives in a RAID 10 configuration. You will be surprised: you will saturate the IO bus faster than you can say "ouch". Scalability is there with space for 24 discs and an IO bus that can handle 1.2 gigabytes per second.
Finally, get a pro to tune your database server(s). It's that simple. SQL Server is a lot more complicated to use properly than "OK, I just know what a SELECT should look like" (without really knowing).
