What is the maximum replication rate of Couchbase XDCR - benchmarking

We are currently using Couchbase for data caching and there is talk of doing cross-data center replication with it. However, we will need up to 1,000 documents replicated to multiple locations every second. Documents will be between 2 KB and 64 KB each.
Is there anyone out there with XDCR experience who can tell me whether this is even feasible, or whether we will have to use other means to replicate this data at that speed? The only "benchmark" in the Couchbase documentation implies that the XDCR rate is only about 100 TPS (149 ms to replicate 11 documents).

The replication rate of XDCR is limited by network bandwidth and latency first, then CPU and disk IO. Assuming you have enough bandwidth between the data centers and your clusters are provisioned properly, Couchbase will replicate hundreds of thousands of documents per second, or more. It's a pretty simple experiment to run: just set up XDCR between two single-node clusters and use one of the load generator tools that come with Couchbase to create some traffic (cbworkloadgen in the Couchbase bin folder, or cbc-pillowfight, which comes with libcouchbase).
There are several config settings you can play with to optimize throughput, such as increasing batch size, changing the optimistic replication threshold, etc.
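As a rough sketch (host names, credentials and bucket names below are placeholders, and exact flag names vary between libcouchbase and Couchbase Server versions), the experiment could look something like this:

# Generate write traffic with documents in the 2 KB - 64 KB range on the source cluster
cbc-pillowfight -U couchbase://<source-host>/<bucket> -u <user> -P <password> \
    --num-items 100000 --min-size 2048 --max-size 65536 --num-threads 4

# Adjust XDCR tuning parameters (batch size, optimistic replication threshold) on the source cluster
couchbase-cli setting-xdcr -c <source-host>:8091 -u <user> -p <password> \
    --doc-batch-size 2048 --optimistic-replication-threshold 256

Then watch the outbound XDCR stats in the source cluster's web console to see the replication rate you actually achieve.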

Related

Transferring millions of documents to an external hard drive

I have 13 million documents on Azure Blob Storage that I can azcopy to my desktop's internal drive within 24 hours. However, as soon as I try to transfer these files to my external hard drive, the time needed to complete the transfer jumps to 60 days. The files aren't large - each about 100 KB - so the entire transfer is about 1.3 TB. I have tried:
Zipping the files, transferring, then unzipping. Problem: unzipping takes just as long
azcopy directly onto the external SSD drive
robocopy from the internal to the external drive
A simple Ctrl-C / Ctrl-V copy.
Each of the above options takes months to complete the transfer. Any ideas on how to speed this up? Why would azcopy be so much faster for an internal drive than an external one?
There could be several reasons for the performance issue.
You can run a performance benchmark test on specific blob containers or file shares to view general performance statistics and to identify performance bottlenecks. You can run the test by uploading or downloading generated test data.
Use the following command to run a performance benchmark test.
Syntax
azcopy benchmark 'https://<storage-account-name>.blob.core.windows.net/<container-name>'
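For example (a hypothetical invocation; the container URL is a placeholder and flag availability depends on your AzCopy version), you can have the benchmark generate test data that resembles your workload of many ~100 KB files:

azcopy benchmark 'https://<storage-account-name>.blob.core.windows.net/<container-name>' --file-count 50000 --size-per-file 100K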
For more tuning guidance, see Optimize the performance of AzCopy with Azure Storage.
There are several options for transferring data to and from Azure, depending on your needs: see Transfer data to and from Azure.
Fast Data Transfer is a tool for fast upload of data into Azure – up to 4 terabytes per hour from a single client machine. It moves data from your premises to Blob Storage, to a clustered file system, or direct to an Azure VM. It can also move data between Azure regions.
The tool works by maximizing utilization of the network link. It efficiently uses all available bandwidth, even over long-distance links. On a 10 Gbps link, it reaches around 4 TB per hour, which makes it about 3 to 10 times faster than competing tools we’ve tested. On slower links, Fast Data Transfer typically achieves over 90% of the link’s theoretical maximum, while other tools may achieve substantially less.
For example, on a 250 Mbps link, the theoretical maximum throughput is about 100 GB per hour. Even with no other traffic on the link, other tools may achieve substantially less than that. In the same conditions (250 Mbps, with no competing traffic) Fast Data Transfer can be expected to transfer at least 90 GB per hour. (If there is competing traffic on the link, Fast Data Transfer will reduce its own throughput accordingly, in order to avoid disrupting your existing traffic.)
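(The arithmetic: 250 Mbit/s ÷ 8 ≈ 31 MB/s, and 31 MB/s × 3,600 s ≈ 112 GB per hour of raw link capacity, so roughly 100 GB per hour once protocol overhead is taken into account.)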
Fast Data Transfer runs on Windows and Linux. Its client-side portion is a command-line application that runs on-premises, on your own machine. A single client-side instance supports up to 10 Gbps. Its server-side portion runs on Azure VM(s) in your own subscription. Depending on the target speed, between 1 and 4 Azure VMs are required. An Azure Resource Manager template is supplied to automatically create the necessary VM(s).
Fast Data Transfer is a particularly good fit when:
Your files are very small (e.g. each file is only tens of KB).
You have an ExpressRoute with private peering.
You want to throttle your transfers to use only a set amount of network bandwidth.
You want to load directly to the disk of a destination VM (or to a clustered file system). Most Azure data loading tools can’t send data direct to VMs. Tools such as Robocopy can, but they’re not designed for long-distance links. We have reports of Fast Data Transfer being over 10 times faster.
You are reading from spinning hard disks and want to minimize the overhead of seek times. In our testing, we were able to double disk read performance by following the tuning tips in Fast Data Transfer’s instructions.

How many transactions per second can happen in MongoDB?

I am writing a feature that might lead to us executing a few hundred or even a thousand MongoDB transactions for a particular endpoint. I want to know whether there is a maximum limit to the number of transactions that can occur in MongoDB.
I read this old answer about SQL Server (Can SQL Server 2008 handle 300 transactions a second?) but couldn't find anything for MongoDB.
It's really hard to find an unbiased benchmark, let alone one that objectively reflects your projected workload.
Here is one, by the makers of Cassandra (obviously, here Cassandra wins): Cassandra vs. MongoDB vs. Couchbase vs. HBase
It shows a few thousand operations per second as a starting point, and it only goes up as the cluster size grows.
Once again, the numbers here are just a baseline and cannot be used to accurately estimate the performance of your application on your data. Not all transactions are created equal.
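If you want a number that means something for your own system, the most reliable approach is to run a load generator against a test cluster with your data and query mix. As an illustration only (YCSB is the workload generator behind many published NoSQL comparisons; the paths, URL and counts below are placeholders):

# Load 1M records, then run YCSB's 50/50 read/update workload against MongoDB
./bin/ycsb load mongodb -s -P workloads/workloada \
    -p mongodb.url=mongodb://localhost:27017/ycsb -p recordcount=1000000
./bin/ycsb run mongodb -s -P workloads/workloada \
    -p mongodb.url=mongodb://localhost:27017/ycsb -p operationcount=1000000 -threads 64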
Well, this isn't a direct answer to your question, but since you have quoted a comparison, I would like to share an experience with Couchbase. A Couchbase cluster's performance is usually limited by network bandwidth (assuming you have given it SSD/NVMe storage, which improves storage latency). I have achieved in excess of 400k TPS on a 4-node cluster running Couchbase 4.x and 5.x in a key/value use case.
Node specs below:
2 x 12-core Xeons on HP BL460c blades
SAS SSDs (NVMe would generally be a lot better)
10 Gbps network within the blade chassis
Before we arrived here, we moved away from MongoDB, which was limiting system throughput to a few tens of thousands of TPS at most.

SolrCloud deployment for high frequency traffic

We have been looking for a big-data store that can collect a huge pool of user information.
I should also mention that we are working on an RTB platform (on the DSP side). As a result, our platform handles around 100 million requests per day, which is about 1-2 thousand requests per second (depending on the time of day). Here is a simple overview of what we are going to implement:
My questions are:
Is Solr (SolrCloud) a good solution for a Data Management Platform?
Do you think SolrCloud can handle high-frequency traffic?
I have checked the Solr performance problems page, and one listed cause is extremely frequent commits (under the "Slow commits" item). What is the limit?
What SolrCloud configuration would be suitable for us? I mean the number of shards and cores, and the server configuration (CPU, RAM), needed to handle 1-2K QPS and store about 500M docs. How can we calculate this?

Performance expectations for an Amazon RDS instance

I am using Google App Engine for an app, and the app is currently hitting the datastore at a rate of around 2.5 million row writes and 4.5 million row reads per day.
I am currently porting the app to Amazon Elastic Beanstalk and Amazon RDS due to the very high costs of running an application on GAE.
Based on the values above, how can I find out or estimate what type of RDS instance I will need for my requirements? Is the above a considerable amount of processing for, let's say, a Small or Micro MySQL RDS instance to handle in a day?
Totally depends on a number of factors:
Row size.
Field types and sizes.
Complexity of your queries (joins, etc).
Proper use of indexes.
Row contention and other possible bottlenecks.
Really hard to tell. But from experience, if you don't need fancy replication or sharding, the costs of the GAE datastore are usually higher as it offers total redundancy, distribution, scalability, etc.
My suggestion would be to write a quick program to benchmark a load on RDS that replicates what you are expecting. Should be easy to write if you forgo all the business rules and such and just do fake but randomized reads and writes.
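If you don't want to write that program from scratch, mysqlslap (which ships with the MySQL client tools) can generate randomized read/write load as a first pass. A sketch, with a placeholder endpoint and credentials, and volumes you would scale to your own numbers (for reference, 2.5 million writes and 4.5 million reads per day average out to only about 29 writes and 52 reads per second, though peaks matter more than averages):

# Mixed auto-generated read/write load against the RDS endpoint
mysqlslap --host=<rds-endpoint> --user=<user> --password \
    --auto-generate-sql --auto-generate-sql-load-type=mixed \
    --concurrency=50 --iterations=10 --number-of-queries=100000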

Need a storage solution that is scalable, distributed and can read data extremely fast and works with .NET

I currently have a data solution in RDBMS. The load on the server will grow by 10x, and I do not believe it will scale.
I believe what I need is a data store that is fault tolerant and scalable and that can retrieve data extremely fast.
The Stats
Records: 200 million
Total Data Size (not including indexes): 381 GB
New records per day: 200,000
Queries per Sec: 5,000
Query Result: 1 - 2000 records
Requirements
Very fast reads
Scalable
Fault tolerant
Able to execute complex queries (conditions across many columns)
Range Queries
Distributed
Partition – Is this required for 381 GB of data?
Able to Reload from file
In-Memory (not sure)
Not Required
ACID - Transactions
The primary purpose of the data store is retrieve data very fast. The queries that will access this data will have conditions across many different columns (30 columns and probably many more). I hope this is enough info.
I have read about many different types of data stores, including NoSQL, In-Memory, Distributed Hash Tables, Key-Value, Information Retrieval Libraries, Document Stores, Structured Storage, Distributed Databases, Tabular and others. And then there are over two dozen products that implement these database types. This is a lot of stuff to digest and figure out which would provide the best solution.
It would be preferred that the solution run on Windows and is compatible with Microsoft .NET.
Based on the information above, does anyone have any suggestions, and why?
Thanks
So, what is your problem? I do not really see anything nontrivial here.
Fast and scalable: grab a database (sorry, but complex queries across columns = database) and get a nice SAN - an HP EVA is great. I have seen one deliver 800 MB per second of random IO reads to a database, using 190 SAS discs. Fast enough for you? Sorry, but THIS is scalability.
A 400 GB database is not remarkable by any means.
Grab a decent server. Supermicro has one with space for 24 discs in 2 rack units.
Grab a higher-end SAS RAID controller - Adaptec, for example.
Plug in SSD drives in a RAID 10 configuration. You will be surprised - you will saturate the IO bus faster than you can say "ouch". Scalability is there with space for 24 discs and an IO bus that can handle 1.2 gigabytes per second.
Finally, get a pro to tune your database server(s). It's that simple. SQL Server is a lot more complicated to use properly than "OK, I just know how a SELECT should look" (without really knowing).
