We are currently using Solr 4.3 Cloud in master-slave mode and have been pretty happy with our initial Solr POC. We are looking to store social data (tweets, blogs, Facebook feeds) in Solr and make it searchable, while also utilizing the faceting capabilities Solr provides.
Going by the volume of social data that comes in, we were wondering what kind of infrastructure would be required to store, say, 2 TB of data and query it with minimal latency.
Also, given the rate at which tweets come in, what would be the best indexing strategy?
I would suggest choosing a cloud platform. AWS is a good option here since you can always try a machine and then change it if it doesn't suit your requirements.
So what will suit your requirements?
Since Solr is I/O-heavy, I would suggest going with a high-I/O machine and good processing power.
I suggest using a c3.2xlarge AWS instance (8 ECU, 15 GB RAM, 2 x 80 GB SSD) and attaching an EBS volume of at least 4 TB to it.
This would solve your problem.
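If you go the EBS route, here is a rough sketch of creating and attaching such a volume with boto3 (the region, availability zone, instance ID, and device name below are just placeholders):

```python
# Sketch: create a 4 TB EBS volume and attach it to an existing instance
# using boto3. Region, AZ, instance ID and device name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Create a 4 TB general-purpose SSD volume in the same AZ as the instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # must match the instance's AZ
    Size=4096,                       # size in GiB (~4 TB)
    VolumeType="gp2",
)

# Wait until the volume is available before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the volume to the c3.2xlarge instance.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```

After attaching you would still format and mount the device on the instance and point Solr's data directory at it.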
I am writing a feature that might lead to us executing a few hundred or even a thousand MongoDB transactions for a particular endpoint. I want to know if there is a maximum limit to the number of transactions that can occur in MongoDB.
I read this old answer about SQL Server, "Can SQL Server 2008 handle 300 transactions a second?", but couldn't find anything on Mongo.
It's really hard to find a non-biased benchmark, let alone one that objectively reflects your projected workload.
Here is one, by the makers of Cassandra (obviously, Cassandra wins here): Cassandra vs. MongoDB vs. Couchbase vs. HBase.
The takeaway is a few thousand operations per second as a starting point, and it only goes up as the cluster size grows.
Once again, the numbers there are just a baseline and cannot be used to correctly estimate the performance of your application on your data. Not all transactions are created equal.
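If you want a number you can actually trust, the simplest thing is to benchmark your own workload. A rough sketch with pymongo, assuming a replica set (multi-document transactions require one) and made-up collection and field names:

```python
# Rough throughput check for multi-document transactions with pymongo.
# Assumes a replica set (transactions require one); names are made up.
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
coll = client.testdb.orders
coll.insert_one({"_id": "warmup"})  # ensure the collection exists (needed pre-4.4)

N = 1000
start = time.time()
for i in range(N):
    with client.start_session() as session:
        with session.start_transaction():
            coll.insert_one({"order_id": i, "status": "new"}, session=session)
            coll.update_one({"order_id": i}, {"$set": {"status": "done"}},
                            session=session)
elapsed = time.time() - start
print("%d transactions in %.2fs -> %.0f tx/s" % (N, elapsed, N / elapsed))
```

Swap the two operations inside the transaction for something shaped like your real endpoint; that matters far more than any published benchmark.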
Well, this isn't a direct answer to your question, but since you quoted a comparison, I would like to share an experience with Couchbase. When it comes to Couchbase, a cluster's performance is usually limited by network bandwidth (assuming you have given it SSD/NVMe storage, which improves storage latency). I have achieved in excess of 400k TPS on a 4-node cluster running Couchbase 4.x and 5.x in a K/V use case.
Node specs below:
12 core x 2 Xeon on HP BL460c blades
SAS SSDs (NVMe would generally be a lot better)
10 Gbps network within the blade chassis
Before we arrived here, we moved on from MongoDB, which was limiting system throughput to a few tens of thousands at most.
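For reference, the kind of pure K/V access pattern I am talking about looks roughly like this with the older 2.x Couchbase Python SDK (bucket name and keys are placeholders, and a real load test would run many clients in parallel rather than a single loop):

```python
# Minimal single-threaded K/V loop with the Couchbase Python SDK 2.x.
# Bucket name and keys are placeholders; a real load test would run many
# clients/threads in parallel to saturate the network.
import time
from couchbase.bucket import Bucket

bucket = Bucket("couchbase://127.0.0.1/default")

N = 100000
start = time.time()
for i in range(N):
    key = "user::%d" % i
    bucket.upsert(key, {"id": i, "visits": 1})   # K/V write
    bucket.get(key)                              # K/V read
elapsed = time.time() - start
print("%d ops in %.1fs -> %.0f ops/s (single client)"
      % (2 * N, elapsed, 2 * N / elapsed))
```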
We have been looking for a big data store that can collect a huge pool of user information.
Also, I would like to mention that we are working on an RTB platform (namely the DSP side). As a result, our platform handles around 100 million requests per day, which is about 1-2 thousand requests per second (depending on the time of day). Here is a simple overview of what we are going to implement:
My questions are:
Is Solr (SolrCloud) a good solution for a Data Management Platform?
Do you think SolrCloud can handle high-frequency traffic?
I have checked the Solr performance problems page, and one of the issues listed there is extremely frequent commits (under the "Slow commits" item). What is the limit? (See the indexing sketch after this list for the kind of batching we have in mind.)
What SolrCloud configuration would be suitable for us? I mean the number of shards and cores, and the server configuration (CPU, RAM), to handle 1-2K QPS and store about 500M docs. How can we calculate this?
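For context on the commit question, here is a minimal sketch of the batched indexing we have in mind, using pysolr with commitWithin instead of an explicit commit per request (the URL, collection, and field names are placeholders):

```python
# Sketch of batched indexing with pysolr: buffer documents and let Solr's
# commitWithin/autoCommit handle visibility instead of committing per request.
# URL, collection and field names are placeholders.
import pysolr

solr = pysolr.Solr("http://solrhost:8983/solr/dmp_collection", timeout=10)

BATCH_SIZE = 1000
batch = []

def index_event(event):
    """Buffer an incoming user/bid-request event and flush in batches."""
    batch.append({"id": event["user_id"], "segment": event["segment"]})
    if len(batch) >= BATCH_SIZE:
        # commitWithin lets Solr make the docs searchable within ~10s
        # without an explicit (expensive) commit per batch.
        solr.add(batch, commit=False, commitWithin=10000)
        batch.clear()
```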
Using App Engine with Python and the HRD, retrieving records sequentially (via an indexed field, an incrementing integer timestamp), we get 15,000 records returned in 30-45 seconds. (Batching and limiting are used.) I experimented with running queries on two instances in parallel but still achieved the same overall throughput.
Is there a way to improve this overall number without changing any code? I'm hoping we can just pay more and get better database throughput. (You can pay more for bigger frontends, but that didn't affect database throughput.)
We will be changing our code to store multiple underlying data items in one database record, but hopefully there is a short-term workaround.
Edit: These are log records being downloaded to another system. We will fix it in the future and know how to do so, but I'd rather work on more important things first.
Try splitting the records into different entity groups. That might force them onto different physical servers. Read the entity groups in parallel from multiple threads or instances.
Using a cache might not work well for large tables.
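A rough sketch of what that could look like with ndb, assuming the records are sharded under a handful of made-up LogShard ancestor keys and fetched in parallel with async queries:

```python
# Sketch: shard log records under several ancestor (entity group) keys and
# read them in parallel with async queries. Model and kind names are made up.
from google.appengine.ext import ndb

NUM_SHARDS = 8

class LogRecord(ndb.Model):
    timestamp = ndb.IntegerProperty()
    payload = ndb.TextProperty()

def shard_key(i):
    # One entity group per shard; new records pick a shard e.g. by hash.
    return ndb.Key("LogShard", "shard-%d" % i)

def fetch_all_parallel(limit_per_shard=2000):
    # Kick off one async ancestor query per shard, then wait for all of them.
    # (The ordered ancestor query may need a composite index in index.yaml.)
    futures = [
        LogRecord.query(ancestor=shard_key(i))
                 .order(LogRecord.timestamp)
                 .fetch_async(limit_per_shard)
        for i in range(NUM_SHARDS)
    ]
    results = []
    for f in futures:
        results.extend(f.get_result())
    return results
```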
Maybe you can cache your records, e.g. with Memcache:
https://developers.google.com/appengine/docs/python/memcache/
This could definitely speed up your application's access. I don't think the App Engine Datastore is designed for speed but for scalability. Memcache, however, is.
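For what it's worth, a minimal read-through cache sketch with the App Engine memcache API (the key prefix and the datastore helper are made up):

```python
# Read-through cache sketch using App Engine's memcache API.
# Key prefix and the underlying datastore fetch are made up for illustration.
from google.appengine.api import memcache

def get_record(record_id):
    key = "record:%s" % record_id
    record = memcache.get(key)
    if record is None:
        record = load_record_from_datastore(record_id)  # hypothetical helper
        memcache.set(key, record, time=600)  # cache for 10 minutes
    return record
```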
BTW, if you are concerned about the performance GAE gives you for what you pay, then maybe you can try setting up your own App Engine cloud with:
AppScale
JBoss CapeDwarf
Both have active community support. I'm using CapeDwarf in my local environment; it is still in beta, but it works.
Move to any of the in-memory databases. If you have Oracle Database, using TimesTen will improve the throughput multifold.
I have run Nutch 2.1 locally without any difficulty. I have also tried it on a 3-machine distributed cluster. We're now discussing whether or not to run it on Amazon Web Services. I do not have much experience with AWS. My question is: is it possible and necessary to run the Nutch 2.1 crawling and indexing parts in the cloud? What possible advantages and disadvantages would we have?
Thanks.
If you have a cluster with the same capacity as the AWS cluster you plan to invest in, then there is no advantage except for #1 below.
Here are several factors that you should think about before switching to AWS:
Locality of hosts crawled: Say you are sitting in Europe and the websites you want to crawl are hosted far away, say in Australia. If you buy AWS nodes located in Australia, crawling that data would be much faster than crawling from Europe.
Cost: To use AWS machines, you need to pay for them on an hourly basis. Can you afford that? If not, better to use your own machines.
Current cluster capacity: Does your current cluster have ample capacity and space to handle the amount of crawled data? I don't think there will be a problem in terms of computational speed, as Nutch runs on Hadoop, which was designed to run on commodity hardware. But can your cluster accommodate all of the data being fetched by the crawler?
Volume of data: What is a rough estimate of the volume of data being crawled? If it is small, it makes no sense to have an AWS cluster.
Time constraints: Is there a deadline for completing the crawl?
If you are doing this for a professional project, then these factors must be given thought.
If you are doing it for fun/hobby/learning, go ahead and use the free-tier nodes of AWS. Those are low-capacity nodes that Amazon provides for free. It's fun to learn new things :)
Advantages of AWS:
No need to buy machines to set up a cluster. You can get started without any hardware except a terminal PC.
Locality
No need to look after machines. If a node crashes badly, leave it (it's not your problem :P). Buy a new one, add it to the cluster, and go ahead.
Disadvantages of AWS:
Costly.
Copying data to any machine outside the AWS cluster is charged.
Your data is NOT persisted when you give up the procured AWS nodes. If you want to persist it, pay for and use the S3 storage service.
I am using Google App Engine for an app, and the app is currently hitting the datastore at a rate of around 2.5 million row writes, and 4.5 million row reads per day.
I am currently porting the app to Amazon Elastic Beanstalk and Amazon RDS due to the very high costs of running an application on GAE.
Based on the values above, how can I estimate what type of RDS instance I will need for my requirements? Is the above a considerable amount of processing for, let's say, a Small or Micro MySQL RDS instance to handle in a day?
Totally depends on a number of factors:
Row size.
Field types and sizes.
Complexity of your queries (joins, etc).
Proper use of indexes.
Row contention and other possible bottlenecks.
Really hard to tell. But from experience, if you don't need fancy replication or sharding, the costs of the GAE datastore are usually higher as it offers total redundancy, distribution, scalability, etc.
My suggestion would be to write a quick program to benchmark a load on RDS that replicates what you are expecting. It should be easy to write if you forgo all the business rules and just do fake but randomized reads and writes, along the lines of the sketch below.
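Something like the following rough sketch would do, using PyMySQL against the RDS endpoint (host, credentials, and schema are placeholders); scale the counts toward your 2.5M writes / 4.5M reads per day and see how the instance copes:

```python
# Rough load generator for an RDS MySQL instance: fake but randomized
# reads and writes, no business logic. Host, credentials and schema are
# placeholders.
import random
import string
import time
import pymysql

conn = pymysql.connect(host="mydb.xxxx.rds.amazonaws.com",
                       user="bench", password="secret", database="benchdb")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS bench "
            "(id INT AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(255))")

WRITES, READS = 10000, 18000   # keeps roughly the 2.5M:4.5M ratio, scaled down
start = time.time()
for _ in range(WRITES):
    payload = "".join(random.choice(string.ascii_letters) for _ in range(100))
    cur.execute("INSERT INTO bench (payload) VALUES (%s)", (payload,))
conn.commit()
for _ in range(READS):
    cur.execute("SELECT payload FROM bench WHERE id = %s",
                (random.randint(1, WRITES),))
    cur.fetchone()
elapsed = time.time() - start
print("%d writes + %d reads in %.1fs" % (WRITES, READS, elapsed))
```

Run it from an EC2 instance in the same region so network latency doesn't dominate, and try it against a Micro and a Small instance to see where your daily volume actually fits.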