CouchDB document deletion and performance

When a CouchDB document is deleted with a DELETE HTTP request, the document isn't actually removed; instead it remains as a tombstone with "_deleted": true. A deletion is therefore an update to the document, so view indexes have to be updated as well (which I think is costly). So my question is: if space is no concern, is there any performance gain to be achieved by deleting documents?
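To make the mechanics concrete, here is a minimal sketch (assuming a local CouchDB at http://localhost:5984 with a hypothetical database and document name; adjust to your setup). A DELETE just writes a tombstone revision that the changes feed, and therefore every view index, still has to process:

```python
import requests

BASE = "http://localhost:5984/mydb"

# Fetch the current revision, then delete the document.
doc = requests.get(f"{BASE}/some-doc-id").json()
requests.delete(f"{BASE}/some-doc-id", params={"rev": doc["_rev"]})

# A plain GET now returns 404 {"error": "not_found", "reason": "deleted"},
# but the tombstone still appears in the changes feed with "deleted": true,
# and every view index has to process this extra revision.
changes = requests.get(f"{BASE}/_changes").json()
print([row for row in changes["results"] if row.get("deleted")])
```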

I ran the test below a while back, in a database with some pretty large documents.
I replicated the database and reran a view in a design document. Each view in this design document has hundreds of emits per document, with a total of around 10 million emits across the database. Generating the index file from scratch took around 3 hours.
I ran a view in another design document that only uses the doc._id field, and has only one emit per document. This took around 3 minutes.
I then deleted all documents and ran the views again, both completed in less than 2 minutes.

Related

Full build of a Solr index with a large amount of data

I have a text file containing over 10 million records of web pages.
I want to build a Solr index from this file every day (because the file is updated daily).
Are there any effective solutions for a full build of the Solr index in one go, such as using a MapReduce model to accelerate the build process?
I think using the Solr API to add documents is a little bit slow.
It is not clear how much content is in those 10 million records, but it may actually be simple enough to index them in bulk. Check your solrconfig.xml commit settings; you may, for example, have autoCommit configured with a low maxDocs setting. In your case, you may want to disable autoCommit completely and just commit manually at the end.
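As a rough sketch of the bulk approach (the core name "webpages", the JSON-lines input file, and the batch size are assumptions, not part of the question), you could stream the records to Solr's JSON update handler in batches and issue a single commit at the end rather than relying on autoCommit:

```python
import json
import requests

SOLR = "http://localhost:8983/solr/webpages"

def index_in_batches(records, batch_size=10000):
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) >= batch_size:
            # Each POST adds a batch of documents without committing.
            requests.post(f"{SOLR}/update", json=batch).raise_for_status()
            batch = []
    if batch:
        requests.post(f"{SOLR}/update", json=batch).raise_for_status()
    # One explicit commit at the very end instead of many intermediate ones.
    requests.post(f"{SOLR}/update", json={"commit": {}}).raise_for_status()

# Example: each line of the daily file is assumed to be one JSON document.
with open("webpages.jsonl") as f:
    index_in_batches(json.loads(line) for line in f)
```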
However, if it is still a bit slow, before going to map-reduce, you could think about building a separate index and then swapping it with the current index.
That way, you still have the previous collection to roll back to and/or compare against if needed. The new collection can even be built on a different machine and/or closer to the data.
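A sketch of the swap idea for a non-cloud setup, assuming two hypothetical cores named webpages_build and webpages_live and using the CoreAdmin SWAP action; the daily job rebuilds the offline core and then swaps it into place:

```python
import requests

ADMIN = "http://localhost:8983/solr/admin/cores"

def swap_cores():
    # Swap the freshly built core with the live one; searches never see a
    # half-built index and the old core stays available to roll back to.
    resp = requests.get(ADMIN, params={
        "action": "SWAP",
        "core": "webpages_build",
        "other": "webpages_live",
    })
    resp.raise_for_status()

# ... run the full bulk index against webpages_build, commit, then:
swap_cores()
```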

efficient way to get deleted documents

I'm searching for an efficient way to get the list of documents deleted in a Cloudant database.
Background: I have a Cloudant database containing 4 million records. The business logic also allows documents to be deleted. Data from this database is loaded daily into a SQL data warehouse, where deleted documents also need to be marked as deleted.
A full reload is not an option since it takes too long. Querying the _changes stream also does not seem to scale well when the Cloudant database contains this many documents.
I would use the _changes feed and apply a server-side filter function (http://guide.couchdb.org/draft/notifications.html) to eliminate all documents that don't have the _deleted property set. Your changes-feed listener would therefore only be notified when a DELETE operation is reported, and network traffic is kept to a minimum.
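A minimal sketch of that setup (the design document name "sync", the filter name "deleted_only", and the credentials are placeholders; the filter body is a standard CouchDB JavaScript filter function):

```python
import requests

DB_URL = "https://account.cloudant.com/mydb"
AUTH = ("user", "password")

# 1) Install a filter that only passes deleted documents.
design_doc = {
    "_id": "_design/sync",
    "filters": {
        "deleted_only": "function(doc, req) { return doc._deleted === true; }"
    },
}
requests.post(DB_URL, json=design_doc, auth=AUTH)

# 2) Consume the changes feed through the filter; only tombstones come back.
#    Persist last_seq after each run and pass it as `since` next time so the
#    daily warehouse load only sees new deletions.
params = {"filter": "sync/deleted_only", "since": 0}
changes = requests.get(f"{DB_URL}/_changes", params=params, auth=AUTH).json()
deleted_ids = [row["id"] for row in changes["results"]]
print(len(deleted_ids), "documents to mark as deleted, next since =", changes["last_seq"])
```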

Best strategy to index documents with Solr

I'm using Solr version 4 (with the Spring Data Solr API to index and fetch documents) and I have to decide which strategy to apply for indexing my documents.
I'm hesitating between two strategies:
Launch a batch job periodically to index all documents
Only index a document when it has changed
Which strategy is best? Maybe a mix? Or another approach?
I have some ideas about the pros and cons of each, but I don't have much experience with Solr.
Depends on how long indexing all your documents takes and how soon you want your index to be updated.
We have several Solr cores - some have fewer than 100K very small docs, and a full import via the data import handler (with optimize=true) runs in under 1 minute. We can tolerate delays of up to 15 minutes for them, so we run a full import for these cores every 15 minutes.
Then there are cores at the other extreme with several million docs, each fairly large, where full indexing takes several hours to complete. For such cores, we have a changelog table in MySQL that only records the docs that changed, and we run an incremental import for just those docs every few minutes.
Finally, there are cores in the middle, with about 500K docs of decent size, where we need atomic updates of certain fields every 5 to 10 minutes and full document updates for certain docs every few minutes as well. We run delta imports for these. A full index itself takes about 1.5 to 2 hours to run, which we do nightly.
So the answer to your question really depends on what your requirements are.

Handling a large number of IDs in Solr

I need to perform an "online users" search in Solr, i.e., a user needs to find the list of users who are currently online and match particular criteria.
How I am handling this: we store the IDs of online users in a table and send all the online user IDs in the Solr request, like
&fq=-id:(id1 id2 id3 ............id5000)
The problem with this approach is that when the list of IDs becomes large, Solr takes too much time to resolve it, and we have to send a large request over the network.
One solution could be a Solr join, but the online data changes regularly and I can't reindex every time (say every 5-10 minutes; it should be at least an hour).
Another solution I thought of is firing this query internally from Solr based on a certain URL parameter. I don't know much about Solr internals, so I'm not sure how to proceed.
With Solr 4's soft commits, committing has become cheap enough that it might be feasible to store the "online" flag directly in the user record and just add &fq=online:true to your query. That removes the overhead of sending 5000 IDs over the wire and parsing them, and lets Solr optimize the query a bit. Whenever someone logs in or out, set their status and set commitWithin on the update. It's worth a shot, anyway.
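A sketch of that approach, assuming a core named "users", a boolean field named "online", and a 10-second commitWithin window (none of which appear in the question); atomic updates additionally require the update log and stored fields in the schema:

```python
import requests

SOLR = "http://localhost:8983/solr/users"

def set_online(user_id, online):
    # Atomic update: only the "online" field changes; commitWithin makes the
    # change visible via a cheap soft commit within ~10 seconds.
    doc = {"id": user_id, "online": {"set": online}}
    requests.post(f"{SOLR}/update",
                  params={"commitWithin": 10000},
                  json=[doc]).raise_for_status()

def search_online(criteria_query):
    params = {"q": criteria_query, "fq": "online:true", "rows": 100, "wt": "json"}
    return requests.get(f"{SOLR}/select", params=params).json()["response"]["docs"]

set_online("user42", True)    # called on login
set_online("user42", False)   # called on logout
print(search_online("city:London"))
```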
We worked around this issue by implementing Sharding of the data.
Basically, without going heavily into code detail:
Write your own indexing code
use consistent hashing to decide which ID goes to which Solr server (see the sketch after this list)
index each user's data to the relevant shard (this can be several machines)
make sure you have redundancy
Query Solr shards
Do sharded queries in Solr using the shards parameter
Start an EmbeddedSolr and use it to do a sharded query
Solr will query all the shards and merge the results; it also provides timeouts if you need to limit the query time for each shard
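For the consistent-hashing step, here is a toy illustration (not the code we actually used): user IDs map onto a hash ring with virtual nodes, and the first shard at or after a user's point on the ring owns that user's document, so adding or removing a shard only relocates a small fraction of the IDs.

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, shards, replicas=64):
        # Each shard gets `replicas` virtual points on the ring for balance.
        self.ring = []
        for shard in shards:
            for i in range(replicas):
                self.ring.append((self._hash(f"{shard}-{i}"), shard))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def shard_for(self, user_id):
        # First point on the ring at or after the ID's hash, wrapping around.
        idx = bisect.bisect(self.keys, self._hash(user_id)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["http://solr1:8983/solr/users", "http://solr2:8983/solr/users"])
print(ring.shard_for("user42"))   # -> the shard this user's doc goes to
```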
Even with all of the above, I do not believe Solr is a good fit for this. Solr is not really well suited to searches on indexes that are constantly changing, and if you mainly search by ID, a search engine is not needed anyway.
For our project we basically implemented all the index building, load balancing and query engine ourselves and use Solr mostly as storage. We started using Solr back when its sharding was flaky and not performant; I am not sure what the state of it is today.
One last note: if I were building this system today from scratch, without all the work we did over the past 4 years, I would advise using a cache to store all the users that are currently online (say memcached or Redis) and, at request time, simply iterating over all of them and filtering according to the criteria. The filtering criteria can be cached independently and updated incrementally, and iterating over 5000 records is not necessarily very time-consuming if the matching logic is simple.
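A rough sketch of that cache-based alternative, assuming a Redis set of online user IDs maintained by the login/logout code and a per-user Redis hash for the attributes used in matching (key names and the matching logic are made up for illustration):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def login(user_id):
    r.sadd("online_users", user_id)

def logout(user_id):
    r.srem("online_users", user_id)

def search_online(criteria):
    # Iterate the (at most a few thousand) online users and filter in memory.
    # Per-user attributes are assumed to live in a Redis hash per user.
    results = []
    for user_id in r.smembers("online_users"):
        profile = r.hgetall(f"user:{user_id}")
        if all(profile.get(field) == value for field, value in criteria.items()):
            results.append(user_id)
    return results

login("user42")
r.hset("user:user42", mapping={"city": "London"})
print(search_online({"city": "London"}))
```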
Any robust solution will involve bringing your data close to Solr (in batch) and using it internally, NOT sending a very large request at search time, which should be a low-latency operation.
You should develop your own filter; the filter would cache the online-users data periodically (say, every minute). If the data changes VERY frequently, consider implementing a PostFilter.
You can find a good example of filter implementation here:
http://searchhub.org/2012/02/22/custom-security-filtering-in-solr/
One solution could be a Solr join, but the online data changes regularly and I can't reindex every time (say every 5-10 minutes; it should be at least an hour).
I think you could very well use Solr joins, but with a little bit of improvisation.
The solution I propose is as follows:
You can have two indexes (Solr cores):
1. Primary index (the one you have now)
2. Secondary index with only two fields, "ID" and "IS_ONLINE"
You could now update the secondary index frequently (on the order of seconds) and keep it in sync with the table you use for storing online users.
NOTE: Even if updated frequently, this secondary index would not degrade performance, provided we make the necessary tweaks, such as using appropriate queries during delta-import.
You could now perform a Solr join on the ID field between these two indexes to achieve what you want. Here is the link on how to perform Solr joins between indexes/Solr cores.
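A sketch of what that query could look like (the core names "primary" and "secondary" and the field names ID/IS_ONLINE follow this answer; everything else is an assumption about your schema). The join runs against the primary core and restricts results to IDs that the secondary core marks as online:

```python
import requests

SOLR = "http://localhost:8983/solr"

params = {
    "q": "city:London",   # whatever the user's search criteria are
    # Cross-core join: keep only primary docs whose id appears as ID in the
    # secondary core's online records.
    "fq": "{!join from=ID to=id fromIndex=secondary}IS_ONLINE:true",
    "rows": 100,
    "wt": "json",
}
resp = requests.get(f"{SOLR}/primary/select", params=params)
print(resp.json()["response"]["docs"])
```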

Sunspot with Solr 3.5. Manually updating indexes for real time search

I'm working with Rails 3 and Sunspot with Solr 3.5. My application uses Solr to index user-generated content and make it searchable by other users. The goal is to allow users to search this data as soon as possible after it is uploaded. I don't know if this qualifies as real-time search.
My application has two models
Posts
PostItems
I index posts by including data from post items, so that when a user searches on a description provided in a post_item record, the corresponding post object is returned in the search results.
Users frequently update post_items, so every time a new post_item is added I need to reindex the corresponding post object so that the new post_item is available in search.
So at the moment whenever I receive a new post_item object I run
post_item.post.solr_index!
which, according to this documentation, instantly updates the index and commits. This works, but is it the right way to handle indexing in this scenario? I read here that calling index while searching may break Solr. Also, frequent manual index calls are not the way to go.
Any suggestions on the right way to do this? Are there alternatives other than switching to ElasticSearch?
Try this gem: https://github.com/bdurand/sunspot_index_queue
You will then be able to batch reindex, say, every minute, and it definitely will not break the index.
If you are just starting out and have the luxury to choose between Solr and ElasticSearch, go with ElasticSearch.
We use Solr in production and have run into many weird issues as the index and search volume grew. The conclusion was that Solr was built/optimized for indexing huge documents (Word/PDF content) in large numbers (billions?), with the index updated once a day or every couple of days while nobody is searching.
It was the wrong choice for a consumer Rails application where documents are small and modest in number (in the millions), updates are random and continuous, and search needs to be somewhat real-time (a delay of 5-10 seconds is fine).
Some of the tricks we applied to tune the server:
removed all commits (i.e., !) from the Rails code,
used Solr auto-commit every 5/20 seconds,
set up a master/slave configuration,
ran index optimization (on the master) every hour,
and more.
We still see high CPU usage on the slaves when a commit triggers, and as a result some searches take a long time (over 60 seconds at times).
I also doubt that the batch-indexing sunspot_index_queue gem can remedy the high CPU issue.
