I know this question has been asked over and over again, but unfortunately the various answers I have found on the net are old, fragmentary, and confusing. The question is this:
Is there a better way today to synchronize and index various MongoDB collections into Apache Solr, and then use Solr as the search engine?
Among other things, the Data Import Handler seems to be deprecated in upcoming versions of Solr. So what is the right way?
If possible, a live example of importing a whole database, several collections, or a single collection into Apache Solr would help.
Thanks. I think a clear answer in 2021 would help a lot of people.
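For orientation, the pattern that has largely replaced the Data Import Handler is doing the sync yourself, for example by watching a collection with MongoDB change streams and pushing updates to Solr. A minimal sketch using pymongo and pysolr; the host, database, core, and field names are all assumptions:

import pymongo
import pysolr

# Hypothetical hosts, core, and field names; a replica set is required for change streams.
mongo = pymongo.MongoClient("mongodb://localhost:27017")
solr = pysolr.Solr("http://localhost:8983/solr/products", always_commit=True)
collection = mongo["shop"]["products"]

# One-off full import: copy every document into Solr.
solr.add([{"id": str(d["_id"]), "name": d.get("name", "")} for d in collection.find()])

# Incremental sync: change streams deliver inserts, updates, and deletes as they happen.
for change in collection.watch(full_document="updateLookup"):
    doc_id = str(change["documentKey"]["_id"])
    if change["operationType"] == "delete":
        solr.delete(id=doc_id)
    elif change.get("fullDocument"):
        doc = change["fullDocument"]
        solr.add([{"id": doc_id, "name": doc.get("name", "")}])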
For statistical purposes I have to save and analyse all search queries made to a server running Solr (version 8.3.1). Maybe it's just because I haven't worked with Solr until today, but I couldn't find a simpler way to access these queries than crawling the logs.
I've only found one article to help me, in which the following is stated:
I think that Solr by itself doesn't store the queries (correct me if I'm wrong about this), but you can accomplish what you want by processing the Solr log (it's the only way, I think).
(source: https://lucene.472066.n3.nabble.com/is-it-possible-to-save-the-search-query-td4018925.html)
Is there any more convenient way to do this?
I actually found a good way to achieve this in another SO question. Well, at least kind of.
Note: It is only useful if you have enough resources on the same server, or on another server, to properly handle a second Solr core.
Link to original answer
That SO question is about Elasticsearch, but its methodology can also be applied here, with a second Solr core that indexes the queries made. (One can also add fields such as when a query was last searched, its total search count, ...)
The functionality of search auto-completion is also achievable with this solution.
In short:
The basic idea is to use a second Solr instance to provide the means necessary for quickly saving the queries (instead of a DB for instance).
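As an illustration, here is a minimal sketch of that idea using pysolr, assuming a second core named query_log has been created; the core and field names are hypothetical:

import datetime
import pysolr

# Hypothetical core names; "query_log" is the second core described above.
search_core = pysolr.Solr("http://localhost:8983/solr/main")
query_log = pysolr.Solr("http://localhost:8983/solr/query_log", always_commit=True)

def search(user_query):
    # Record the query before running the real search; using the query string
    # as the id means repeated searches overwrite a single document.
    query_log.add([{
        "id": user_query,
        "query_text": user_query,
        "last_searched": datetime.datetime.utcnow().isoformat() + "Z",
    }])
    return search_core.search(user_query)

Auto-completion then becomes a simple prefix query against the query_log core.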
Remark: I'm not going to accept this as the best answer, because it is a rather special solution to the question I originally asked. But I nonetheless felt it could be useful for any programmer trying to achieve this while also thinking about search auto-completion.
I would like to be able to search a CouchDB database using Solr. Are there any projects that provide such an integration?
I am also aware of CouchDB-Lucene. Is there a way to hook Solr into that?
Thanks!
It would make more sense to roll your own, given how easy it is. First you need to decide what kind of SOLR schema to use and how to map your CouchDB documents onto that schema. Then simply iterate through all the documents in the database (see Pagination in CouchDB?) and generate SOLR <add> documents.
People do this all the time with all kinds of data sources. Since SOLR is essentially searching a single table, the hard work is often figuring out how to map your database format onto a single table. Read up on what you can do with the SOLR schema, and you may be surprised at how easy this is.
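A minimal sketch of that loop in Python, assuming a hypothetical books database with a matching Solr core, and using Solr's JSON update handler instead of XML <add> documents:

import requests

COUCH = "http://localhost:5984/books"             # assumed database
SOLR = "http://localhost:8983/solr/books/update"  # assumed core

# _all_docs?include_docs=true returns every document in one pass;
# page with limit/startkey instead for large databases.
rows = requests.get(COUCH + "/_all_docs", params={"include_docs": "true"}).json()["rows"]

# Map each CouchDB document onto the flat Solr schema, skipping design docs.
solr_docs = [
    {"id": row["id"], "title": row["doc"].get("title", "")}
    for row in rows
    if not row["id"].startswith("_design/")
]

# Post the batch and commit so it becomes searchable immediately.
requests.post(SOLR, json=solr_docs, params={"commit": "true"}).raise_for_status()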
There is a CouchDB integration for ElasticSearch available, apart from feeding ElasticSearch with JSON on your own. Both work with schema-less JSON, so it's very easy to integrate them.
In terms of features, ElasticSearch offers a set comparable to Solr's (plus some unique features, of course).
According to the CouchDB Related Projects wiki page (http://wiki.apache.org/couchdb/Related_Projects), there was a CouchDB-Solr2 project (scroll down to the end), which is no longer maintained.
Does anyone here have any experience deploying a real online system with full-text search on any of the NoSQL databases?
For example, how does the full-text search compare in MongoDB, Riak and CouchDB?
Some of the metrics I am looking at are ease of deployment and maintenance, and of course speed.
How mature are they? Are they a replacement for the Lucene infrastructure?
None of the existing "NoSQL" databases provides a reasonable implementation of anything that could be called "full-text search". MongoDB in particular has barely anything so far (matching with regular expressions is not full-text search, and searching with the $in or $all operators on a keyword list is just a very poor imitation of it). Using Solr, ElasticSearch or Sphinx is straightforward: an implementation and integration at the application level. Your choice largely depends on your requirements and current setup.
Yes. See CouchDB-Lucene which is a CouchDB extension to support full Lucene queries of the data.
I'm involved in the development of an application using Solandra (Cassandra based Apache Solr). In my experience the system is quite stable and able to handle TB+ data. I'm personally quite happy with the software for the following reasons:
1. Automated partitioning of data due to Cassandra backend.
2. Rich querying capabilities (due to Solr and Lucene).
3. Fast read and writes (writes significantly faster than reads).
However, I believe Solandra currently does not support batch mutations. That is, I can insert 100 columns in a single insertion into Cassandra, but Solandra does not support this.
For MongoDB, there isn't a built-in full-text indexing feature yet; however, there's possibly one in the pipeline, perhaps due in v2.2.
In the meantime, you can create a simple inverted index by using a string array field, and putting an index on it, as described here: Full Text Search in Mongo
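For instance, a tiny pymongo sketch of that keyword-array approach (collection and field names are made up):

import pymongo

posts = pymongo.MongoClient()["blog"]["posts"]   # hypothetical collection
posts.create_index("keywords")                   # multikey index over the array

# Store a naively tokenized copy of the text alongside the document.
text = "Full text search in Mongo"
posts.insert_one({"title": text, "keywords": [w.lower() for w in text.split()]})

# Find documents containing both terms:
for doc in posts.find({"keywords": {"$all": ["full", "search"]}}):
    print(doc["title"])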
Or, you could maintain a parallel full-text index in a dedicated Solr or Lucene index, and, if you're feeling really ambitious, replicate directly to your full-text store from the Mongo oplog. Otherwise, populate both and keep them in sync from your application logic.
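If you do go the ambitious route, the oplog can be tailed with an ordinary tailable cursor. A rough sketch with pymongo and pysolr; the namespace, core, and field names are assumptions, and it ignores updates and resume logic:

import pymongo
import pysolr

client = pymongo.MongoClient()                   # must be a replica set member
oplog = client["local"]["oplog.rs"]
solr = pysolr.Solr("http://localhost:8983/solr/posts", always_commit=True)

cursor = oplog.find(
    {"ns": "blog.posts"},                        # only ops on the collection we index
    cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
)
for op in cursor:
    if op["op"] == "i":                          # insert: "o" holds the new document
        doc = op["o"]
        solr.add([{"id": str(doc["_id"]), "title": doc.get("title", "")}])
    elif op["op"] == "d":                        # delete: "o" holds the _id
        solr.delete(id=str(op["o"]["_id"]))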
I've just finished doing this with data stored in MongoDB and my full-text engine in Sphinx Search. I know Mongo has a votable issue for adding full-text search in a future release; however, at this point they don't have it.
There are several ways of getting your Mongo data into Sphinx; however, the one I've had the most luck with (and which has been extremely easy) is xmlpipe2. It took me a bit to fully understand how to use it, but this article, Sphinx xmlpipe2 in PHP, has an outstanding walkthrough which shows (at least in PHP) how to build the document stream and then how to feed it into Sphinx.
Essentially my config ends up looking like this:
source my_source {
type = xmlpipe2
xmlpipe_command = /usr/bin/php /www/generateSphinXml.php identifierForMyTable
}
with my index then looking like this:
index my_index {
source = my_source
path = /usr/local/sphinx/var/data/my_index
docinfo = extern
min_word_len = 1
mlock = 0
morphology = stem_en
charset_type = utf-8 # <----- This is a requirement, however.
enable_star = 1
html_strip = 0
min_prefix_len = 2
}
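The xmlpipe_command script just has to print an xmlpipe2 XML stream to stdout. A rough Python equivalent of what the linked PHP article builds; the collection and field names are made up:

#!/usr/bin/env python
# Hypothetical sketch of an xmlpipe2 generator fed from MongoDB.
import sys
from xml.sax.saxutils import escape

import pymongo

articles = pymongo.MongoClient()["mydb"]["articles"]

sys.stdout.write('<?xml version="1.0" encoding="utf-8"?>\n')
sys.stdout.write('<sphinx:docset>\n<sphinx:schema>\n')
sys.stdout.write('  <sphinx:field name="title"/>\n')
sys.stdout.write('  <sphinx:field name="body"/>\n')
sys.stdout.write('</sphinx:schema>\n')

# Sphinx wants a numeric document id, so keep a counter next to the Mongo _id.
for i, doc in enumerate(articles.find(), start=1):
    sys.stdout.write('<sphinx:document id="%d">\n' % i)
    sys.stdout.write('  <title>%s</title>\n' % escape(doc.get("title", "")))
    sys.stdout.write('  <body>%s</body>\n' % escape(doc.get("body", "")))
    sys.stdout.write('</sphinx:document>\n')

sys.stdout.write('</sphinx:docset>\n')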
I've had excellent success with this; hopefully you find it useful too.
Couchbase 5.0 is adding full-text search capabilities built on the open-source Bleve engine. You enable full-text indexing and start querying against existing JSON documents in the database.
Some slides and a presentation video covering the topic, mentioning Elasticsearch and Lucene as well: https://www.slideshare.net/Couchbase/fulltext-search-how-it-works-and-what-it-can-do
If you are using PHP, there is a great solution for full-text search in the NoSQL database MongoDB called MongoLantern: http://sourceforge.net/projects/mongolantern/
Previously I was using Sphinx+MongoDB to perform full-text search; the performance was great, but the result quality was very poor. With MongoLantern my search has improved a lot.
MongoLantern is also listed on the MongoDB site.
Please let me know if you try it on your own.
Solr can be used with 10gen's Mongo Connector, which lets you push data to Solr (among other targets):
https://github.com/10gen-labs/mongo-connector/tree/master/mongo-connector
From their example:
python mongo_connector.py -m localhost:27217 -t http://localhost:8080/solr
There is also the cLunce project, and Xapian, which wasn't mentioned above. I use Sphinx and it's very good, but somewhat clumsy to set up. I actually prefer piping data from Mongo into Sphinx via xmlpipe2 instead of using Sphinx's SQL in the sphinx.conf file.
Definitely Solr. It is NoSQL.
It has:
awesome performance
awesome storage options
stemmers
highlighting
faceting
distributed search (SolrCloud)
perfect API
web admin
HTML, PDF, DOC indexing
many other features
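For a taste of the query side, a small pysolr sketch exercising the faceting and highlighting features listed above; the core and field names are made up:

import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/docs")  # hypothetical core

results = solr.search("search engine", **{
    "facet": "true",
    "facet.field": "category",   # term counts per category value
    "hl": "true",
    "hl.fl": "body",             # highlighted snippets from the "body" field
})

print(results.facets["facet_fields"]["category"])
print(results.highlighting)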
I am looking for datasets that can be used to implement the recommendation-system use case of Apache Mahout. I know of only the MovieLens Data Sets from the GroupLens Research group.
Does anyone know of any other datasets that can be used for a recommendation-system implementation? I am particularly interested in item-based datasets, though other datasets are most welcome.
this is Sebastian from Mahout.
There is a dataset from a czech dating website available that might be of interest to you: http://www.occamslab.com/petricek/data/
Btw, the term "item-based" refers to a particular collaborative filtering approach, not to the dataset itself; datasets usually come in the common form of the user-item-rating triples that most collaborative filtering approaches work with.
We would love to hear about your experimentation results and experiences (if you want to share them) on our user mailing list at user@mahout.apache.org
While searching for datasets, I found a few sites that list publicly available datasets which can be used for data mining. Some of these can be used for Mahout too:
Bixo Labs
UCI Datasets
KDnuggets
You can look at the iPinYou RTB Bidding Data Set:
Quora : http://qr.ae/OrqgM
http://contest.ipinyou.com/data-release.html
I've followed the CouchDB project with interest over the last couple of years, and see it is now an Apache Incubator project. Prior to that, the CouchDB web site was full of "do not use for production code" type disclaimers, so I'd done no more than keep an eye on it. I'd be interested to know your experiences if you've been using CouchDB either for a live project or a technology pilot.
After 18 months of prototypes, testing, and waiting for CouchDB to get ready, we moved an internal application over to CouchDB in December 2008. So far I'm very happy with that move. It gets rid of a lot of filesystem objects for us (PDFs and JPEGs, now stored as attachments in CouchDB). This lets us get rid of NFS and makes it easier to cluster/replicate our frontend webservers.
To what degree CouchDB is ready for you depends very much on the culture of your organization. We have an in-house development team maintaining several internal Erlang applications. Since CouchDB is written in Erlang and the codebase is of quite decent quality, we felt confident that we could fix show-stopper issues in CouchDB should the need arise, or at least get our data back out. We also hired one of the CouchDB core team as a consultant, just in case.
But CouchDB certainly isn't at 1.0 yet. There are crashes in the web worker processes all the time (if you misuse them). Replication breaks for us, and we don't get error messages about it. Documentation is still very lacking. Still, I'm confident that it will not eat our data, and development moves forward at a reasonable pace.
To give you an idea about our application: currently our biggest database is about 512,000 records taking 7.5 GB of disk space.
I use CouchDB to power a Facebook application (over 35k monthly active users). For a while it was using MySQL, but after porting the entire project over from Perl to Erlang, I decided to go for the gold and migrate all of the data into CouchDB and use that instead.
CouchDB has been a great data store to work with. I think that it is on track to becoming a major player in web-based services.
I got to know one of the people working on it (Jan) a while ago (about 6 months) and have been playing with it ever since. I found the community around CouchDB to be both very knowledgeable and helpful, so whenever I ran into an issue it was resolved in a matter of minutes, or hours at most.
We just kicked off a project the other week which basically requires us to store data in the non-relational way and due to CouchDB's document oriented store we selected it as one of the technologies to use. So this is actually the first time that I will run it in production, but I'm still pretty confident about it. :)
Just an update here (2009-10-25):
Our first CouchDB install is 20 GB and hosts 40 million records. It's been running in production since January 2009, and it's been great. Read (GET) speed is outstanding; we use it as a store for complex data, and then it's just pull.
Our second CouchDB installation has two databases: one is 160,000,000+ documents (210 GB) and growing by 150,000-300,000 documents a day; the other is only 35,000,000 documents (7 GB). This setup has a lot more reads and writes, and initial tests are performing very well.
View building on the 160,000,000-document database took roughly a week, but since then we upgraded to a larger Amazon EC2 instance, and we are also getting ready to upgrade to CouchDB 0.10.x (from 0.9.1), as this release includes a lot of performance improvements in view building.
I am using CouchDB in a few scenarios: as a document store for http://devk.it (under development) and, at a much larger scale, as a template store for a distributed email delivery system.
CouchDB is very slick for what it does, but I was not able to get it to run at as high a concurrency level as I would have expected. Also note that the maximum document size is fairly limiting at 1 MB, due to the hardcoded max input buffer size in mochiweb. You can, however, alter a header file and recompile to get around this limit.
I'm using CouchDB to store (and serve) article ratings on my blog. It's not exactly heavy traffic but it's been rock solid so far.
Also planning on adding comments sometime which I'll most likely also store in CouchDB.
I've found it quite easy to get started with, on OSX you can just download CouchDBX to get started quickly. I use a Sinatra backend with RestClient to interact with 'the couch' through straight HTTP verbs and such.
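That HTTP interface really is as plain as it sounds. The equivalent interaction in Python, assuming a local CouchDB with no auth and a hypothetical ratings database:

import requests

DB = "http://localhost:5984/ratings"   # hypothetical database

requests.put(DB)  # create the database (returns 409 if it already exists)

# PUT a rating document under a chosen id...
requests.put(DB + "/article-42", json={"article": 42, "rating": 5})

# ...and GET it back.
print(requests.get(DB + "/article-42").json())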
Great fun.
At the moment I'm working with CouchDB for a computer science thesis. I'm writing about my progress and opinions on my blog, http://metalelf0dev.blogspot.com. I think the project is well done, but the existing documentation isn't organized as well as it should be. A quick tutorial about the Futon web interface could be really useful for starters, IMHO :)
I have used CouchDB twice in production. The first was a wiki-like project, and I think CouchDB was a perfect candidate for that role; keeping versions of all docs helps a lot.
The second project was quite query-loaded, and the idea was to dump social data first, then query it with various filters. The standard CouchDB query features looked a bit poor for our needs, but we added Lucene as a full-text indexer and after that ran most of the queries through the Lucene side. That solution has worked well enough.