How can I use Lucene to index my SQLite database for full-text search?
You can use the Hibernate-Lucene bridge, but this will require you to use Hibernate.
Lucene is "nothing more" than a set of Java libraries. That means you have to use those libraries "with something".
One way is the Hibernate-Lucene bridge, as mindas wrote.
Another way (which I'm using) is Solr.
You can use Solr to index your SQLite database.
But: you have to send the (full-text) search request to Solr to run such a search. As far as I know, there is no Lucene integration for SQLite databases.
You will need a proxy (like Solr) and an application that merges the worlds of SQLite and Solr (or another Lucene "proxy") together.
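To make that "merging" application concrete, here is a minimal sketch that reads rows from SQLite over JDBC and feeds them directly into a Lucene index. It assumes Lucene 5.x, the sqlite-jdbc driver, and an illustrative docs(id, body) table; none of these names come from the question.

import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class SqliteLuceneIndexer {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db");
             IndexWriter writer = new IndexWriter(
                     FSDirectory.open(Paths.get("lucene-index")),
                     new IndexWriterConfig(new StandardAnalyzer()))) {
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT id, body FROM docs"); // hypothetical table
            while (rs.next()) {
                Document doc = new Document();
                doc.add(new StringField("id", rs.getString("id"), Field.Store.YES));  // exact-match key
                doc.add(new TextField("body", rs.getString("body"), Field.Store.NO)); // analyzed full text
                writer.addDocument(doc);
            }
            writer.commit();
        }
    }
}

Re-running this after the database changes means re-indexing; keeping the Lucene index in sync with SQLite writes is exactly the glue work described above.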
Related
Hello, I already have a working application for searching a database. The database holds about 50M indexed documents. Is there any way to run it all together? I mean, I don't want Solr over HTTP. What should I do? Is it better to use Lucene or EmbeddedSolrServer? Or maybe you have another solution?
I already have something like the first diagram, and I want to make this run in a single process.
And if I go with Lucene, can I use my existing indexes from Solr?
solr-5.2.1
Tomcat v8.0
It is not recommended to run the application and Solr in one Tomcat.
If Solr crashes, there is a chance of downtime for the application, so it's always better to run Solr independently. Embedding Solr is also not recommended:
The simplest, safest, way to use Solr is via Solr's standard HTTP interfaces. Embedding Solr is less flexible, harder to support, not as well tested, and should be reserved for special circumstances.
For reference: http://wiki.apache.org/solr/EmbeddedSolr
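For the recommended setup, the application talks to a separately running Solr over HTTP, e.g. via SolrJ. A minimal sketch, assuming SolrJ 5.x (matching the solr-5.2.1 mentioned above); the core name "mycore" and the field names are assumptions:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SolrHttpSearch {
    public static void main(String[] args) throws Exception {
        // Points at a Solr core running in its own process/container
        try (HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/mycore")) {
            QueryResponse rsp = solr.query(new SolrQuery("body:lucene"));
            rsp.getResults().forEach(doc -> System.out.println(doc.getFieldValue("id")));
        }
    }
}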
It depends. If you drop Solr for plain Lucene but still want parts of the Solr feature set (Solr adds quite a few features on top of Lucene), you'll end up reimplementing features that you would otherwise get for free.
You can use EmbeddedSolr to have Solr internal to your application, and then use the EmbeddedSolrServer client in SolrJ to talk to it; the rest of your application would still use Solr as if it were a remote instance.
The problem with EmbeddedSolr is that you'll run into scalability issues as the index size grows, since you'll have a harder time scaling onto multiple servers and separating concerns.
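For illustration, a hedged sketch of the EmbeddedSolrServer route, assuming Solr/SolrJ 5.x; the solr home path and core name are assumptions. The point is that the rest of the code sees the same SolrClient API whether Solr is embedded or remote:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.core.CoreContainer;

public class EmbeddedSearch {
    public static void main(String[] args) throws Exception {
        CoreContainer container = new CoreContainer("/path/to/solr-home");
        container.load(); // reads solr.xml and the cores underneath solr-home
        // Same SolrClient interface as HttpSolrClient, so calling code is unchanged
        try (SolrClient solr = new EmbeddedSolrServer(container, "mycore")) {
            System.out.println(solr.query(new SolrQuery("*:*")).getResults().getNumFound());
        }
    }
}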
Does anyone know an alternative to Solandra for Cassandra?
I can't use the "LIKE" clause, and in my case I would use it all the time.
Thanks.
Datastax provides a "tweaked" version of Apache Solr (which saves data directly into Cassandra instead of flat files) to do real-time full-text search. It's called the DataStax Enterprise solution. Of course, it is not free.
As an alternative, you can couple Cassandra with an Elasticsearch cluster, but that's kind of heavy just for text search.
Last but not least, you can try to implement full-text search yourself, using Lucene as the engine and some hand-made Cassandra tables for storage; good luck with that, though.
You have 3 options to bring advanced search capabilities with Cassandra:
Datastax Solr as already mentioned
Elassandra = Elasticsearch on Cassandra. https://github.com/strapdata/elassandra and http://www.strapdata.com/ It's a good product; we use it in my company. The community edition is free, and the latest release combines Cassandra 3.11 with Elasticsearch 5.5. You will see on their website that there is a free-trial hosted solution you could use for testing.
Stratio Lucene plugin
https://github.com/Stratio/cassandra-lucene-index It's free, it works, and we also use it in my company. It's just a jar to drop into the Cassandra lib directory.
Of course, for very basic search needs, you can have a look at SASI too.
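To show what the SASI option looks like for the original "LIKE" problem, here is a minimal sketch using the DataStax Java driver. The keyspace/table/column names (myks, users(id uuid, name text)) are assumptions, and SASI itself requires Cassandra 3.4+:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class SasiLikeSearch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("myks")) {
            // CONTAINS mode allows '%term%' patterns, not just prefix matches
            session.execute("CREATE CUSTOM INDEX IF NOT EXISTS users_name_idx ON users (name) "
                    + "USING 'org.apache.cassandra.index.sasi.SASIIndex' "
                    + "WITH OPTIONS = {'mode': 'CONTAINS'}");
            for (Row row : session.execute("SELECT id, name FROM users WHERE name LIKE '%smith%'")) {
                System.out.println(row.getUUID("id") + " " + row.getString("name"));
            }
        }
    }
}

SASI only covers token/substring matching, though; for scoring, faceting, or complex boolean queries you would still want one of the Lucene-backed options above.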
I use Lucene locally to index documents. I know how to use Lucene pretty well. I have never used Solr, but I want to run a web search using a Lucene index, so I'm now looking into it.
Can I install Solr on EC2, say, and then, instead of indexing documents using Solr, do it locally using Lucene directly and just copy the Lucene index from my machine to EC2 for Solr to use for search?
I'm assuming it's possible as long as I keep the index on disk but would like to be sure.
Thanks!
It's certainly possible; you would only need to make sure to maintain exactly the same index structure (defined by the Solr schema). However, it would also mean that your configuration would be stored in two completely separate places -- e.g., each time you changed an analyzer in Lucene, you would need to synchronize that change in the Solr XML configuration. I'm not sure what benefit Solr would bring in such a use case.
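To illustrate the synchronization burden: the analyzer picked on the Lucene side fixes how text is tokenized at index time, and the Solr core searching that index must declare an equivalent analysis chain for the same field. A sketch, assuming Lucene 5.x; the "body" field is an assumption:

import java.nio.file.Paths;

import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class AnalyzerChoice {
    public static void main(String[] args) throws Exception {
        // This line fixes how the "body" text gets tokenized/stemmed at index time...
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(Paths.get("lucene-index")),
                new IndexWriterConfig(new EnglishAnalyzer()));
        // ...and schema.xml on the Solr side must declare an equivalent chain
        // (standard tokenizer + stop words + Porter stemming) for that field.
        // Change one without the other and query-time analysis silently
        // diverges from index-time analysis.
        writer.close();
    }
}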
Does anyone know / can someone point to a NoSQL DB which supports faceting, like Apache Solr, off the shelf?
I have read that Sphinx doesn't support faceted search out of the box, but one can implement it in the form of a plugin.
Upd: I'm only interested in enterprise-level systems.
CouchDB (Erlang) and RavenDB (.NET) are both based on Lucene, so it should be possible to make them both support faceted search. RavenDB already partially supports it.
And Sphinx isn't a NoSQL DB.
Turns out that Sphinx does have a faceted search feature. It has similar features -- a "linguistic" pipeline, distributed search, tokenization, etc. -- to those in Apache Solr/Lucene. It is an interesting alternative to Apache Solr in the sense that it is written in C++, yet is language-independent on the client side, the same way Solr is. It is open source like Solr/Lucene, so customizing the code is possible.
Can I use a MapReduce framework to create an index and somehow add it to a distributed Solr?
I have a burst of information (logfiles and documents) that will be transported over the internet and stored in my datacenter (or Amazon). It needs to be parsed, indexed, and finally searchable by our replicated Solr installation.
Here is my proposed architecture:
Use a MapReduce framework (Cloudera, Hadoop, Nutch, even DryadLinq) to prepare those documents for indexing
Index those documents into a Lucene.NET / Lucene (java) compatible file format
Deploy that file to all my Solr instances
Activate that replicated index
If the above is possible, I need to choose a MapReduce framework. Since Cloudera is vendor-supported and has a ton of patches not included in the stock Hadoop install, I think it may be worth looking at.
Once I choose the MapReduce framework, I need to tokenize the documents (PDF, DOCX, DOC, OLE, etc.), index them, copy the index to my Solr instances, and somehow "activate" them so they are searchable in the running instance. I believe this methodology is better than submitting documents via the REST interface to Solr.
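To make the tokenizing step concrete, here is a hedged sketch of a Hadoop map task that uses Apache Tika (which can extract text from PDF, DOC, DOCX, OLE, and many other formats) to turn each document into plain text for a downstream indexing step. The class name and the "id<TAB>path" input convention are illustrative, not from the question:

import java.io.File;
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.tika.Tika;

public class DocPrepMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Tika tika = new Tika(); // auto-detects PDF, DOC, DOCX, OLE, ...

    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
        // Each input line: "<docId>\t<path to a locally accessible copy of the file>"
        String[] parts = line.toString().split("\t", 2);
        try {
            String plainText = tika.parseToString(new File(parts[1]));
            ctx.write(new Text(parts[0]), new Text(plainText)); // reducer builds the index
        } catch (Exception e) {
            ctx.getCounter("docprep", "parse-failures").increment(1); // skip unparsable docs
        }
    }
}

A real job would pull the bytes from HDFS rather than a local path, and the reduce side would write the Lucene segments; this only shows the extraction stage.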
The reason I bring .NET into the picture is that we are mostly a .NET shop. The only Unix/Java we will have is Solr, plus a front end that leverages the REST interface via SolrNet.
Based on your experience, how does this architecture look? Do you see any issues/problems? What advice can you give?
What should I avoid doing so that I don't lose faceted search? After reading the Nutch documentation, I believe it said that Nutch does not do faceting, but I may not have enough background in this software to understand what it's saying.
Generally, what you've described is almost exactly how Nutch works. Nutch is a crawling, indexing, index-merging, and query-answering toolkit that's based on Hadoop core.
You shouldn't mix up Cloudera, Hadoop, Nutch, and Lucene. You'll most likely end up using all of them:
Nutch is the name of the indexing / query-answering (Solr-like) machinery.
Nutch itself runs on a Hadoop cluster (and heavily uses Hadoop's own distributed file system, HDFS).
Nutch uses the Lucene index format.
Nutch includes a query-answering frontend, which you can use, or you can attach a Solr frontend and use the Lucene indexes from there.
Finally, the Cloudera Hadoop Distribution (or CDH) is just a Hadoop distribution with several dozen patches applied to it, to make it more stable and to backport some useful features from development branches. Yeah, you'd most likely want to use it, unless you have a reason not to (for example, if you want the bleeding-edge Hadoop 0.22 trunk).
Generally, if you're just looking for a ready-made crawling / search engine solution, then Nutch is the way to go. Nutch already includes a lot of plugins to parse and index various crazy types of documents, including MS Word documents, PDFs, etc.
I personally don't see much point in using .NET technologies here, but if you feel comfortable with them, you can do the front ends in .NET. However, working with Unix technologies might feel fairly awkward for a Windows-centric team, so if I managed such a project, I'd consider alternatives, especially if your task of crawling and indexing is limited (i.e. you don't want to crawl the whole internet for some purpose).
Have you looked at Lucandra (https://github.com/tjake/Lucandra), a Cassandra-based back end for Lucene/Solr? You can use Hadoop to populate the Cassandra store with the index of your data.