How can I integrate Kibana with Apache Solr instead of Elasticsearch?
If it cannot be done, what are the alternatives to Kibana for Solr?
At LucidWorks, we have ported Kibana to work with Solr and released it as open source.
If you want a bundled package, you can download that at http://www.lucidworks.com/lucidworks-silk/.
Our port of Kibana for Solr is bundled with Solr 4.7.0 and can be used as a query engine to build dashboards from indexes within the bundled Solr instance and/or indexes located on other Solr instances.
The source code is available at https://github.com/LucidWorks/banana.
We have also included a Solr Output Writer for LogStash with that bundle; however, you can use any ETL and indexing mechanism to get time-series data into Solr. Links to this GitHub repository are available on the LucidWorks page linked above.
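For illustration, here is a minimal SolrJ sketch of indexing a single timestamped event of the kind these dashboards query. It targets the SolrJ 4.x API that ships with the Solr 4.7 bundle, and the core name and field names ("logs", "event_timestamp", "message") are assumptions for the example, not part of the SiLK package:

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;
import java.util.Date;

public class IndexLogEvent {
    public static void main(String[] args) throws Exception {
        // Point at the Solr core backing the dashboard (URL and core name are placeholders)
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/logs");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "event-1");
        doc.addField("event_timestamp", new Date());    // time field the dashboard filters on
        doc.addField("message", "user login succeeded");

        solr.add(doc);
        solr.commit();     // make the event visible to dashboard queries
        solr.shutdown();
    }
}
```

In practice, LogStash with the Solr Output Writer, or whatever ETL pipeline you choose, would be doing this kind of indexing continuously.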
Hue is an alternative search UI for Solr, although it is not as good as Kibana for search at the moment.
You can certainly use SiLK, but you are better off using the fully integrated dashboards module that comes with Lucidworks Fusion. Fusion will save you a ton of time and make it easier to focus on the search work that matters, like building a recommender engine, creating a data-driven user experience, driving data enrichment with entity recognition, and integrating with Big Data software like Hadoop.
I am going to use Apache Solr for the first time. I just want to write some basic code for Apache Solr in Java with Eclipse. Can you please suggest how I can get familiar with Apache Solr?
Basically, Solr is a full-text search engine. You can also use it as a data store for data of an unstructured nature. You can use SolrJ to communicate with Solr from Java.
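As a starting point, here is a minimal SolrJ query sketch, assuming a recent SolrJ release (6+) and a local core; the URL, core name, and field names are placeholders for illustration (older 4.x releases use HttpSolrServer instead of HttpSolrClient):

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;

public class SolrJQuickstart {
    public static void main(String[] args) throws Exception {
        // Connect to a local Solr core (URL and core name are placeholders)
        HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/gettingstarted").build();

        // Run a simple query and print the matching documents
        SolrQuery query = new SolrQuery("title:solr");
        for (SolrDocument doc : solr.query(query).getResults()) {
            System.out.println(doc.getFieldValue("id") + " -> " + doc.getFieldValue("title"));
        }
        solr.close();
    }
}
```

The Solr tutorial that ships with the download walks you through starting a local instance and indexing the example documents, which is the easiest way to have something to query against while you experiment.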
We have a bunch of PDF documents stored in EMC Documentum.
We have a requirement to integrate Apache Solr with Documentum, so that we can search for a specific document in Solr and then retrieve the document from Documentum.
I looked into the link below, but it does not have sufficient information:
https://community.emc.com/docs/DOC-6520
Help is really appreciated.
The link you have posted would get you a working solution. That author proposes to write a custom crawler that connects to the Documentum repository and then use Apache Tika to perform the content extraction for Solr.
However, I would suggest you use:
Apache ManifoldCF to act as the crawler that gets the content from Documentum into Solr. You should not write this by hand, as it has already been done and tested.
Apache ManifoldCF is an effort to provide an open source framework for connecting source content repositories like Microsoft Sharepoint and EMC Documentum, to target repositories or indexes, such as Apache Solr, Open Search Server, or ElasticSearch. Apache ManifoldCF also defines a security model for target repositories that permits them to enforce source-repository security policies.
Apache Tika to perform the content extraction (PDF to text) so that the content of the documents is searchable in Solr later on.
The Apache Tika™ toolkit detects and extracts metadata and text from over a thousand different file types (such as PPT, XLS, and PDF). All of these file types can be parsed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more.
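For reference, here is a minimal sketch of Tika's facade API for pulling the text out of a PDF (the file path is a placeholder); in the setup described above, ManifoldCF and the Solr indexing pipeline would normally handle this step for you:

```java
import java.io.File;
import org.apache.tika.Tika;

public class TikaExtractDemo {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();                 // facade that auto-detects the file type
        File pdf = new File("sample.pdf");      // placeholder path to a Documentum export
        String text = tika.parseToString(pdf);  // extracted plain text, ready to index in Solr
        System.out.println(text);
    }
}
```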
I have built my own connector to extract data from Documentum and insert it into Elasticsearch or Solr, and I am willing to share it. Please contact me.
We need to integrate a search engine into our platform, a catalog management application built on SharePoint. The information is stored in multiple databases and in a file store (doc, ppt, pdf, ...). Our development platform is ASP.NET, and we have done some preliminary work with Lucene and found it to be good. However, we have just come to know of Solr.
We would like to continue using Lucene, but we need to understand how to justify choosing Solr over it.
Any help is appreciated.
And sorry for my English.
Lucene is a full-text search library used to provide search functionalities to an application. It can't be used as an application by itself. Solr is a complete search engine built around Lucene providing its search functionalities and others. Solr is a web application that can be used by itself without any development around it.
If you need a search engine that can be called by your application, I recommend that you use Solr.
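To make the distinction concrete, below is a rough sketch of what using Lucene directly looks like (an in-memory index, assuming a recent Lucene release); with Solr, this indexing and searching plumbing lives inside the server, and your application only sends it requests (for example with SolrJ):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class LuceneDemo {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();        // in-memory index for the example
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // Indexing: with Lucene you write this plumbing yourself
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("title", "Product catalog entry", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Searching: also application code when using Lucene directly
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs hits = searcher.search(
                    new QueryParser("title", analyzer).parse("catalog"), 10);
            System.out.println("hits: " + hits.scoreDocs.length);
        }
    }
}
```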
I am trying to implement Solr in Sitecore but could not find a way to create a Solr instance for it. I have a few PDFs from the SDN, but I could not find any way to create a Solr instance in them. Considering that I am new to the CMS, I hope I can get some help here. Thank you.
There are lots of resources available for setting up Solr and integrating it with Sitecore.
Essentially, Sitecore is agnostic with respect to how you set up Solr (barring a few exceptions), so you need to follow the standard methods to set Solr up. If you are doing this on your local machine, I recommend you simply download Solr and run it through the bundled Jetty app server.
Once Solr is running, download the Solr extensions from the SDN, then follow the search scaling guide to integrate Solr. This really only boils down to the following:
Remove Lucene config files
Add Solr config files and binaries
Add Solr endpoint into relevant config
Generate Solr Schema via Sitecore -> Control Panel -> Search (within Sitecore)
Add Schema file to Solr Core configuration
et voila
There is a great guide here: http://www.dansolovay.com/2013/05/setting-up-solr-with-sitecore-7.html
I would like to use a search engine for my application with an Apache Derby database (containing around 10k rows of text).
For web projects I have used Elasticsearch, but this is a desktop application. I would like to use Elasticsearch or Solr for this app. Is it possible to run Elasticsearch or Solr for it? Does it require a lot of memory or need to run in a separate (non-application) thread, or is there an alternative for Derby?
There is a Lucene-Derby integration as part of this summer's release of Derby: https://issues.apache.org/jira/browse/DERBY-590
The development is complete; it is in testing. You can get access to it now, and give feedback to the development team about how it works for you.
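As a rough sketch of how that integration is meant to be used, based on the DERBY-590 documentation, you register the optional Lucene tool and then index and query a text column from JDBC. The tool name, procedure signatures, and table/column names below are assumptions and may differ in the release you pick up, so treat this as illustrative only:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DerbyLuceneSketch {
    public static void main(String[] args) throws Exception {
        // Embedded Derby database; path is a placeholder
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB;create=true");
        try (Statement st = conn.createStatement()) {
            // Enable the optional Lucene plugin (assumed tool name from the DERBY-590 docs)
            st.execute("CALL SYSCS_UTIL.SYSCS_REGISTER_TOOL('luceneSupport', true)");

            // Index the text column of an existing table (assumed procedure signature)
            st.execute("CALL LuceneSupport.createIndex('APP', 'ARTICLES', 'BODY', null)");

            // Query the index through the generated table function (assumed naming convention)
            try (ResultSet rs = st.executeQuery(
                    "SELECT * FROM TABLE(APP.ARTICLES__BODY('derby AND lucene', 100, null)) t")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
        conn.close();
    }
}
```

Since the indexing runs inside the embedded Derby engine, there is no separate server process to manage, which makes it a lighter-weight fit for a desktop application than standing up Solr or Elasticsearch.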