I'm new to Apache Solr and I want to index data from Kafka into Solr. Can anyone give a simple example of doing this?
The easiest way to get started on this would probably be to use Kafka Connect.
Connect is part of the Apache Kafka package, so it should already be installed on your Kafka node(s). Please refer to the quickstart for a brief introduction on how to run Connect.
For writing data to Solr there are two connectors that you could try:
https://github.com/jcustenborder/kafka-connect-solr
https://github.com/MSurendra/kafka-connect-solr
While I don't have any experience with either of them, I'd probably try Jeremy's first, based on the latest commit dates and the fact that he works for Confluent.
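For example, with Jeremy's connector a connector config might look roughly like the sketch below. The connector class and property names are assumptions based on that repo, so verify them against its README before use:

name=solr-sink
# Sink connector class from jcustenborder/kafka-connect-solr (assumption; check the README)
connector.class=com.github.jcustenborder.kafka.connect.solr.HttpSolrSinkConnector
tasks.max=1
# Kafka topic(s) whose records should be indexed into Solr
topics=my-topic
# URL of the target Solr core/collection (placeholder)
solr.url=http://localhost:8983/solr/my-collection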
Related
I am looking to integrate Apache Atlas with Apache Flink to capture job lineage. I found some references to this from Cloudera (CDP), which uses an Atlas-Flink hook, but I am not able to find any documentation or implementation outside CDP, nor an Atlas-Flink connector JAR.
Let me know if anyone has done the integration between them.
There is a requirement where we get a stream of data from Kafka and our objective is to push this data to Solr.
We did some reading and found that there are a lot of Kafka Connect solutions available in the market, but the problem is that we do not know which is the best solution and how to implement it.
The options are:
Use a Solr connector to connect with Kafka.
Use Apache Storm as it directly provides support for integrating with Solr.
There is not much documentation or in-depth information available for the above-mentioned options.
Will anyone be kind enough to let me know
how we can use a Solr connector to integrate with a Kafka stream without using Confluent?
Solr-Kafka Connector: https://github.com/MSurendra/kafka-connect-solr
Also, with regard to Apache Storm:
will it be possible for Apache Storm to consume the Kafka stream and push it to Solr, given that we would need some sanitization of the data before pushing it to Solr?
I am avoiding Storm here because the question is mostly about Kafka Connect.
CAVEAT - The Solr connector in the question uses Kafka 0.9.0.1 dependencies, so it is very unlikely to work with the newest Kafka APIs.
This connector is untested by me. Follow at your own risk.
The following is an excerpt from Confluent's documentation on using community connectors, with some emphasis and adaptations. In other words, it is written for Kafka Connect plugins not included in the Confluent Platform.
1) Clone the GitHub repo for the connector
$ git clone https://github.com/MSurendra/kafka-connect-solr
2) Build the jar with maven
Change into the newly cloned repo and check out the version you want. (This Solr connector has no releases, unlike the Confluent ones.)
You will typically want to check out a released version.
$ cd kafka-connect-solr; mvn package
From here, see Installing Plugins
3) Locate the connector’s uber JAR or plugin directory
We copy the resulting Maven output from the target directory into one of the directories on the Kafka Connect worker’s plugin path (the plugin.path property).
For example, if the plugin path includes the /usr/local/share/kafka/plugins directory, we can use one of the following techniques to make the connector available as a plugin.
As mentioned in the Confluent docs, the export CLASSPATH=<some path>/kafka-connect-solr-1.0.jar option would work, though plugin.path is the way forward (Kafka 1.0+).
You will know which option to use based on the result of mvn package.
Option 1) A single, uber JAR file
With this Solr Connector, we get a single file named kafka-connect-solr-1.0.jar.
We copy that file into the /usr/local/share/kafka/plugins directory:
$ cp target/kafka-connect-solr-1.0.jar /usr/local/share/kafka/plugins/
Option 2) A directory of dependencies
(This does not apply to the Solr Connector)
If the connector’s JARs are collected into a subdirectory of the build’s target directory, we can copy all of these JARs into a plugin directory within /usr/local/share/kafka/plugins, for example:
$ mkdir -p /usr/local/share/kafka/plugins/kafka-connect-solr
$ cp target/kafka-connect-solr-1.0.0/share/java/kafka-connect-solr/* /usr/local/share/kafka/plugins/kafka-connect-solr/
Note
Be sure to install the plugin on all of the machines where you’re running Kafka Connect distributed worker processes. It is important that every connector you use is available on all workers, since Kafka Connect will distribute the connector tasks to any of the workers.
4) Running Kafka Connect
If you have properly set plugin.path or did export CLASSPATH, then you can use connect-standalone or connect-distributed with the appropriate config file for that Connect project.
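For example, a standalone run might look like the following sketch, assuming a worker config whose plugin.path points at the plugin directory used above (file names and the broker address are placeholders):

# worker.properties - minimal standalone worker config (sketch)
# Kafka broker(s) to connect to
bootstrap.servers=localhost:9092
# JSON converters, since this connector appears to expect JSON messages
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Plain JSON without embedded schemas (assumption about your message format)
value.converter.schemas.enable=false
# Where standalone mode stores offsets
offset.storage.file.filename=/tmp/connect.offsets
# Directory where we copied the connector JAR
plugin.path=/usr/local/share/kafka/plugins

$ connect-standalone worker.properties solr-sink.properties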
Regarding,
we would need some sanitization of data before pushing it to Solr
You would need to do that with a separate process such as Kafka Streams, Storm, or another stream processor before Kafka Connect, writing your transformed output to a secondary topic. Alternatively, write your own Kafka Connect Transform. Kafka Connect has very limited transformations out of the box.
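For instance, a minimal Kafka Streams sketch that writes sanitized records to a secondary topic for the Connect sink to pick up (topic names and the cleanup logic are placeholders):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class SanitizeForSolr {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sanitize-for-solr");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> raw = builder.stream("raw-topic");
        // Placeholder sanitization: trim whitespace and drop empty values
        raw.mapValues(String::trim)
           .filter((key, value) -> !value.isEmpty())
           .to("clean-topic"); // point the Solr sink connector at this topic

        new KafkaStreams(builder.build(), props).start();
    }
}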
Also worth mentioning: JSON seems to be the only supported Kafka message format for this Solr connector.
I am going to use Apache Solr for the first time. I just want to write some basic code for Apache Solr in Java with Eclipse. Can you please suggest how to get familiar with Apache Solr?
Basically, Solr is a full-text search engine. You can use it as a data store for data of an unstructured nature. You can use SolrJ to communicate with Solr from Java.
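As a minimal sketch of indexing a document with SolrJ (the core name and field names are placeholders and must match your schema):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrHello {
    public static void main(String[] args) throws Exception {
        // Point the client at a core/collection named "mycore" (placeholder)
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        doc.addField("title", "My first Solr document");

        client.add(doc);  // send the document to Solr
        client.commit();  // make it visible to searches
        client.close();
    }
}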
I want to create a search box in my web app using Apache Lucene and Apache Solr. I am using a Postgres database and have to do it with Java.
As I am new to these concepts (Solr, Lucene), I am struggling with this. I have already installed and configured Apache Solr with GlassFish. Now I don't know how to start: do I have to create a Java project in Eclipse, or do I have to use the Solr admin GUI?
Can anyone help me with this?
Thanks in advance.
In order to make data searchable, you first have to index it. You can use one of the following ways to index data:
By using Solr clients such as SolrJ
If you store your data in a relational DB, you can use the DataImportHandler
By posting XML or JSON messages. Check here for documentation.
When new data is added, you can index it using a Solr client (SolrJ). You can also search your data using SolrJ or any other client library.
You can find other client libraries here.
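For searching, a minimal SolrJ sketch might look like this (again, the core name and field names are placeholders):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class SolrSearch {
    public static void main(String[] args) throws Exception {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

        SolrQuery query = new SolrQuery("title:solr"); // query the "title" field (placeholder)
        query.setRows(10);

        QueryResponse response = client.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("id") + " -> " + doc.getFieldValue("title"));
        }
        client.close();
    }
}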
You can start with the Solr DataImportHandler (DIH) to index the data from Postgres into Solr.
For a more detailed understanding, you can refer to:
how-to-import-data-from-sql-databases-part-1
how-to-import-data-from-sql-databases-part-2
how-to-import-data-from-sql-databases-part-3
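As a rough sketch, a DIH data-config.xml for Postgres looks like the following; the connection details, table, and column names are placeholders, and the Postgres JDBC driver JAR must be on Solr's classpath:

<dataConfig>
  <!-- JDBC connection to Postgres (placeholder credentials) -->
  <dataSource driver="org.postgresql.Driver"
              url="jdbc:postgresql://localhost:5432/mydb"
              user="solr" password="secret"/>
  <document>
    <!-- each row returned by the query becomes one Solr document -->
    <entity name="item" query="SELECT id, title FROM items">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>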
Recently I got involved in a task, part of which requires using Apache Solr (for document search) and Apache Tika (to extract the metadata or plain text from documents).
I haven't integrated Solr and Tika yet, but I have worked with both of them individually. I have a set of questions related to Apache Solr and Apache Tika; they might be at a beginner or intermediate level.
With Solr, I have done the following kinds of practical work: created a dummy database, configured schema.xml, ran the Solr server, wrote a program that fetches documents from the database and stores them in the Solr document index, made a simple client to fetch data from Solr via the JSON interface, and made a program that keeps a MySQL database in sync with Solr's document index.
With Tika, I have compiled and installed it and understood its document-parsing capabilities.
My sample task statement:
Part of my project requires storing around 100,000 documents for full-text search (the data from these 100,000 docs (DOC, PDF, TXT) is extracted by Apache Tika, pushed into a MySQL database, and later pushed into Apache Solr's document index) and searching them via a client interface (a browser).
At a simple programmatic level this task will get done, but I would like to understand the challenges related to managing the index, or anything else in Solr, e.g.:
** At an advanced level, does it require optimizing Solr's open-source code?
** Even when Solr is working properly, does it present any specific challenges?
** What key things need to be considered initially so that Solr works properly?
** Do you think any extra tool needs to be developed to monitor Solr's operation?
I hope that gives you an idea of the questions I have.
** Also, I would like to know if you have any experience using Apache Tika with Apache Solr, and any challenges or key things to consider.
Would you recommend any specific sources, or do you have any documents or anything else you feel would be helpful?
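For what it's worth, here is a minimal sketch of the Tika-to-Solr step described above, skipping the MySQL hop; the file path, core name, and field names are placeholders:

import java.io.File;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import org.apache.tika.Tika;

public class TikaToSolr {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/docs").build();

        File file = new File("/path/to/document.pdf"); // placeholder path
        // Tika detects the format (DOC, PDF, TXT, ...) and extracts plain text
        String text = tika.parseToString(file);

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", file.getName());
        doc.addField("content", text); // "content" must be a text field in your schema

        solr.add(doc);
        solr.commit();
        solr.close();
    }
}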