Index text files using Apache Solr

I want to index text files in Solr. However, my data is large, so I want to index it paragraph by paragraph. Using Apache Tika I am not able to do this. Can someone help me do it using DataImportHandler? I am currently using Solr 6.5.1.

Related

What are the benefits of applying Apache Tika to Solr instead of Nutch

I am trying to crawl data with Apache Nutch and index it with Apache Solr.
As part of this I want to parse the content as well. I am trying to figure out whether it is better to apply Tika in Nutch, in Solr, or both.
Apply it as early as you can, but make sure to keep the original, full-fidelity document somewhere as well.
There is no point passing a binary file around if you know that in the end you are going to reduce it to a set of metadata fields and get rid of the rest.

How to index text files in Apache Solr

I have some information in a text file. I want to index it in Solr. What should the procedure be? Is there any tool that can be used for indexing in Solr? Please guide me in detail, as I am not very familiar with Solr.
I'd refer you to the Solr DataImportHandler page; it has a comprehensive tutorial on how to import data from various sources. Importing text files is covered under FileDataSource.
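As a rough sketch (the directory, file pattern, and target field names below are placeholders, not anything from the question), a DIH data-config.xml that walks a folder of plain-text files and indexes each file as one document could look roughly like this:

<dataConfig>
  <dataSource type="FileDataSource" name="fileReader" encoding="UTF-8"/>
  <document>
    <!-- FileListEntityProcessor lists the matching files; it does not create documents itself -->
    <entity name="files"
            processor="FileListEntityProcessor"
            baseDir="/data/textfiles"
            fileName=".*\.txt"
            recursive="true"
            rootEntity="false"
            dataSource="null">
      <!-- PlainTextEntityProcessor reads each file's content into the 'plainText' column -->
      <entity name="fileContent"
              processor="PlainTextEntityProcessor"
              url="${files.fileAbsolutePath}"
              dataSource="fileReader">
        <field column="plainText" name="content"/>
      </entity>
    </entity>
  </document>
</dataConfig>

The DataImportHandler itself still has to be registered as a /dataimport request handler in solrconfig.xml, and your schema still needs a uniqueKey (the file path is a common choice).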
One way would be to convert the plain text into a CSV file. You can then use the CSV upload process to index the data in Solr. Check the documentation here for more configuration options.

How to index text files using Apache Solr

I wanted to index text files. After searching a lot, I learned about Apache Tika. On some sites where I studied Apache Tika, I read that it converts the text into XML format and then sends it to Solr. But while converting, it creates only one tag, for example
.......
Now the text file I wish to index is a Tomcat localhost access log. This file is in the GBs. I cannot store it as a single index. I want each line to have a line-id
.......
So that I can easily retrieve the matching line.
Can this be done in Apache Tika?
Solr with Tika supports extraction of data from multiple file formats.
The complete list of supported file formats can be found at this link.
You can provide any of these file formats as input; Tika will autodetect the format, extract the text from the files, and provide it to Solr for indexing.
Edit:
Tika does not convert the text file to XML before sending it to Solr.
Tika just extracts the metadata and the content of the file and populates fields in Solr as per the mapping defined.
You either have to feed the entire file to Solr as input, in which case it is indexed as a single document, or you have to read the file line by line and provide each line to Solr as a separate document.
Solr and Tika would not handle this for you.
You may want to look at DataImportHandler to parse the file into lines or entries. It is a better match than running Tika on something that already has internal structure.
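For example, a hedged data-config.xml sketch using DIH's LineEntityProcessor (the log path and target field are placeholders; rawLine is the column LineEntityProcessor emits for each line):

<dataConfig>
  <dataSource type="FileDataSource" name="fds" encoding="UTF-8"/>
  <document>
    <!-- one Solr document per line of the access log -->
    <entity name="logLine"
            processor="LineEntityProcessor"
            url="/var/log/tomcat/localhost_access_log.txt"
            dataSource="fds"
            rootEntity="true">
      <field column="rawLine" name="content"/>
    </entity>
  </document>
</dataConfig>

LineEntityProcessor does not produce a per-line id by itself; one option is to let Solr assign one, for example with a UUIDUpdateProcessorFactory in the update chain, or to derive one in a custom transformer.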

Tika installation

I integrated Tika with Solr following the instructions provided in this link.
Correct me if I am wrong, but it seems that it can index document files (PDF, DOC, audio) located on my own system (given the path of the directory in which those files are stored), but cannot index files located on the internet when I crawl sites using Nutch.
Can I index document files (PDF, audio, DOC, ZIP) located on the web using Tika?
There are basically two ways to index binary documents within Solr, both with Tika:
Using Tika on the client side to extract information from binary files and then manually indexing the extracted text within Solr
Using ExtractingRequestHandler, through which you can upload the binary file to the Solr server so that Solr does the work for you. This way Tika is not required on the client side.
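For this second option, registration in solrconfig.xml looks roughly like the sketch below (the lib paths and field mappings are assumptions that depend on your install layout and schema):

<!-- load the Solr Cell / Tika jars from the extraction contrib -->
<lib dir="${solr.install.dir:../../..}/contrib/extraction/lib" regex=".*\.jar"/>
<lib dir="${solr.install.dir:../../..}/dist/" regex="solr-cell-\d.*\.jar"/>

<requestHandler name="/update/extract" class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="lowernames">true</str>
    <!-- map Tika's extracted body into a 'content' field (placeholder name) -->
    <str name="fmap.content">content</str>
    <!-- prefix unknown metadata fields so they don't break the schema -->
    <str name="uprefix">ignored_</str>
  </lst>
</requestHandler>

Documents are then posted to /update/extract and Solr runs Tika on the server side.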
In both cases you need to have the binary documents on the client side. While crawling, Nutch should be able to download binary files, use Tika to generate text content out of them, and then index the data in Solr as it would normally do with text documents. Nutch already uses Tika; I guess it's just a matter of configuring the types of documents you want to index by editing the regex-urlfilter.txt Nutch config file and removing, from the following lines, the file extensions that you want to index:
# skip some suffixes
-\.(swf|SWF|doc|DOC|mp3|MP3|WMV|wmv|txt|TXT|rtf|RTF|avi|AVI|m3u|M3U|flv|FLV|WAV|wav|mp4|MP4|avi|AVI|rss|RSS|xml|XML|pdf|PDF|js|JS|gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$
This way you would use the first option I mentioned. Then you need to enable the Tika plugin in Nutch within your nutch-site.xml; have a look at this discussion from the Nutch mailing list.
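The plugin list in nutch-site.xml usually looks something like the following sketch; the exact value depends on your Nutch version, the important part being that parse-tika is included:

<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|indexer-solr|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
  <description>Enable parse-tika so binary files fetched by Nutch are parsed with Tika before indexing into Solr.</description>
</property>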
This should theoretically work, let me know if it doesn't.

SOLR - Tika - Store binary version of file

I am using Tika integrated with Solr to index documents and allow search on those documents. This works pretty smoothly (right now my setup is exactly the same as the example that ships with Solr) and I can indeed index and search documents. As well as indexing the documents, I would like to store the binary version in Solr so that when a search returns a result I can return the full PDF/Word/etc. document for download. Is this possible?
Nope.
Solr is a full-text search engine and does not provide any out-of-the-box implementation for storing binary files.
Instead, you can easily host the binary files outside Solr and have them served over HTTP, linked by the document id.
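A hedged sketch of that pattern: keep only a pointer in the index, e.g. a stored string field in schema.xml (the field name here is made up), and let your application or a plain web server deliver the actual file:

<!-- stored but not searched; holds the external location of the original binary -->
<field name="file_url" type="string" indexed="false" stored="true"/>

The search result then returns file_url (or just the id), and the download itself never goes through Solr.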
