I have a large number of documents (mainly PDFs) that I want to index and query on.
I want to store all these docs in a filesystem structure by year.
I currently have this setup in Solr, but I have to run scripts to extract metadata from the PDFs and then update the index.
Is there a product out there that basically lets me drop a new PDF into a folder and have it automatically indexed by Solr?
I have seen that Alfresco does this, but it has some drawbacks. Is there anything else along these lines?
Or should I use Nutch to crawl my filesystem and post updates to Solr? I'm not sure how best to do this.
Solr is a search server, not a crawler. As you noted, Nutch can do this (I have used it for a similar use case, indexing a knowledge-base dump).
Essentially, you would host a web server with the root of the folder structure as its document root and enable directory listings on that server. Nutch can then crawl the top-level URL of this document dump.
Once you have this Nutch-created index, you can expose it through Solr as well.
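A quick way to get a crawlable directory listing is Python's built-in HTTP server, which renders a listing page for any directory URL. This is only a sketch: the document root `/data/docs` and port 8000 are assumptions for your setup.

```python
import functools
import threading
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def serve_docs(doc_root: str, port: int = 0) -> ThreadingHTTPServer:
    """Serve doc_root over HTTP with auto-generated directory listings,
    so Nutch can crawl it starting from the top-level URL."""
    handler = functools.partial(SimpleHTTPRequestHandler, directory=doc_root)
    httpd = ThreadingHTTPServer(("", port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

# Hypothetical folder-per-year document tree; port 8000 is an assumption.
server = serve_docs("/data/docs", 8000)
print(f"Nutch seed URL: http://localhost:{server.server_address[1]}/")
```

You would then put that URL in Nutch's seed list and point the indexing step at your Solr instance.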
Related
We are upgrading Sitecore 8 to 9.3, and as part of that we moved from Lucene to Solr.
Can we compare the Lucene and Solr index files so that we can tell whether the newly generated Solr index contains the same data?
It seems technically possible, as you could use Luke to explore the contents of the Lucene index folder.
Solr data, meanwhile, can be queried via either the Sitecore UI or the Solr admin console.
No. The indexes are very different even though the underlying technology is similar. What I find best is to have an old and new version of the same site with the same data. Then you can compare site search pages and any part of the site that runs on search.
I'm using Nutch (2.2.1) to crawl and index a set of web pages. These pages contain many .zip files, and each .zip contains many documents. I'll be searching the crawled data with Solr (4.7), and, within Solr, I'd like each document (within each zip) to have its own record.
Can anyone suggest a good way to set this up?
Is it possible to decompress .zip files in Nutch, and to get Nutch to send multiple records to Solr, one for each file inside the .zip? If so, how? Would I need to write a plugin, or can this be done through configuration options alone?
On the other hand, would it make more sense to expand and index the zip files outside of Nutch, using a separate app?
Any advice would be much appreciated.
Thanks!
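For the "separate app" option, something like this is what I have in mind. It is only a sketch under assumptions: the core name `docs` and the field names are hypothetical, and the text-extraction step is stubbed out (real documents would go through Tika or similar).

```python
import io
import json
import zipfile
from urllib import request

# Assumed local Solr core; adjust for your install.
SOLR_UPDATE = "http://localhost:8983/solr/docs/update?commit=true"

def docs_from_zip(zip_bytes: bytes):
    """Yield (member_name, text) for each file inside a .zip archive."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            # Stub: real binary documents would need proper text extraction.
            yield info.filename, zf.read(info.filename).decode("utf-8", errors="replace")

def solr_docs(zip_url: str, zip_bytes: bytes) -> list[dict]:
    """Build one Solr record per zip member; field names are hypothetical."""
    return [
        {"id": f"{zip_url}!{member}", "source_zip_s": zip_url, "content_t": text}
        for member, text in docs_from_zip(zip_bytes)
    ]

def index_zip(zip_url: str, zip_bytes: bytes) -> None:
    """POST all member records to Solr in one JSON update request."""
    body = json.dumps(solr_docs(zip_url, zip_bytes)).encode("utf-8")
    req = request.Request(SOLR_UPDATE, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # requires a running Solr instance
```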
I integrated Tika with Solr following the instructions provided in this link
Correct me if I am wrong, but it seems that it can index document files (PDF, DOC, audio) located on my own system (given the path of the directory in which those files are stored), yet cannot index files located on the internet when I crawl sites using Nutch.
Can I index document files (PDF, audio, DOC, ZIP) located on the web using Tika?
There are basically two ways to index binary documents within Solr, both with Tika:
Using Tika on the client side to extract information from binary files and then manually indexing the extracted text within Solr
Using the ExtractingRequestHandler, through which you upload the binary file to the Solr server so that Solr does the work for you. This way, Tika is not required on the client side.
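The second option can be sketched with a plain HTTP client posting the raw file to Solr's `/update/extract` endpoint, so Solr runs Tika server-side. The core name `docs` and the document id are assumptions; adjust for your schema.

```python
import urllib.parse
import urllib.request

def extract_url(solr_core_url: str, doc_id: str) -> str:
    """Build the /update/extract URL for Solr's ExtractingRequestHandler."""
    params = urllib.parse.urlencode({"literal.id": doc_id, "commit": "true"})
    return f"{solr_core_url}/update/extract?{params}"

def index_pdf(path: str, doc_id: str,
              solr_core_url: str = "http://localhost:8983/solr/docs") -> None:
    """Upload one PDF; Solr extracts the text with Tika and indexes it."""
    with open(path, "rb") as f:
        req = urllib.request.Request(
            extract_url(solr_core_url, doc_id),
            data=f.read(),
            headers={"Content-Type": "application/pdf"},
        )
    urllib.request.urlopen(req)  # needs a running Solr with Solr Cell enabled
```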
In both cases you need to have the binary documents on the client side. While crawling, Nutch should be able to download binary files, use Tika to extract text content from them, and then index the data in Solr as it normally does with text documents. Nutch already uses Tika; it is just a matter of configuring the types of documents you want to index by editing the regex-urlfilter.txt Nutch config file and removing from the following lines the file extensions that you want to index:
# skip some suffixes
-\.(swf|SWF|doc|DOC|mp3|MP3|WMV|wmv|txt|TXT|rtf|RTF|avi|AVI|m3u|M3U|flv|FLV|WAV|wav|mp4|MP4|avi|AVI|rss|RSS|xml|XML|pdf|PDF|js|JS|gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$
This way you would use the first option I mentioned. Then you need to enable the Tika plugin in Nutch within your nutch-site.xml; have a look at this discussion from the Nutch mailing list.
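For reference, enabling the Tika parser usually means making sure `parse-tika` appears in the `plugin.includes` property of nutch-site.xml. The property name is real; the exact value below is only an illustrative default and varies by Nutch version, so check the nutch-default.xml shipped with your release.

```xml
<!-- nutch-site.xml: include parse-tika in the plugin list -->
<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|indexer-solr|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
</property>
```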
This should theoretically work, let me know if it doesn't.
1) I followed https://github.com/evolvingweb/ajax-solr/wiki/reuters-tutorial for the Ajax-Solr setup.
Ajax-Solr is running, but it only searches the Reuters data. If I want to crawl the web using Nutch and integrate it with Solr, I have to replace Solr's schema.xml with Nutch's schema.xml, which will not match the Ajax-Solr configuration. After replacing schema.xml, Ajax-Solr won't work (correct me if I am wrong).
How would I integrate Solr with Nutch alongside Ajax-Solr, so that Ajax-Solr can search other data on the web as well?
2) I would like to ask whether there are any front-end APIs for Solr searching, other than Ajax-Solr, that would help in efficiently searching the crawled web.
Look at Solr with multiple cores; it's better not to try to mix documents of a different nature in one collection.
There are many APIs for Solr, such as SolrJ for Java (http://wiki.apache.org/solr/Solrj), SolPHP for PHP (http://wiki.apache.org/solr/SolPHP), and so on.
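Note that any HTTP client can also query Solr's `/select` endpoint directly, without a dedicated front-end library. A minimal sketch, assuming a local install and a hypothetical core named `docs`:

```python
import json
from urllib import parse, request

def select_url(base: str, core: str, query: str, rows: int = 10) -> str:
    """Build a Solr /select URL that returns JSON results."""
    params = parse.urlencode({"q": query, "rows": rows, "wt": "json"})
    return f"{base}/solr/{core}/select?{params}"

def search(base: str, core: str, query: str) -> list[dict]:
    """Run a query and return the matching documents."""
    with request.urlopen(select_url(base, core, query)) as resp:
        return json.load(resp)["response"]["docs"]
```

Usage would be `search("http://localhost:8983", "docs", "nutch")` against a running Solr instance.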
I would like to know where the indexed document is saved in Solr.
I have installed the Solr server at C:\solr and am using Solr 1.4. By making the necessary changes in the configuration files, I am able to search data using a Solr client.
Just wondering where that indexed document is saved.
Indexed documents are saved in the index, which is located in the solr/data/index folder.
Here you can find more details about those files.
From LuceneFAQ:
The index database is composed of 'segments' each stored in a separate
file. When you add documents to the index, new segments may be
created. These are periodically merged together.
EDIT:
If you want to examine the contents of your index and tweak or troubleshoot your schema (analysis), see the instructions about the greatest Lucene tool ever, called Luke, in this recent post.