I want to send multiple files to Solr using curl. How can I do it?
I can do it with a single file, for example with this command:
curl "http://localhost:8983/solr/update/extract?literal.id=paas2&commit=true" -F "file=@cloud.pdf"
Can anyone help me?
Thanks
The API does not support passing multiple files for extraction; usually the last file will be the only one that gets uploaded and added.
You can index the files individually, as separate entities in Solr.
Alternatively, one way to upload multiple files is to zip them and upload the zip file.
There is a known issue with Solr indexing zip files; you can try the SOLR-2332 patch.
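For the zip approach, a minimal sketch, assuming the SOLR-2332 patch is applied; the file names and the literal.id value are illustrative:
# zip the documents, then post the archive to the extract handler
zip docs.zip cloud.pdf report.pdf
curl "http://localhost:8983/solr/update/extract?literal.id=docs1&commit=true" -F "file=@docs.zip"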
I am using Apache Solr 4.0 Beta, which can upload multiple files and generate an id for each uploaded file using post.jar. It was very helpful for me.
See:
http://wiki.apache.org/solr/ExtractingRequestHandler#SimplePostTool_.28post.jar.29
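For reference, a minimal sketch of posting a whole directory with the Solr 4.x SimplePostTool; the directory path is illustrative:
# -Dauto lets post.jar detect content types and assign an id per file;
# -Drecursive walks subdirectories
java -Dauto -Drecursive -jar post.jar /path/to/files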
Thanks all :)
My problem is solved :)
I want to upload lots of source files (say, Java) to Solr to allow indexed search on them.
They should be posted as plain text files.
No special parsing is required.
When I try to upload a single Java file, I get an "Unknown Source" related error:
java.lang.NoClassDefFoundError: com/uwyn/jhighlight/renderer/XhtmlRendererFactory
When I rename the file adding .txt in the end, it is uploaded successfully.
I have thousands of files to upload on a daily basis and need to keep original names.
How do I tell solr to treat all files in the directory as .txt?
Thanks in advance!
For googlers, concerning the Solr error:
java.lang.NoClassDefFoundError: com/uwyn/jhighlight/renderer/XhtmlRendererFactory
You can correct this by adding the jar "jhighlight-1.0.jar" to Solr. To do so:
Download the old Solr 4.9 release; in recent versions, jhighlight is not included.
Extract solr-4.9.0\contrib\extraction\lib\jhighlight-1.0.jar
Copy jhighlight-1.0.jar to the solr installation under solr/server/lib/ext/
Restart the server.
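A sketch of those steps on the command line; the download URL follows the Apache archive layout, and the install path /opt/solr is illustrative:
# fetch the old release, pull out just the jhighlight jar, and drop it into Solr
wget https://archive.apache.org/dist/lucene/solr/4.9.0/solr-4.9.0.tgz
tar xzf solr-4.9.0.tgz solr-4.9.0/contrib/extraction/lib/jhighlight-1.0.jar
cp solr-4.9.0/contrib/extraction/lib/jhighlight-1.0.jar /opt/solr/server/lib/ext/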
You can achieve the same by integrating Solr with Tika.
Tika will help you extract the text of the source files; it has a source-code parser which supports C, C++, and Java.
Here is a link with more details:
https://tika.apache.org/1.12/formats.html
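If you just want Solr to treat every file as plain text, as the question asks, you can also set the MIME type of the uploaded part explicitly so Tika should skip the source-code parser; a minimal sketch, where the id and file name are illustrative:
# curl's ";type=" sets the Content-Type of the multipart field
curl "http://localhost:8983/solr/update/extract?literal.id=Foo.java&commit=true" -F "file=@Foo.java;type=text/plain"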
All:
I wonder what the best way is to upload a folder of PDF files into Solr for indexing.
Right now I generate a file list and send a separate indexing request to Solr for each file, but that seems to waste a lot of overhead, so I am wondering if I can upload all those files in one request.
Thanks
If you are worried about performance, your best bet is to run Apache Tika on the client side and just send the final extracted content documents to Solr. That's the most efficient way, and you can then batch multiple extracted documents into a single update.
Solr's extract code just runs Tika under the covers.
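A rough sketch of that approach with the standalone tika-app jar; the jar version, paths, and the JSON field layout are all illustrative:
# extract plain text from each PDF on the client
for f in /path/to/pdfs/*.pdf; do
  java -jar tika-app-1.12.jar --text "$f" > "${f%.pdf}.txt"
done
# build docs.json from the extracted text, e.g. [{"id":"cloud.pdf","content":"..."}, ...],
# then send the whole batch to Solr in one request
curl "http://localhost:8983/solr/update?commit=true" -H "Content-Type: application/json" --data-binary @docs.json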
I have some information in a text file. I want to index it in Solr. What should the procedure be? Is there any tool that can be used for indexing in Solr? Please guide me in detail, as I am not very familiar with Solr.
I'd refer you to the Solr DataImportHandler page; it has a comprehensive tutorial on how to import data from various sources. Importing text files is covered under FileDataSource.
One way would be to convert the plain text into a CSV file. You can then use the CSV upload process to index the data in Solr. Check the documentation for more configuration options.
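A minimal sketch of the CSV route, assuming a data.csv whose header row names the Solr fields; the file name and URL are illustrative:
# post the CSV to the CSV update handler and commit
curl "http://localhost:8983/solr/update/csv?commit=true" -H "Content-Type: text/csv" --data-binary @data.csv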
I'm using Nutch (2.2.1) to crawl and index a set of web pages. These pages contain many .zip files, and each .zip contains many documents. I'll be searching the crawled data with Solr (4.7), and, within Solr, I'd like each document (within each zip) to have its own record.
Can anyone suggest a good way to set this up?
Is it possible to decompress .zip files in Nutch, and to get Nutch to send multiple records to Solr, one for each file inside the .zip? If so, how? Would I need to write a plugin, or can this be done through configuration options alone?
On the other hand, would it make more sense to expand and index the zip files outside of Nutch, using a separate app?
Any advice would be much appreciated.
Thanks!
Is there a consistent code-base that allows me to upload a zip file to both GAE and Tomcat-based servers, extract the contents (plain-text files), and process them?
Both support Java, so you can just use Java :)
In all seriousness, processing file uploads can be done with Apache Commons FileUpload and extracting them can be done with java.util.zip API.
See also the answers to these similar questions, which were asked in the last two days, probably by your classmates/friends:
JSP/Servlets: How do I upload a zip file, unzip it and extract the CSV file…
Upload a zip file, unzip and read file