How to handle Solr delta-import with a file-based datasource

I am trying to implement delta-import in Solr indexing. It works fine when I am indexing data from a database, but I want to implement it with a file-based datasource.
My data-config.xml file is like
<dataSource type="com.solr.datasource.DataSource" name="SuggestionsFile"/>
<document name="suggester">
<entity name="file" dataSource="SuggestionsFile">
<field column="suggestion" name="suggestion" />
</entity>
</document>
and I am using the DataImportHandler in the solrconfig.xml file. (I am not able to post my full config file; I tried, but I don't know why it doesn't show up.)
My DataSource class reads the text file and returns a list of data that Solr indexes. It works fine for full-import but not for delta-import. Please suggest what else I need to do.

The FileListEntityProcessor supports filtering the file list based on the "newerThan" attribute:
<entity
name="fileimport"
processor="FileListEntityProcessor"
newerThan="${dataimporter.last_index_time}"
.. other options ..
>
...
</entity>
There's a complete example available online.
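Once the entity is configured with `newerThan`, the delta run is just a different command against the DIH endpoint. A minimal sketch of building that request in Python (the host and core name are placeholders for your setup):

```python
# Sketch: building DataImportHandler command URLs (host/core are placeholders).
from urllib.parse import urlencode

def dih_url(base, command, clean=False, commit=True):
    """Build a DIH request URL.

    command is one of: full-import, delta-import, status, reload-config, abort.
    """
    params = urlencode({"command": command,
                        "clean": str(clean).lower(),
                        "commit": str(commit).lower()})
    return f"{base}/dataimport?{params}"

# delta-import only re-reads entities changed since dataimporter.last_index_time
url = dih_url("http://localhost:8983/solr/suggester", "delta-import")
# urllib.request.urlopen(url)  # uncomment to actually trigger the import
```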

Solr Data Import Handler for Html

TLDR
How do I configure the Solr Data Import Handler so it will import HTML similar to Solr's "post" utility?
Context
We're doing a small project where code will export a set of pages from wiki/Confluence to 'straight HTML' (for availability in a DR data center; straight HTML pages will not depend on a database, etc.).
We want to index the html pages in solr.
We "have it working" using the solr-shipped "post utility"
post -c OPERATIONS -recursive -0 -host solr $(find . -name '*.html')
This is fine. However, we would like to leverage the Data Import Handler (DIH), i.e. replace the shell command with a single HTTP call to the DIH endpoint ('/dataimport').
Question
How do I configure the tika "data config xml" file to get "similar functionality" as the solr "post command" ?
When I configure with data-config.xml, the Solr document only ends up with "id" and "_version_" fields (where id is the untokenized file name).
(Correction: I had originally written '"id" and "title" fields'.)
"id":"database_operations_2019.html",
"_version_":1650836000296927232},
However, when I use "bin/post" the document has these fields, i.e. including the tokenized title:
"id":"/usr/local/html/OPERATIONS_2019_1119_1500/./database_operations_2019.html",
"stream_size":[54115],
"x_parsed_by":["org.apache.tika.parser.DefaultParser",
"org.apache.tika.parser.html.HtmlParser"],
"stream_content_type":["text/html"],
"dc_title":["Database Operations 2019 Guidebook"],
"content_encoding":["UTF-8"],
"content_type_hint":["text/html; charset=UTF-8"],
"resourcename":["/usr/local/html/OPERATIONS_2019_1119_1500/./database_operations_2019.html"],
"title":["Database Operations 2019 Guidebook"],
"content_type":["text/html; charset=UTF-8"],
"_version_":1650834641083432960},
Some Points
I've tried RTM'ing, but I don't follow how "field" maps to the "html body".
Parsing a directory full of HTML is a circa-1999 problem, so I don't expect a lot of people to have hit this.
I've looked at SimplePostTool.java (the implementation of bin/post)... no real answer.
Data Config Xml File
<dataConfig>
<dataSource type="BinFileDataSource"/>
<document>
<entity name="file" processor="FileListEntityProcessor"
dataSource="null"
htmlMapper="true"
format="html"
baseDir="/usr/local/var/www/confluence/OPERATIONS"
fileName=".*html"
rootEntity="false">
<field column="file" name="id"/>
<entity name="html" processor="TikaEntityProcessor"
url="${file.fileAbsolutePath}" format="text">
<field column="title" name="title" meta="true"/>
<field column="dc:format" name="format" meta="true"/>
<field column="text" name="text"/>
</entity>
</entity>
</document>
</dataConfig>
I ended up writing a few lines of code to parse the HTML files (with jsoup) and ditched the Solr Data Import Handler (DIH).
It was very straightforward using Spring, Solr, and the jsoup HTML parser.
One caveat: my Java "bean" object that stores the Solr fields needed a "text" field for the out-of-the-box default search field to work (i.e. with the Solr Docker instance).
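The jsoup code itself isn't shown above; for readers who want the same parse-and-post idea without Java, here is a rough stdlib-only Python equivalent. The field names are illustrative, and a real version would also skip `<script>`/`<style>` content:

```python
# Rough sketch of the parse-and-index approach described above (the original
# answer used Java/jsoup; this illustrates the same idea with Python's stdlib).
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    """Collects the <title> and visible text of an HTML page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.text_parts = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif data.strip():
            self.text_parts.append(data.strip())

def to_solr_doc(doc_id, html):
    """Return a dict shaped like a Solr document; 'text' matches the
    default-search-field caveat mentioned above."""
    p = PageExtractor()
    p.feed(html)
    return {"id": doc_id, "title": p.title, "text": " ".join(p.text_parts)}
```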

How to store information in solr?

I recently began to learn Solr, and some things remain incomprehensible to me. I will explain what I am trying to do; please tell me which way to go.
I need a web application in which it will be possible to save data, where some fields are text and some are files. Adding text fields is straightforward, but I cannot add files, or their contents as text; and in that case I do not know where to store the file itself.
If I need to find a file knowing only a couple of words from it, I want every file containing those words to appear. Should I add a separate database for this? If so, where do I store the files? If not, the same question.
I would be very pleased to see this explained with an example; maybe you have a link?
This is far too broad and non-specific to give an answer you can just implement. In general you'd submit the documents together with an id to Solr (through Tika in the Extracting Request Handler / Solr Cell).
The documents themselves will have to be stored somewhere else, as Solr doesn't handle document storage for you. They can be stored on a cloud service, on a network drive, or on a local disk; this will depend on your web application.
Your application will then receive the file from the user, store a database row assigning the file to the user, store the file somewhere (S3/GoogleCloudStorage/Local path) under a well-known name (usually the id of the row from the database) and submit the content to Solr for indexing - together with metadata (such as the user id) and the file id.
Searching will then give you the id back and you can retrieve the document from wherever you stored it.
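The flow described above can be sketched as follows. All names and paths here are illustrative, and the actual Solr/Tika call is omitted:

```python
# Sketch of the store-then-index flow: save the file under its database row id,
# then build the document that would be submitted to Solr for indexing.
import os
import shutil

def store_upload(upload_path, storage_dir, row_id):
    """Copy the uploaded file into storage under its database row id,
    so searching Solr for the id later tells you where the file lives."""
    dest = os.path.join(storage_dir, str(row_id))
    shutil.copyfile(upload_path, dest)
    return dest

def solr_doc(row_id, user_id, content):
    """Index the extracted text plus metadata (e.g. the owning user's id);
    search results then return the id used to fetch the stored file back."""
    return {"id": str(row_id), "user_id": str(user_id), "text": content}
```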
MatsLindh has already outlined an approach to achieve what you are looking for.
Here are some steps by which you can index files at a known location.
Update solrconfig.xml with the lines below:
<!-- Load Data Import Handler and Apache Tika (extraction) libraries -->
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar"/>
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar"/>
<requestHandler name="/dataimport" class="solr.DataImportHandler">
<lst name="defaults">
<str name="config">tika-data-config.xml</str>
</lst>
</requestHandler>
Create a file named tika-data-config.xml under the G:\Solr\TikaConf\conf folder with the configuration below. This location could be different for you.
<dataConfig>
<dataSource type="BinFileDataSource"/>
<document>
<entity name="file" processor="FileListEntityProcessor" dataSource="null"
baseDir="G:/Solr/solr-7.7.2/example/exampledocs" fileName=".*xml"
rootEntity="false">
<field column="file" name="id"/>
<entity name="pdf" processor="TikaEntityProcessor"
url="${file.fileAbsolutePath}" format="text">
<field column="text" name="text"/>
</entity>
</entity>
</document>
</dataConfig>
Add the field below to your schema.xml:
<field name="text" type="text_general" indexed="true" stored="true" multiValued="false"/>
Update the solrconfig.xml file as below in order to disable schemaless mode:
<!-- The update.autoCreateFields property can be turned to false to disable schemaless mode -->
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema" default="${update.autoCreateFields:false}"
processor="uuid,remove-blank,field-name-mutating,parse-boolean,parse-long,parse-double,parse-date,add-schema-fields">
<processor class="solr.LogUpdateProcessorFactory"/>
<processor class="solr.DistributedUpdateProcessorFactory"/>
<processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
Go to the Solr admin page, select the core you created, and click on Data Import.
Once the data is imported and indexed, you can verify it by querying.
If your file location is dynamic, i.e. you retrieve the file location from a database, then that query becomes your first entity, which fetches the files' metadata (id, name, author, file path, etc.). The second entity, the TikaEntityProcessor, is then passed the file path and gets the content of the file indexed.
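A sketch of such a two-entity configuration follows. The JDBC settings, table name, and column names are all hypothetical; adapt them to your database:

```
<dataConfig>
  <dataSource name="db" type="JdbcDataSource" driver="org.postgresql.Driver"
              url="jdbc:postgresql://localhost/files" user="solr" password="solr"/>
  <dataSource name="bin" type="BinFileDataSource"/>
  <document>
    <entity name="meta" dataSource="db"
            query="select id, name, author, path from file_metadata">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
      <field column="author" name="author"/>
      <entity name="content" processor="TikaEntityProcessor"
              dataSource="bin" url="${meta.path}" format="text">
        <field column="text" name="text"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```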

Solr DataImportHandler - indexing multiple, related XML documents

Let's say I have two XML document types, A and B, that look like this:
A:
<xml>
<a>
<name>First Number</name>
<num>1</num>
</a>
<a>
<name>Second Number</name>
<num>2</num>
</a>
</xml>
B:
<xml>
<b>
<aKey>1</aKey>
<value>one</value>
</b>
<b>
<aKey>2</aKey>
<value>two</value>
</b>
</xml>
I'd like to index it like this:
<doc>
<str name="name">First Number</str>
<int name="num">1</int>
<str name="spoken">one</str>
</doc>
<doc>
<str name="name">Second Number</str>
<int name="num">2</int>
<str name="spoken">two</str>
</doc>
So, in effect, I'm trying to use a value from A as a key in B. Using DataImportHandler, I've used the following as my data config definition:
<dataConfig>
<dataSource type="FileDataSource" encoding="UTF-8" />
<document>
<entity name="document" transformer="LogTransformer" logLevel="trace"
processor="FileListEntityProcessor" baseDir="/tmp/somedir"
fileName="A.*.xml$" recursive="false" rootEntity="false"
dataSource="null">
<entity name="a"
transformer="RegexTransformer,TemplateTransformer,LogTransformer"
logLevel="trace" processor="XPathEntityProcessor" url="${document.fileAbsolutePath}"
stream="true" rootEntity="true" forEach="/xml/a">
<field column="name" xpath="/xml/a/name" />
<field column="num" xpath="/xml/a/num" />
<entity name="b" transformer="LogTransformer"
processor="XPathEntityProcessor" url="/tmp/somedir/b.xml"
stream="false" forEach="/xml/b" logLevel="trace">
<field column="spoken" xpath="/xml/b/value[../aKey=${a.num}]" />
</entity>
</entity>
</entity>
</document>
</dataConfig>
However, I encounter two problems:
I can't get the XPath expression with the predicate to match any rows, regardless of whether I use an alternative like /xml/b[aKey=${a.num}]/value, or even a hardcoded value for aKey.
Even when I remove the predicate, the parser goes through the B file once for every row in A, which is obviously inefficient.
My question is: how, in light of the problems listed above, do I index the data correctly and efficiently with the DataImportHandler?
I'm using Solr 3.6.2.
Note: This is a bit similar to this question, but it deals with two XML document types instead of a RDBMS and an XML document.
I have had very bad experiences using the DataImportHandler for that kind of data. A simple Python script to merge your data would probably be smaller than your current configuration and much more readable. Depending on your requirements and data size, you could create a temporary XML file, or you could pipe the results directly to Solr. If you really have to use the DataImportHandler, you could use a URLDataSource and set up a minimal server which generates your XML. Obviously I'm a Python fan, but it's quite likely that this is also an easy job in Ruby, Perl, ...
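As a concrete illustration of the merge-script suggestion, a minimal sketch using only the standard library (file reading and the Solr POST itself are left out):

```python
# Merge the two XML document types from the question on num/aKey,
# producing one Solr-ready document per <a> element.
import xml.etree.ElementTree as ET

def merge(a_xml, b_xml):
    """Return merged docs: one per <a>, with 'spoken' looked up from B by key."""
    # Build a lookup table from B: aKey -> value
    spoken_by_key = {b.findtext("aKey"): b.findtext("value")
                     for b in ET.fromstring(b_xml).iter("b")}
    docs = []
    for a in ET.fromstring(a_xml).iter("a"):
        num = a.findtext("num")
        docs.append({"name": a.findtext("name"),
                     "num": int(num),
                     "spoken": spoken_by_key.get(num)})
    return docs

# The resulting list of dicts can then be posted to Solr's update handler.
```

Because B is read once into a dictionary, each A row is joined in constant time, avoiding the re-parse-B-per-row behavior complained about in the question.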
I finally went with another solution, due to an additional design requirement I didn't originally mention. What follows is the explanation and discussion. So....
If you only have one or a couple of import flow types for your Solr instances:
Then it might be best to go with Achim's answer and develop your own importer: either, as Achim suggests, in your favorite scripting language, or in Java, using SolrJ's ConcurrentUpdateSolrServer.
This is because the DataImportHandler framework has a sudden spike in its learning curve once you need to define more complex import flows.
If you have a nontrivial number of different import flows:
Then I would suggest you consider staying with the DataImportHandler since you will probably end up implementing something similar anyway. And, as the framework is quite modular and extendable, customization isn't a problem.
This is the additional requirement I mentioned, so in the end I went with that route.
How I solved my particular quandary was by indexing the files I needed to reference into separate cores and using a modified SolrEntityProcessor to access that data. The modifications were as follows:
applying the patch for the sub-entity problem,
adding caching (a quick solution using Guava; there is probably a better way using an available Solr API for accessing other cores locally, but I was in a bit of a hurry at that point).
If you don't want to create a new core for each file, an alternative would be an extension of Achim's idea, i.e. creating a custom EntityProcessor that would preload the data and enable querying it somehow.

Need help indexing XML files into Solr using DataImportHandler

I don't know Java, I don't know XML, and I don't know Lucene. Now that that's out of the way: I have been working to create a little project using Apache Solr/Lucene. My problem is that I am unable to index the XML files. I think I understand how it's supposed to work, but I could be wrong. I am not sure what information is required for you to help me, so I will just post the code.
<dataConfig>
<dataSource type="FileDataSource" encoding="UTF-8" />
<document>
<!-- This first entity block will read all xml files in baseDir and feed it into the second entity block for handling. -->
<entity name="AMMFdir" rootEntity="false" dataSource="null"
processor="FileListEntityProcessor"
fileName="^*\.xml$" recursive="true"
baseDir="C:\Documents and Settings\saperez\Desktop\Tomcat\apache-tomcat-7.0.23\webapps\solr\data\AMMF_New"
>
<entity
processor="XPathEntityProcessor"
name="AMMF"
pk="AcquirerBID"
datasource="AMMFdir"
url="${AMMFdir.fileAbsolutePath}"
forEach="/AMMF/Merchants/Merchant/"
transformer="DateFormatTransformer, RegexTransformer"
>
<field column="AcquirerBID" xpath="/AMMF/Merchants/Merchant/AcquirerBID" />
<field column="AcquirerName" xpath="/AMMF/Merchants/Merchant/AcquirerName" />
<field column="AcquirerMerchantID" xpath="/AMMF/Merchants/Merchant/AcquirerMerchantID" />
</entity>
</entity>
</document>
</dataConfig>
Example xml file
<?xml version="1.0" encoding="utf-8"?>
<AMMF xmlns="http://tempuri.org/XMLSchema.xsd" Version="11.2" CreateDate="2011-11-07T17:05:14" ProcessorBINCIB="422443" ProcessorName="WorldPay" FileSequence="18">
<Merchants Count="153">
<Merchant ChangeIndicator="A" LocationCountry="840">
<AcquirerBID>10029881</AcquirerBID>
<AcquirerName>WorldPay</AcquirerName>
<AcquirerMerchantID>*</AcquirerMerchantID>
<Merchant ChangeIndicator="A" LocationCountry="840">
<AcquirerBID>10029882</AcquirerBID>
<AcquirerName>WorldPay2</AcquirerName>
<AcquirerMerchantID>Hello World!</AcquirerMerchantID>
</Merchant>
</Merchants>
I have this in schema.
<field name="AcquirerBID" type="string" indexed="true" stored="true" required="true" />
<field name="AcquirerName" type="string" indexed="true" stored="true" />
<field name="AcquirerMerchantID" type="string" indexed="true" stored="true"/>
I have this in config.
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler" default="true" >
<lst name="defaults">
<str name="config">AMMFconfig.xml</str>
</lst>
</requestHandler>
The sample XML is not well formed. This might explain the errors indexing the files:
$ xmllint sample.xml
sample.xml:13: parser error : expected '>'
</Merchants>
^
sample.xml:14: parser error : Premature end of data in tag Merchants line 3
sample.xml:14: parser error : Premature end of data in tag AMMF line 2
Corrected XML
Here's what I think your sample data should look like (Didn't check the XSD file)
<?xml version="1.0" encoding="utf-8"?>
<AMMF xmlns="http://tempuri.org/XMLSchema.xsd" Version="11.2" CreateDate="2011-11-07T17:05:14" ProcessorBINCIB="422443" ProcessorName="WorldPay" FileSequence="18">
<Merchants Count="153">
<Merchant ChangeIndicator="A" LocationCountry="840">
<AcquirerBID>10029881</AcquirerBID>
<AcquirerName>WorldPay</AcquirerName>
<AcquirerMerchantID>*</AcquirerMerchantID>
</Merchant>
<Merchant ChangeIndicator="A" LocationCountry="840">
<AcquirerBID>10029882</AcquirerBID>
<AcquirerName>WorldPay2</AcquirerName>
<AcquirerMerchantID>Hello World!</AcquirerMerchantID>
</Merchant>
</Merchants>
</AMMF>
Alternative solution
I know you said you're not a programmer, but this task is significantly simpler if you use the SolrJ interface.
The following is a Groovy example which indexes your example XML:
//
// Dependencies
// ============
@Grapes([
    @Grab(group='org.apache.solr', module='solr-solrj', version='3.5.0'),
])
import org.apache.solr.client.solrj.SolrServer
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer
import org.apache.solr.common.SolrInputDocument
//
// Main
// =====
SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/");
def i = 1
new File(".").eachFileMatch(~/.*\.xml/) {
it.withReader { reader ->
def ammf = new XmlSlurper().parse(reader)
ammf.Merchants.Merchant.each { merchant ->
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", i++)
doc.addField("bid_s", merchant.AcquirerBID)
doc.addField("name_s", merchant.AcquirerName)
doc.addField("merchantId_s", merchant.AcquirerMerchantID)
server.add(doc)
}
}
}
server.commit()
Groovy is a Java scripting language that does not require compilation. It would be just as easy to maintain as a DIH config file.
To figure out how DIH XML import works, I suggest you first carefully read this chapter in DIH wiki: http://wiki.apache.org/solr/DataImportHandler#HttpDataSource_Example.
Open the Slashdot link http://rss.slashdot.org/Slashdot/slashdot in your browser, then right click on the page and select View source. There's the XML file used in this example.
Compare it with XPathEntityProcessor configuration in DIH example and you'll see how easy it is to import any XML file in Solr.
If you need more help just ask...
Often the best thing to do is NOT to use the DIH. How hard would it be to just post this data using the API and a custom script in a language you DO know?
The benefit of this approach is two-fold:
You learn more about your system, and know it better.
You don't spend time trying to understand the DIH.
The downside is that you're re-inventing the wheel a bit, but the DIH is quite a thing to understand.
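For the sake of illustration, here is what such a custom script might look like in Python, working from the corrected XML shown earlier. The synthetic `id` values and the commented-out POST are assumptions, not part of the original answers:

```python
# Sketch of the "custom script instead of DIH" route: read the AMMF XML
# and build documents ready for Solr's JSON update API.
import json
import xml.etree.ElementTree as ET

NS = "{http://tempuri.org/XMLSchema.xsd}"  # the file's default namespace

def merchant_docs(xml_text):
    """Return one doc per <Merchant>, with a synthetic sequential id."""
    root = ET.fromstring(xml_text)
    docs = []
    for i, m in enumerate(root.iter(NS + "Merchant"), start=1):
        docs.append({
            "id": i,
            "AcquirerBID": m.findtext(NS + "AcquirerBID"),
            "AcquirerName": m.findtext(NS + "AcquirerName"),
            "AcquirerMerchantID": m.findtext(NS + "AcquirerMerchantID"),
        })
    return docs

# payload = json.dumps(merchant_docs(open("sample.xml").read()))
# POST payload to the Solr update handler with Content-Type: application/json.
```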

Example solr xml working fine but my own xml files not working

Please find below the necessary steps that I executed.
I am following the same structure as mentioned by you, and I checked the results in the admin page by clicking the search button; the samples are working fine.
Ex: I added monitor.xml and searched for "video"; it displays results; the search content is displayed properly.
Let me explain the problem which I am facing:
Step 1: I started Apache Tomcat.
Step 2: Indexing data:
java -jar post.jar myfile.xml
Here is my XML content:
<add>
<doc>
<field name="id">11111</field>
<field name="name">Youth to Elder</field>
<field name="Author"> Integrated Research Program</field>
<field name="Year">2009</field>
<field name="Publisher"> First Nation</field>
</doc>
<doc>
<field name="id">22222</field>
<field name="name">Strategies </field>
<field name="Author">Implementation Committee </field>
<field name="Year">2001</field>
<field name="Publisher">Policy</field>
</doc>
</add>
Step 4: I ran
java -jar post.jar myfile.xml
The output of the above:
SimplePostTool: version 1.2
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8, other encodings are not currently supported
SimplePostTool: POSTing files to http://localhost:8983/solr/update..
SimplePostTool: POSTing file curnew.xml
SimplePostTool: FATAL: Solr returned an error: Bad Request
Please help me with this.
You need to configure your schema. The default schema doesn't have any Author, Year or Publisher fields.
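For example, the posted documents would need field definitions along these lines in schema.xml (this assumes the stock string/text_general field types exist in your schema; adjust types to how you want to search each field):

```
<field name="Author" type="text_general" indexed="true" stored="true"/>
<field name="Year" type="string" indexed="true" stored="true"/>
<field name="Publisher" type="text_general" indexed="true" stored="true"/>
```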
