Solr Data Import Handler for HTML

TLDR
How do I configure the Solr Data Import Handler so it imports HTML similarly to Solr's "post" utility?
Context
We're doing a small project where code will export a set of pages from wiki/Confluence to "straight HTML" (for availability in a DR data center; straight HTML pages will not depend on a database, etc.).
We want to index the html pages in solr.
We "have it working" using the solr-shipped "post utility"
post -c OPERATIONS -recursive -0 -host solr $(find . -name '*.html')
This is fine. However, we would like to leverage the Data Import Handler (DIH), i.e. replace the shell command with a single HTTP call to the DIH endpoint ('/dataimport').
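For reference, the single call we are after would look something like this (port and endpoint per a stock DIH setup; the core name OPERATIONS is taken from the post command above):
curl "http://solr:8983/solr/OPERATIONS/dataimport?command=full-import"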
Question
How do I configure the Tika "data config XML" file to get functionality similar to the Solr post command?
When I configure with data-config.xml, the Solr document only ends up with "id" and "_version_" fields (i.e. where id is the untokenized file name).
(Correction: I had originally written '"id" and "title" fields'.)
"id":"database_operations_2019.html",
"_version_":1650836000296927232},
However, when I use bin/post, the document has these fields, i.e. including a tokenized title:
"id":"/usr/local/html/OPERATIONS_2019_1119_1500/./database_operations_2019.html",
"stream_size":[54115],
"x_parsed_by":["org.apache.tika.parser.DefaultParser",
"org.apache.tika.parser.html.HtmlParser"],
"stream_content_type":["text/html"],
"dc_title":["Database Operations 2019 Guidebook"],
"content_encoding":["UTF-8"],
"content_type_hint":["text/html; charset=UTF-8"],
"resourcename":["/usr/local/html/OPERATIONS_2019_1119_1500/./database_operations_2019.html"],
"title":["Database Operations 2019 Guidebook"],
"content_type":["text/html; charset=UTF-8"],
"_version_":1650834641083432960},
Some Points
I've tried RTM'ing, but I do not follow how "field" maps to the "html body".
Parsing a directory full of HTML is a circa-1999 problem, so I don't expect a lot of people to have hit this recently.
I've looked at SimplePostTool.java (the implementation of bin/post)... no real answer there.
Data Config XML File
<dataConfig>
  <dataSource type="BinFileDataSource"/>
  <document>
    <entity name="file" processor="FileListEntityProcessor"
            dataSource="null"
            htmlMapper="true"
            format="html"
            baseDir="/usr/local/var/www/confluence/OPERATIONS"
            fileName=".*html"
            rootEntity="false">
      <field column="file" name="id"/>
      <entity name="html" processor="TikaEntityProcessor"
              url="${file.fileAbsolutePath}" format="text">
        <field column="title" name="title" meta="true"/>
        <field column="dc:format" name="format" meta="true"/>
        <field column="text" name="text"/>
      </entity>
    </entity>
  </document>
</dataConfig>

I ended up writing a few lines of code to parse the HTML files (with jsoup) and ditched the Solr Data Import Handler (DIH).
It was very straightforward using Spring, Solr, and the jsoup HTML parser.
One caveat: my Java "bean" object that stores the Solr fields needed a "text" field for the out-of-the-box default search field to work (i.e. with the Solr Docker instance).
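For what it's worth, here is a minimal sketch of the jsoup-plus-SolrJ part of that approach (without the Spring wiring), assuming a local Solr with a core named OPERATIONS and an export directory path made up for illustration:

import java.io.File;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class HtmlIndexer {
    public static void main(String[] args) throws Exception {
        // core name and URL are assumptions for illustration
        HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/OPERATIONS").build();
        File dir = new File("/usr/local/html");   // assumed export directory
        for (File f : dir.listFiles((d, n) -> n.endsWith(".html"))) {
            Document html = Jsoup.parse(f, "UTF-8");
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", f.getName());
            doc.addField("title", html.title());
            // "text" feeds the out-of-the-box default search field mentioned above
            doc.addField("text", html.body().text());
            solr.add(doc);
        }
        solr.commit();
        solr.close();
    }
}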

Related

Solr: How to clear the baseDir folder after the DIH import

Solr Version :: 6.6.1
I am able to import PDF files into Solr using the DIH, and it performs the indexing as expected. But I wish to clear the folder C:/solr-6.6.1/server/solr/core_K2_Depot/Depot after the indexing process finishes successfully.
Please suggest if there is a way to delete all the files from the folder via the DIH data-config.xml, or by another, easier way.
<!--Local filesystem-->
<dataConfig>
  <dataSource type="BinFileDataSource"/>
  <document>
    <entity name="K2FileEntity" processor="FileListEntityProcessor" dataSource="null"
            recursive="true"
            baseDir="C:/solr-6.6.1/server/solr/core_K2_Depot/Depot" fileName=".*pdf" rootEntity="false">
      <field column="file" name="id"/>
      <field column="fileLastModified" name="lastmodified"/>
      <entity name="pdf" processor="TikaEntityProcessor" onError="skip"
              url="${K2FileEntity.fileAbsolutePath}" format="text">
        <field column="title" name="title" meta="true"/>
        <field column="dc:format" name="format" meta="true"/>
        <field column="text" name="text"/>
      </entity>
    </entity>
  </document>
</dataConfig>
Usually, in production you want to run the DIH process via shell scripts: first copy the needed files from FTP, HTTP, S3, etc., then run a full-import or delta-import, then track the indexing via the status command; as soon as it ends successfully, you just need to execute the rm command:
# poll the DIH status until indexing has finished (host assumed; core name from the question)
while true; do
  status=$(curl -s "http://localhost:8983/solr/core_K2_Depot/dataimport?command=status")
  echo "$status" | grep -q '"status":"idle"' && break   # idle means the import ended
  sleep 10
done
rm -rf /path/to/imported/files   # remove the files no longer needed for indexing
There is no support in Solr for deleting external files.

How to handle Solr delta-import with a file-based datasource

I am trying to implement delta-import in Solr indexing. It works fine when I am indexing data from a database, but I want to implement it on a file-based datasource.
My data-config.xml file is like:
<dataSource type="com.solr.datasource.DataSource" name="SuggestionsFile"/>
<document name="suggester">
  <entity name="file" dataSource="SuggestionsFile">
    <field column="suggestion" name="suggestion"/>
  </entity>
</document>
I am using the DataImportHandler in the solrconfig.xml file. I am not able to post my full config file; I tried, but I don't know why it's not showing.
My DataSource class reads the text file and returns a list of data that Solr indexes. It works fine in the case of full-import, but not in the case of delta-import. Please suggest what else I need to do.
The FileListEntityProcessor supports filtering the file list based on the "newerThan" attribute:
<entity
    name="fileimport"
    processor="FileListEntityProcessor"
    newerThan="${dataimporter.last_index_time}"
    .. other options ..
>
  ...
</entity>
There's a complete example available online.
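With a file-based source like this, the incremental run is typically triggered as a full-import with clean=false, so that only files newer than the last index time get re-indexed, e.g. (host and core name assumed):
curl "http://localhost:8983/solr/core/dataimport?command=full-import&clean=false"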

Getting an exception while reading file content using the Solr LineEntityProcessor

I need to search inside file contents, for which I am using the Solr Data Import Handler. The response should show the content line where the search word appears. So, to process line by line, I am using the LineEntityProcessor. My data-config file is:
<dataConfig>
  <dataSource type="BinFileDataSource" name="fds"/>
  <document>
    <entity name="filelist" processor="FileListEntityProcessor" fileName="sample.docx"
            rootEntity="false" baseDir="C:\SampleDocuments">
      <entity name="fileline" processor="LineEntityProcessor"
              url="${filelist.fileAbsolutePath}" format="text">
        <field column="linecontent" name="rawLine"/>
      </entity>
    </entity>
  </document>
</dataConfig>
The schema.xml has an entry for rawLine:
<field name="rawLine" type="text" indexed="true" stored="true"/>
But when I run the command for full-import, it throws an exception:
DataImportHandlerException: java.lang.ClassCastException: java.io.FileInputStream cannot be cast to java.io.Reader
Please help me with this, as I have spent a few days on this problem.
BinFileDataSource works with an InputStream, while FileDataSource works with a Reader (which is what LineEntityProcessor expects).
You can try using FileDataSource instead to fix the casting issue:
<dataSource type="FileDataSource" name="fds"/>

Need help indexing XML files into Solr using DataImportHandler

I don't know Java, I don't know XML, and I don't know Lucene. Now that that's out of the way: I have been working to create a little project using Apache Solr/Lucene. My problem is that I am unable to index the XML files. I think I understand how it's supposed to work, but I could be wrong. I am not sure what information is required for you to help me, so I will just post the code.
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8" />
  <document>
    <!-- This first entity block will read all xml files in baseDir and feed it into the second entity block for handling. -->
    <entity name="AMMFdir" rootEntity="false" dataSource="null"
            processor="FileListEntityProcessor"
            fileName="^*\.xml$" recursive="true"
            baseDir="C:\Documents and Settings\saperez\Desktop\Tomcat\apache-tomcat-7.0.23\webapps\solr\data\AMMF_New"
    >
      <entity
          processor="XPathEntityProcessor"
          name="AMMF"
          pk="AcquirerBID"
          datasource="AMMFdir"
          url="${AMMFdir.fileAbsolutePath}"
          forEach="/AMMF/Merchants/Merchant/"
          transformer="DateFormatTransformer, RegexTransformer"
      >
        <field column="AcquirerBID" xpath="/AMMF/Merchants/Merchant/AcquirerBID" />
        <field column="AcquirerName" xpath="/AMMF/Merchants/Merchant/AcquirerName" />
        <field column="AcquirerMerchantID" xpath="/AMMF/Merchants/Merchant/AcquirerMerchantID" />
      </entity>
    </entity>
  </document>
</dataConfig>
Example xml file
<?xml version="1.0" encoding="utf-8"?>
<AMMF xmlns="http://tempuri.org/XMLSchema.xsd" Version="11.2" CreateDate="2011-11-07T17:05:14" ProcessorBINCIB="422443" ProcessorName="WorldPay" FileSequence="18">
  <Merchants Count="153">
    <Merchant ChangeIndicator="A" LocationCountry="840">
      <AcquirerBID>10029881</AcquirerBID>
      <AcquirerName>WorldPay</AcquirerName>
      <AcquirerMerchantID>*</AcquirerMerchantID>
      <Merchant ChangeIndicator="A" LocationCountry="840">
        <AcquirerBID>10029882</AcquirerBID>
        <AcquirerName>WorldPay2</AcquirerName>
        <AcquirerMerchantID>Hello World!</AcquirerMerchantID>
      </Merchant>
  </Merchants>
I have this in the schema:
<field name="AcquirerBID" type="string" indexed="true" stored="true" required="true" />
<field name="AcquirerName" type="string" indexed="true" stored="true" />
<field name="AcquirerMerchantID" type="string" indexed="true" stored="true"/>
I have this in the config:
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler" default="true">
  <lst name="defaults">
    <str name="config">AMMFconfig.xml</str>
  </lst>
</requestHandler>
The sample XML is not well formed. This might explain the errors indexing the files:
$ xmllint sample.xml
sample.xml:13: parser error : expected '>'
</Merchants>
^
sample.xml:14: parser error : Premature end of data in tag Merchants line 3
sample.xml:14: parser error : Premature end of data in tag AMMF line 2
Corrected XML
Here's what I think your sample data should look like (I didn't check the XSD file):
<?xml version="1.0" encoding="utf-8"?>
<AMMF xmlns="http://tempuri.org/XMLSchema.xsd" Version="11.2" CreateDate="2011-11-07T17:05:14" ProcessorBINCIB="422443" ProcessorName="WorldPay" FileSequence="18">
  <Merchants Count="153">
    <Merchant ChangeIndicator="A" LocationCountry="840">
      <AcquirerBID>10029881</AcquirerBID>
      <AcquirerName>WorldPay</AcquirerName>
      <AcquirerMerchantID>*</AcquirerMerchantID>
    </Merchant>
    <Merchant ChangeIndicator="A" LocationCountry="840">
      <AcquirerBID>10029882</AcquirerBID>
      <AcquirerName>WorldPay2</AcquirerName>
      <AcquirerMerchantID>Hello World!</AcquirerMerchantID>
    </Merchant>
  </Merchants>
</AMMF>
Alternative solution
I know you said you're not a programmer, but this task is significantly simpler if you use the SolrJ interface.
The following is a Groovy example which indexes your example XML:
//
// Dependencies
// ============
import org.apache.solr.client.solrj.SolrServer
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer
import org.apache.solr.common.SolrInputDocument

@Grapes([
    @Grab(group='org.apache.solr', module='solr-solrj', version='3.5.0'),
])
//
// Main
// =====
SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/");

def i = 1

new File(".").eachFileMatch(~/.*\.xml/) {
    it.withReader { reader ->
        def ammf = new XmlSlurper().parse(reader)
        ammf.Merchants.Merchant.each { merchant ->
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", i++)
            doc.addField("bid_s", merchant.AcquirerBID)
            doc.addField("name_s", merchant.AcquirerName)
            doc.addField("merchantId_s", merchant.AcquirerMerchantID)
            server.add(doc)
        }
    }
}

server.commit()
Groovy is a JVM scripting language that does not require a separate compilation step. It would be just as easy to maintain as a DIH config file.
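If you save the script as, say, indexer.groovy (filename assumed), running groovy indexer.groovy will let Grape fetch the solr-solrj dependency automatically on the first run.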
To figure out how DIH XML import works, I suggest you first carefully read this chapter in the DIH wiki: http://wiki.apache.org/solr/DataImportHandler#HttpDataSource_Example.
Open the Slashdot link http://rss.slashdot.org/Slashdot/slashdot in your browser, then right-click on the page and select View source. That's the XML file used in this example.
Compare it with the XPathEntityProcessor configuration in the DIH example and you'll see how easy it is to import any XML file into Solr.
If you need more help just ask...
Often the best thing to do is NOT use the DIH. How hard would it be to just post this data using the API and a custom script in a language you DO know?
The benefit of this approach is two-fold:
You learn more about your system, and know it better.
You don't spend time trying to understand the DIH.
The downside is that you're re-inventing the wheel a bit, but the DIH is quite a thing to understand.

Example Solr XML working fine but my own XML files not working

Please find below the steps that I executed.
I am following the same structure as mentioned by you, and checked results in the admin page by clicking the search button; the samples are working fine.
Ex: Added monitor.xml and searched for "video"; it displays results, and the search content is displayed properly.
Let me explain the problem I am facing:
Step 1: I started Apache Tomcat.
Step 2: Indexing data:
java -jar post.jar myfile.xml
Here is my XML content:
<add>
  <doc>
    <field name="id">11111</field>
    <field name="name">Youth to Elder</field>
    <field name="Author"> Integrated Research Program</field>
    <field name="Year">2009</field>
    <field name="Publisher"> First Nation</field>
  </doc>
  <doc>
    <field name="id">22222</field>
    <field name="name">Strategies </field>
    <field name="Author">Implementation Committee </field>
    <field name="Year">2001</field>
    <field name="Publisher">Policy</field>
  </doc>
</add>
Step 4: I did
java -jar post.jar myfile.xml
Output of the above:
SimplePostTool: version 1.2
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8, other encodings are not currently supported
SimplePostTool: POSTing files to http://localhost:8983/solr/update..
SimplePostTool: POSTing file curnew.xml
SimplePostTool: FATAL: Solr returned an error: Bad Request
Requesting your help on this.
You need to configure your schema. The default schema doesn't have any Author, Year or Publisher fields.
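For example, declarations along these lines would need to be added to schema.xml before re-posting (the field types here are assumptions; use types that exist in your schema):
<field name="Author" type="text_general" indexed="true" stored="true"/>
<field name="Year" type="string" indexed="true" stored="true"/>
<field name="Publisher" type="text_general" indexed="true" stored="true"/>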
