How to index files over FTP?
The FTP repo contains all my documents in different formats. I am able to do this task for a filesystem folder, but it doesn't work with FTP.
I have this configuration via the DIH:
<dataConfig>
<dataSource type="BinFileDataSource" />
<dataSource type="BinURLDataSource" name="binSource" baseUrl="ftp://localhost:21/" onError="skip" user="solr_ftp" password="solr_ftp_pass" />
<document>
<!-- baseDir: path to the folder that contains the files (pdf | doc | docx | ...) -->
<entity name="files" dataSource="binSource" baseDir="ftp://localhost" rootEntity="false" processor="FileListEntityProcessor" fileName=".*\.(doc|pdf|docx|txt|rtf|html|htm)" onError="skip" recursive="true">
<field column="fileAbsolutePath" name="filePath" />
<field column="resourceName" name="resourceName" />
<field column="fileSize" name="size" />
<field column="fileLastModified" name="lastModified" />
<!-- tika -->
<entity name="documentImport" processor="TikaEntityProcessor" url="${files.fileAbsolutePath}" format="text">
<field column="title" name="title" meta="true"/>
<field column="subject" name="subject" meta="true"/>
<field column="description" name="description" meta="true"/>
<field column="comments" name="comments" meta="true"/>
<field column="Author" name="author" meta="true"/>
<field column="Keywords" name="keywords" meta="true"/>
<field column="category" name="category" meta="true"/>
<field column="xmpTPg:NPages" name="Page-Count" meta="true"/>
<field column="text" name="content"/>
</entity>
</entity>
</document>
</dataConfig>
Error:
failed:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: 'baseDir' value: ftp://localhost is not a directory Processing Document # 1
Related
how to index PDF files in Apache Solr (version 8) with XML documents
Example:
<add>
<doc>
<field name="id">filePath</field>
<field name="title">the title</field>
<field name="description">description of the pdf file</field>
<field name="Creator">John Doe</field>
<field name="Language">English</field>
<field name="Publisher">Publisher_name</field>
<field name="tags">some_tag</field>
<field name="is_published">true</field>
<field name="year">2002</field>
<field name="file">path_to_the_file/file_name.pdf</field>
</doc>
</add>
UPDATE
How to set literal.id to filePath?
OK, this is what I did.
I am using the Solr DIH.
In solrconfig.xml:
<requestHandler name="/dataimport_fromXML" class="org.apache.solr.handler.dataimport.DataImportHandler">
<lst name="defaults">
<str name="config">data.import.xml</str>
<str name="update.chain">dedupe</str>
</lst>
</requestHandler>
and the data.import.xml file
<dataConfig>
<dataSource type="BinFileDataSource" name="data"/>
<dataSource type="FileDataSource" name="main"/>
<document>
<!-- url : the url of the xml file that holds the metadata -->
<entity name="rec" processor="XPathEntityProcessor" url="${solr.install.dir:}solr/solr_core_name/filestore/docs_metaData/metaData.xml" forEach="/docs/doc" dataSource="main" transformer="RegexTransformer,DateFormatTransformer">
<field column="resourcename" xpath="//resourcename" name="resourceName" />
<field column="title" xpath="//title" name="title" />
<field column="subject" xpath="//subject" name="subject"/>
<field column="description" xpath="//description" name="description"/>
<field column="comments" xpath="//comments" name="comments"/>
<field column="author" xpath="//author" name="author"/>
<field column="keywords" xpath="//keywords" name="keywords"/>
<!-- baseDir: path to the folder that contains the files (pdf | doc | docx | ...) -->
<entity name="files" dataSource="null" rootEntity="false" processor="FileListEntityProcessor" baseDir="${solr.install.dir:}solr/solr_core_name/filestore/docs_folder" fileName="${rec.resourcename}" onError="skip" recursive="false">
<field column="fileAbsolutePath" name="filePath" />
<field column="resourceName" name="resourceName" />
<field column="fileSize" name="size" />
<field column="fileLastModified" name="lastModified" />
<!-- for each file, extract metadata if it is not in the xml metadata file -->
<entity name="file" processor="TikaEntityProcessor" dataSource="data" format="text" url="${files.fileAbsolutePath}" onError="skip" recursive="false">
<field column="title" name="title" meta="true"/>
<field column="subject" name="subject" meta="true"/>
<field column="description" name="description" meta="true"/>
<field column="comments" name="comments" meta="true"/>
<field column="Author" name="author" meta="true"/>
<field column="Keywords" name="keywords" meta="true"/>
</entity>
</entity>
</entity>
</document>
</dataConfig>
After that, all you have to do is create an XML file (metaData.xml):
<docs>
<doc>
<resourcename>fileName.pdf</resourcename>
<title></title>
<subject></subject>
<description></description>
<comments></comments>
<author></author>
<keywords></keywords>
</doc>
</docs>
and put all your files in one folder:
"${solr.install.dir:}solr/solr_core_name/filestore/docs_folder"
where ${solr.install.dir:} is the Solr home folder.
For the update in the question (how to set literal.id to filePath):
in data.import.xml, map the fileAbsolutePath to the id:
<field column="fileAbsolutePath" name="id" />
One last thing: in this example the id is auto-generated. I am using
<updateRequestProcessorChain name="dedupe">
which creates a unique id based on a hash of the content, to avoid duplication.
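For reference, a matching dedupe chain in solrconfig.xml might look roughly like this (a sketch based on Solr's de-duplication support; the fields value and the Lookup3Signature choice are assumptions you should adapt to your schema):

```xml
<!-- sketch: hash-based dedupe chain; adjust signatureField/fields to your schema -->
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <!-- the field that receives the computed hash -->
    <str name="signatureField">id</str>
    <bool name="overwriteDupes">true</bool>
    <!-- assumed: hash the extracted text; list whichever fields define a duplicate -->
    <str name="fields">content</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```

Documents whose hashed fields are identical then collapse to a single id, which is what avoids the duplicates.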
I'm successfully able to index pdf, doc, ppt, etc. files using the Data Import Handler in Solr 4.3.0.
My data-config.xml looks like this:
<dataConfig>
<dataSource name="bin" type="BinFileDataSource" />
<document>
<entity name="f" dataSource="null" rootEntity="false"
processor="FileListEntityProcessor"
baseDir="C:\Users\aroraarc\Desktop\Impdo"
fileName=".*\.(DOC|PDF|pdf|doc|docx|ppt|pptx|xls|xlsx|txt)" onError="skip"
recursive="true">
<field column="fileAbsolutePath" name="path" />
<field column="fileSize" name="size" />
<field column="fileLastModified" name="lastmodified" />
<field column="file" name="fileName"/>
<entity name="tika-test" dataSource="bin" processor="TikaEntityProcessor"
url="${f.fileAbsolutePath}" format="text" onError="skip">
<field column="Author" name="author" meta="true"/>
<field column="title" name="title" meta="true"/>
<field column="text" name="content"/>
</entity>
</entity>
</document>
</dataConfig>
However, in the fileName field I want to insert the pure file name without the extension. E.g., instead of 'HelloWorld.txt' I want only 'HelloWorld' in the fileName field. How do I achieve this?
Thanks in advance!
Check the ScriptTransformer to replace or change the value before it is indexed.
Example:
Data config: add a script function:
<script><![CDATA[
function changeFileName(row){
    // 'file' is the column emitted by FileListEntityProcessor
    var fileName = row.get('file');
    if (fileName != null) {
        // strip the extension, e.g. 'HelloWorld.txt' -> 'HelloWorld'
        var dot = fileName.lastIndexOf('.');
        if (dot > 0) {
            fileName = fileName.substring(0, dot);
        }
        row.put('file', fileName);
    }
    return row;
}
]]></script>
Entity mapping -
<entity name="f" transformer="script:changeFileName" ....>
......
</entity>
I have an existing collection to which I want to add an RSS importer. I've copied what I could glean from the example-DIH/solr/rss code.
The details are below, but the bottom line is that everything seems to run, but it always says "Fetched: 0" (and I get no documents). There are no exceptions in the Tomcat log.
Questions:
Is there a way to turn up debugging on rss importers?
Can I see solr's actual request and response?
What would cause the request to succeed, but no rows to be fetched?
Is there a tutorial for adding an RSS DIH to an existing collection?
Thanks!
My solrconfig.xml file contains the requestHandler:
<requestHandler name="/dataimport"
class="org.apache.solr.handler.dataimport.DataImportHandler">
<lst name="defaults">
<str name="config">rss-data-config.xml</str>
</lst>
</requestHandler>
And rss-data-config.xml:
<dataConfig>
<dataSource type="URLDataSource" />
<document>
<entity name="slashdot"
pk="link"
url="http://rss.slashdot.org/Slashdot/slashdot"
processor="XPathEntityProcessor"
forEach="/rss/channel | /rss/item"
transformer="DateFormatTransformer">
<field column="source_name" xpath="/rss/channel/title" commonField="true" />
<field column="title" xpath="/rss/item/title" />
<field column="link" xpath="/rss/item/link" />
<field column="body" xpath="/rss/item/description" />
<field column="date" xpath="/rss/item/date" dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss" />
</entity>
</document>
</dataConfig>
and from schema.xml:
<fields>
<field name="title" type="text_general" required="true" indexed="true" stored="true"/>
<field name="link" type="string" required="true" indexed="true" stored="true"/>
<field name="source_name" type="text_general" required="true" indexed="true" stored="true"/>
<field name="body" type="text_general" required="false" indexed="false" stored="true"/>
<field name="date" type="date" required="true" indexed="true" stored="true" />
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
<field name="_version_" type="long" indexed="true" stored="true"/>
</fields>
When I run the dataimport from the admin web page, it all seems to go well. It shows "Requests: 1" and there are no exceptions in the tomcat log:
Mar 12, 2013 9:02:58 PM org.apache.solr.handler.dataimport.DataImporter maybeReloadConfiguration
INFO: Loading DIH Configuration: rss-data-config.xml
Mar 12, 2013 9:02:58 PM org.apache.solr.handler.dataimport.DataImporter loadDataConfig
INFO: Data Configuration loaded successfully
Mar 12, 2013 9:02:58 PM org.apache.solr.handler.dataimport.DataImporter doFullImport
INFO: Starting Full Import
Mar 12, 2013 9:02:58 PM org.apache.solr.handler.dataimport.SimplePropertiesWriter readIndexerProperties
INFO: Read dataimport.properties
Mar 12, 2013 9:02:59 PM org.apache.solr.handler.dataimport.DocBuilder execute
INFO: Time taken = 0:0:0.693
Mar 12, 2013 9:02:59 PM org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: [articles] webapp=/solr path=/dataimport params={optimize=false&clean=false&indent=true&commit=false&verbose=true&entity=slashdot&command=full-import&debug=true&wt=json} {} 0 706
Your problem here is due to your rss-data-config.xml and the defined XPaths.
If you open the URL http://rss.slashdot.org/Slashdot/slashdot in Internet Explorer and hit F12 for the developer tools, it will show you the structure of the XML.
You can see that the node <item> is a child of <channel>, not of <rss>. So your config should look as follows:
<?xml version="1.0" encoding="UTF-8" ?>
<dataConfig>
<dataSource type="URLDataSource" />
<document>
<entity name="slashdot"
pk="link"
url="http://rss.slashdot.org/Slashdot/slashdot"
processor="XPathEntityProcessor"
forEach="/rss/channel | /rss/channel/item"
transformer="DateFormatTransformer">
<field column="source_name" xpath="/rss/channel/title" commonField="true" />
<field column="title" xpath="/rss/channel/item/title" />
<field column="link" xpath="/rss/channel/item/link" />
<field column="body" xpath="/rss/channel/item/description" />
<field column="date" xpath="/rss/channel/item/date" dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss" />
</entity>
</document>
</dataConfig>
Which Solr version are you using?
In 3.x you have the debug feature with DIH, which will help you debug step by step.
It's missing in 4.x; check SOLR-4151.
The following data-config.xml file does the work for Slashdot (Solr 4.2.0):
<dataConfig>
<dataSource type="HttpDataSource" />
<document>
<entity name="slashdot"
pk="link"
url="http://rss.slashdot.org/Slashdot/slashdot"
processor="XPathEntityProcessor"
forEach="/rss/channel/item"
transformer="DateFormatTransformer">
<field column="title" xpath="/rss/channel/item/title" />
<field column="link" xpath="/rss/channel/item/link" />
<field column="description" xpath="/rss/channel/item/description" />
<field column="creator" xpath="/rss/channel/item/creator" />
<field column="item-subject" xpath="/rss/channel/item/subject" />
<field column="slash-department" xpath="/rss/channel/item/department" />
<field column="slash-section" xpath="/rss/channel/item/section" />
<field column="slash-comments" xpath="/rss/channel/item/comments" />
<field column="date" xpath="/rss/channel/item/date" dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss'Z'" />
</entity>
</document>
</dataConfig>
Notice the extra 'Z' on the dateTimeFormat, which is necessary according to "schema.xml"
Quoting schema.xml
The format for this date field is of the form 1995-12-31T23:59:59Z, and
is a more restricted form of the canonical representation of dateTime
http://www.w3.org/TR/xmlschema-2/#dateTime
The trailing "Z" designates UTC time and is mandatory.
Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
All other components are mandatory.
Update your rss-data-config.xml as below
<dataConfig>
<dataSource type="URLDataSource" />
<document>
<entity name="slashdot"
pk="link"
url="http://rss.slashdot.org/Slashdot/slashdot"
processor="XPathEntityProcessor"
forEach="/RDF/channel | /RDF/item"
transformer="DateFormatTransformer">
<field column="source" xpath="/RDF/channel/title" commonField="true" />
<field column="source-link" xpath="/RDF/channel/link" commonField="true" />
<field column="subject" xpath="/RDF/channel/subject" commonField="true" />
<field column="title" xpath="/RDF/item/title" />
<field column="link" xpath="/RDF/item/link" />
<field column="description" xpath="/RDF/item/description" />
<field column="creator" xpath="/RDF/item/creator" />
<field column="item-subject" xpath="/RDF/item/subject" />
<field column="slash-department" xpath="/RDF/item/department" />
<field column="slash-section" xpath="/RDF/item/section" />
<field column="slash-comments" xpath="/RDF/item/comments" />
<field column="date" xpath="/RDF/item/date" dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss" />
</entity>
</document>
</dataConfig>
It worked for me
I'm developing a search engine using Solr, and I've been successful in indexing data from one table using the DIH (Data Import Handler). What I need is to get search results from 5 different tables. I couldn't do this without help.
If we assume each table has x rows, there should be x documents from each table, which leads to 5x documents in total for my 5 tables. In dataconfig.xml, I created 5 separate entities in a single document, as shown below. But when I query *:*, the result is only 6 documents from the users entity and 3 from the classes entity, which together equal 9, the total number of rows in users.
Clearly, this way didn't work for me, so how can I achieve this using only a single core?
Note: I followed the DIH Quick Start and the DIH tutorial, which didn't help me.
<document>
<!-- Users -->
<entity dataSource="JdbcDataSource" name="users" >
<field column="name" name="name" sourceColName="name" />
<field column="username" name="username" sourceColName="username"/>
<field column="email" name="email" sourceColName="email" />
<field column="country" name="country" sourceColName="country" />
</entity>
<!-- Classes -->
<entity dataSource="JdbcDataSource" name="classes" >
<field column="code" name="code" sourceColName="code" />
<field column="title" name="title" sourceColName="title" />
<field column="description" name="description" sourceColName="description" />
</entity>
<!-- Schools -->
<entity dataSource="JdbcDataSource" name="schools" >
<field column="school_name" name="school_name" sourceColName="school_name" />
<field column="country" name="country" sourceColName="country" />
<field column="city" name="city" sourceColName="city" />
</entity>
<!-- Resources -->
<entity dataSource="JdbcDataSource" name="resources" >
<field column="title" name="title" sourceColName="title" />
<field column="description" name="description" sourceColName="description" />
</entity>
<!-- Tasks -->
<entity dataSource="JdbcDataSource" name="tasks" >
<field column="title" name="title" sourceColName="title" />
<field column="description" name="description" sourceColName="description" />
</entity>
</document>
You need to look at the structure of your tables, then either create queries with joins or create nested entities, like this for example:
<dataConfig>
<dataSource driver="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:/temp/example/ex" user="sa" />
<document name="schools">
<entity name="school" query="select * from schools s ">
<field column="school_name" name="school_Name" />
<entity name="school_class" query="select * from schools_classes sc where sc.school_name = '${school.school_name}'">
<field column="class_code" name="class_code" />
<entity name="class" query="select class_name from classes c where c.class_name= '${school_class.class_code}'">
<field column="class_name" name="class_name" />
</entity>
</entity>
</entity>
</document>
</dataConfig>
I am trying to scan all pdf/doc files in a directory. This works fine and I am able to scan all documents.
The next thing I'm trying to do is to also get the filename of the file in the search results. The filename, however, never shows up. I tried a couple of things, but the documentation is not very helpful about how to do this.
I am using the solr configuration found in the solr distribution: apache-solr-3.1.0/example/example-DIH/solr/tika/conf
This is my dataConfig:
<dataConfig>
<dataSource type="BinFileDataSource" name="bin"/>
<document>
<entity name="f" processor="FileListEntityProcessor" recursive="true"
rootEntity="false" dataSource="null" baseDir="C:/solrtestsmall"
fileName=".*\.(DOC|PDF|pdf|doc)" onError="skip">
<entity name="tika-test" processor="TikaEntityProcessor"
url="${f.fileAbsolutePath}" format="text" dataSource="bin"
onError="skip">
<field column="Author" name="author" meta="true"/>
<field column="title" name="title" meta="true"/>
<field column="text" name="text"/>
</entity>
<field column="fileName" name="fileName"/>
</entity>
</document>
</dataConfig>
I am interested in how to configure this correctly, and also in any other places where I can find specific documentation.
You should use file instead of fileName as the column:
<field column="file" name="fileName"/>
Don't forget to add 'fileName' to schema.xml in the fields section:
<field name="fileName" type="string" indexed="true" stored="true" />