Foreign key references in Solr dataImportHandler - solr

I've just started using Solr. In my database I have a collection of folders containing two kinds of entities; let's call them barrels and monkeys. Folders contain barrels and barrels contain monkeys. Users should be able to search for barrels and monkeys, but they are only allowed to see certain folders, and the search should not return barrels or monkeys in folders they are not allowed to see. I have a filter query which does this fine for the barrels, but I'm having trouble getting the DataImportHandler to import the folder ids for the monkeys. My data-config file looks like this:
<dataConfig>
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost/myDB" user="myUser" password="pass"/>
  <document name="item">
    <entity name="barrels" query="select * from barrels where is_deleted=0" transformer="TemplateTransformer"
            deltaQuery="select barrel_id from barrels where last_modified > '${dataimporter.last_index_time}'">
      <field column="itemType" template="barrels" name="itemType"/>
      <field column="barrel_id" name="id" pk="true" template="barrel-${barrels.barrel_id}"/>
      <!-- Other fields -->
      <field column="folder_id" name="folder_id"/>
    </entity>
    <entity name="monkeys" query="select * from monkeys where is_deleted=0" transformer="TemplateTransformer"
            deltaQuery="select monkey_id from monkeys where last_modified > '${dataimporter.last_index_time}'">
      <field column="itemType" template="monkeys" name="itemType"/>
      <field column="monkey_id" name="id" pk="true" template="monkey-${monkeys.monkey_id}"/>
      <field column="barrel_id" name="barrel_id"/>
      <!-- Other fields -->
      <entity name="barrels"
              query="select folder_id from barrels where barrel_id='${monkeys.barrel_id}'">
        <field name="folder_id" column="folder_id" />
      </entity>
    </entity>
  </document>
</dataConfig>
When I change the '${monkeys.barrel_id}' in the foreign key query to 28, it works, but when I try to get it to use the correct id, it doesn't import anything.
Can anyone spot what I'm doing wrong, or tell me a good way to debug this kind of thing? E.g. how can I get it to tell me what value it has for '${monkeys.barrel_id}'? All the relevant fields are defined in schema.xml. Since having this problem I've made sure the documents all have the same names as the tables, and tried changing various bits of the query to upper case, but everything's in lower case in the database and it doesn't seem to help.

Having asked the question, I did manage to figure it out eventually. Here's what I learnt:
1) Getting it to tell you the query it is actually running is very useful, and it is just a matter of setting the logging level to FINE. You have to set it to FINE in all the relevant places though. So for my standalone.xml (in WildFly), in addition to the
<logger category="org.apache.solr">
    <level name="FINE"/>
</logger>
bit, I needed to set the file handler and one other logging element to FINE as well. Really should have realised that earlier...
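For reference, the other place was the handler level in the logging subsystem of standalone.xml; a rough sketch assuming the stock WildFly setup (handler names may differ in yours), and similarly on the console handler if you read the logs there:
<periodic-rotating-file-handler name="FILE" autoflush="true">
    <!-- handlers filter by level too, so FINE has to be set here as well as on the logger category -->
    <level name="FINE"/>
    <formatter>
        <named-formatter name="PATTERN"/>
    </formatter>
    <file relative-to="jboss.server.log.dir" path="server.log"/>
    <suffix value=".yyyy-MM-dd"/>
</periodic-rotating-file-handler>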
2) The single quotes are not part of the expression evaluation syntax, they are just quotes, so you don't need them when the column is an int. I guess the example that ships with Solr uses string ids rather than int ids, and that's why it has the quotes.
3) Once I'd got rid of the quotes, changing the case did make a difference. For my database the preferred case was Barrel_ID for some reason. I hadn't really tried capitals at both ends but not in the middle, but that's what worked. So I guess the moral of the story is that it's worth trying lots of different cases, even ones that seem silly.
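For the record, the inner entity that finally worked looked roughly like this (quotes dropped, placeholder written in the case the driver reported; the exact casing will depend on your JDBC driver and table definition):
<entity name="barrels"
        query="select folder_id from barrels where barrel_id=${monkeys.Barrel_ID}">
    <field name="folder_id" column="folder_id"/>
</entity>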

Related

Efficiency aspect of delta import in solr

I have about 2,100,000 rows of data. A full-import takes about 2 minutes. For any updates in the table I'm using delta import to index the changes, and the delta import takes 6 minutes.
From an efficiency standpoint it is therefore better to do a full import rather than a delta import. So what is the point of delta import? Is there a better way to use delta import to improve its efficiency?
I followed the steps in the documentation.
data-config.xml
<dataConfig>
  <dataSource type="JdbcDataSource" driver="com.dbschema.CassandraJdbcDriver" url="jdbc:cassandra://127.0.0.1:9042/test" autoCommit="true" rowLimit="-1" batchSize="-1"/>
  <document name="content">
    <entity name="test" query="SELECT * from person"
            deltaImportQuery="select * from person where seq=${dataimporter.delta.seq}"
            deltaQuery="select seq from person where last_modified > '${dataimporter.last_index_time}' ALLOW FILTERING"
            autoCommit="true">
      <field column="seq" name="id" />
      <field column="last" name="last_s" />
      <field column="first" name="first_s" />
      <field column="city" name="city_s" />
      <field column="zip" name="zip_s" />
      <field column="street" name="street_s" />
      <field column="age" name="age_s" />
      <field column="state" name="state_s" />
      <field column="dollar" name="dollar_s" />
      <field column="pick" name="pick_s" />
    </entity>
  </document>
</dataConfig>
The usual way of setting up delta indexing (like you did) runs two queries instead of a single one, so in some cases it is not optimal.
I prefer to set up the delta as a single full-import query (sketched below), so there is only one query to maintain; it's cleaner, and the delta runs in a single query. You should try it, it might improve things. The downside is deletes: you either do some soft-deleting, or you still need the usual delta configuration for that (I favour the first).
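What I mean is the single-query "delta via full-import" pattern; a rough sketch adapted to your person table (this is the standard SQL form - your Cassandra driver may not accept an OR in the WHERE clause, so treat it as the shape of the idea rather than a drop-in config):
<entity name="test"
        query="select * from person
               where '${dataimporter.request.clean}' != 'false'
                  or last_modified > '${dataimporter.last_index_time}'">
    <!-- same field mappings as above -->
</entity>
A full rebuild is then /dataimport?command=full-import&clean=true, and a delta is the same command with clean=false, so one query serves both cases.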
Also, of course, make sure the last_modified column is properly indexed. I am not familiar with the Cassandra JDBC driver, so you should double-check.
Last thing: if you are using DataStax Enterprise Edition, you can query it via Solr if you have it configured for that. In that case you could also try indexing off the SolrEntityProcessor, and with a small request-param trick you can do both full and delta indexing. I used it successfully in the past.
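In case it helps, an entity using the SolrEntityProcessor looks something like this (the core URL and field list here are made up; the q request parameter is the "request param trick" that lets one config do both full and partial pulls):
<entity name="fromSolr"
        processor="SolrEntityProcessor"
        url="http://localhost:8983/solr/source_core"
        query="${dataimporter.request.q}"
        fl="id,last_s,first_s,city_s"/>
Then /dataimport?command=full-import&q=*:* pulls everything, while a narrower q (for example a last_modified range) gives you an incremental run.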

Dynamic TableName SOLR data import handler

I'm looking to configure SOLR to query a table based on certain data.
I unfortunately have to work with how the database is set up, but here's what I'm after.
I have a table named Company that will contain a certain "prefix" value.
I want to use that prefix value to determine what tables I should query for the DIH.
As a quick sample:
<entity name="company" query="Select top 1 prefix from Company">
<field name="prefix" column="prefix"/>
<entity name="item" query="select * from ${company.prefix}item">
<field column="ItemID" name="id"/>
<field column="Description" name="description/>
</entity>
</entity>
However, I only ever seem to get 1 document processed, despite that table containing over 200,000 rows.
What am I doing wrong?
I think you could achieve this by:
using a stored procedure. You can call a stored procedure from DIH as seen here.
Inside the stored procedure you can do the table lookup as needed, and then return the results from the real query (a rough sketch follows below).
Depending on how good you are with MSSQL's SQL dialect, you might be able to put everything into a single SQL query and use that directly in DIH, but I'm not sure about that.
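Purely as an illustration (GetItemsForCurrentPrefix is a made-up procedure name; the real one would read the prefix from Company and select from the matching <prefix>item table):
<!-- Hypothetical stored procedure doing the prefix lookup internally and returning the item rows -->
<entity name="item" query="EXEC dbo.GetItemsForCurrentPrefix">
    <field column="ItemID" name="id"/>
    <field column="Description" name="description"/>
</entity>
That keeps the dynamic table name out of the DIH config entirely.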

Solr: split category data and product data over different cores/instances?

I have a webshop with multiple different product categories.
For each category I have a description, metadata, image and some more category specific data.
Right now, my data-config.xml looks as below.
However, I think this way I'm indexing all the category-specific data for each product individually, taking up a lot more space than needed.
I'm now considering moving the indexing and storing of category-specific data to a separate Solr core/instance; this way I have basically separated the product-specific data from the category data.
Is this reasoning correct? Is it better to move the category specific data outside this core/instance?
<document name="shopitems">
<entity name="shopitem" pk="id" query="select * from products" >
<field name="id" column="ID" />
<field name="articlenr" column="articlenr" />
<field name="title" column="title" />
<entity name="catdescription" query="select
pagetitle_de as cat_pagetitle_de,pagetitle_en as cat_pagetitle_en
,description as cat_description
,metadescription as cat_metadescription
FROM products_custom_cat_descriptions where articlegroup = '${shopitem.articlegroup}'">
</entity>
</entity>
</document>
Generally speaking, your implementation will be easier if you flatten (de-normalize) everything, as you did. If you spin the categories off into a different core, Solr becomes harder to use - you will need extra queries, extra client code, faceting won't work so easily, etc. - all of which will result in a performance hit, on top of the extra implementation difficulties.
From the numbers you give (staying under 1GB of index size? that is not big), I would definitely not split out the category data; it will make your life harder for not much practical gain.
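For instance, with everything flattened into the one core, a single request can filter on a product field and facet on a category field at the same time (field names taken from your config; the query values here are made up):
q=title:desk&fq=cat_pagetitle_en:"Office"&facet=true&facet.field=cat_pagetitle_en
With the categories in a separate core you would need a second query plus client-side stitching to get the same result.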

Can fields be nested in Solr?

I need to have fields nested inside of fields. Does Solr provide that ability?
For example: I need a multivalued field called Products, and each Product needs to in turn have a multivalued field Properties. I need there to be nesting so that, when I search for a property, it only returns the corresponding product info and not all products.
Currently, I find that if I have 10 products with 10 properties each in a doc, then searching for a property returns all the products in the doc that holds that property. I then have to manually work out which product had that property by comparing array indices; so if property 53 is returned, it would be the 6th product. This gets worse when not all products have an equal number of properties.
Is there no easier way?
Thanks in advance for your replies.
Yes, recent Solr supports nested documents, though there are some tradeoffs. Mostly, you have to index and delete the whole parent+children block together, but that should not be a problem in your case.
After that, you can search them in a couple of different ways using block joins.
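For example (a sketch assuming the parent documents carry something like doc_type:product and the children carry a property field), a parent block join returns only the product whose child matched, rather than every product:
q={!parent which="doc_type:product"}property:53
The which parameter has to match all parent documents, and the query after the closing brace runs against the child documents.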
Not sure if it is useful in your situation but this is what I am doing in my data-config.xml
<document>
  <entity name="paper" query="SELECT * FROM papers">
    <field column="title" name="title"/>
    <field column="title" name="title_unstem"/>
    <field column="year" name="publish_date"/>
    <entity name="person" query="SELECT * FROM papers_people PA, people A WHERE PA.person_id = A.id AND PA.paper_id='${paper.id}'">
      <field column="id" name="author_id"/>
      <field column="first_name" name="first_name"/>
      <field column="last_name" name="last_name"/>
      <field column="full_name" name="author"/>
    </entity>
    <entity name="volume" query="SELECT * FROM volumes WHERE id='${paper.volume_id}'">
      <field column="id" name="volume_id"/>
      <field column="title" name="volume_title"/>
      <field column="anthology_id" name="volume_anthology"/>
    </entity>
  </entity>
</document>
Basically, as you can see, my Paper has many Authors and belongs to a Volume. I am doing this in Ruby on Rails with the Blacklight gem, so if you have any questions just ask me.
If this is your key requirement and you haven't invested much in Solr yet, then I suggest you look at Elasticsearch: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html
Otherwise, block join is the only out-of-the-box way to do it in Solr, and it looks more like a hack.

How do I index rich-format documents contained as database BLOBs with Solr 4.0+?

I've found a few related solutions to this problem. The related solutions will not work for me as I'll explain. (I'm using Solr 4.0 and indexing data stored in an Oracle 11g database.)
Jonck van der Kogel's related solution (from 2009) is explained here. He describes creating a custom Transformer, sort of like the ClobTransformer that ships with Solr. This is going down the elegant path but is not using Tika which is now integrated with Solr. (He uses external PDFBox and FontBox.) This creates multiple maintenance / upgrade dependencies. Also, I need to be able to index Word documents in addition to PDF.
Since Kogel's solution seems to be on the right path, is there a way to use the Tika classes included with Solr in a custom Transformer? That would allow all the Tika functionality with Kogel's elegant database solution.
Another related solution is the ExtractingRequestHandler (ERH) that ships with Solr. However, as the name suggests, this is a request handler, such as to handle HTTP posts of rich-text documents. To extract documents from the database this way has performance and security problems. I would have to make the database BLOBs accessible via HTTP. I've found no discussion of using ERH for direct ingest from a database BLOB. Is it possible to directly ingest from database BLOBs with Solr Cell?
Another related solution is to write a Transformer (like Kogel's above) to convert a byte[] to a string (from DataImportHandler FAQ). With true binary documents this is going to feed junk into the index and not properly extract the text elements like Tika does. Won't work.
A final related solution is UpdateRichDocuments offered by the RichDocumentHandler. This is deprecated and no longer available in Solr. The page refers you to the ExtractingRequestHandler (discussed above).
It seems like the right solution is to use the DataImportHandler and a custom Transformer using the Tika classes. How does this work?
Many hours later... First, there is a lot of misleading, wrong and useless information on this problem. No page seemed to provide everything in one place. All of the information is well intentioned but between differing versions and some going over my head, it didn't solve the problem. Here is my collection of what I learned and the solution. To reiterate, I'm using Solr 4.0 (on Tomcat) + Oracle 11g.
Solution overview: DataImportHandler + TikaEntityProcessor + FieldStreamDataSource
Step 1, make sure you update your solrconfig.xml so that Solr can find the TikaEntityProcessor + DataImportHandler + Solr Cell stuff.
<lib dir="../contrib/dataimporthandler/lib" regex=".*\.jar" />
<!-- will include extras (where TikaEntPro is) and regular DIH -->
<lib dir="../dist/" regex="apache-solr-dataimporthandler-.*\.jar" />
<lib dir="../contrib/extraction/lib" regex=".*\.jar" />
<lib dir="../dist/" regex="apache-solr-cell-\d.*\.jar" />
Step 2, modify your data-config.xml to include your BLOB table. This is where I had the most trouble since the solutions to this problems have changed a lot as versions have changed. Plus, using multiple data sources and plugging them together correctly was not intuitive to me. Very sleek once it's done though. Make sure to replace your IP, SID name, username, password, table names, etc.
<dataConfig>
  <dataSource name="dastream" type="FieldStreamDataSource" />
  <dataSource name="db" type="JdbcDataSource"
              driver="oracle.jdbc.OracleDriver"
              url="jdbc:oracle:thin:@192.1.1.1:1521:sid"
              user="username"
              password="password"/>
  <document>
    <entity
        name="attachments"
        query="select * from schema.attachment_table"
        dataSource="db">
      <entity
          name="attachment"
          dataSource="dastream"
          processor="TikaEntityProcessor"
          url="blob_column"
          dataField="attachments.BLOB_COLUMN"
          format="text">
        <field column="text" name="body" />
      </entity>
    </entity>
    <entity name="unrelated" query="select * from another_table" dataSource="db">
    </entity>
  </document>
</dataConfig>
Important note here: if you're getting "No field available for name : whatever" errors when you attempt to import, the FieldStreamDataSource is not able to resolve the data-field name you gave. For me, I had to have the url attribute with the lower-case column name, and the dataField attribute with outside_entity_name.UPPERCASE_BLOB_COLUMN. Having the column name wrong at all causes the same problem.
Step 3, you need to modify your schema.xml to add the field that the extracted text ends up in (and any other columns you need to index/store). Modify according to your needs.
<field name="body" type="text_en" indexed="false" stored="false" />
<field name="attach_desc" type="text_general" indexed="true" stored="true" />
<field name="text" type="text_en" indexed="true" stored="false" multiValued="true" />
<field name="content" type="text_general" indexed="false" stored="true" multiValued="true" />
<copyField source="body" dest="text" />
<copyField source="body" dest="content" />
With that you should be well on your way to saving many hours getting your binary, rich-text documents (aka rich documents) that are stored as BLOBs in a database column indexed with Solr.
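One last piece, in case it is not already in place: the DIH handler itself has to be registered in solrconfig.xml (the /dataimport path and core name below are just the conventional ones):
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
        <str name="config">data-config.xml</str>
    </lst>
</requestHandler>
Kicking off the import is then a call to http://localhost:8983/solr/yourcore/dataimport?command=full-import&commit=true.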
The integration of Tika and DIH is already provided with Solr via the TikaEntityProcessor:
Integration - SOLR-1358
Blob handling - SOLR-1737
You just need to find the right combination.
