Solr facets on unique values of fields

I have a document structure in Solr that looks something like this (irrelevant fields excluded):
<field name="review_id" type="int" indexed="true" stored="true"/>
<field name="product_id" type="int" indexed="true" stored="true"/>
<field name="product_category" type="string" indexed="true" stored="true" multiValued="true"/>
product_id here is one-to-many with respect to review_id (one product has many reviews).
I can get a faceted count of reviews in each category by doing:
/select?q=*:*&rows=0&facet=true&facet.field=product_category
I want to be able to facet on product_category, but get the number of distinct product_id values instead of the number of review_ids. Is this possible to do in Solr?

There is no one-to-many in a Solr index. It's not a relational database. The index is either about reviews or about products, and that depends on what you'll be searching for. To quote the Solr wiki about schema design:
Solr provides one table. Storing a set of database tables in an index generally requires denormalizing some of the tables. Attempts to avoid denormalizing usually fail.
So the first step is fixing the schema design. Only after that (and always keeping the fact above in mind) can you design facets and other stuff.
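For what it's worth, later Solr releases (5.1 and up) added a distinct-count aggregation to the JSON Facet API, so once the schema is denormalized into review documents you can facet on category while counting unique product_id values with something like this (shown unencoded for readability; field names taken from the question above):

```
/select?q=*:*&rows=0&json.facet={
  categories: {
    type: terms,
    field: product_category,
    facet: { distinct_products: "unique(product_id)" }
  }
}
```

On Solr 4.x, where no such aggregation exists, one workaround is to maintain a second denormalized index with one document per product and facet there.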

Related

How to decide the dynamic field type in Solr using the data type and without any suffix or prefix

I am indexing RDBMS data into Solr from my Java application. For each row of a table I create a Java bean and add it to the Solr server. (While creating a bean, which is nothing but one Solr document, I use the table's column name as the Solr field name and the corresponding value as the field's value.) But we need to support indexing data from any number of tables, where each table has different column names and data types. To handle this we are using dynamic fields in schema.xml, as below:
<dynamicField name="*" type="string" indexed="true" stored="true" multiValued="true"/>
But the problem with this configuration is that every field's type is string, whereas I want to use numeric types for numeric RDBMS data types and string for the VARCHAR data type. Please suggest how I can achieve this. I can't use a suffix or prefix in the field name while creating the Solr doc, because I want to index and retrieve the docs using field names identical to the table's column names.
Any suggestions are appreciated.
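For comparison, the conventional schema.xml approach to typed dynamic fields is suffix-based patterns, one per type, which the question above explicitly rules out but which is worth showing as the baseline (type names follow the old-style schemas used elsewhere on this page):

```xml
<dynamicField name="*_i" type="int"    indexed="true" stored="true"/>
<dynamicField name="*_l" type="long"   indexed="true" stored="true"/>
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
```

Without a suffix or prefix there is no pattern for Solr to dispatch on, so the remaining option is to register each table's columns as concrete fields (for example via the Schema API in later Solr versions) before indexing.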

Where are dynamicFields created in Solr

In my schema.xml
<dynamicField name="attributes_*" type="integer" indexed="true" stored="true" omitNorms="true"/>
<dynamicField name="itemAttributes_*" type="integer" indexed="true" stored="true" omitNorms="true"/>
After I insert a record with these dynamic fields, where are the fields created on disk?
The schema is "only" used by Solr for validation, querying, and so on: content is compared against the schema (and field types applied) when a field is queried (to get its field type and analysis chain) or when it is inserted. The schema is a Solr concept, while Lucene is what makes Solr work behind the scenes.
Since the actual storage of data is not connected to the schema, and a Lucene document is just a collection of field names and associated values, a field name does not have to exist in the schema to be stored in a Lucene document - it only needs to exist for Solr to accept it for storage into its Lucene index.
The fields are created in the index in the same way as any field explicitly defined in the schema.
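Concretely, every field, dynamic or explicit, ends up in the same per-segment Lucene files under the core's data directory; a typical listing looks roughly like this (segment names and exact extensions vary by Lucene version):

```
data/index/
  segments_2            # commit point: which segments are live
  _0.fdt  _0.fdx        # stored field values and their index
  _0.tim  _0.tip        # term dictionary and its index
  _0.doc  _0.pos        # postings lists (doc ids, positions)
  _0.nvd  _0.nvm        # norms
```

There is no per-field file: a dynamic field is interleaved into these shared structures exactly like any other field.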

Is it possible to update the uniqueKey in Solr 4?

My uniqueKey is defined as:
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<uniqueKey>id</uniqueKey>
I load several docs into Solr, each with its corresponding "id" field. What I need now is to UPDATE the "id" value. Is this possible?
When I try to do that I get this error:
Document contains multiple values for uniqueKey field
I am using Apache Solr 4.3.0
It's not directly possible. Before I get into how you can do it indirectly, I need to explain a couple of things.
The value in the uniqueKey field is how Solr handles document updating/replacing. When you send a document in for indexing, if an existing document with the same uniqueKey value already exists, Solr will delete its own copy before indexing the new one.
The atomic update functionality is slightly different. It lets an update add, change, or remove any field in the document except the uniqueKey field - because that's the way that Solr can identify the document.
What you need to do is basically index a new document with all the data from the old document, and delete the old document. If all the fields in the document are available to the indexing process, then you can just index the new document, either before or after deleting the old one. Otherwise, you can query the existing doc out of Solr, make a new one and index it, and then delete the old one.
In order to use the existing Solr document to index a new one, all fields must be stored, unless they are copyField destinations, in which case they must NOT be stored. Atomic updates (discussed above) have the same requirement. If one or more of these fields is not stored, then the search result will not contain that field and the data will be lost.
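Assuming all fields are stored, the reindex-and-delete steps described above can be sketched as two XML update messages posted to /update (the ids and field names here are made up for illustration):

```xml
<!-- 1. Index the replacement document under the new uniqueKey value,
     copying every stored field from the old document -->
<add>
  <doc>
    <field name="id">new-1</field>
    <field name="title">same stored title as before</field>
    <!-- ...every other stored field, copied verbatim... -->
  </doc>
</add>

<!-- 2. Remove the old document by its old uniqueKey value -->
<delete>
  <id>old-1</id>
</delete>
```

Follow with a commit so both changes become visible together.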

Do I need to make fields in schema.xml to use DataImportHandler

I have many different column names and types that I want to import. Do I need to change my schema.xml to have entries for each of these specific field types, or is there a way for the import handler to generate the schema.xml from the underlying SQL data?
You need to define the fields you want to import in schema.xml.
The DIH does not autogenerate fields, and it is better to define the fields explicitly if there are only a few of them.
Solr also lets you define dynamic fields, which need not be explicitly defined by name but only need to match a pattern:
<dynamicField name="*_i" type="integer" indexed="true" stored="true"/>
You can also define a catch-all field, although its behaviour cannot be controlled per field, as the same analysis is applied to everything it matches.
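As a sketch of how the two fit together: a minimal DIH data-config.xml (driver, connection details, and table/column names below are all placeholders) can map a numeric column onto a typed dynamic field such as the *_i pattern above:

```xml
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb"
              user="user" password="pass"/>
  <document>
    <entity name="item" query="select id, name, price from item">
      <field column="id"    name="id"/>
      <field column="name"  name="name"/>
      <!-- price lands in the dynamic *_i pattern, so it is indexed as an integer -->
      <field column="price" name="price_i"/>
    </entity>
  </document>
</dataConfig>
```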

Denormalize datasource input for Solr

I have a MySql database from which I need to fetch data into Solr that is normalized in MySql over several tables. For example, I have an 'articles' table that have a 'companyId' column. 'companyIds' are linked to 'companyName' in a second table 'company'. So in order to be able to find articles by company name using Solr I need to denormalize when building the Solr index.
What is the easiest way to do this? Can denormalization be done in the data source configuration or do I need to denormalize prior to creating the index?
Feeding data using SolrJ and denormalizing while doing it seems to be the easiest method I can come up with at the moment (although it seems unnecessary if Solr has those features built in).
Ah, I found what I was looking for in the documentation for the DataImportHandler. Values referenced from the current table can be resolved with nested "child entity" queries, like below.
Here the item's category name is resolved by selecting from the category table using the category_id from the parent entity/query:
<entity name="item_category" query="select category_id from item_category where item_id='${item.id}'">
  <entity name="category" query="select description from category where id = '${item_category.category_id}'">
    <field column="description" name="cat" />
  </entity>
</entity>
XML from here:
http://wiki.apache.org/solr/DataImportHandler#Full_Import_Example
