Metadata field duplication in Solr

I am trying to index millions of strings that are associated with metadata objects.
Each metadata object can have many thousands of strings.
I need to be able to search both the string content and the associated object metadata.
Currently this means I am indexing copies of the relevant metadata fields with each string, which leads to ridiculous amounts of duplication and incredibly large index sizes.
In a relational database model I could store one copy of the metadata and join the tables to filter and search by the combined fields, but I can’t see any way of eliminating this duplication in Solr.
Is there something obvious I am missing, or is Solr just the wrong tool for the job?

Solr has support for join, which behaves more like a subquery than a relational join, but it might do what you want. You can have Solr return the metadata objects that have one or more strings matching your query, and with a separate non-join query you can also find out which strings matched. (Note: This SO question explains why you cannot get both the metadata objects and the matched strings in one query yet.) If your metadata objects and strings have a 1-to-N relationship, you should also look into block join, which is designed for exactly that kind of relationship: you index the metadata objects as parent documents and the strings as child documents.
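As a rough illustration of the two query styles. This is a sketch only: the field names (meta_id on the string documents, id on the metadata documents, doc_type as the parent filter, content as the searchable text) are hypothetical and would need to match your actual schema.

Query-time join, returning metadata documents whose id appears in the meta_id field of string documents matching the subquery:

    q={!join from=meta_id to=id}content:"search terms"

Block join, with strings indexed as child documents of their metadata parent, returning the parents whose children match:

    q={!parent which="doc_type:metadata"}content:"search terms"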

Related

Apache Solr Querying by search term from multiple tables and in all columns

I am new to Apache Solr and have so far worked with a single table, importing it into Solr and querying the data.
Now I want to do the following:
Query from multiple tables: if I search for a word, it should return all occurrences of it across the tables.
Search in all fields: the query should match the word in all fields, as it does now for my single table.
Do I need to create a single document by importing data from multiple tables using joins in data-config.xml, and then query over that?
Any leads and guidance are welcome.
TIA.
Do I need to create a single document by importing data from multiple tables using joins in data-config.xml, and then query over that?
Yes. Solr uses a document model (rather than a relational model) and the general approach is to index a single document with the fields that you need for searching.
From the Apache Solr guide:
Solr’s basic unit of information is a document, which is a set of data that describes something. A recipe document would contain the ingredients, the instructions, the preparation time, the cooking time, the tools needed, and so on. A document about a person, for example, might contain the person’s name, biography, favorite color, and shoe size. A document about a book could contain the title, author, year of publication, number of pages, and so on.
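As a concrete illustration, a data-config.xml for this approach might look roughly like the following. This is only a sketch: the table and column names (book, author, and so on) are made up, and the JDBC settings depend on your database.

    <dataConfig>
      <dataSource driver="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://localhost/mydb" user="solr" password="..."/>
      <document>
        <!-- One Solr document per book row; the SQL join pulls in the
             author fields so everything is searchable on one document. -->
        <entity name="book"
                query="SELECT b.id, b.title, a.name AS author_name
                       FROM book b JOIN author a ON a.id = b.author_id">
          <field column="id" name="id"/>
          <field column="title" name="title"/>
          <field column="author_name" name="author_name"/>
        </entity>
      </document>
    </dataConfig>

Each row of the joined resultset becomes one document, so a search on author_name or title finds it without any join at query time.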

Indexing EAV model using Solr

The database I have at hand uses the EAV model to describe all the objects one can find in a house. Good or bad isn't the question; there is no choice but to keep and use this model. 6,000+ items point to 3,000+ attributes and 150,000+ attribute-values.
My task is to get this data into a Solr index for quick searching/sorting/faceting.
In Solr, using DIH, a regular SQL query is used to extract data. Each column name returned by the query is a 'field' (whether or not it is defined in the schema), and each row of the query's resultset is a 'document'.
Because the EAV model stores attributes as rows instead of columns, a simple query will not work; I need to flatten each item's rows. What should my SQL query look like in order to extract all items from the DB? Is there a special Solr/DIH configuration I should consider?
There are some similar questions on SO, but none really helped.
Any pointers are much appreciated!
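One commonly used DIH pattern for this kind of flattening is a nested entity: the outer query selects the items, and the inner query selects that item's attribute rows, which DIH folds into multivalued fields on the same document. A rough sketch, with made-up table and column names (item, attribute, attribute_value):

    <document>
      <entity name="item" query="SELECT id, name FROM item">
        <field column="id" name="id"/>
        <field column="name" name="name"/>
        <!-- One inner row per attribute; values accumulate into the
             multivalued attr_name / attr_value fields of this document. -->
        <entity name="attrs"
                query="SELECT a.name AS attr_name, av.value AS attr_value
                       FROM attribute_value av
                       JOIN attribute a ON a.id = av.attribute_id
                       WHERE av.item_id = '${item.id}'">
          <field column="attr_name" name="attr_name"/>
          <field column="attr_value" name="attr_value"/>
        </entity>
      </entity>
    </document>

The attr_name and attr_value fields must be declared multiValued in the schema. This keeps name/value pairs aligned only by position, which is usually enough for full-text search and faceting; if you need one field per attribute, a pivoting SQL query or a DIH transformer is required instead.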

Querying Solr multiple indexes with different schema in single query

We have a situation where we are keeping two indexes with different schemas.
For example: suppose we have an index for sellers, where the key is the seller id and the other attributes are seller information. Another index is for books, where the book id is the unique key and it keeps book-related information.
Is it possible to query both of these indexes in a single query and get combined results?
From what I have found, Solr can do this through distributed search, but that works only when the same kind of schema is distributed across at most 3 indexes.
I am a newbie to Solr, so please ignore this if it is a stupid question.
You need to think about what makes sense for a search query, but there are some rules.
The first requirement is that the unique keys need to have the same name and be unique across collections, or Solr cannot collate results.
If you are then hoping to get some kind of sensible ranking of your results, you need some common fields. For example, I have two collections: one of product data and one of product-related documents. I have a unique key, id, and common title and contents fields for when I want to query across the two collections. I also have an advanced search interface where I can query on specific fields like product id.
A "unification core" is a typical way of handling search across two or more cores; see this Stack Overflow answer on how to set that up:
Query multiple collections with different fields in solr
Other techniques are to use federated search with something like Carrot, or to issue two queries and show the results in different tabs of the search results.
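In SolrCloud, once the unique key and common fields line up as described above, a single request can span several collections via the collection parameter. A sketch, with hypothetical collections products and documents sharing id, title, and contents fields:

    http://localhost:8983/solr/products/select?q=title:router&collection=products,documents&fl=id,title,score

Solr fans the query out to both collections and collates a single ranked result list, which is why the shared unique key and common fields matter.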

Datomic table model

I have an application that requires a database containing a set of products where each product can have a set of tables. The end-user should be able to add new products and define new tables for a product. So each table has a set of columns that are specified by the user. The user can then fill the tables with rows of data. Each table belongs to exactly one product.
The end-user should also be able to view the tables as they were at a specific point in time (at a certain transaction).
How would I go about making a schema for this in Datomic so that querying it would be as efficient as possible?
I would go with 4 entity types: products, tables, columns, and rows.
The relationship between products and tables is best handled by a :table/product to-one ref attribute, but a :product/tables to-many component ref attribute could also work (the latter does not enforce the one-to-many relationship).
Likewise, I would use either a :column/table or :table/columns attribute. I would also have a :column/name string attribute and maybe a :column/type enumerated attribute.
The hardest part is to model rows.
One tempting solution is to just create an attribute per column. I actually think that's a bad idea: Datomic attributes are not intended for such dynamic use. In particular, schema attributes are stored in a cache on the Peer that's not meant to grow big. (I may be wrong about this, so it'd be nice if someone on the Datomic team could confirm.)
Instead, I would have a few dozen reusable :row/cell-0, :row/cell-1, :row/cell-2, etc. 'cell position' attributes that are shared across all tables. Each actual column would be mapped to a cell position at creation time by a to-one :column/position attribute.
If the rows can hold several data types, it's a bit more difficult: you'd basically have to make an attribute for each (type, position) pair.
Each row then basically consists of a :row/table attribute plus the above cell position attributes.
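A minimal sketch of that schema in Datomic's map notation, showing only one cell-position attribute; note that :column/position is modeled here as a ref pointing at the cell attribute's own entity, which is what lets the query below use its value in the attribute position:

    [{:db/ident :table/product
      :db/valueType :db.type/ref
      :db/cardinality :db.cardinality/one}
     {:db/ident :column/table
      :db/valueType :db.type/ref
      :db/cardinality :db.cardinality/one}
     {:db/ident :column/name
      :db/valueType :db.type/string
      :db/cardinality :db.cardinality/one}
     {:db/ident :column/position          ;; ref to e.g. the :row/cell-0 attribute entity
      :db/valueType :db.type/ref
      :db/cardinality :db.cardinality/one}
     {:db/ident :row/table
      :db/valueType :db.type/ref
      :db/cardinality :db.cardinality/one}
     {:db/ident :row/cell-0               ;; one such attribute per reusable cell position
      :db/valueType :db.type/string
      :db/cardinality :db.cardinality/one}]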
Here's a Datalog query that would let you read the whole table:
[:find ?row ?column-name ?val
 :in $ ?table
 :where
 [?column :column/table ?table]
 [?row :row/table ?table]
 [?row ?pos ?val]
 [?column :column/position ?pos]
 [?column :column/name ?column-name]]
Note that all of the above is only useful if you want to query the table with Datalog directly against your Datomic db. It can also be completely fine to serialize your tables and store them as blobs, especially if they're small; later, you pull out the blob and deserialize it, and then you can query it with Datalog too. And if whole tables are too coarse for this use, maybe you can do the same at the level of rows.

Solr - How to index on multiple entities?

I have two tables contacts and inventory. These two tables are not related. I want to index these two tables and search using Solr.
Is this possible?
If some part of your application needs to search for contacts, and another one needs to search in the inventory, create two separate indices. Storing wildly different data in the same index is almost never a good idea, it complicates things unnecessarily. As the Solr wiki wisely says:
The more heterogeneous (different kinds of data) you have in one field or in one index, the less useful it is.
You don't need multiple Solr instances to accommodate multiple indices; you can easily manage this with multi-core.
I found a very helpful answer to this question here, including some guidance on using "multiple indexes" vs. "multiple document types in one index". The post also links to example code on github that I found very useful.
Yes, you can do that. Simply create a Solr schema that contains all the fields necessary for both tables, plus another field that holds the table name. During indexing, set that table name field on every document, and during searching always include a filter on it.
As an alternative, you can set up multiple instances of Solr. But you should only do this if we are talking about massive amounts of data (like millions of table rows).
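A sketch of the single-schema approach; the doc_type field name and the per-table fields are illustrative:

    <field name="id"       type="string"       indexed="true" stored="true" required="true"/>
    <field name="doc_type" type="string"       indexed="true" stored="true"/>
    <field name="name"     type="text_general" indexed="true" stored="true"/> <!-- contacts -->
    <field name="item"     type="text_general" indexed="true" stored="true"/> <!-- inventory -->

Index contacts with doc_type=contacts and inventory rows with doc_type=inventory, then scope every search with a filter query, e.g.:

    /solr/core0/select?q=name:smith&fq=doc_type:contacts

The fq clause keeps the two 'tables' from bleeding into each other's results while they share one index.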
