Know indexing time for a document in Solr

Is it possible to know the indexing time of a document in Solr? Like there is an implicit field for "score" which automatically gets added to a document, is there a field that stores the indexing time?
I need it to know the date when a document got indexed.
Thanks

Solr does not automatically add a creation date to documents. You could certainly index one with the document though, using Solr's DateField. In earlier versions of Solr (< 4.2), there was a commented-out timestamp field in the example schema.xml, which looked like:
<field name="timestamp" type="date" indexed="true" stored="true" default="NOW" multiValued="false"/>
Also, I think it bears noting that there is no implicit "score" field. Scores are calculated at query time, rather than being tied to the document. Different queries will generate different scores for the same document. There are norms stored with the document that are factored into scores, but they aren't really fields.
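For illustration, assuming the timestamp field above has been added to the schema and the documents reindexed, a query along these lines (the core name here is just a placeholder) would return the indexing date with each document and sort the newest first:
http://localhost:8983/solr/mycore/select?q=*:*&fl=id,timestamp&sort=timestamp desc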

femtoRgon gives you a correct solution, but you must be careful with partial document updates.
If you do not do partial document updates you can stop reading now ;-)
If you partially update your document, Solr will merge the existing values with your partial document, and the timestamp will not be updated. The solution is to not store the timestamp; then Solr will not be able to merge that value back in and the default will be applied again. The drawback is that you cannot retrieve the timestamp with your search results.
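For context, a partial (atomic) update of the kind being described might look like the request below; the core name, document id and title field are only placeholders:
curl http://localhost:8983/solr/mycore/update?commit=true -H 'Content-Type: application/json' -d '[{"id":"doc1","title":{"set":"a new title"}}]'
With a stored timestamp, Solr rebuilds the document from its stored fields during such an update and the old timestamp value is carried over; with the timestamp not stored, the field is simply repopulated from its default="NOW".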

Related

Indexing PDF files with Solr 6.6 while allowing highlighting matched text with context

I am new to Solr and I need to implement a full-text search of some PDF files. The indexing part works out of the box by using bin/post. I can see search results in the admin UI given some queries, though without the matched texts and the context.
Now I am reading this post for the highlighting part. It is for an older version of Solr, when the managed schema was not available. Before fully understanding what it is doing, I have some questions:
He defined two fields:
<field name="content" type="text_general" indexed="false" stored="true" multiValued="false"/>
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
But why are two fields needed? Can I define a field
<field name="content" type="text_general" indexed="true" stored="true" multiValued="true"/>
to capture the full text?
How are the fields filled? I don't see relevant information in TikaEntityProcessor's documentation. The current text extractor should already be Tika (I can see
"x_parsed_by":
["org.apache.tika.parser.DefaultParser","org.apache.tika.parser.pdf.PDFParser"]
in the returned JSON of some queries). But even if I define the fields as he said, I cannot see them as keys in the JSON search results.
The _text_ field seems to be a concatenation of other fields; does it contain the full text? It does not seem to be accessible by default, though.
To be brief, using The Elements of Statistical Learning as an example, how do I highlight the relevant text for the query "SVM"? And if I change the file name to "The Elements of Statistical Learning - Trevor Hastie.pdf" and post it, how do I highlight "Trevor Hastie" for the query "id:Trevor Hastie"?
Before I get started on the questions, let me briefly describe how Solr works. Solr at its core uses Lucene, which, simply put, is a matching engine. It builds an inverted index of the documents and their terms: for each term it keeps a list of the documents containing it, which is what makes it so fast. Getting to your questions:
Solr does not convert your PDF to text itself; it is the update processor configured in the handler which does that. Again, this can be configured in solrconfig.xml, or you can write your own handler, as described here.
Coming back to why there are two fields: simply put, the first one (content) is a stored field which stores the data as is, and the second one (text) is the target of a copyField which copies the data for each document as per the configuration in schema.xml.
We do this because we can then choose the indexing strategy: for example, we add a lowercase filter factory to the text field so that everything is indexed in lower case, and then "Sam" and "sam" return the same results when searched. Or we remove certain commonly occurring words such as "a" and "the", which would otherwise unnecessarily increase your index size. The index uses a lot of memory when you are dealing with millions of records, so you want to be careful about which fields you index in order to better utilise the resources.
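As a rough sketch of that kind of analysis chain (the type name and stopword file are only examples), a field type with a stop filter and a lowercase filter could be defined like this:
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>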
The field "text" is a copyfield which copies data from certain fields as mentioned in the schema to text field. Then when searching in general one does not need to fire multiple queries for each field. As everything thing is copied into "text" field and you get the result. This is the reason it's "multivaled". As it can stores an array of data. Content is a stored field and text is not,and opposite for indexed because when you return your result to the end user you show him what ever you saved not the stripped down data that you just did with the text field applying multiple filters(such as removing stop words and applying case filters,stemming etc).
This is the reason you do not see the "text" field in the search results; it is used internally by Solr.
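The copy itself is declared in schema.xml with a copyField rule; for the content and text fields above it would be something like:
<copyField source="content" dest="text"/>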
For highlighting see this.
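As a sketch of such a request (the core name is a placeholder, and this assumes the stored content field described above), highlighting matches for "SVM" could look like:
http://localhost:8983/solr/mycore/select?q=text:SVM&hl=true&hl.fl=content&hl.snippets=3
The hl.fl parameter names the stored field to pull snippets from, and hl.snippets controls how many fragments are returned per document.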
For more, the blogs by yonik and joel are great.
Hope this helps. :)

How to query a specific document by id

From a previous query I already have the document ID (the uniqueKey in this schema is 'track_id') of the document I'm interested in.
Then I would like to query a sequence of words on that document while highlighting the match.
I can't seem to combine the search parameters successfully (all my Google searches return purple links :\ ), although I've already tried many combinations over the past few days. I also know the field where the matches will be, if that's any use for improving match speed.
I'm guessing it should be something like this:
/select?q=track_id:{key_i_already_have} AND/&/{part_I_dont_know} word1 word2 word3
Currently, since I can't combine these two search parameters, I'm only querying the words and thus getting several results from several documents.
Thanks in advance.
From Solr 4 you can use realtime get, which is much faster than searching the index by id.
http://localhost:8983/solr/get?ids=id1,id2,id3
For index updates to be visible (searchable), some kind of commit must reopen a searcher to a new point-in-time view of the index. The realtime get feature allows retrieval (by unique-key) of the latest version of any documents without the associated cost of reopening a searcher. This is primarily useful when using Solr as a NoSQL data store and not just a search index.
You may try applying a filter query on the id. It will filter your search down to that document, then search within it for all the keywords and highlight them.
Your query will look like:
/select?fq=track_id:DOC_ID&q=word1 word2 word3
Just make sure your "id" field in schema.xml is defined as type string to apply filter queries on it.
<field name="id" type="string" indexed="true" stored="true" required="true" />

Solr index vs stored

I am a little confused as to what the behaviour of the indexed and stored attributes of Solr fields is.
For example if I have the following in the Schema.xml
<field name="test1" type="text" indexed="false"
stored="false" required="false" />
Will the field test1 not be stored in the Solr document even if I create a document with that field, set a value on it, and commit the document to Solr? Since I have the stored="false" attribute, does that mean the value of the field is lost in Solr and not persisted?
That is correct. Typically you will want your field to be either indexed or stored or both. If you set both to false, that field will not be available in your Solr docs (either for searching or for displaying). See Alexandre's answer for the special cases when you will want to set both to false.
As stated here: indexed="true" makes a field searchable (and sortable and facetable). For example, if you have a field named test1 with indexed="true", then you can search it like q=test1:foo, where foo is the value you are searching for. If indexed="false" for field test1, then that query will return no results, even if you have a document in Solr with test1's value being foo.
stored="true" means you can retrieve the field's value when you search. If you want to explicitly retrieve the value of a field in your query, you use the fl parameter, like fl=test1 (the default is fl=*, meaning retrieve all stored fields). The value will be returned only if stored="true" for test1; otherwise it will not be returned.
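As a concrete example (the core name is a placeholder), the following request searches on test1 and asks for its value back, which only works if test1 is both indexed and stored:
http://localhost:8983/solr/mycore/select?q=test1:foo&fl=id,test1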
The main point of having both set to false is to explicitly skip that particular field.
For example, if you have a storing/indexing dynamicField mapping and you want to ignore one particular name that would otherwise fall under dynamicField's pattern.
Alternatively you could use a dynamicField to ignore a whole set of fields with the same prefix/suffix that come from a 3rd party. For example, Tika will send you a whole bunch of metadata fields which you may just want to ignore. See this defined in Solr's example schema.xml and used in solrconfig.xml.
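The pattern referred to there looks roughly like this in the example schema.xml (an "ignored" field type plus a matching dynamicField):
<fieldType name="ignored" class="solr.StrField" indexed="false" stored="false" multiValued="true"/>
<dynamicField name="ignored_*" type="ignored"/>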
In the later versions of Solr, you could also use IgnoreFieldUpdateProcessorFactory (see full list for others) instead, which will get rid of those fields even earlier in the indexing process.
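As a sketch of what that can look like in solrconfig.xml (the chain name and the field being dropped are placeholders):
<updateRequestProcessorChain name="ignore-unwanted-fields">
  <processor class="solr.IgnoreFieldUpdateProcessorFactory">
    <str name="fieldName">x_parsed_by</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>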
Quoting from this response on the Solr mailing list:
"indexed" and "stored" are independent, orthogonal attributes - you can use
any of the four combinations of true and false. "indexed" is used for search
or query, the "lookup" portion of processing a query request. Once the
search/query/lookup is complete and a set of documents is selected, "stored"
is the set of fields whose values are available for display or return with
the Solr response.
Part of the reason for the separation is that Solr/Lucene "analyzes" or
transforms the input data into a more efficient form for faster and more
relevant search/lookup. Unfortunately, that analyzed/transformed data is
frequently no longer suitable for display and human consumption. In other
words the analysis/transformation is not bidirectional/reversible. Setting
"stored=true" guarantees that the original data can be retrieved in its
original form.
If both are false you lose the data in that field. If only indexed is true, the data is searchable but cannot be returned for display. If only stored is true, you will not be able to search on that field, but its value can be returned (in that case you can write a copyField rule to copy the content of that field into the default searchable field). With both set to true you can both search and display.
indexed = true means that this field can be used in the search.
For example, if I define the item field as follows and try to use the field in a search,
<field name="item" type="text_general" uninvertible="true" indexed="false" stored="true"/>
fq=item:"Tennis" will not find any matches.
stored = true means that this field can be retrieved in the list of fields displayed after a query.
For example, if the item field is defined as follows
<field name="item" type="text_general" uninvertible="true" indexed="true" stored="false"/>
You will be able to search fq=item:"Tennis" correctly, but the item field will not be returned in the results.
Regards

How to view non-stored fields per document?

I have a field like this:
<field name="status" type="string" indexed="true" stored="false" required="false" />
Using the LukeRequestHandler I can only view statistics of the indexed terms; I can view indexed terms per document only if stored="true". TermsComponent can show only frequencies of terms; I cannot view terms per document.
Is it possible to look inside the inverted index without setting stored="true" and reindexing Solr?
In order to view the indexed terms for a single document, you need to use the full Luke application, not the LukeRequestHandler. You would need to copy the index folder from your Solr data directory to another location, then open it in Luke.
There is, however, a workaround within Solr itself: do a search that returns just the one document and facet on the field you want to examine. Every term in the index for that field on that document will be an entry in the facet output. Here is a full sample URL for this kind of search:
http://localhost:8983/solr/core/select?q=id:1234&facet.field=status&facet.limit=-1&facet.mincount=1&facet=true&facet.method=enum
If you decide to go the Luke route, you can step through your index (or search for an individual document) and view just one document.
The official Luke page is here, but it only supports up through 4.0-ALPHA:
http://code.google.com/p/luke/
You can find Luke for versions beyond 4.0-ALPHA here:
https://java.net/projects/opengrok/downloads
There is an effort underway to absorb Luke into the Lucene/Solr source code as a module, so it will always be current and released at the same time as each Lucene/Solr version.

How to enforce an exact match to get the highest priority?

I am indexing and searching 5 fields, which are tokenized/filtered in various ways.
BUT, when I search, I would like that if the query I entered matches a value in field 1, that document will be the top result I get back.
How would I define:
The field
The query, in such a way that this field gets priority IF there is a 100% match
In my schema, I have the field
<field name="na_title" type="text_names" indexed="true" stored="false" required="true" />
text_names is: <fieldType name="text_names" class="solr.StrField" />
I have ONLY one entry with na_title="somthing is going on here". But when I search
text_names:somthing is going on here, I get many results.
Just to point it out, there are no analyzers or filters on that field, for either query or index time.
From the manual:
Lucene allows influencing search results by "boosting" in more than one level:
Document level boosting - while indexing - by calling document.setBoost() before a document is added to the index.
Document's Field level boosting - while indexing - by calling field.setBoost() before adding a field to the document (and before adding the document to the index).
Query level boosting - during search, by setting a boost on a query clause, calling Query.setBoost().
You'll need to index the field twice -- once analyzed and once not. Then you can boost matches in the non-analyzed field over the others.
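A minimal sketch of that approach, with made-up field and type names: keep an analyzed copy of the title alongside a string copy, wire them together with a copyField, and boost the exact field at query time, for example with edismax:
<field name="na_title" type="text_general" indexed="true" stored="false"/>
<field name="na_title_exact" type="string" indexed="true" stored="false"/>
<copyField source="na_title" dest="na_title_exact"/>
/select?defType=edismax&q="somthing is going on here"&qf=na_title na_title_exact^10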
A shortcut could be to index all those fields as strings and use copyField to copy them as text into a catch-all field. That would simplify the query a little and decrease the number of duplicate fields.
