I’m using LucidWorks Solr (Lucene/Solr) as a schemaless search engine to index Outlook files.
I have been using it since version 2.0.* and have an issue with querying dates.
The older versions mapped the date fields (date_created, mail_from_date, etc.) to the date datatype, which made date range queries easy.
After upgrading to version 3.0.*, all the date fields are mapped to the text_general data type, and all my date queries are failing.
Please help! Any helpful resources are greatly appreciated, as I’m a newbie.
Thanks in advance
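Note for later visitors: schemaless mode guesses a field's type from the first value it sees, so one fix is to declare the date fields explicitly before re-indexing. A minimal sketch, assuming the field names above and a schema that already defines a tdate field type (the stock example schema does):

    <!-- in schema.xml, declared before indexing so schemaless guessing never maps them to text_general -->
    <field name="date_created" type="tdate" indexed="true" stored="true"/>
    <field name="mail_from_date" type="tdate" indexed="true" stored="true"/>

With the fields typed as dates, range queries work again, e.g. q=date_created:[2012-01-01T00:00:00Z TO NOW].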
Related
I upgraded my Hybris project from version 2005 to version 2105.
But there is a problem on the Product List Page.
On the category-based page, the "facet" data is empty.
I have done all the necessary Solr indexing, and I can see the products with their data in Solr. So my Solr indexes are actually working successfully.
Since the facet data is empty, no filters appear and there is no filtering at all.
The facet data is not coming back, but the products are, with pagination.
In the screenshot I added below, you can see that the facets are empty but the results, i.e. the products, are there.
Is there some setting that needs to be done?
I would be very grateful if you could help with this issue. Thank you very much in advance.
Solution:
The root of the problem is in 2105: the classes in search-and-navigation -> solrfacetsearch were changed in the new package (the facet data is null due to a change in FacetSearchResultFacetsPopulator).
To solve the problem, I replaced the contents of search-and-navigation -> solrfacetsearch with the files from the 2011 version, and the problem was solved.
Unfortunately, there seems to be no backward compatibility for faceting in 2105, because it now uses Solr's JSON Facet API. If you have customized the behavior of either FacetSearchQueryFacetsPopulator or FacetSearchResultFacetsPopulator, you will need to upgrade your implementation to the 2105 paradigm.
We had the same issue migrating from 1905 to 2205, and we upgraded our custom implementation to resolve it.
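For context, the JSON Facet API sends and receives facets in a different shape from the classic facet.field parameters. A rough sketch of such a request (the core and field names are made-up placeholders):

    curl "http://localhost:8983/solr/<your_core>/select" \
      --data-urlencode 'q=*:*' \
      --data-urlencode 'rows=0' \
      --data-urlencode 'json.facet={categories:{type:terms,field:category_string_mv}}'

The counts come back under a facets key instead of the classic facet_counts section, which is why a populator written against the old response format ends up with null facet data.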
Facts
Solr is installed and has indexed the user core.
I've added the field "ssotoken" to the schema.xml.
I've re-indexed the Solr core after adding the "ssotoken" field to the user core.
I can see that the query contains "ssotoken", but that search criterion is not returning anything.
Note that criteria on other fields do return a result set in XML format.
Please let me know what other details you need.
I have struggled with this for a couple of days now. Any help is greatly appreciated.
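A few things worth double-checking, sketched below with placeholder values: the field must be declared indexed="true", the core must be reloaded (or Solr restarted) after the schema change, and the re-index must happen after that reload, since documents indexed before the field existed will not contain it.

    <!-- in the user core's schema.xml; type "string" means exact-match only -->
    <field name="ssotoken" type="string" indexed="true" stored="true"/>

    # query the core directly to rule out the application layer
    curl "http://localhost:8983/solr/user/select?q=ssotoken:SOME_TOKEN_VALUE&wt=xml"

If the field type is string, the query value must match the indexed value exactly (including case); a partial token returns nothing.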
Hey, so I started researching Solr and have a couple of questions about how it works. I know the schema defines what is stored and indexed in Solr. But I'm confused as to how Solr knows that "content" is the content of the site, or that the url is the url?
My main goal is to extract phone numbers from websites, and I want Solr to nicely spit out 1234567890.
You need to define this in Solr's schema.xml by declaring all the fields and their field types. You can then query Solr on any of those fields.
Refer to this: http://wiki.apache.org/solr/SchemaXml
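For illustration, declarations look like this (the field names are just examples; Solr treats "content" and "url" as opaque names, and it is your crawler or indexing code that decides which value goes into which field):

    <field name="url" type="string" indexed="true" stored="true"/>
    <field name="content" type="text_general" indexed="true" stored="true"/>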
Solr will not automatically index content from a website; you need to tell it how to index your content. Solr only knows the content you tell it about. Extracting phone numbers sounds pretty simple, so writing an update script or finding one online should not be an issue. Good luck!
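One hedged sketch of doing the extraction inside Solr itself: a field type built around solr.PatternCaptureGroupFilterFactory can pull phone-number-shaped substrings out of the text at index time. The pattern below is a simplistic assumption for 10-digit US-style numbers, and it normalizes only the indexed tokens, not the stored value:

    <fieldType name="phone_extract" class="solr.TextField">
      <analyzer>
        <!-- keep the whole input as one token, then capture number-like substrings -->
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.PatternCaptureGroupFilterFactory"
                pattern="(\d{3}[-. ]?\d{3}[-. ]?\d{4})" preserve_original="false"/>
        <!-- strip separators so 123-456-7890 is indexed as 1234567890 -->
        <filter class="solr.PatternReplaceFilterFactory"
                pattern="[^0-9]" replacement="" replace="all"/>
      </analyzer>
    </fieldType>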
I am new to Solr and have a couple of questions for more experienced people:
I am able to get the example running; however, what exactly is start.jar?
I know that by running "java -jar start.jar" I can start Solr. But do I run this command after I index my own data rather than the given sample data? If not, what should I do to run my own Solr instance with my own indexed data?
I need to index my own sample data, not related to the given Solr example at all. How exactly should I do that? Should I copy the example directory and then modify the fields in schema.xml? Should I then run post.sh to index the data, as I did when setting up the example?
Thanks a lot for your help!
Steps:
Decide on the document structure you will store in Solr (somewhat like creating the schema for a single table in a relational DB).
Remove the example core and create your own core with that schema.
Once the schema loads with no errors (check the logs of the server that hosts the Solr app), you can start feeding your data into Solr. You POST it via HTTP in a specific structure, which is documented in the Solr wiki; various frameworks have classes to handle that. A minimal sketch of that structure follows below.
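(The field names here are whatever your own schema defines; the URL assumes the old single-core layout that ships with start.jar.)

    <!-- mydocs.xml: the XML update structure Solr accepts -->
    <add>
      <doc>
        <field name="id">1</field>
        <field name="title">My first document</field>
      </doc>
    </add>

    # POST it to the update handler and commit
    curl "http://localhost:8983/solr/update?commit=true" -H "Content-Type: text/xml" --data-binary @mydocs.xml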
Marked as Wiki as this is too broad an answer for someone who did not bother to RTFM...
Custom indexing is not a difficult task; I worked on it just a few days ago. First you need to write your document in XML, CSV, or JSON (the formats supported by Solr), containing fields according to your schema.xml, then run the following command in example/exampledocs.
For a document mydoc.xml:
./post.sh mydoc.xml
If the status value in the output is 0, then indexing was successful and you can search for your document in Solr.
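For reference, a successful response looks roughly like this (the QTime value will differ):

    <response>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">32</int>
      </lst>
    </response>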
Reference: http://www.solrtutorial.com/solr-in-5-minutes.html
Though the question is old, I am writing for new visitors with the same issue. The question can't be answered in a few words: you need to understand what Solr is, what the Solr Admin UI is, and why we need Solr instead of a relational database. Then you can understand how to import sample data. I have recently published two articles, Solr Introduction and Importing Sample Data, which might be helpful:
http://www.devtrainings.com/2017/03/apache-solr-introduction-and-server.html
http://www.devtrainings.com/2017/03/apache-solr-index-data-and-run-search.html
At the moment I am researching the best Solr configuration for the scope of my application. It involves a lot of testing, and I was wondering if I can display what Solr saves in the index, i.e. the tokenized, stemmed, lowercased, etc. versions of my documents. Is there any way Solr will provide this information?
Thank you
Jan
Have a look at Luke: http://www.getopt.org/luke/
Solr also has a Luke handler built-in: https://wiki.apache.org/solr/LukeRequestHandler
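For example, the built-in handler can dump the most frequent indexed terms per field without installing anything; a quick sketch (the field name and single-core URL are assumptions):

    curl "http://localhost:8983/solr/admin/luke?fl=content&numTerms=10"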
You can use the analysis page provided in the Solr admin interface: http://wiki.apache.org/solr/SolrAdminGUI
On the analysis page, just pick the field type or field name you want the analysis for and put in any field value. Solr will show you what each tokenizer/filter is doing and exactly how your content looks after each step. It's great for testing and debugging.
You can do the same for a query, provided you have defined query-time analyzers (tokenizers/filters) for the field in the schema; the chain it walks through is sketched below.
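What the page shows is exactly the analyzer chain from your schema, e.g. a typical text_general-style field type (trimmed for brevity; adjust to your own schema):

    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

The analysis page displays the token stream after the tokenizer and after each filter in this list, for both the index and query sides.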
Hope this helps.