Solr - synonyms.txt vs managed synonyms

I realize there are two ways to add synonyms:
1. using synonyms.txt and SynonymFilterFactory
2. using the REST API with ManagedSynonymFilterFactory.
The question is: can both of these be used together? If so, when a new entry is added to synonyms.txt, will it be returned when fetching synonyms through the REST API, and vice versa?
We currently use synonyms.txt but want to use the REST APIs to update synonyms on the fly without having to restart Solr. At the same time, we want to retain the synonyms already added through synonyms.txt, and keep the ability to add new words via the txt file.
Also, if we add synonyms using the txt file, does Solr always require a restart before the changes are reflected, or should a core reload be enough? If the latter, it doesn't work for us for some reason.
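For reference, a rough sketch of the managed-synonyms workflow over HTTP; the core name "mycore" and resource name "english" below are illustrative:

    # list the current mappings in the managed resource
    curl "http://localhost:8983/solr/mycore/schema/analysis/synonyms/english"

    # add a mapping (PUT/POST appends to the list rather than replacing it)
    curl -X PUT -H 'Content-type:application/json' --data-binary '{"mad":["angry","upset"]}' "http://localhost:8983/solr/mycore/schema/analysis/synonyms/english"

    # reload the core so the change takes effect
    curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=mycore"

Note that the managed resource keeps its mappings in its own storage, separate from synonyms.txt, so entries added to the text file will not show up in the REST response, and vice versa.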

Related

Solr REST API - force POST to replace managed resources

https://lucene.apache.org/solr/guide/8_2/managed-resources.html
The docs say:
PUT/POST is used to add terms to an existing list instead of replacing the list entirely. This is because it is more common to add a term to an existing list than it is to replace a list altogether, so the API favors the more common approach of incrementally adding terms, especially since deleting individual terms is also supported.
Is there some way to force a POST request to replace the whole managed resource at an endpoint, instead of adding terms?
Solr version 8.2
As an answer to myself: no, there is not.
You can fork Solr and change this in the source.
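Lacking a replace operation, one workaround is to emulate replacement client-side: fetch the current mappings, delete each term individually, then PUT the new list and reload the core. A sketch, with illustrative core and resource names as before:

    # fetch the current mappings
    curl "http://localhost:8983/solr/mycore/schema/analysis/synonyms/english"

    # delete one mapping by its term
    curl -X DELETE "http://localhost:8983/solr/mycore/schema/analysis/synonyms/english/mad"

    # PUT the replacement list, then reload the core
    curl -X PUT -H 'Content-type:application/json' --data-binary '{"mad":["angry","upset"]}' "http://localhost:8983/solr/mycore/schema/analysis/synonyms/english"
    curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=mycore"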

Searching PDF files stored in database using SOLR

I have a lot of PDF files stored in a database (MSSQL) that I need to search. They are stored as BLOBs. I need a walkthrough on how to search them using Solr.
I have a DB, let's call it "fred". Inside fred is a table we'll call pdffiles. pdffiles has a column named pdfdata, of type BLOB.
The pdfs are stored in this table, with the binary data stored in the column. What steps do I take to get SOLR to extract this data and index it?
I'm guessing it involves the TikaEntityProcessor but having the pdfs stored in the database rather than just being regular files adds a level of complexity. I have previously worked with SOLR and have it running in production.
Sample dataconfig and schema files would be very useful.
What steps do I take to get SOLR to extract this data and index it?
1. Create a new file called tika-data-config.xml that holds the database configuration and the query that fetches the data (a sketch follows below).
2. Update solrconfig.xml in a text editor and register a /dataimport request handler within the config tags (also sketched below).
3. Reference the libraries related to the data import handler in solrconfig.xml.
4. Provide the JDBC driver jar for your database.
5. Make the changes in the schema.xml file by declaring your fields, adding a fieldType appropriate for your search requirements.
6. Once the setup is ready, you can ask Solr to index using http://localhost:8983/solr/collection1/dataimport?command=full-import
For more detail, please refer to the Solr documentation: Configure DIH
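A minimal sketch of the two files, assuming the table and column names from the question (database fred, table pdffiles, BLOB column pdfdata) plus an id column; the Solr field names and connection credentials are illustrative.

tika-data-config.xml:

    <dataConfig>
      <dataSource name="db"
                  driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
                  url="jdbc:sqlserver://localhost;databaseName=fred"
                  user="solr" password="secret"/>
      <!-- streams the BLOB column into the Tika entity -->
      <dataSource name="fieldReader" type="FieldStreamDataSource"/>
      <document>
        <entity name="pdf" dataSource="db"
                query="SELECT id, pdfdata FROM pdffiles">
          <field column="id" name="id"/>
          <entity name="tika" dataSource="fieldReader"
                  processor="TikaEntityProcessor"
                  dataField="pdf.pdfdata" format="text">
            <field column="text" name="content"/>
          </entity>
        </entity>
      </document>
    </dataConfig>

The solrconfig.xml additions (the lib paths depend on your install layout):

    <lib dir="${solr.install.dir:../../..}/dist/" regex="solr-dataimporthandler-.*\.jar"/>
    <lib dir="${solr.install.dir:../../..}/contrib/extraction/lib" regex=".*\.jar"/>
    <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
      <lst name="defaults">
        <str name="config">tika-data-config.xml</str>
      </lst>
    </requestHandler>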

Highlighting Solr search results with bin/post and managed schema

I've got Solr 6.6.1 installed. I run bin/post to fetch and index some documents into a new core. I'd like to add a text field and highlight on that field. I notice that in server/solr/myCore/conf that there is a file, managed-schema, with a warning that tells me not to edit the file.
What's the supported way to use bin/post AND enable highlighting on a text field?
Solr implicitly uses a ManagedIndexSchemaFactory, which is by default "mutable" and keeps schema information in a managed-schema file.
You have several choices:
Go back to <schemaFactory class="ClassicIndexSchemaFactory"/>, so you will be able to change the schema file manually.
Stay with the managed schema and use the Schema API's modification operations over HTTP to add the new field, which you will then use for highlighting.
I would recommend sticking with #2 (a sketch follows below), but it's totally up to you. The official documentation will help you choose the schema options your text fields need to get the best out of highlighting.
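A sketch of option 2, assuming a core named myCore and an illustrative field name body_text (text_general is one of the stock field types; highlighting requires the field to be stored):

    # add a stored text field through the Schema API
    curl -X POST -H 'Content-type:application/json' --data-binary '{
      "add-field": {
        "name": "body_text",
        "type": "text_general",
        "indexed": true,
        "stored": true
      }
    }' "http://localhost:8983/solr/myCore/schema"

    # query with highlighting on that field
    curl "http://localhost:8983/solr/myCore/select?q=body_text:solr&hl=true&hl.fl=body_text"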

Index content of PDFs with Solr and Tika

The problem briefly: I would like Sitecore to index the contents of PDFs using Solr's built-in functionality (supplied by Tika). I'm not sure how to configure Sitecore's indexing to use this feature in Solr (Tika). (I think I need to write a custom indexer.)
I'm working with Sitecore 7 (7.1 Update 1) and want to index content from PDFs (or other rich media types). I'd like to index this data for search purposes.
I have Solr (4.6.1) installed and working with Sitecore 7. When I index my site it saves all of the documents to the correct Solr core, and I can successfully retrieve these documents for display.
Using curl, I can send a PDF to my Solr instance and get it indexed.
curl "http://localhost:8983/solr/update/extract?literal._id=doc1&uprefix=attr_&fmap.content=attr_content&commit=true" -F "myfile=#sample.pdf"
This works, and I can read this content in my Sitecore web project and display it in views, so I know I can get access to this data. However, I would like the data to be attached to the items that I have uploaded in Sitecore.
I'd like something like this to happen when I upload a PDF to the Sitecore Media Library and publish the item, or at least when I re-index the site.
I'm currently walking through the following tutorial to learn some things about writing custom indexing (here is a link to part 1):
http://www.sitecore.net/Community/Technical-Blogs/Getting-to-Know-Sitecore/Posts/2013/04/Sitecore-7-Search-Provider-Part-1-Manually-Triggered-Indexing.aspx
Thanks for your patience.
For Sitecore, when handling media data, Lucene and Solr needed to index the content in a consistent way (so that you could switch between them if needed and still get data indexed the same way). As Tika integration is very much a Solr thing, it was decided that both should use the general Windows concept of IFilters for indexing (http://en.wikipedia.org/wiki/IFilter).
This means that as long as you have the correct IFilter for that MIME type installed on the machine that is doing the indexing, the '_content' computed field will be populated with the output.
This doesn't mean you can't use the Solr Tika integration but it is not supported by default and would be a customization.
It would be very simple to:
Disable the '_content' computed field
Set up a publish pipeline processor that looks at each item being published
Check if it is a media item
Check to see if it is a PDF
Issue a command to push the content to your Solr server for indexing by Tika.
You may want to see what results you get by using an IFilter; if they are close enough to what you want, you can go with that. If Tika produces better results for you, you should be able to switch to it, although you would probably have your media content indexed in a separate Solr core, so you would lose any Sitecore-specific metadata around the document.
Some blog posts that might be helpful:
http://www.samjgriffin.com/blog/2013/11/06/sitecore-7-pdf-and-document-content-search/
http://www.sitecore.net/Community/Technical-Blogs/John-West-Sitecore-Blog/Posts/2013/04/Sitecore-7-Indexing-Media-with-IFilters.aspx
I second the recommendation to use Sitecore's built-in MediaItemContentExtractor + IFilter approach, unless you've already ruled that out for some reason (IFilter difficulties, perhaps). If an IFilter is not an option, or if you're interested in the other approach regardless, I would integrate Tika a little differently than Stephen suggested, though.
The tutorial you referenced deals with writing your own search provider - i.e. replacing the built-in provider entirely. You should be able to leverage Sitecore's Solr provider and accomplish what you need with something lighter: a computed index field. The built-in media extractor mentioned above uses this approach, which enables you to put just about anything into your index during the normal indexing process. Here is a blog post from John West that walks through creating a basic computed index field: Sitecore 7: Computed Index Fields.
In short, write a class that implements IComputedIndexField and represents content extracted from PDFs or other rich documents. In your implementation of the ComputeFieldValue method:
Call GetMediaStream() on the document.
Pass the stream to Solr in an extract-only command and capture the result (see the sketch after this list).
Return that result to store it in the computed index field.
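For step 2, the ExtractingRequestHandler has an extract-only mode, so the call could mirror the earlier curl example but return the Tika output instead of indexing it:

    curl "http://localhost:8983/solr/update/extract?extractOnly=true" -F "myfile=@sample.pdf"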
If you need the media content to be in a particular existing index field, then configure the computed field as a copy field (see 3.3.3) into the existing field. Otherwise, configure your search to reference the computed field.
The main drawback here is the expense of passing the extracted content back and forth, rather than committing it directly to the index in a single step. Depending on your index size and contents, that may not be an issue for you.
One other potential option would be a post-rebuild task to add media content to the existing indexed documents. I am not certain this would work. It depends on knowing the IDs of the media items' documents and committing the rich document content in partial document updates, which this person was unsuccessful in attempting. If you try this, be sure to execute it in the indexing:end event prior to HTML cache clearing.
Whatever approach you take, if you want to work with Tika on a higher level than cURL, have a look at SolrNet's implementation of ExtractCommand and related classes.
If you can upgrade your site to Sitecore 7.2, media item content is indexed automatically and there is no need to install the related IFilters; in that case you should read the following:
Media Content Indexing in sitecore 7.2

Point to different data dirs in solrj?

I am using SolrJ for a distributed search. Based on certain user preferences, I will have to search through a particular set of indexes. Is there any way through which I can programmatically (duh!) specify the dataDir for that particular search query?
I did snoop through the documentation, but couldn't find any way other than keeping the datasets in different cores. Is there a better way?
Edit 1: All the indexes share the same schema format.
You can change the dataDir almost dynamically using the CREATE CoreAdmin action, and specifying a dataDir parameter.
Creates a new core based on preexisting instanceDir/solrconfig.xml/schema.xml, and registers it. If persistence is enabled (persist=true), the configuration for this new core will be saved in 'solr.xml'. If a core with the same name exists, then while the "new" core is initializing, the "old" one will continue to accept requests. Once it has finished, all new requests will go to the "new" core, and the "old" core will be unloaded.
In any case you'll need to load a new core; it isn't possible to change the data directory without doing so.
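For illustration, a CREATE call with an explicit dataDir could look like this (the core name, instanceDir, and path are placeholders):

    http://localhost:8983/solr/admin/cores?action=CREATE&name=userA-core&instanceDir=sharedInstance&dataDir=/indexes/userA/data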
You can use SolrJ to invoke the CoreAdmin action using the CoreAdminRequest and CoreAdminResponse classes.
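A minimal SolrJ sketch, assuming a SolrJ version that has HttpSolrClient.Builder; the core name, instanceDir, and dataDir values are placeholders:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.CoreAdminRequest;

    public class SwitchDataDir {
        public static void main(String[] args) throws Exception {
            try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
                // create a core over the shared instanceDir but with a per-user dataDir
                CoreAdminRequest.Create create = new CoreAdminRequest.Create();
                create.setCoreName("userA-core");         // placeholder core name
                create.setInstanceDir("sharedInstance");  // existing instanceDir with solrconfig.xml/schema.xml
                create.setDataDir("/indexes/userA/data"); // the index this core should serve
                create.process(client);
            }
        }
    }

Queries can then be pointed at whichever core matches the user's preferences.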
