Prevent Solr from creating default copy fields

When I add any field in Solr and then index some data, Solr creates a copy field for this field.
For example, I added a field named app_id, and after indexing there is data in both app_id and another field named app_id_str.
Is there any way to prevent these copy fields from being created?

I am assuming you are using a reasonably recent Solr version. You can prevent Solr from automatically creating copy fields at index time; you just have to configure the "add-schema-fields" update processor not to create copy fields on the fly. Here is how:
Open the solrconfig.xml file of the core for which you want to disable automatic copy field creation.
Comment out the configuration that creates copy fields for text fields (or for any other field type configured to generate a copy field); see the sketch after these steps.
Save and restart the Solr instance.
Index the documents.
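In recent Solr versions these copy fields come from the "add-schema-fields" update processor defined in the default configset's solrconfig.xml. A sketch of the relevant block is shown below with the copyField mapping commented out; the exact contents vary by Solr version, so treat this as an illustration rather than the definitive configuration:

<updateProcessor class="solr.AddSchemaFieldsUpdateProcessorFactory" name="add-schema-fields">
  <lst name="typeMapping">
    <str name="valueClass">java.lang.String</str>
    <str name="fieldType">text_general</str>
    <!-- commenting out this block stops the automatic *_str copy fields -->
    <!--
    <lst name="copyField">
      <str name="dest">*_str</str>
      <int name="maxChars">256</int>
    </lst>
    -->
  </lst>
  <!-- other typeMapping entries omitted -->
</updateProcessor>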

Schema.xml
In schema.xml, search for copyField definitions that use wildcards in their glob pattern.
The copyField command can use a wildcard (*) character in the dest parameter only if the source parameter contains one as well. copyField uses the matching glob from the source field for the dest field name into which the source content is copied.
You need to comment out anything that looks like this:
<copyField source="*" dest="*_str"/>
You may also have a dynamicField definition like the following that backs these copied fields (otherwise you would probably remember having explicitly defined fields such as app_id_str):
<dynamicField name="*_str" type="string"/>
Schemaless Mode
Internally, the Schema API and the Schemaless Update Processors both use the same Managed Schema functionality.
If you are using Solr in "schemaless mode", you can do the same either by using the Schema API, as shown in the example after this section:
Delete a Copy Field Rule
Delete a Dynamic Field Rule
Or by reconfiguring the dedicated update processor in solrconfig.xml as stated by Kusal.
See the paragraph titled You Can Still Be Explicit below this section.
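For example, a sketch of deleting such a copy field rule through the Schema API; the core name mycorename and the source/dest pattern are assumptions, so adjust them to whatever rule your schema actually contains:

curl -X POST -H 'Content-type:application/json' \
  --data-binary '{"delete-copy-field": {"source": "*", "dest": "*_str"}}' \
  http://localhost:8983/solr/mycorename/schema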

Related

Apache solr search for multiple fields without specifying field name

Apache Solr search for multiple fields without specifying a field name, in Solr 7.7.2. I created a copy field for all fields with dest="text", which is a field of text type, but it doesn't give any output. It only works for one field, where df=fieldName.
It has a managed schema which automatically overrides the changes after indexing. Please let me know what the issue could be.
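For reference, the kind of configuration the question describes usually looks roughly like the following; the catch-all field name text and the handler settings are assumptions based on common setups, not taken from the question:

<!-- schema: a catch-all field that every other field is copied into -->
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="*" dest="text"/>

<!-- solrconfig.xml: make the catch-all field the default search field -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="df">text</str>
  </lst>
</requestHandler>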

Document lost its contents after updating it

I posted 3 documents with post.jar and they were indexed successfully; searching for any word from those documents returned the correct document. But when I partially update a document (that is, update just one field), searching for a word afterwards no longer finds it. In other words, after the partial update the document has lost its contents. The fields I updated are ones I defined manually, in addition to the fields that post.jar creates by itself.
So what is the solution so that the document stays intact after a partial update?
Assuming by "partial update" you are talking about the Atomic Update feature, then this will apply:
In order for Atomic Update to not lose data, all fields in your schema that are not copyField destinations must have stored="true". All fields that ARE copyField destinations must have stored="false".
Further details required for proper Atomic Update operation: The information in copyField destinations must only originate from copyField sources. If some information in copyField destinations originates from the indexing source and some of it comes from copyField, then the information that originated from indexing will be lost when Atomic Update is used.
Also see the "Field Storage" section found on this page from the Solr documentation:
https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents#UpdatingPartsofDocuments-AtomicUpdates
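As an illustration of that rule, a schema sketch (the field names are hypothetical) in which Atomic Update would not lose data:

<!-- regular fields: stored, so the document can be reconstructed on update -->
<field name="app_id" type="string" indexed="true" stored="true"/>
<field name="title" type="text_general" indexed="true" stored="true"/>
<!-- copyField destination: populated only via copyField, therefore not stored -->
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="title" dest="text"/>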
I solved my problem by setting stored=false on all dynamic fields and removing the copy field for text.
Since all fields were being copied into the text field, my problem was solved after making these changes.
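For reference, an atomic (partial) update request of the kind discussed in this thread looks roughly like this; the core name, document id, and field name are hypothetical:

curl -X POST -H 'Content-type:application/json' \
  --data-binary '[{"id": "doc1", "app_id": {"set": "new-value"}}]' \
  http://localhost:8983/solr/mycorename/update?commit=true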

Does copy field require data re-upload

I have a Solr instance with a large amount of data already uploaded. I want to create a new copyField that is the concatenation of two existing fields.
Do I need to repopulate my data?
Yes. From the Solr documentation:
Fields are copied before analysis is done
By "analysis" in the copyField context they mean the index analyzer, which is executed when a document is indexed.
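A sketch of what such a pair of copyField rules might look like; the field names are hypothetical. Note that copyField does not literally concatenate strings: it copies each source value into the destination, so a destination with more than one source must be multiValued, and documents indexed before the rule was added will only have it populated once they are re-indexed.

<field name="title_and_author" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="title" dest="title_and_author"/>
<copyField source="author" dest="title_and_author"/>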

Does Solr have an API to read schema.xml?

Is there any Solr API to read the Solr schema.xml?
The reason I need it is that Solr faceting is not backwards compatible. If the index doesn't define field A, but the program tries to generate facets for field A, all the facets will fail. Therefore I need to check in the runtime what fields we have in the index, and generate the facets dynamically.
Since Solr 4.2 the Schema REST API allows you to get the schema with:
http://localhost:8983/solr/schema
or with a core name :
http://localhost:8983/solr/mycorename/schema
Since Solr 4.4 you may also modify your schema.
More details on the Solr Wiki page.
You can get the schema with http://localhost:8983/solr/admin/file/?contentType=text/xml;charset=utf-8&file=schema.xml
It's the raw XML, so you have to parse it to get the information you need.
However, if your program generates an invalid facet, maybe you should just fix the program instead of trying to work around this.
One alternative is to use the LukeRequestHandler. It is modeled after the Luke tool, which is used to diagnose the contents of a Lucene index. The query /admin/luke?show=schema will show you the schema. However, you will need to define the handler in solrconfig.xml like so:
<requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" />
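With that handler defined, the schema information can then be requested with something like this (the core name is an assumption):

curl "http://localhost:8983/solr/mycorename/admin/luke?show=schema&wt=json"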
See the LukeRequestHandler documentation for more details.
Actually you have the Schema API for that.
The Solr Schema API allows using a REST API to get information about the schema.xml.
In Solr 4.2 and 4.3, it only allows GET (read-only) access, but in Solr 4.4, new fields and copyField directives may be added to the schema. Future Solr releases will extend this functionality to allow more schema elements to be updated.
API Entry Points
/collection/schema: retrieve the entire schema
/collection/schema/fields: retrieve information about all defined fields, or create new fields with optional copyField directives
/collection/schema/fields/name: retrieve information about a named field, or create a new named field with optional copyField directives
/collection/schema/dynamicfields: retrieve information about dynamic field rules
/collection/schema/dynamicfields/name: retrieve information about a named dynamic rule
/collection/schema/fieldtypes: retrieve information about field types
/collection/schema/fieldtypes/name: retrieve information about a named field type
/collection/schema/copyfields: retrieve information about copy fields, or create new copyField directives
/collection/schema/name: retrieve the schema name
/collection/schema/version: retrieve the schema version
/collection/schema/uniquekey: retrieve the defined uniqueKey
/collection/schema/similarity: retrieve the global similarity definition
/collection/schema/solrqueryparser/defaultoperator: retrieve the default operator
Examples
Get a list of all fields:
curl http://localhost:8983/solr/collection1/schema/fields?wt=json
Get the entire schema in JSON:
curl http://localhost:8983/solr/collection1/schema?wt=json
More info here: apache-solr-ref-guide-4.5.pdf (search for Schema API)
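Tying this back to the original use case (checking at runtime which fields exist before generating facets), here is a minimal sketch in Python using the requests library; the core URL and facet field names are hypothetical, and note that fields matched only by dynamicField rules will not appear in the /schema/fields response:

import requests

SOLR_CORE = "http://localhost:8983/solr/collection1"  # hypothetical core URL

def existing_fields():
    # Fetch the explicitly defined fields from the Schema API.
    resp = requests.get(f"{SOLR_CORE}/schema/fields", params={"wt": "json"})
    resp.raise_for_status()
    return {f["name"] for f in resp.json()["fields"]}

def facet_params(wanted):
    # Keep only the facet fields that are actually defined in the schema,
    # so a missing field does not make the whole facet request fail.
    defined = existing_fields()
    params = [("facet", "true")]
    params += [("facet.field", name) for name in wanted if name in defined]
    return params

print(facet_params(["category", "brand", "nonexistent_field"]))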

Solr indexing and reindexing

I have a schema with 10 fields. One of the fields is text (the content of a file); the rest of the fields are custom metadata. The document doesn't change, but the metadata changes frequently.
Is there any way to skip the document text while re-indexing? Can I index only the custom metadata? If I skip the document text during re-indexing, does it update the index by removing the text field from the indexed document?
To my knowledge there's no way to selectively update specific fields. An update operation performs a complete replace of all document data. Since Solr is open source, it's possible that you could produce your own component for this if really desired.
