I'm using atomic update with Solrj. It works perfectly, but I don’t know how to delete a field within an existing document.
In the Solr wiki (http://wiki.apache.org/solr/UpdateXmlMessages) they explain how to do it with XML:
<add>
<doc>
<field name="employeeId">05991</field>
<field name="skills" update="set" null="true" />
</doc>
</add>
Does anyone know how to do it from SolrJ?
Thanks!
OK, so apparently this is the way to do it:
SolrInputDocument inputDoc = new SolrInputDocument();
inputDoc.addField("employeeId", "05991"); // unique key of the document being updated
Map<String, String> partialUpdateNull = new HashMap<String, String>();
partialUpdateNull.put("set", null); // "set" to null removes the field
inputDoc.setField("FIELD_YOU_WANT_TO_DELETE", partialUpdateNull);
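For completeness, here is a self-contained sketch of the full round trip. The core URL is a placeholder, and HttpSolrClient.Builder assumes SolrJ 6+; Collections.singletonMap is just a shorter way to build the same one-entry map as above.
import java.util.Collections;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class RemoveFieldExample {
    public static void main(String[] args) throws Exception {
        // Placeholder core URL - adjust to your setup
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("employeeId", "05991"); // unique key of the document to update
        // "set" to null is the SolrJ equivalent of update="set" null="true"
        doc.setField("skills", Collections.singletonMap("set", null));
        client.add(doc);
        client.commit();
        client.close();
    }
}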
Thanks anyways!
This question is similar to Solr doesn't overwrite - duplicated uniqueKey entries, but I am in a situation where I have a large body of existing documents that have already been added to the collection with no child documents, and I am using standalone (not SolrCloud) Solr 6.4 rather than 5.3.1. We recently enabled child documents so that we could store richer data.
We use SolrJ to load data into and query Solr, but to isolate the issue we're seeing, I used the command line Solr post tool to upload the following document:
<add>
<doc>
<field name="id">1</field>
<field name="solr_record_type">1</field>
<field name="title">Fabulous Book</field>
<field name="author">Angelo Author</field>
</doc>
</add>
Search results were as expected:
Using q=id:1 and
fl=id,title,index_date,[child parentFilter="solr_record_type:1"]
"response":{"numFound":1,"start":0,"docs":[
{
"id":"1",
"title":"Fabulous Book",
"index_date":"2019-01-16T23:06:57.221Z"}]
}
Then I updated the document by posting the following:
<add>
<doc>
<field name="id">1</field>
<field name="solr_record_type">1</field>
<field name="title">Fabulous Book</field>
<field name="author">Angelo Author</field>
<doc>
<field name="id">1-1</field>
<field name="solr_record_type">2</field>
<field name="contributor_name">Polly Math</field>
<field name="contributor_type">3</field>
</doc>
</doc>
</add>
Then, repeating my search on the unique id field, I got the following duplicate result, which is undesirable.
"response":{"numFound":2,"start":0,"docs":[
{
"id":"1",
"title":"Fabulous Book",
"index_date":"2019-01-16T23:06:57.221Z",
"_childDocuments_":[
{
"id":"1-1",
"solr_record_type":2,
"contributor_name":"Polly Math",
"contributor_type":3,
"index_date":"2019-01-16T23:09:29.142Z"}]},
{
"id":"1",
"title":"Fabulous Book",
"index_date":"2019-01-16T23:09:29.142Z",
"_childDocuments_":[
{
"id":"1-1",
"solr_record_type":2,
"contributor_name":"Polly Math",
"contributor_type":3,
"index_date":"2019-01-16T23:09:29.142Z"}]}]
}
Going the other way, if I start with a document that was loaded initially with a child document, like the following:
<add>
<doc>
<field name="id">2</field>
<field name="solr_record_type">1</field>
<field name="title">Wonderful Book</field>
<field name="author">Andy Author</field>
<doc>
<field name="id">2-1</field>
<field name="solr_record_type">2</field>
<field name="contributor_name">Polly Math</field>
<field name="contributor_type">3</field>
</doc>
</doc>
</add>
And then I update it with a document with no children:
<add>
<doc>
<field name="id">2</field>
<field name="solr_record_type">1</field>
<field name="title">Wonderful Book</field>
<field name="author">Andy Author</field>
</doc>
</add>
The result still has the child:
"response":{"numFound":1,"start":0,"docs":[
{
"id":"2",
"title":"Wonderful Book",
"index_date":"2019-01-16T23:09:39.389Z",
"_childDocuments_":[
{
"id":"2-1",
"title_id":2,
"title_instance_id":2,
"solr_record_type":2,
"contributor_name":"Polly Math",
"contributor_type":3,
"index_date":"2019-01-16T23:07:04.861Z"}]}]
}
This is strange, because if I replace a document that has 2 child documents with a version that has only 1 child document, it does drop one of the children; in this case, though, the child document is not dropped.
Updates of childless documents that don't add children, and updates of documents with children that keep at least one child, both behave as I'd expect.
I have a large body of existing documents without children, to which I may be adding children, and eventually I may have many documents with children that might drop them. Given that, what is the best way to update these records without generating duplicate records or losing updates?
I would strongly advise avoiding Solr parent/child relationships. We decided to use them in Solr 5.3.1, and it turns out that although much of the functionality is there, a number of nasty bugs, present in Solr since 4.x, remain unfixed, including:
SOLR-6096: Support Update and Delete on nested documents
SOLR-5211: updating parent as childless makes old children orphans (UPDATE: fixed in 8.0)
SOLR-6596: Atomic update and adding child doc not working together
SOLR-5772: duplicate documents between solr "block join" documents and "normal" document
SOLR-10030: SolrClient.getById() method in Solrj doesn't retrieve child documents
For those reasons, if at all possible, I strongly recommend AVOIDING child documents. Even if those issues don't hit you now, they will at some point, and given that they have not been fixed in 3 to 4 major versions, it's clear there is no real support in the product for child documents. Sorry to be the bearer of bad news, but hopefully someone can learn from our experience.
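One workaround sometimes suggested for the duplicate-parent problem above (a sketch under my assumptions, not an officially supported pattern; the core URL is a placeholder): delete the old childless parent by its unique key first, then re-add the parent and its children as a single block, so the superseded version cannot linger next to the new one.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ReindexWithChildren {
    public static void main(String[] args) throws Exception {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
        client.deleteById("1"); // drop the existing childless version first

        // Re-add the parent together with its child as one block
        SolrInputDocument parent = new SolrInputDocument();
        parent.addField("id", "1");
        parent.addField("solr_record_type", "1");
        parent.addField("title", "Fabulous Book");
        parent.addField("author", "Angelo Author");

        SolrInputDocument child = new SolrInputDocument();
        child.addField("id", "1-1");
        child.addField("solr_record_type", "2");
        child.addField("contributor_name", "Polly Math");
        child.addField("contributor_type", "3");

        parent.addChildDocument(child);
        client.add(parent);
        client.commit();
        client.close();
    }
}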
Suppose I have a Solr index with the current structure:
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="field_1" type="string" indexed="true" stored="true"/>
<field name="field_2" type="string" indexed="true" stored="true"/>
which already has some data. I want to replace the data in field "field_1", but the data in field "field_2" has to stay untouched.
For a while I have been using curl with a JSON file for this task. An example of the JSON file is:
[
  {"id":1,"field_1":{"set":"some value"}}
]
The data in this file replaces the value only in field "field_1".
Now I have to do the same with the SolrJ library.
Here are some code snippets to explain my attempt:
SolrInputDocument doc = new SolrInputDocument();
doc.addField("field_1", "some value");
List<SolrInputDocument> documents = new ArrayList<>();
documents.add(doc);
ConcurrentUpdateSolrClient server = new ConcurrentUpdateSolrClient(solrServerUrl, solrQueueSize, solrThreadCount);
UpdateResponse resp = server.add(documents, solrCommitTimeOut);
When I run this code, the value of "field_1" becomes "some value", but the value of "field_2" becomes null.
How can I avoid replacing the value in field "field_2"?
Because you are doing a full update, you are overwriting the entire previous document with a new one, which does not have "field_2".
You need to do a partial (atomic) update as explained here (scroll down to the SolrJ comment):
https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents
SolrJ code for Atomic Update
String solrBaseurl = "http://hostname:port/solr";
String collection = "mydocs";
SolrClient client = new HttpSolrClient(solrBaseurl);
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "test"); // unique key of the existing document
Map<String, String> cmd1 = new HashMap<>();
Map<String, String> cmd2 = new HashMap<>();
cmd1.put("set", "newvalue");        // "set" replaces the field value
cmd2.put("add", "additionalvalue"); // "add" appends to a multi-valued field
doc.addField("field1", cmd1);
doc.addField("field2", cmd2);
client.add(collection, doc);
client.commit(collection); // make the change visible
I want to update a particular field of a document in Solr, but after the update the *_coordinate field was converted from tdouble to an array. How can I fix this? I am using Apache Solr 6.2.1.
This is my dynamicField in the schema file:
<!-- Type used to index the lat and lon components for the "location" FieldType -->
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="true" useDocValuesAsStored="false" />
This is the code that I used to update a field:
String solrID = (String) currentDoc.getFieldValue("id");
SolrInputDocument solrDocToIndex = new SolrInputDocument();
solrDocToIndex.addField("id", solrID);
Map<String, String> partialUpdate = new HashMap<>();
partialUpdate.put("add", "Solr Demo");
solrDocToIndex.addField("tags", partialUpdate);
I have a field geoloc_0_coordinate. Before the update it had the value 12.123456, but after running the update code it changed to [12.123456,12.123456].
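What is likely happening here (my reading of how atomic updates work, not something confirmed in this post): *_coordinate subfields are populated automatically from the parent location field, and because they are also stored, an atomic update reads the stored value back in and then regenerates it from the location field, leaving two values behind. The usual remedy is to keep derived (and copyField-target) fields unstored, for example:
<!-- Hypothetical fix: derived subfields should not be stored, otherwise
     atomic updates re-add their stored values on top of the regenerated ones -->
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false" useDocValuesAsStored="false" />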
I am using Solr to search my corpus of web page data. My solr-indexer creates several fields with corresponding values. However, some of these fields I want to update more often, for example the number of clicks on a page. These fields need not be indexable, and I don't need to search on their values; however, I do want to fetch them and update them often.
I am a newbie in Solr, so a more descriptive answer, perhaps with a running example or code, would help me better.
If you are on Solr 4+, yes, you can push a partial update to the Solr index.
For partial updates, all fields in your schema.xml need to be stored.
This is how your fields section should look:
<fields>
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="title" type="text_general" indexed="true" stored="true"/>
<field name="description" type="text_general" indexed="true" stored="true" />
<field name="body" type="text_general" indexed="true" stored="true"/>
<field name="clicks" type="integer" indexed="true" stored="true" />
</fields>
Now when you send a partial update to one of the fields (e.g. "clicks" in your case), Solr will, in the background, fetch the values of all the other stored fields for that document (title, description, body), delete the old document, and push the new, updated document to the index.
curl 'localhost:8080/solr/update?commit=true' -H 'Content-type:application/json' -d '[{"id":"1","clicks":{"set":100}}]'
Here is a good documentation on partial updates: http://solr.pl/en/2012/07/09/solr-4-0-partial-documents-update/
Sample Solr partial update code:
Prerequisites: the fields need to be stored.
You need to configure the update log path under the direct update handler:
<updateHandler class="solr.DirectUpdateHandler2">
<!-- Enables a transaction log, used for real-time get, durability,
     and SolrCloud replica recovery. The log can grow as big as the
     uncommitted changes to the index, so use of a hard autoCommit
     is recommended (see below).
     "dir" - the target directory for transaction logs; defaults to the
     Solr data directory. -->
<updateLog>
<str name="dir">${solr.ulog.dir:}</str>
</updateLog>
</updateHandler>
Code:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;
public class PartialUpdate {
public static void main(String args[]) throws SolrServerException,
IOException {
SolrServer server = new HttpSolrServer("http://localhost:8080/solr");
SolrInputDocument doc = new SolrInputDocument();
Map<String, String> partialUpdate = new HashMap<String, String>();
// set - to set a field.
// add - to add to a multi-valued field.
// inc - to increment a field.
partialUpdate.put("set", "peter"); // value that need to be set
doc.addField("id", "122344545"); // unique id
doc.addField("fname", partialUpdate); // value of field fname corresponding to id 122344545 will be set to 'peter'
server.add(doc);
server.commit(); // persist the partial update and make it visible
}
}
I don't know Java, I don't know XML, and I don't know Lucene. Now that that's out of the way: I have been working to create a little project using Apache Solr/Lucene. My problem is that I am unable to index the XML files. I think I understand how it's supposed to work, but I could be wrong. I am not sure what information is required for you to help me, so I will just post the code.
<dataConfig>
<dataSource type="FileDataSource" encoding="UTF-8" />
<document>
<!-- This first entity block will read all xml files in baseDir and feed it into the second entity block for handling. -->
<entity name="AMMFdir" rootEntity="false" dataSource="null"
processor="FileListEntityProcessor"
fileName="^*\.xml$" recursive="true"
baseDir="C:\Documents and Settings\saperez\Desktop\Tomcat\apache-tomcat-7.0.23\webapps\solr\data\AMMF_New"
>
<entity
processor="XPathEntityProcessor"
name="AMMF"
pk="AcquirerBID"
datasource="AMMFdir"
url="${AMMFdir.fileAbsolutePath}"
forEach="/AMMF/Merchants/Merchant/"
transformer="DateFormatTransformer, RegexTransformer"
>
<field column="AcquirerBID" xpath="/AMMF/Merchants/Merchant/AcquirerBID" />
<field column="AcquirerName" xpath="/AMMF/Merchants/Merchant/AcquirerName" />
<field column="AcquirerMerchantID" xpath="/AMMF/Merchants/Merchant/AcquirerMerchantID" />
</entity>
</entity>
</document>
</dataConfig>
Example xml file
<?xml version="1.0" encoding="utf-8"?>
<AMMF xmlns="http://tempuri.org/XMLSchema.xsd" Version="11.2" CreateDate="2011-11-07T17:05:14" ProcessorBINCIB="422443" ProcessorName="WorldPay" FileSequence="18">
<Merchants Count="153">
<Merchant ChangeIndicator="A" LocationCountry="840">
<AcquirerBID>10029881</AcquirerBID>
<AcquirerName>WorldPay</AcquirerName>
<AcquirerMerchantID>*</AcquirerMerchantID>
<Merchant ChangeIndicator="A" LocationCountry="840">
<AcquirerBID>10029882</AcquirerBID>
<AcquirerName>WorldPay2</AcquirerName>
<AcquirerMerchantID>Hello World!</AcquirerMerchantID>
</Merchant>
</Merchants>
I have this in the schema:
<field name="AcquirerBID" type="string" indexed="true" stored="true" required="true" />
<field name="AcquirerName" type="string" indexed="true" stored="true" />
<field name="AcquirerMerchantID" type="string" indexed="true" stored="true"/>
I have this in the config:
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler" default="true" >
<lst name="defaults">
<str name="config">AMMFconfig.xml</str>
</lst>
</requestHandler>
The sample XML is not well formed. This might explain the errors indexing the files:
$ xmllint sample.xml
sample.xml:13: parser error : expected '>'
</Merchants>
^
sample.xml:14: parser error : Premature end of data in tag Merchants line 3
sample.xml:14: parser error : Premature end of data in tag AMMF line 2
Corrected XML
Here's what I think your sample data should look like (I didn't check the XSD file):
<?xml version="1.0" encoding="utf-8"?>
<AMMF xmlns="http://tempuri.org/XMLSchema.xsd" Version="11.2" CreateDate="2011-11-07T17:05:14" ProcessorBINCIB="422443" ProcessorName="WorldPay" FileSequence="18">
<Merchants Count="153">
<Merchant ChangeIndicator="A" LocationCountry="840">
<AcquirerBID>10029881</AcquirerBID>
<AcquirerName>WorldPay</AcquirerName>
<AcquirerMerchantID>*</AcquirerMerchantID>
</Merchant>
<Merchant ChangeIndicator="A" LocationCountry="840">
<AcquirerBID>10029882</AcquirerBID>
<AcquirerName>WorldPay2</AcquirerName>
<AcquirerMerchantID>Hello World!</AcquirerMerchantID>
</Merchant>
</Merchants>
</AMMF>
Alternative solution
I know you said you're not a programmer, but this task is significantly simpler if you use the SolrJ interface.
The following is a Groovy example which indexes your example XML:
//
// Dependencies
// ============
@Grab(group='org.apache.solr', module='solr-solrj', version='3.5.0')
import org.apache.solr.client.solrj.SolrServer
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer
import org.apache.solr.common.SolrInputDocument
//
// Main
// =====
SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/");
def i = 1
new File(".").eachFileMatch(~/.*\.xml/) {
it.withReader { reader ->
def ammf = new XmlSlurper().parse(reader)
ammf.Merchants.Merchant.each { merchant ->
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", i++)
doc.addField("bid_s", merchant.AcquirerBID)
doc.addField("name_s", merchant.AcquirerName)
doc.addField("merchantId_s", merchant.AcquirerMerchantID)
server.add(doc)
}
}
}
server.commit()
Groovy is a scripting language for the JVM that does not require a separate compilation step. It would be just as easy to maintain as a DIH config file.
To figure out how DIH XML import works, I suggest you first carefully read this chapter in DIH wiki: http://wiki.apache.org/solr/DataImportHandler#HttpDataSource_Example.
Open the Slashdot link http://rss.slashdot.org/Slashdot/slashdot in your browser, then right click on the page and select View source. There's the XML file used in this example.
Compare it with XPathEntityProcessor configuration in DIH example and you'll see how easy it is to import any XML file in Solr.
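Putting that together for your files, here is a sketch of what a corrected DIH config could look like. The fileName regex (the original "^*\.xml$" is not a valid pattern) and dropping the inner entity's misspelled datasource attribute (so it defaults to the FileDataSource) are my assumptions about what was intended, so double-check them:
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8" />
  <document>
    <!-- Outer entity: lists every .xml file under baseDir -->
    <entity name="AMMFdir" rootEntity="false" dataSource="null"
            processor="FileListEntityProcessor"
            fileName=".*\.xml$" recursive="true"
            baseDir="C:\Documents and Settings\saperez\Desktop\Tomcat\apache-tomcat-7.0.23\webapps\solr\data\AMMF_New">
      <!-- Inner entity: emits one Solr document per Merchant element -->
      <entity name="AMMF"
              processor="XPathEntityProcessor"
              url="${AMMFdir.fileAbsolutePath}"
              forEach="/AMMF/Merchants/Merchant">
        <field column="AcquirerBID" xpath="/AMMF/Merchants/Merchant/AcquirerBID" />
        <field column="AcquirerName" xpath="/AMMF/Merchants/Merchant/AcquirerName" />
        <field column="AcquirerMerchantID" xpath="/AMMF/Merchants/Merchant/AcquirerMerchantID" />
      </entity>
    </entity>
  </document>
</dataConfig>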
If you need more help just ask...
Often the best thing to do is NOT use the DIH. How hard would it be to just post this data using the API and a custom script in a language you DO know?
The benefit of this approach is two-fold:
You learn more about your system, and know it better.
You don't spend time trying to understand the DIH.
The downside is that you're re-inventing the wheel a bit, but the DIH is quite a thing to understand.