I have a custom object in Salesforce called Deal, which is a child of the built-in Account object. I am trying to use the Bulk XML API to upload a batch of records, but I can't figure out how to specify this relationship correctly. The documentation says you should reference a custom object's relationships like so:
<Relationship__r>
  <sObject>
    <some_indexed_field>#####</some_indexed_field>
  </sObject>
</Relationship__r>
If you have any idea how to specify a relationship to the Account object from a custom object I'd really appreciate it.
Added
The Deal object has the following two fields:
DealID
  API Name - DealID__c
  Data Type - Text(255) (External ID) (Unique, Case-Sensitive)
Account
  API Name - Account__c
  Data Type - Master-Detail(Account)
Request XML:
<Account__r>
  <sObject>
    <ID>0013000000kcWpfAAE</ID>
  </sObject>
</Account__r>
Result XML:
<result>
  <errors>
    <message>Field name provided, Id is not an External ID or indexed field for Account</message>
    <statusCode>INVALID_FIELD</statusCode>
  </errors>
  <success>false</success>
  <created>false</created>
</result>
There also appears to be a bug whereby you have to strip out all whitespace and newlines when dealing with reference objects.
Check out:
http://success.salesforce.com/ideaview?id=08730000000ITQ7AAO
From the docs:
<RelationshipName>
  <sObject>
    <IndexedFieldName>rwilliams@salesforcesample.com</IndexedFieldName>
  </sObject>
</RelationshipName>
Everything looks good, but instead of using "ID" for the Indexed Field Name, you need to use "Account__c". That should take care of your issue.
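Putting the two fixes together, the request element would be collapsed onto a single line (per the whitespace bug noted above) and use this answer's suggested field name; an untested sketch:
<Account__r><sObject><Account__c>0013000000kcWpfAAE</Account__c></sObject></Account__r>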
I am working out the structure for a JSON database for an app like OnlyFans. Basically, someone can create a club; inside that club there is a section where the creator's posts are shown and another where the club members' posts are shown. There is, however, a filter option where both can be seen.
In order to make option 1 below work, I need to be able to filter on isFromCreator == true and at the same time order by timestamp. How can I do this?
Here are the two structures I have written down:
ClubContent
CreatorID
clubID
postID: {isFromCreator: Bool}
OR
creatorPosts
postID: {}
MemberPosts
postID: {}
Something like the below would be what I want:
ref.child("Content").child("jhTFin5npXeOv2fdwHBrTxTdWIi2").child("1622325513718")
.queryOrdered(byChild: "timestamp")
.queryLimited(toLast: 10)
.queryEqual(toValue: true, childKey: "isFromCreator")
I tried queryEqual, yet it did not return any of the values I know exist with the configuration I specified.
You can reference additional locations within Security Rules by navigating to the specific parent/child nodes and comparing the val() of the respective node.
For example:
".write": "data.parent().child('postID').child('isFromCreator').val()"
Just be aware that Security Rules do not filter or process the data in the request, only allow or deny the requested operation.
You can read more about this from the relevant documentation:
https://firebase.google.com/docs/database/security/rules-conditions#referencing_data_in_other_paths
https://firebase.google.com/docs/database/security/core-syntax#rules-not-filters
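Since rules cannot do the filtering, the query itself has to match the data layout. Under the second structure proposed in the question (a separate creatorPosts node), a single ordering is enough and no isFromCreator filter is needed; a rough sketch reusing the question's paths:
// With creator posts split into their own node, ordering by
// timestamp alone does the job -- no second filter required.
ref.child("creatorPosts")
    .child("jhTFin5npXeOv2fdwHBrTxTdWIi2")
    .queryOrdered(byChild: "timestamp")
    .queryLimited(toLast: 10)
    .observeSingleEvent(of: .value) { snapshot in
        for case let post as DataSnapshot in snapshot.children {
            print(post.key, post.value ?? "")
        }
    }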
I have a Salesforce query which returns contact information, and I need to save the data in two tables. In the first table I store some metadata about the contacts in an intermediate state. I then take the auto-generated metadata table ID from the saved metadata and apply it to every contact. Next I save the contact data into a database table, and finally I update the contacts' metadata to its final state. The problem is that there is a lot of data, so I have to include a fetch size when performing this process. What I want to achieve looks something like the flow below (note this is only a sketch of what I am trying to achieve). How can I know that fetching contacts is complete, so that I can save the final state? And how can I structure the flow for transactions?
Ideally, I would like to pass the ConsumerIterator to a Java component where I can easily control the process. Can I pass a reference of the ConsumerIterator to a Java component, and if so, how?
<flow name="saveContactsFlow">
    <sfdc:query fetchSize="100" config-ref="sfdc-connector"
        query="dsql:SELECT Id, Account.Id,
               Account.Name, Account.PersonEmail, Account.LastName FROM Contact"/>
    <enricher target="#[variable:metaInfo]">
        <flow-ref name="getContactMetadata"/>
    </enricher>
    <dw:transform-message metadata:id="d1f6ab4f-4b40-4e30-ae" doc:name="trnsfm">
        <dw:set-payload><![CDATA[%dw 1.0
            // Rest of transformer
        ]]></dw:set-payload>
    </dw:transform-message>
    <db:insert config-ref="MySQL_Configuration" doc:name="Save Metadata">
        ...
    </db:insert>
    <db:insert config-ref="MySQL_Configuration" doc:name="Save contacts">
        ...
    </db:insert>
    <db:insert config-ref="MySQL_Configuration" doc:name="Update Metadata Final State">
        ...
    </db:insert>
</flow>
Problem solved. I wrote a Java component implementing Callable and grabbed the ConsumerIterator. I was then able to use the iterator to obtain items in fetchSize batches. I then used Spring JDBC, since the data model is simple enough, to save the data transactionally.
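A minimal sketch of what that component might look like, assuming Mule 3's org.mule.api.lifecycle.Callable and the Salesforce connector's ConsumerIterator payload (the class name, step bodies, and Spring wiring are hypothetical):

import java.util.Map;

import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;
import org.mule.streaming.ConsumerIterator;
import org.springframework.jdbc.core.JdbcTemplate;

public class ContactBatchSaver implements Callable {

    private JdbcTemplate jdbcTemplate; // injected via Spring configuration

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        // The sfdc:query payload is a lazy iterator that pulls records
        // from Salesforce in fetchSize-sized chunks behind the scenes.
        @SuppressWarnings("unchecked")
        ConsumerIterator<Map<String, Object>> contacts =
                (ConsumerIterator<Map<String, Object>>) eventContext.getMessage().getPayload();

        // 1. Save the metadata row in its intermediate state and keep the generated id.

        while (contacts.hasNext()) {
            Map<String, Object> contact = contacts.next();
            // 2. Save each contact via jdbcTemplate, stamped with the metadata id.
        }

        // 3. The iterator is exhausted here, so the fetch is complete:
        //    flip the metadata row to its final state.
        return null;
    }

    public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }
}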
All:
I am new to Solr and SolrJ. What I want to do right now is upload a PDF file to Solr and set a customized field, such as the last_modified field, at the same time.
But I keep encountering errors such as "multiple values encountered for non multiValued field last_modified". I use SolrJ to upload the PDF and set the last_modified field like:
ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
up.setParam("literal.last_modified", "2011-05-19T09:00:00Z");
I guess the error occurs because when Solr extracts the PDF it also uses some metadata as the last_modified field value, so my customized last_modified value leads to a multivalue error. How can I replace the extracted metadata with my customized value?
Thanks
/update/extract is defined in solrconfig.xml for your core. You can see the configuration there and modify it to match your particular scenario. The Reference Guide lists the options.
In your particular scenario, something looks strange. The parameter that seems relevant is literalsOverride, but it is true by default. Perhaps you are setting it to false somewhere.
You can also try explicitly mapping Tika's last-modified field to a different name.
I would enable a catch-all field (dynamicField *) with stored="true" and see what is being captured. Then you can play with the parameters until you are happy. You don't have to restart Solr, just reload the core from the Admin UI.
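As a concrete sketch of those two suggestions in SolrJ (literalsOverride and fmap.* are extract-handler parameters; the remapped target field name is hypothetical):

ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
// Make sure literal.* values win over Tika-extracted metadata (true by default).
up.setParam("literalsOverride", "true");
// Or route Tika's extracted value to a different field entirely (hypothetical name).
up.setParam("fmap.last_modified", "tika_last_modified");
up.setParam("literal.last_modified", "2011-05-19T09:00:00Z");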
I have faced a similar issue, where I needed to fetch one dynamic field value, do some operation, and then update it. I used the code below to achieve this.
It first checks whether the field already exists. Maybe it will help you.
SolrInputDocument doc = new SolrInputDocument();
List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();

// Atomic update: "set" replaces an existing value, "add" appends a new one.
Map<String, String> partialUpdate = new HashMap<String, String>();
if (alreadyPresent) {
    partialUpdate.put("set", value);
} else {
    partialUpdate.put("add", value);
}

doc.addField("projectId", projectId); // unique id for the Solr doc
doc.addField(keys[0], partialUpdate); // keys[0] holds the dynamic field name

docs.add(doc);
solrServer.add(docs);
solrServer.commit();
I'm using App Engine's bulkloader to import a CSV file into my datastore. I have a number of columns that I want to merge into one: for example, they're all URLs, but not all of them are supplied and there is a superseding order, e.g.:
url_main
url_temp
url_test
I want to say: "OK, if url_main exists, use that, otherwise use url_test and then url_temp"
Is it, therefore, possible to create a custom import transform that references columns and merges them into one based on conditions?
Ok, so after reading https://developers.google.com/appengine/docs/python/tools/uploadingdata#Configuring_the_Bulk_Loader I learnt about import_transform and that this can use custom functions.
With that in mind, this excerpt pointed me in the right direction:
... a two-argument function with the keyword argument bulkload_state,
which on return contains useful information about the entity:
bulkload_state.current_entity, which is the current entity being
processed; bulkload_state.current_dictionary, the current export
dictionary ...
So I created a function that takes two arguments: the first is the current column's value, and the second is the bulkload_state, which let me fetch the current row, like so:
def check_url(value, bulkload_state):
    # Grab the whole CSV row rather than just this column's value.
    row = bulkload_state.current_dictionary
    # Superseding order: the first field found wins.
    fields = ['Final URL', 'URL', 'Temporary URL']
    for field in fields:
        if field in row:
            return row[field]
    return None
All this does is grab the current row (bulkload_state.current_dictionary) and return the first of the URL fields that exists; otherwise it just returns None.
In my bulkloader.yaml I call this function simply by setting:
- property: business_url
external_name: URL
import_transform: bulkloader_helper.check_url
Note: the external_name doesn't really matter here, as long as it exists; I'm not actually using its value, since the function reads multiple columns itself.
Simples!
I want to implement some kind of tagging functionality in my app. I want to do something like...
class Item(db.Model):
    name = db.StringProperty()
    tags = db.ListProperty(str)
Suppose I get a search that has two or more tags, e.g. "restaurant" and "mexican".
Now I want to get Items that have ALL the given tags (in this case both).
How do I do that? Or is there a better way to implement what I want?
I believe you want tags to be stored as 'db.ListProperty(db.Category)' and then query them with something like:
return db.Query(Item)\
    .filter('tags = ', expected_tag1)\
    .filter('tags = ', expected_tag2)\
    .order('name')\
    .fetch(256)
(Unfortunately I can't find any good documentation for the db.Category type, so I cannot definitively say this is the right way to go.) Also note that in order to create a db.Category you need to use:
new_item.tags.append(db.Category(unicode(new_tag_text)))
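For completeness, a round trip might look something like this (a sketch assuming tags is declared as db.ListProperty(db.Category), as suggested above):

item = Item(name='Some Taqueria')
item.tags.append(db.Category(u'restaurant'))
item.tags.append(db.Category(u'mexican'))
item.put()

# Both equality filters must match the same entity, so this
# returns only Items carrying ALL of the requested tags.
matches = db.Query(Item)\
    .filter('tags = ', db.Category(u'restaurant'))\
    .filter('tags = ', db.Category(u'mexican'))\
    .fetch(256)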
Use db.ListProperty(db.Key) instead, which stores a list of entity keys.
models:
class Profile(db.Model):
    data_list = db.ListProperty(db.Key)

class Data(db.Model):
    name = db.StringProperty()
views:
prof = Profile()
datas = Data.gql("")  # the Data entities you want to fetch
for data in datas:
    # data_list stores the *keys* of the Data entities,
    # so append the key rather than the entity itself
    prof.data_list.append(data.key())
prof.put()
Data.get(prof.data_list) will then fetch all the Data entities whose keys are in the data_list attribute.