Does anyone know what the index in a Web table dataset means? How should we set it? When I try to give it a value of 1, it throws an error while previewing the data:
The value of the property 'index' is invalid: 'The required property is not specified.
Parameter name: index'.
ENVIRONMENT:
Keycloak 3.2
SAML 2.0
SITUATION:
I need to add user attribute values dynamically.
TASK:
I need a name attribute for my user, which should be filled dynamically from the First Name and Last Name fields; as far as I can tell, in Keycloak this can be the fullName property.
NOTE: Instead of fullName, firstName + lastName would also work in my case.
ACTION:
I added a user property mapper named fullName under Clients -> myClient -> Mappers,
then under Users -> myUser -> Attributes I added an attribute with key name and value ${fullName}.
RESULT:
As a result I got the literal string ${fullName} as the value instead of the dynamic value from my predefined user property.
QUESTIONS:
Is it possible to do this kind of thing?
If it is possible, what is wrong with my steps?
For users like me who are looking for a solution to this problem with a newer version of Keycloak: in Keycloak 18.0 you can create a mapper of type Javascript Mapper with this code: user.getFirstName() + ' ' + user.getLastName().
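A minimal sketch of the script body, assuming the script-based mapper feature is enabled in your Keycloak distribution (it is a preview feature and may need to be switched on):

// Script mapper body: `user` is bound to the UserModel of the authenticated user.
// The value of the last evaluated expression becomes the mapped attribute value.
user.getFirstName() + ' ' + user.getLastName();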
As a solution, I found that under the client in Keycloak we have built-in user properties.
For example, X500 givenName and X500 surname can be added and retrieved on the backend side as part of the SAML assertion attributes.
There is another solution if the user federation is LDAP or Active Directory.
On the user federation you can use the full-name-ldap-mapper.
By default it uses cn, but you can change that.
Next in your client you would add a saml mapper.
{
    "name": "fullName",
    "protocol": "saml",
    "protocolMapper": "saml-user-attribute-mapper",
    "consentRequired": false,
    "config": {
        "attribute.nameformat": "Unspecified",
        "user.attribute": "full name",
        "friendly.name": "Full name",
        "attribute.name": "displayName"
    }
}
Remember that attribute.name is the property the SP will use.
Also, the nameformat has to be agreed with the SP.
I am trying to list all the API names available in a Salesforce organization. I am able to retrieve all the object API names using the code below:
for (Schema.SObjectType o : Schema.getGlobalDescribe().values()) {
    Schema.DescribeSObjectResult objResult = o.getDescribe();
    System.debug('Sobject: ' + objResult);
    System.debug('Sobject API Name: ' + objResult.getName());
    System.debug('Sobject Label Name: ' + objResult.getLabel());
}
But the list does not contain the objects that belong to managed and unmanaged packages.
I am also trying to access managed package object records via workbench.developerforce.com, and I am getting this error:
message: Select COUNT(id) FROM CustomObject__c
^ ERROR at Row:1:Column:23
sObject type 'CustomObject__c' is not supported. If you are attempting to use a custom object, be sure to append the '__c' after the entity name. Please reference your WSDL or the describe call for the appropriate names.
errorCode: INVALID_TYPE
I posted the question on developer.salesforce.com but did not get a response yet.
EDIT:
The Setup -> Quick Search -> Objects screen lists the objects from managed packages, but the same objects are not returned by Schema.getGlobalDescribe().values().
A managed object should contain two underscores before and after the object name:
Namespace__CustomObject__c
You should be able to identify it by the namespace prefix, or by the fact that it still contains two consecutive underscores after removing the __c suffix.
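A minimal sketch of that check in anonymous Apex (an illustration of the double-underscore rule, not code from the answer):

// Flag objects whose API name still contains a namespace prefix,
// i.e. two consecutive underscores remain after stripping the trailing __c.
for (Schema.SObjectType o : Schema.getGlobalDescribe().values()) {
    String apiName = o.getDescribe().getName();
    String withoutSuffix = apiName.removeEndIgnoreCase('__c');
    if (withoutSuffix.contains('__')) {
        System.debug('Packaged object: ' + apiName);
    }
}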
In case someone is still looking for a SOQL option, this can be achieved with the following query as well:
SELECT SobjectType FROM ObjectPermissions WHERE Parent.NamespacePrefix = 'PackageName'
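A short sketch of running that query from Apex, assuming 'PackageName' is replaced with the actual namespace prefix of the installed package:

// Collect the API names of objects that have permissions owned by the package.
Set<String> packagedObjects = new Set<String>();
for (ObjectPermissions op : [SELECT SobjectType
                             FROM ObjectPermissions
                             WHERE Parent.NamespacePrefix = 'PackageName']) {
    packagedObjects.add(op.SobjectType);
}
System.debug(packagedObjects);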
I am creating the bulkloader.yaml automatically from my existing schema and have trouble downloading my data due to the repeated=True of my KeyProperty.
class User(ndb.Model):
    firstname = ndb.StringProperty()
    friends = ndb.KeyProperty(kind='User', repeated=True)
The automatically created bulkloader entry looks like this:
- kind: User
  connector: csv
  connector_options:
    # TODO: Add connector options here--these are specific to each connector.
  property_map:
    - property: __key__
      external_name: key
      export_transform: transform.key_id_or_name_as_string

    - property: firstname
      external_name: firstname
      # Type: String Stats: 2 properties of this type in this kind.

    - property: friends
      external_name: friends
      # Type: Key Stats: 2 properties of this type in this kind.
      import_transform: transform.create_foreign_key('User')
      export_transform: transform.key_id_or_name_as_string
This is the error message I am getting:
google.appengine.ext.bulkload.bulkloader_errors.ErrorOnTransform: Error on transform. Property: friends External Name: friends. Code: transform.key_id_or_name_as_string Details: 'list' object has no attribute 'to_path'
What can I do please?
Possible Solution:
After Tony's tip I came up with this:
- property: friends
  external_name: friends
  # Type: Key Stats: 2 properties of this type in this kind.
  import_transform: myfriends.stringToValue(';')
  export_transform: myfriends.valueToString(';')
myfriends.py
# Bulkloader transform helpers used below.
from google.appengine.ext.bulkload import transform

def valueToString(delimiter):
    def key_list_to_string(value):
        keyStringList = []
        if value == '' or value is None or value == []:
            return None
        for val in value:
            keyStringList.append(transform.key_id_or_name_as_string(val))
        return delimiter.join(keyStringList)
    return key_list_to_string
And this works! The encoding is UTF-8 though; make sure to open the file in LibreOffice with that encoding or you will see garbled content.
The biggest challenge is the import. This is what I came up with, without any luck:
def stringToValue(delimiter):
    def string_to_key_list(value):
        keyvalueList = []
        if value == '' or value is None or value == []:
            return None
        for val in value.split(';'):
            keyvalueList.append(transform.create_foreign_key('User'))
        return keyvalueList
    return string_to_key_list
I get the error message:
BadValueError: Unsupported type for property friends: <type 'function'>
According to Datastore viewer, I need to create something like this:
[datastore_types.Key.from_path(u'User', u'kave#gmail.com', _app=u's~myapp1')]
Update 2:
Tony, you really are an expert in the Bulkloader. Thanks for your help; your solution worked!
I have moved my other question to a new thread.
But one crucial problem appears: when I create new users, the friends field shows as <missing> and everything works fine.
However, when I use your solution to upload the data, users without any friend entries get a <null> entry. Unfortunately this seems to break the model, since friends can't be null.
Changing the model to reflect this seems to be ignored:
friends = ndb.KeyProperty(kind='User', repeated=True, required=False)
How can I fix this please?
Update:
Digging further into it: when the status <missing> is shown in the Datastore viewer, in code it shows friends = []. However, when I upload the data via CSV I get <null>, which translates to friends = [None]. I know this because I exported the data into my local datastore and could follow it in code. Strangely enough, if I empty the list with del user.friends[:], it works as expected. There must be a better way to set it while uploading via CSV, though...
Final Solution
This turns out to be a bug that hasn't been resolved for over a year.
In a nutshell, even though there is no value in the CSV, because a list is expected, GAE creates a list with a None inside it. This is game-breaking, since retrieving such a model ends up in an instant crash.
The fix is adding a post_import_function which deletes the lists that have a None inside.
In my case:
def post_import(input_dict, instance, bulkload_state_copy):
    if instance["friends"] is None:
        del instance["friends"]
    return instance
Finally everything works as expected.
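For reference, a sketch of where that hook is registered in bulkloader.yaml, assuming the function lives in the same myfriends.py module used above and that the module is imported in the config's python_preamble:

- kind: User
  connector: csv
  post_import_function: myfriends.post_import
  property_map:
    # ... properties as above ...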
When you are using repeated properties and exporting to CSV, you need to do some formatting to concatenate the list into a format CSV understands. Please check the example here on importing/exporting a list of dates; I hope it helps you.
EDIT: Adding the suggestion for the import transform from an earlier comment to this answer.
For import, please try something like:
from google.appengine.api import datastore

def stringToValue(delimiter):
    def string_to_key_list(value):
        keyvalueList = []
        if value == '' or value is None or value == []:
            return None
        for val in value.split(';'):
            keyvalueList.append(datastore.Key.from_path('User', val))
        return keyvalueList
    return string_to_key_list
If you have an ID instead of a name, convert it first, e.g. val = int(val).
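That is, a sketch of the loop body with the numeric-ID variant (assuming the exported keys carry integer IDs rather than key names):

for val in value.split(';'):
    # Numeric datastore IDs must be passed as ints; a string would build
    # the key by name instead of by ID.
    keyvalueList.append(datastore.Key.from_path('User', int(val)))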
ExtJS Model fields have a mapping option.
fields: [
    {name: 'brandId', mapping: 'brand.id', type: 'int'},
    {name: 'brandName', mapping: 'brand.name', type: 'string'},
The problem is: if the response from the server does not contain some field (the brand field in my example) and a mapping from inner fields is defined, the Ext Store silently fails to load any records.
Does anybody else have this problem? Is it some kind of bug?
UPDATE
To make it clear: suppose I have ten fields in my model and the response from the server has nine fields, with one missing. If there is no nested mapping for this field (mapping: 'x.y.z'), everything is OK: the store loads the record and the field is empty. But if this field has to be loaded from some nested field and has the mapping option, the store fails to load ANYTHING.
UPDATE 2
I have found the code that causes the problem. When Ext tries to load some field from JSON, it performs a check like this:
(source["id"] === undefined) ? __field0.defaultValue : source["id"]
But when the field has the mapping option (mapping: 'brand.id'), the Reader does it this way:
(source.brand.id === undefined) ? __field20.defaultValue : source.brand.id
which throws an error if source has no brand field.
In case you have the same problem as me: you can fix it by overriding Ext.data.reader.Json's createFieldAccessExpression method.
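A rough sketch of such an override (createFieldAccessExpression is a private API whose signature varies between Ext JS 4.x releases, so treat the argument list and callParent usage below as assumptions to verify against your version):

Ext.define('MyApp.reader.SafeJson', {
    override: 'Ext.data.reader.Json',
    createFieldAccessExpression: function(field, fieldVarName, dataName) {
        // Build the stock accessor string first...
        var expr = this.callParent(arguments);
        // ...then guard nested mappings such as 'brand.id' so a missing
        // parent object yields undefined instead of throwing.
        if (field.mapping && typeof field.mapping === 'string' && field.mapping.indexOf('.') !== -1) {
            expr = '(function(){ try { return ' + expr + '; } catch (e) { return undefined; } })()';
        }
        return expr;
    }
});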
I agree that Ext should only fail to load that field, not the entire record. One option that isn't great, but should work, is to instead use a mapping function:
{
    name: 'brandId',
    mapping: function(data, record) {
        return data.brand && data.brand.id;
    }
}
I could have the arguments wrong (I figured out that this feature existed by looking at the source code), so maybe put a breakpoint in there to see what's available if it doesn't work like this.
I think you're misinterpreting the mapping and nesting paradigms: they are not interchangeable.
If you define nesting in your data, the result MUST have the corresponding field.
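For illustration, with the brandId/brandName mapping from the question, the server response needs the nested object to be present (sample payloads, not from the original post):

// Loads fine: the nested 'brand' object exists.
{ "id": 1, "brand": { "id": 7, "name": "Acme" } }

// Breaks the generated accessor: 'brand' is missing entirely,
// so source.brand.id throws before the reader can fall back to a default value.
{ "id": 2 }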
I have a custom object in Salesforce called Deal, which is a child of the built-in Account object. I am trying to use the Bulk XML API to upload a batch of records, but I can't seem to figure out how to specify this relationship correctly. The documentation says you should reference a custom object's relationships like so:
<Relationship__r>
    <sObject>
        <some_indexed_field>#####</some_indexed_field>
    </sObject>
</Relationship__r>
If you have any idea how to specify a relationship to the Account object from a custom object I'd really appreciate it.
Added
The Deal object has the following 2 fields:
DealID
    API Name - DealID__c
    Data Type - Text(255) (External ID) (Unique, Case Sensitive)
Account
    API Name - Account__c
    Data Type - Master-Detail(Account)
Request XML:
<Account__r>
    <sObject>
        <ID>0013000000kcWpfAAE</ID>
    </sObject>
</Account__r>
Result XML:
<result>
    <errors>
        <message>Field name provided, Id is not an External ID or indexed field for Account</message>
        <statusCode>INVALID_FIELD</statusCode>
    </errors>
    <success>false</success>
    <created>false</created>
</result>
There appears to be a bug: you have to strip out all whitespace and newlines when dealing with reference objects.
Check out:
http://success.salesforce.com/ideaview?id=08730000000ITQ7AAO
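In other words, the relationship reference from the question would be sent as a single line with no whitespace between the tags, something like:

<Account__r><sObject><ID>0013000000kcWpfAAE</ID></sObject></Account__r>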
From the docs:
<RelationshipName>
    <sObject>
        <IndexedFieldName>rwilliams#salesforcesample.com</IndexedFieldName>
    </sObject>
</RelationshipName>
Everything looks good, but instead of using "ID" for the Indexed Field Name, you need to use "Account__c". That should take care of your issue.