I'm trying to set up IdentityServer4 to work with my own (MongoDB) database, instead of the in-memory examples shown in the documentation.
To do so I have configured the following services:
builder.Services.AddTransient<IPersistedGrantStore, PersistedGrantStore>();
builder.Services.AddTransient<IResourceOwnerPasswordValidator, ResourceOwnerPasswordValidator>();
builder.Services.AddTransient<IClientStore, ClientStore>();
builder.Services.AddTransient<IResourceStore, ResourceStore>();
In my database I've created three collections: "ApiResources", "IdentityResources" and "Clients".
In the ApiResources I've defined what should be the API I'm protecting:
{
    "Name" : "MyAPI",
    "DisplayName" : "Test API Resource"
}
In IdentityResources I've defined what should be my identities:
{
    "Name" : "MyIdentity",
    "DisplayName" : "Test Identity Resource"
}
And I have defined the following client:
{
    "ClientId" : "client",
    "Enabled" : true,
    "ClientSecrets" : [
        {
            "Description" : null,
            "Value" : "K7gNU3sdo+OL0wNhqoVWhr3g6s1xYv72ol/pe/Unols=",
            "Expiration" : null,
            "Type" : "SharedSecret"
        }
    ],
    "ClientName" : null,
    "ClientUri" : null,
    "LogoUri" : null,
    "RequireConsent" : true,
    "AllowRememberConsent" : true,
    "AllowedGrantTypes" : [
        "client_credentials"
    ],
    "AllowedScopes" : [
        "MyAPI"
    ],
    "Claims" : [],
    "AllowedCorsOrigins" : []
}
My database representation is similar to the example given in the documentation.
In my IResourceStore implementation, FindIdentityResourcesByScopeAsync looks for the scope names in my IdentityResources collection (as the name of the method implies), and FindApiResourcesByScopeAsync looks for the scopes in my ApiResources collection, likewise as the name implies.
When I try to authenticate the client against the server I'm getting Requested scope not allowed: MyAPI.
But if I change my code in FindIdentityResourcesByScopeAsync to look in the ApiResources collection instead, then it works.
Is this a bug, or do I not understand the difference between IdentityResources and ApiResources? When should each be used? And if FindIdentityResourcesByScopeAsync is supposed to return my API resource, what should FindApiResourcesByScopeAsync return?
So I finally figured out what the problem was. Returning the API resource from FindIdentityResourcesByScopeAsync made authentication work, but that is obviously not the way to go.
I eventually noticed that the problem was actually in the ApiResource object returned by FindApiResourcesByScopeAsync. While it was returning an ApiResource with the name of the API I want to grant access to, that object did not contain any values for its Scopes, which should also contain an entry for MyAPI.
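For reference, here is roughly what my ApiResources document had to contain (the property names follow IdentityServer4's ApiResource and Scope models; the exact serialized shape depends on how you map them to MongoDB):

{
    "Name" : "MyAPI",
    "DisplayName" : "Test API Resource",
    "Scopes" : [
        {
            "Name" : "MyAPI",
            "DisplayName" : "Test API Resource"
        }
    ]
}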
What I still do not understand is what this Scopes collection is, and why it should contain the MyAPI definition again (like the parent object). What other scopes should/can I add here, and what do they mean?
I am looking for a way to get the fields and picklists for a Salesforce object. I can do it with a REST API call using /describe after the object name. But sometimes the returned JSON data is really big, with 95% of it being extra data I don't want, full of repetitive patterns.
It would be too inefficient to pull all that data, which can be as large as 2.8 MB, just to get the small piece of information I require.
How can I filter this data to get more specific results? Or is there a better way to get the picklist values for a field, or any other subset of that big JSON returned by /describe?
Here is what I am using currently:
https://[myinstance].salesforce.com/services/data/v51.0/sobjects/Casedata/describe
You can query the FieldDefinition table in the Tooling API, for example:
/services/data/v52.0/tooling/query?q=SELECT+Metadata+FROM+FieldDefinition+WHERE+EntityDefinitionId+=+'Account'+AND+QualifiedApiName+=+'Status__c'
(...)
"valueSet" : {
"controllingField" : null,
"restricted" : true,
"valueSetDefinition" : {
"sorted" : false,
"value" : [ {
"color" : null,
"default" : false,
"description" : null,
"isActive" : null,
"label" : "Prospect",
"urls" : null,
"valueName" : "Prospect"
}, {
"color" : null,
"default" : false,
"description" : null,
"isActive" : null,
"label" : "Live",
"urls" : null,
"valueName" : "Live"
}, {
"color" : null,
"default" : false,
"description" : null,
"isActive" : null,
"label" : "Cancelled",
"urls" : null,
"valueName" : "Cancelled"
}
(...)
The picklist values will be in the Metadata field, but to query it you need to ensure only one row is returned. So if you need 3 picklists, that's 3 API calls...
It'll return the "master" picklist, not filtered by record type.
There's also an interesting table called PicklistValueInfo. It's not documented too well; it's a related list to EntityParticle. You can query it to get multiple picklist values in one go:
SELECT DurableId,EntityParticleId,IsActive,Label,Value
FROM PicklistValueInfo
WHERE EntityParticle.EntityDefinition.DeveloperName = 'Account' AND
(DurableId LIKE 'Account.Industry%' OR DurableId LIKE 'Account.Type%')
ORDER BY DurableId
Or use it related-list style (which might be closer to the results of the describe call?):
SELECT DataType, FieldDefinition.QualifiedApiName,
(SELECT Value, Label FROM PicklistValues)
FROM EntityParticle
WHERE EntityDefinition.QualifiedApiName ='Account'
AND QualifiedApiName IN ('Industry', 'Type', 'Status__c')
If you use record types, the UI API David linked to is easiest.
https://developer.salesforce.com/docs/atlas.en-us.uiapi.meta/uiapi/ui_api_resources_picklist_values_collection.htm
You can grab them all:
/services/data/v52.0/ui-api/object-info/Account/picklist-values/012...
Or build similar links to get the data for a single field.
The only other API that might be applicable to your use case is the UI API, which is intended to provide the information a client would need to render the UI for a record or object. For example, the Get Object Metadata endpoint might (or might not) suit your needs. Its response body is also not particularly small.
You cannot filter describe data. You're already using the smallest-scoped version of that API.
If you are puzzled about which recordTypeId to use in the UI API query, you can try 012000000000000AAA, as specified in this sample request:
Record type Id. Use 012000000000000AAA as default when there are no custom record types.
It works well in my case, where I do not have any record types created.
I am pretty new to React, but am trying to do the following ...
I am making API requests to Elasticsearch (using the elasticsearch npm package) from React.
I want to put some of the returned JSON data (after putting it into objects and distilling keys/values) into a table using the react-table package ...
As far as I understand, most examples in the react-table documentation talk about mapping object keys to columns and then using the column accessors to populate the values (from key/values) into the correct columns as rows. So: map object keys to columns and put values in as rows ...
But in my case I want to put values AND keys into rows and have a few manually defined columns ...
At this point there is no proper mapping between column accessors and object keys ...
Is there a way to do this using react-table? Would appreciate if someone could point me in the right direction regarding documentation/examples (a rough sketch of what I mean follows the example dataset below) ...
Also, my returned JSON data has some nested dicts ...
See the example dataset below:
"hits" : [
{
"_index" : "obj-model",
...
"_source" : {
"MoClass" : {
"Name" : "fvBD",
"Description" : "A bridge domain is a unique l2 forwarding domain that contains one or more subnets. Each bridge domain must be linked to a context.",
"Class ID" : "1887",
"Class Label" : "Bridge Domain",
"AbstractionLayer" : "Logical Model",
"Write Access" : "[admin, tenant-connectivity-l2]",
"Read Access" : "[access-connectivity-l3, admin, fabric-connectivity-l3, nw-svc-device, nw-svc-policy, tenant-connectivity-l2, tenant-connectivity-l3, tenant-connectivity-mgmt, tenant-epg, tenant-ext-connectivity-l2, tenant-ext-connectivity-l3, tenant-ext-protocol-l3, tenant-network-profile, tenant-protocol-l2, tenant-protocol-l3, tenant-security]",
"Semantic Scope" : "EPG",
"Semantic Scope Evaluation Rule" : "Explicit",
"Monitoring Policy Source" : "Explicit",
"Property" : [ <===================== NESTED DICT
{
"Name" : "OptimizeWanBandwidth",
"Comment" : "OptimizeWanBandwidth flag is enabled between sites",
"Constants" : [
"no",
"yes"
]
},
{
"Name" : "annotation",
"Comment" : "NO COMMENTS",
"Constants" : [
"no",
"yes"
]
},
...
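To show the kind of thing I'm trying to write, here is a rough sketch (assuming react-table's v6-style <ReactTable> component; the MoClassTable name and the stringify fallback for nested values are just placeholders):

import React from "react";
import ReactTable from "react-table"; // v6-style API
import "react-table/react-table.css";

// Two manually defined columns; each key/value pair of the source object
// becomes one row, so the keys end up in rows rather than as columns.
const columns = [
    { Header: "Property", accessor: "key" },
    { Header: "Value", accessor: "value" }
];

function MoClassTable({ moClass }) {
    const rows = Object.entries(moClass).map(([key, value]) => ({
        key,
        // Nested dicts/arrays (e.g. "Property") need their own handling;
        // stringifying them is just a placeholder here.
        value: typeof value === "object" ? JSON.stringify(value) : String(value)
    }));
    return <ReactTable data={rows} columns={columns} minRows={0} />;
}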
I am trying to use JSON-LD on my website, using schema.org as the language.
The reason is to assist search engines' crawlers in understanding my site.
Schema.org offers many key/value attribute pairs for Types of Items.
Sometimes the values for those keys are themselves an Item with their own Type and have their own set of key/value pairs.
In practice, the same Item is the appropriate answer for several different keys, and it is desirable/necessary to give that Item's key/value set each time.
In my case, for example, I am marking up web pages on a website with schema.org's "WebPage" type.
I want to give the same person as the answer for various keys on the WebPage type: author, creator, copyrightHolder, etc.
I think I can do this by repeating the values each time with something like:
<script type="application/ld+json">
{
    "@context" : "http://schema.org",
    "@type" : "WebPage",
    "name" : "The Name of the Webpage",
    "author" : {
        "@type" : "Person",
        "name" : "Tim"
    },
    "creator" : {
        "@type" : "Person",
        "name" : "Tim"
    },
    "copyrightHolder" : {
        "@type" : "Person",
        "name" : "Tim"
    }
}
</script>
However, that is repetitive and verbose to me.
I would rather assign/define the person once, and then reference him (me) using a keyword as needed.
I don't know much about json-ld or coding/programming, and as a lay person I have found the information (spec + jsonld.org + here) a bit confusing.
I understand that @context can be expanded for the document (here, a web page) to define 'things' in addition to declaring the relevant 'language' as being schema.org, and that JSON-LD also seems to support referencing specific items using 'IRIs' as an ID.
So it seems like I might be able to define the Person once as desired with something similar to the following:
<script type="application/ld+json">
{
    "@context" : [
        "http://schema.org",
        {
            "Tim" : {
                "@type" : "Person",
                "@id" : "https://www.example.com/tim#tim",
                "name" : "Tim"
            }
        }
    ],
    "@type" : "WebPage",
    "name" : "The Name of the Webpage",
    "author" : "Tim",
    "creator" : "Tim"
}
</script>
So my questions are:
Can we do this and, if so, how?
In a lot of documentation, IRIs appear to be URLs with #value tacked on the end. Is the #value simply a declaration to differentiate it from the page URL (which may be a value unto itself for some other keys), or is the #value referencing a div on the page, such as a div with an id="value", or perhaps some other protocol?
If I do this, will, say, Google's crawler simply cache the IRI as a reference to the associated URL or div, or will it assign the values defined? Ideally, I would like the expanded values to be returned for each use.
I have looked a lot on this site for answers to these questions. I have seen similar questions and answers which may have answered them, but in a way I could not understand. For example, I do not know what a "node" or an "object" is.
Please excuse my lack of knowledge. Any use of simple plain language would be appreciated. Actually, any help would be much appreciated!
Thank you.
Your example is almost right. You need to assign an @id to the person object, and then reference that @id everywhere you want to reuse it:
<script type="application/ld+json">
{
    "@context" : "http://schema.org",
    "@type" : "WebPage",
    "name" : "The Name of the Webpage",
    "author" : {
        "@type" : "Person",
        "@id" : "#tim",
        "name" : "Tim"
    },
    "creator" : {
        "@id" : "#tim"
    },
    "copyrightHolder" : {
        "@id" : "#tim"
    }
}
</script>
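Note that "#tim" is a relative IRI: it resolves against the page's own URL, so on https://www.example.com/page it identifies the node https://www.example.com/page#tim, and author, creator and copyrightHolder all point at the same node. If you prefer, an absolute IRI (like the one in your second snippet) works the same way:

"author" : {
    "@type" : "Person",
    "@id" : "https://www.example.com/tim#tim",
    "name" : "Tim"
}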
I seem to be having an issue accessing the contents of an array nested within an array in a MongoDB document. I have no problem accessing the first array, "groups", with a query like the following...
db.orgs.update({_id: org_id, "groups._id": group_id} , {$set: {"groups.$.name": "new_name"}});
Where I run into trouble is when I try to modify properties of an element in the "features" array nested within the "groups" array.
Here is what an example document looks like:
{
    "_id" : "v5y8nggzpja5Pa7YS",
    "name" : "Example",
    "display_name" : "EX1",
    "groups" : [
        {
            "_id" : "s86CbNBdqJnQ5NWaB",
            "name" : "Group1",
            "display_name" : "G1",
            "features" : [
                {
                    "_id" : "bNQ5Bs8BWqJn6CdNa",
                    "type" : "blog",
                    "name" : "[blog name]",
                    "owner_id" : "ga5YgvP5yza7pj8nS"
                }
            ]
        }
    ]
}
And this is the query I tried to use:
db.orgs.update({_id: "v5y8nggzpja5Pa7YS", "groups._id": "s86CbNBdqJnQ5NWaB", "groups.features._id": "bNQ5Bs8BWqJn6CdNa"}, {$set: {"groups.$.features.$.name": "New Blog Name"}});
It returns with an error message:
WriteResult({
    "nMatched" : 0,
    "nUpserted" : 0,
    "nModified" : 0,
    "writeError" : {
        "code" : 2,
        "errmsg" : "Too many positional (i.e. '$') elements found in path 'groups.$.features.$.name'"
    }
})
It seems that Mongo doesn't support modifying arrays nested within arrays via the positional operator?
Is there a way to modify this array without taking the entire thing out, modifying it, and putting it back in? With multiple nesting like this, is it standard practice to create a new collection (even though the data is only ever needed when the parent data is necessary)? Or should I change the document structure so that the second nested array is an object, accessed via a key (where the key is an integer value that can act as an "_id")?
groups.$.features.[KEY].name
What is considered the "correct" way to do this?
After some more research, it looks like the only way to modify an array within an array is with some outside logic that finds the index of the element I want to change. Doing it this way means every change requires a find query to locate the index, followed by an update query to modify the array. This doesn't seem like the best way.
Link to a 2010 JIRA case requesting multiple positional elements...
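A sketch of what that outside logic could look like (mongo shell JavaScript, using the IDs from my example document; note it is racy, since the array can change between the read and the write):

// Step 1: read the document and locate the index of the feature client-side.
var org = db.orgs.findOne({_id: "v5y8nggzpja5Pa7YS"});
var group = null;
org.groups.forEach(function (g) {
    if (g._id === "s86CbNBdqJnQ5NWaB") { group = g; }
});
var idx = -1;
group.features.forEach(function (f, i) {
    if (f._id === "bNQ5Bs8BWqJn6CdNa") { idx = i; }
});

// Step 2: update using the positional operator for the group and the
// computed numeric index for the feature.
var set = {};
set["groups.$.features." + idx + ".name"] = "New Blog Name";
db.orgs.update({_id: "v5y8nggzpja5Pa7YS", "groups._id": "s86CbNBdqJnQ5NWaB"}, {$set: set});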
Since I will always know the ID of the feature, I have opted to revise my document structure.
{
    "_id" : "v5y8nggzpja5Pa7YS",
    "name" : "Example",
    "display_name" : "EX1",
    "groups" : [
        {
            "_id" : "s86CbNBdqJnQ5NWaB",
            "name" : "Group1",
            "display_name" : "G1",
            "features" : {
                "1" : {
                    "type" : "blog",
                    "name" : "[blog name]",
                    "owner_id" : "ga5YgvP5yza7pj8nS"
                }
            }
        }
    ]
}
With the new structure, changes can be made in the following manner:
db.orgs.update({_id: "v5y8nggzpja5Pa7YS", "groups._id": "s86CbNBdqJnQ5NWaB"}, {$set: {"groups.$.features.1.name":"Blog Test 1"}});
I am working with a TreePanel/TreeStore in ExtJS 4.1.1, with autoSync enabled and API calls to server endpoints defined via an Ajax proxy.
When a node is created with certain properties set, I have the server automatically add 2 child nodes during the autoSync call to the create API endpoint. The server's response text looks like this:
{
    "success" : true,
    "errorMsg" : null,
    "children" : {
        "id" : "toolbox-42",
        "parentId" : null,
        "itemName" : "My Toolbox",
        "nodeType" : "toolbox",
        "children" : [{
            "id" : "tool-91",
            "parentId" : "toolbox-42",
            "itemName" : "Default Tool 1",
            "nodeType" : "tool",
            "leaf" : true
        }, {
            "id" : "tool-92",
            "parentId" : "toolbox-42",
            "itemName" : "Default Tool 2",
            "nodeType" : "tool",
            "leaf" : true
        }]
    }
}
Setting the node's properties via the "children" key at the root level works just fine; the "id" property is set on the inserted node correctly.
My problem is that the child nodes that the server added don't appear in the tree. How do I get these added to the tree view?
Here are some solutions I have considered:
In the server response, make the root-level "children" object into an array, and append the new nodes to the end of that array (instead of nesting them under their parent node). The extractData method in Ext.data.reader.Reader (source here) indicates all returned records will be extracted. But the commitRecords method in Ext.data.Operation (here) only updates the clientRecords that were included in the request, which obviously does not include any new records coming down the pike in the server's response.
After the server's response, manually add the records to the TreeStore client-side using the "children" node from the server's response. But there seems to be no easy way to mark these records as "already synced".
Don't add the records on the server at all; instead, manually add them on the client before the sync operation takes place (thus the sync operation will give the server 3 inserts to do). But the child nodes in the create request won't have a parentId set, because the server hasn't yet added the parent node and returned its id in a response.
Attach an event handler that will fire once after the server has added the parent node, and then add the child nodes programmatically on the client (which would then be auto-synced to the server). But this would require 2 server round trips. Also, the only candidate event I know of is 'write' in Ext.data.TreeStore, and there is no corresponding 'failwrite' event which could be used to remove the listener in case the write operation fails. I could add an abstraction layer to provide that... but I would rather not if Sencha has already built a better way.
Any other suggestions? I will accept a suggestion that works or any statement/link describing how Sencha recommends addressing this problem.
Thanks.
UPDATE: My store, proxy, and reader are configured as follows:
var store = Ext.create('Ext.data.TreeStore', {
    model : 'App.models.Task',
    autoLoad : true,
    autoSync : true,
    proxy : {
        type : 'ajax',
        api : {
            create : appUrl + 'Data/InsertTreeData',
            read : appUrl + 'Data/GetTreeData',
            update : appUrl + 'Data/UpdateTreeData',
            destroy : appUrl + 'Data/DeleteTreeData'
        },
        reader : {
            type : 'json',
            messageProperty : 'errorMsg'
        }
    }
});
What I do when I add a node to the tree on the server side is grab the parent object's id (it could be the root, and that is fine) and then run a refreshParent routine that looks like this:
// 'this' here is the tree panel; reload just the branch under the given node
var node = this.store.getNodeById(id);
if (node) {
    this.store.load({node: node});
}
The context here is the tree panel. This routine reloads a specific branch of the tree.
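For completeness, here is roughly how that could be wired up to run after a successful create (a hypothetical sketch; the 'write' event on the TreeStore is the one mentioned in the question, while treePanel and the parentId extraction are assumptions):

// Hypothetical wiring: after a create round-trip succeeds, reload the
// affected branch so the server-added children show up in the tree view.
store.on('write', function (store, operation) {
    if (operation.action === 'create') {
        var created = operation.getRecords()[0];
        var parent = created.parentNode || treePanel.getRootNode();
        treePanel.refreshParent(parent.getId());
    }
});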