URL path variables in Azure Logic App Custom Connectors

I'm trying to build a Logic Apps Custom Connector that can update a JIRA issue (a feature not currently available in the prebuilt connector).
Here is a cURL example from the JIRA documentation for this request:
curl -D- -u fred:fred -X PUT --data {see below} -H "Content-Type: application/json" http://kelpie9:8081/rest/api/2/issue/QA-31
{
    "fields": {
        "assignee": { "name": "harry" }
    }
}
The QA-31 value is the unique identifier that I want to make a variable. Using Postman I set it as an environment variable and successfully ran the request. When I uploaded the Postman collection to my custom connector, the 'QA-31' value wasn't available as a path variable.
Then I tried editing the custom connector directly. In the Import from Sample menu I replaced 'QA-31' in the URL with '{issueKey}'. This created a path variable, but it also prefixed the URL with '/en-us/widgets/manage', which I don't want.
Here is a picture of the problem
So there are a couple of questions here:
Why is my path variable in Postman not being picked up in the custom connector, while other requests from that collection work fine?
Why is my URL being prefixed with '/en-us/widgets/manage' when I add a path variable in the 'Import from Sample' menu?
Thanks!

Inside the Logic Apps Custom Connector editor you may define path variables by enclosing the variable in curly braces (e.g. https://api.library.com/{method}/). This can be done manually during the "Definition" step of creating/editing your custom connector. However, the drawback is that you must use the "Import from sample" feature, which requires you to manually rewrite the particular request.
To answer your question, we can define the path variables in Postman and then run the V1 export.
You can define a path variable in a Postman request by prepending a ':' to the variable name, like so: https://api.library.com/:method/. This adds the key (method) and an optional value to the request parameters field.
When you export as a Postman V1 collection, the resulting JSON looks like this:
{
    "id": "fc10d942-f460-4fbf-abb6-36943a112bf6",
    "name": "Custom Method Demo",
    "description": "",
    "auth": null,
    "events": null,
    "variables": [],
    "order": [
        "becb5ff8-6d31-48ee-be3d-8c70777d60aa"
    ],
    "folders_order": [],
    "folders": [],
    "requests": [
        {
            "id": "becb5ff8-6d31-48ee-be3d-8c70777d60aa",
            "name": "Custom Request Method",
            "url": "https://api.library.com/:method",
            "description": "Use a path variable to define a custom method.",
            "data": null,
            "dataMode": "params",
            "headerData": [],
            "method": "GET",
            "pathVariableData": [
                {
                    "key": "method",
                    "value": ""
                }
            ],
            "queryParams": [],
            "auth": {
                "type": "noauth"
            },
            "events": [
                {
                    "listen": "prerequest",
                    "script": {
                        "id": "b7b91243-0c58-4dc6-b3ee-4fb4ffc604db",
                        "type": "text/javascript",
                        "exec": [
                            ""
                        ]
                    }
                }
            ],
            "folder": null,
            "headers": "",
            "pathVariables": {
                "method": ""
            }
        }
    ]
}
Notice the "pathVariables" field which corresponds to our custom path variable.
Now we can import this into our Logic App and the path variable is properly interpreted as described in the first paragraph.
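For the JIRA request in the question, the corresponding bits of the exported entry would look roughly like this (a sketch only: issueKey is just the variable name I chose, and I've left out the body/header fields, which follow the same layout as the export above):
{
    "name": "Update JIRA Issue",
    "url": "http://kelpie9:8081/rest/api/2/issue/:issueKey",
    "method": "PUT",
    "pathVariableData": [
        { "key": "issueKey", "value": "QA-31" }
    ],
    "pathVariables": {
        "issueKey": "QA-31"
    }
}
After importing, issueKey should show up as a path parameter on the connector operation instead of being baked into the URL.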
Hope that helps.

Related

Struggling with optional claims in id/access token

I am updating an internally developed single-page app (TypeScript/React) that uses OAuth2, moving it from AD-FS 2016 to Azure AD v2. Things are complicated slightly by the fact that I (the developer) don't have direct access to the Azure console and am working on this with a (non-developer) sysadmin who does.
I have implemented PKCE and got the flow working; I can now obtain JWT access, ID and refresh tokens from the server and authenticate them via JWKS. So far so good.
Now, my app needs to know a couple more things:
whether or not the user should be treated as an administrator. This is inferred from group memberships
the preferred username and first name/surname of the user
The first of these we dealt with by setting up a "role" and mapping it to groups in the Azure console. We then added the role claim to the tokens. I can find this as a string array in "id_token". No problem.
I was confused for a while because I was looking for it in "access_token", but it's not a problem for my app to use "id_token" instead.
The second is the thing that is really giving us problems. No matter what we put into the "optional claims" dialog (we've added all these fields and more for the ID token), they do not appear in it. Nothing we are doing seems to affect the actual tokens that come out at all.
I am beginning to think that I have missed something with regard to obtaining the information. I am using the https://graph.microsoft.com/profile, https://graph.microsoft.com/email and https://graph.microsoft.com/user.read scopes, and the administrator has authorized these on behalf of the app. The user is synced from our in-house Active Directory, which the AD-FS is running from as well, so I know that this information is in there. I tried messing with the resource parameter, but this is apparently deprecated in Azure AD v2.
I've read and re-read https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-optional-claims along with other online documentation, and the following passage confuses me and makes me think that the issue might be related to scopes:
Access tokens are always generated using the manifest of the resource, not the client. So in the request ...scope=https://graph.microsoft.com/user.read... the resource is the Microsoft Graph API. Thus, the access token is created using the Microsoft Graph API manifest, not the client's manifest. Changing the manifest for your application will never cause tokens for the Microsoft Graph API to look different. In order to validate that your accessToken changes are in effect, request a token for your application, not another app.
Or is that just the reason that I switched to using the id_token?
The optional_claims section of the configuration manifest looks like this:
"optionalClaims": {
"idToken": [
{
"name": "email",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "upn",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "groups",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "family_name",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "given_name",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "preferred_username",
"source": null,
"essential": false,
"additionalProperties": []
}
],
"accessToken": [
{
"name": "email",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "groups",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "preferred_username",
"source": null,
"essential": false,
"additionalProperties": []
}
],
"saml2Token": [
{
"name": "groups",
"source": null,
"essential": false,
"additionalProperties": []
}
]
},
But the resulting payload in the ID token looks like this:
{
    "aud": "redacted",
    "iss": "https://login.microsoftonline.com/redacted/v2.0",
    "iat": 1654770319,
    "nbf": 1654770319,
    "exp": 1654774219,
    "email": "redacted",
    "groups": [
        "redacted",
        "redacted",
        "redacted",
        "redacted"
    ],
    "rh": "redacted",
    "roles": [
        "redacted"
    ],
    "sub": "redacted",
    "tid": "redacted",
    "uti": "redacted",
    "ver": "2.0"
}
Can anyone who has more experience of the platform help me understand what we are doing wrong here? Do we need to define custom scopes? Have we simply forgotten to turn an option on?
All help gratefully received! Thanks in advance...
I tried to reproduce the same in my environment and got the results below:
I have implemented PKCE flow and got JWT access, ID and refresh tokens.
I added optional claims like below:
Go to Azure Portal -> Azure Active Directory -> App Registrations -> Your App -> Token Configuration
Please check the scopes you are using to get the token.
When I gave only openid as the scope, the optional claims did not appear in the token.
But when I gave the scope as openid profile email user.read, I got all the optional claims successfully.
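In other words, the claims only show up when the request itself asks for the matching scopes. A minimal sketch of the scope set that worked for me, written here as an MSAL-style request object (treat the exact shape as an assumption; what matters is the space-separated scope value your PKCE flow sends):
{
    "scopes": ["openid", "profile", "email", "user.read"]
}
In my test, the profile scope is what allowed claims like given_name, family_name and preferred_username to appear, and email did the same for the email claim.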

Change the header Content-Type to application/cloudevents+json when publishing to Event Grid

I'm using the trigger "When a HTTP request is received" to publish multiple events to Event Grid using the "Publish Event" action. The For each loop works fine to split up the incoming JSON and create the event to publish, but the publish still fails with:
{
    "error": {
        "code": "UnsupportedMediaType",
        "message": "The Content-Type header is either missing or it doesn't have a valid value. The content type header must either be application/cloudevents+json; charset=utf-8 or application/cloudevents-batch+json; charset=UTF-8. Report 'edf36bbd-9221-4882-8a29-2264ffb16d72:3:3/6/2020 2:18:20 PM (UTC)' to our forums for assistance or raise a support ticket.",
        "details": [
            {
                "code": "InvalidContentType",
                "message": "The Content-Type header is either missing or it doesn't have a valid value. The content type header must either be application/cloudevents+json; charset=utf-8 or application/cloudevents-batch+json; charset=UTF-8. Report 'edf36bbd-9221-4882-8a29-2264ffb16d72:3:3/6/2020 2:18:20 PM (UTC)' to our forums for assistance or raise a support ticket."
            }
        ]
    }
}
I assume that the header from the input is used when publishing, so I tried to change it by editing the Publish_Event block directly in code view (this is not supported in the UI). The result is the following (the headers part is what I added):
"Publish_Event": {
"inputs": {
"body": [
{
"data": "#items('For_each_2')",
"eventType": "company-location",
"id": "ID : #{items('For_each')['businessId']}",
"subject": "Company Location changed"
}
],
"headers": {
"Content-Type": "application/cloudevents+json; charset=utf-8"
},
"host": {
"connection": {
"name": "#parameters('$connections')['azureeventgridpublish']['connectionId']"
}
},
"method": "post",
"path": "/eventGrid/api/events"
},
"runAfter": {},
"type": "ApiConnection"
}
But this is not working either, and I didn't find an action to make the change.
My full flow looks like this:
As test data I send the following JSON with Postman (a bit simplified):
[
    {
        "id": 3603,
        "businessId": "QQTADOSH",
        "locations": [
            {
                "id": 5316,
                "businessId": "A-yelr3g"
            },
            {
                "id": 5127,
                "businessId": "A-c7i8gd"
            },
            {
                "id": 5403,
                "businessId": "A-fjdd2y"
            },
            {
                "id": 6064,
                "businessId": "A-rqvhz8"
            }
        ]
    },
    {
        "id": 3118,
        "businessId": "Cr11_Macan_111qa",
        "locations": [
            {
                "id": 4563,
                "businessId": "A-3bv860"
            }
        ]
    }
]
Looks like the official Event Grid Publish connector doesn't support the CloudEvents schema.
You can set the topic to accept the Event Grid schema instead, but I believe this is possible only at the time of creation.
It's best to open a feature request on UserVoice to add support for this; in the meantime, a workaround would be to use an HTTP action to POST to the custom topic instead (sketched below).
But do note that the workaround involves building the event payload yourself (and things like fetching the access key from Key Vault instead of storing it in your workflow directly).
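A rough sketch of what that HTTP action could look like in code view, assuming your custom topic was created with the CloudEvents schema. The topic endpoint, the Get_key_from_Key_Vault action name and the source value are placeholders you would replace with your own; the rest is carried over from your Publish_Event block:
"Publish_CloudEvent": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "https://<your-topic>.<region>-1.eventgrid.azure.net/api/events",
        "headers": {
            "Content-Type": "application/cloudevents+json; charset=utf-8",
            "aeg-sas-key": "@{body('Get_key_from_Key_Vault')?['value']}"
        },
        "body": {
            "specversion": "1.0",
            "type": "company-location",
            "source": "/company/locations",
            "id": "@{items('For_each')['businessId']}",
            "subject": "Company Location changed",
            "data": "@items('For_each_2')"
        }
    },
    "runAfter": {}
}
CloudEvents 1.0 requires at least specversion, type, source and id. For a single event the body is one JSON object with Content-Type application/cloudevents+json; if you batch several events into an array, use application/cloudevents-batch+json as the error message states.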

EventGrid Trigger - How to set clienttrackingid from triggerbody?

In a microservice environment where requests span multiple services, including Event Grid, I'd like to configure end-to-end logging with a correlation ID.
Inspired by this blog: https://toonvanhoutte.wordpress.com/2018/08/05/end-to-end-correlation-across-logic-apps/
How can I configure the Event Grid trigger's clientTrackingId with my correlation number from the event's data payload?
Check out my definition below, which does not work.
If I substitute "@{coalesce(json(triggerBody().Data)?.CorrelationNr, guid())}" with a string value, or even "@parameters('$connections')['azureeventgrid']['connectionId']", it works like a charm.
"triggers": {
"When_a_resource_event_occurs": {
"correlation": {
"clientTrackingId": "#{coalesce(json(triggerBody().Data)?.CorrelationNr, guid())}"
},
"inputs": {
"body": {
"properties": {
"destination": {
"endpointType": "webhook",
"properties": {
"endpointUrl": "#{listCallbackUrl()}"
}
},
"filter": {
"includedEventTypes": [
"webhook.sp.updated"
]
},
"topic": "/subscriptions/xxxx/resourceGroups/xxx/providers/Microsoft.EventGrid/topics/WebHookManager"
}
},
"host": {
"connection": {
"name": "#parameters('$connections')['azureeventgrid']['connectionId']"
}
},
"path": "/subscriptions/#{encodeURIComponent('xxx')}/providers/#{encodeURIComponent('Microsoft.EventGrid.Topics')}/resource/eventSubscriptions",
"queries": {
"x-ms-api-version": "2017-06-15-preview"
}
},
"splitOn": "#triggerBody()",
"type": "ApiConnectionWebhook"
}
}
The Logic App does not trigger. No error message.
Please check the description of clientTrackingId. Your Logic App has no run history because, with the definition you show, triggerBody() does not contain CorrelationNr.
Actually, your Event Grid trigger has detected the event; it just couldn't run the logic. You can go to EVALUATION and check the trigger history. Because the value is null, the run does not start.
If you use an HTTP Request trigger, you could set the x-my-custom-correlation-id header, or set any key/value in the JSON body and then set the clientTrackingId with something like @{coalesce(json(triggerBody())['keyname'], guid())}.
And if you are using a trigger without headers, you have to point the value at a string or another parameter, like the connectionId you mentioned, or a custom parameter value.
So the point is that clientTrackingId must be set before the run starts, and its value must be obtainable at trigger time.
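As a concrete illustration, here is a minimal sketch of that approach on an HTTP Request trigger. The trigger name, the empty schema and the correlationNr property are assumptions based on your scenario, not parts of your existing workflow, and the body property is read directly off the parsed request body:
"triggers": {
    "manual": {
        "type": "Request",
        "kind": "Http",
        "correlation": {
            "clientTrackingId": "@{coalesce(triggerBody()?['correlationNr'], guid())}"
        },
        "inputs": {
            "schema": {}
        }
    }
}
The same correlation block works on other trigger types as well, as long as the expression only references values that are actually available at trigger time.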

How can I use google cloud storage inside kubernetes?

I'm using Node.js and want to upload files to a bucket of mine. I've set up the secret:
NAME                         TYPE                                  DATA   AGE
cloudsql-oauth-credentials   Opaque                                1      5d
default-token-dv9kj          kubernetes.io/service-account-token   3      5d
The service account does have access to my Google Cloud Storage API, as I've already set that up and tested it locally (on my own computer). I'm unsure how I can reference the location of the service account JSON file.
Here is my volume mount:
"volumes": [{
"name": "cloudsql-oauth-credentials",
"secret": {
"secretName": "cloudsql-oauth-credentials"
}
}
Here is the code where I'm setting up the google-cloud storage variable:
var gcs = require('@google-cloud/storage')({
    projectId: 'projectID-38838',
    keyFilename: process.env.NODE_ENV == 'production'
        ? JSON.parse(process.env.CREDENTIALS_JSON) // Parsing the JSON doesn't work
        : '/Users/james/auth/projectID-38838.json' // This works locally
});
var bucket = gcs.bucket('bucket-name');
Now if I want to use this inside my Docker container on Kubernetes, I'll have to reference the JSON file location... but I don't know where it is!
I've tried setting the credentials file as an environment variable, but I cannot pass a JS object to the keyFilename option; I have to pass a file location. I set the env variable up like so:
{
    "name": "CREDENTIALS_JSON",
    "valueFrom": {
        "secretKeyRef": {
            "name": "cloudsql-oauth-credentials",
            "key": "credentials.json"
        }
    }
},
How can I reference the location of the service account JSON file inside my Kubernetes pod?
Look at the section Using Secrets as Files from a Pod in the Kubernetes documentation.
Basically, you need to specify two things when mounting a secret volume: the bit that you have, plus some extra info. There might be some redundancy with the key, but this is what I do and it works.
When creating a secret, create it with a key:
kubectl create secret generic cloudsql-oauth-credentials --from-file=creds=path/to/json
Then
"volumes": [{
"name": "cloudsql-oauth-credentials",
"secret": {
"secretName": "cloudsql-oauth-credentials"
"items": [{
"key": "creds",
"path": "cloudsql-oauth-credentials.json"
}]
}
}
But then also specify where it goes in the container definition (in the Pod, Deployment, Replication Controller - whatever you use):
"spec": {
"containers": [{
"name": "mypod",
"image": "myimage",
"volumeMounts": [{
"name": "cloudsql-oauth-credentials",
"mountPath": "/etc/credentials",
"readOnly": true
}]
}],
The file will be mapped to /etc/credentials/cloudsql-oauth-credentials.json.
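Putting the pieces together, a sketch of the full spec might look like the following. The GOOGLE_APPLICATION_CREDENTIALS environment variable is my addition (it is the standard variable the Google Cloud client libraries read); you can either rely on it or point keyFilename at the mounted path directly in your Node code:
"spec": {
    "containers": [{
        "name": "mypod",
        "image": "myimage",
        "env": [{
            "name": "GOOGLE_APPLICATION_CREDENTIALS",
            "value": "/etc/credentials/cloudsql-oauth-credentials.json"
        }],
        "volumeMounts": [{
            "name": "cloudsql-oauth-credentials",
            "mountPath": "/etc/credentials",
            "readOnly": true
        }]
    }],
    "volumes": [{
        "name": "cloudsql-oauth-credentials",
        "secret": {
            "secretName": "cloudsql-oauth-credentials",
            "items": [{
                "key": "creds",
                "path": "cloudsql-oauth-credentials.json"
            }]
        }
    }]
}
With that in place, setting keyFilename to '/etc/credentials/cloudsql-oauth-credentials.json' (or simply omitting keyFilename and letting the library pick up the environment variable) should work inside the pod.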

Is it possible to add a description to the Cloud Endpoint fields in the API Explorer?

I have seen this in the Google APIs. Is it possible for Cloud Endpoints as well?
https://developers.google.com/apis-explorer/#p/adexchangebuyer/v1.2/adexchangebuyer.accounts.get
It's totally possible. We've had some StackOverflow posts about monkey patching and this would be another prime example.
For example:
How do I specify my own icons so they show up in a Google Endpoints API discovery document?
For this case, the content served at /_ah/spi/BackendService.getApiConfigs contains your API config, and the "description" you want here is for a "parameter".
So, for example, in the method
@endpoints.method(MySchema, MySchema,
                  path='myschema/{strField}', name='myschema.echo')
def MySchemaEcho(self, request):
    return request
the field strField is a path "parameter" and so in the API config we would see
{
    ...
    "methods": {
        "myapi.myschema.echo": {
            ...
            "request": {
                ...
                "parameters": {
                    "strField": {
                        "required": true,
                        "type": "string"
                    }
                }
            },
            ...
        }
        ...
    }
}
To get your description in there, you would need to add it to the dictionary listed under strField so that it reads:
"strField": {
"required": true,
"type": "string",
"description": "Most important field that ever was."
}
