I've created a Logic App which has an HTTP POST trigger containing a JSON payload with the name of the topic as a property. I have a subsequent step to create the topic and subscription, but it fails with the message:
Service Bus messaging entity 'Topic1' or namespace '.servicebus.windows.net' not found.
I'm confused: of course the topic wasn't found - I want to create it! 'Topic1', by the way, is the property that's being passed in the HTTP request, and it is being evaluated correctly. So what am I doing wrong?
The action is "Create a topic subscription", not "Create a topic". There doesn't appear to be an action for creating a topic, and creating a subscription to a topic assumes the topic already exists. While I've never used it this way, topic subscriptions can be ephemeral and relate to one particular instance of a business process, to be destroyed when that instance completes; topics themselves are intended to be longer-lived and part of your application architecture, not ephemeral objects. So this seems logical. You might consider calling the Azure management REST API from the Logic App using the HTTP action; otherwise create the topic through the portal or the Azure CLI.
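If you do go the HTTP-action route, the call is a plain ARM PUT against the Service Bus resource provider. Here is a rough sketch of that request using Python's requests just to show the shape; the subscription, resource group, namespace, token and api-version are placeholders to adapt, and from the Logic App you would issue the same PUT from the HTTP action (for example with a managed identity for authentication):

import requests

# Placeholders - fill in with your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
NAMESPACE = "<servicebus-namespace>"
TOPIC = "Topic1"                       # e.g. the value taken from the trigger payload
TOKEN = "<AAD bearer token for https://management.azure.com/>"

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.ServiceBus/namespaces/{NAMESPACE}/topics/{TOPIC}"
    "?api-version=2021-11-01"          # check the ARM docs for the current api-version
)

# PUT is create-or-update, so it is safe even if the topic already exists.
resp = requests.put(url, headers={"Authorization": f"Bearer {TOKEN}"}, json={"properties": {}})
resp.raise_for_status()
print(resp.status_code)                # 200 (updated) or 201 (created)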
Hey, I was just wondering: can you write code in a React application that sends a POST request to the Azure API, specifically to create a backlog? I can see you can perform a GET request but I haven't seen anything about POST.
I have been through their documentation and it covers GET, but I do not see anything about POST.
AFAIK, you cannot create any backlog instances at all. Since there is no support for doing this via the web GUI, I would assume that there is also no REST API call for it.
From a logical (or agile) point of view, having backlog instances makes no sense as well: Either a work item (task / user story /... you name it) is part of a sprint or it is placed in the backlog.
This is also the way DevOps handles your work items: If you create a new work item it is placed in the backlog (single instance) by default. When adding it to a sprint, it will be moved out of the backlog.
TL;DR: I guess it is intentionally not possible.
After checking in Azure DevOps: if you want to create a backlog page in Boards for work items, you cannot create a single backlog page separately.
You need to create a new team, and the backlog for that team will be generated automatically.
You could refer to this doc for the team creation REST API.
My API call is below.
POST https://dev.azure.com/<orgname>/_apis/projects/<projectID>/teams?api-version=5.1-preview.3
Request body:
{
    "description": "",
    "projectId": "<projectID>",
    "name": "<teamname>",
    "identity": {
        "customDisplayName": "<teamname>"
    }
}
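If it helps, here is the same call made outside the browser as a quick sanity check, sketched with Python requests; the org name, project ID, team name and PAT are placeholders, and from a React app you would send the same POST with fetch/axios and an Authorization header built from your PAT or OAuth token:

import requests

ORG = "<orgname>"
PROJECT_ID = "<projectID>"
PAT = "<personal access token>"

url = f"https://dev.azure.com/{ORG}/_apis/projects/{PROJECT_ID}/teams?api-version=5.1-preview.3"
body = {
    "description": "",
    "projectId": PROJECT_ID,
    "name": "<teamname>",
    "identity": {"customDisplayName": "<teamname>"},
}

# Azure DevOps accepts Basic auth with an empty username and the PAT as the password.
resp = requests.post(url, auth=("", PAT), json=body)
resp.raise_for_status()
print(resp.json())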
I want to implement a connected OAuth app in Salesforce which should trigger push events when certain entities change, for example when an opportunity is closed.
Zapier implemented something similar
https://zapier.com/apps/salesforce/integrations/webhook
I could not find what I need, which is a simple way to subscribe to entity changes using the OAuth client's token and passing a webhook endpoint. I have read about Apex callouts, the Streaming API and outbound messages.
Yeah, we solved this exact problem at Fusebit and I can help you understand the process as well.
Typically speaking here's what you need to do:
Create triggers on the Salesforce Objects you want to get updates for
Upload Apex class that will send an outgoing message to a pre-determined URL
Enable Remote Site Setting for the Domain you want to send the message to
Add in Secret Verification (or other auth method) to prevent spamming of your external URL
If you're leveraging JavaScript, then you can use the jsforce SDK and the Salesforce Tooling API to push the code into the Salesforce instance AFTER the auth flow has occurred AND on Salesforce instances that have API access enabled (typically Enterprise Edition and above, or Professional with API enabled).
This will be helpful for you to look through: https://jamesward.com/2014/06/30/create-webhooks-on-salesforce-com/
FYI - Zapier's webhooks implementation is actually polling every 15 minutes, instead of real-time incoming events.
In which programming language?
For consuming outbound messages you just need to be able to accept an XML message and send back an "Ack" message to acknowledge receipt; otherwise SF will keep trying to resend it for 24 hours.
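To make that concrete, here is a minimal sketch of an endpoint that accepts an outbound message and acknowledges it, written with Python and Flask (any web framework works); the acknowledgement envelope follows the outbound messaging WSDL, but double-check the namespace against the WSDL Salesforce generates for your workflow rule:

from flask import Flask, Response, request

app = Flask(__name__)

# SOAP acknowledgement expected by Salesforce outbound messaging.
ACK = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
      <Ack>true</Ack>
    </notificationsResponse>
  </soapenv:Body>
</soapenv:Envelope>"""

@app.route("/salesforce/outbound", methods=["POST"])
def outbound():
    xml_payload = request.data   # the notification envelope containing the changed records
    # ...parse xml_payload and hand it off to your own processing here...
    return Response(ACK, mimetype="text/xml")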
For consuming platform events / Streaming API / Change Data Capture (CDC) you'll need to raise the event in SF (a Platform Event you can raise from code, a flow or Process Builder; CDC happens automatically, you just tell it which objects it should track).
And then in the client app you'd need to log in to SF (SOAP or REST API) and subscribe to the channel (any library that supports CometD should be fine). Have you seen "EMP Connector", mentioned for example in https://trailhead.salesforce.com/en/content/learn/modules/change-data-capture/subscribe-to-events?trail_id=architect-solutions-with-the-right-api ?
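If you don't want to pull in EMP Connector, the raw CometD (Bayeux) exchange is small enough to sketch. The following is a very rough Python sketch subscribing to the standard CDC channel over long polling; the instance URL, API version, token and channel are placeholders, and a production client should use a proper CometD library with replay support:

import requests

INSTANCE = "https://yourInstance.my.salesforce.com"   # placeholder
TOKEN = "<access token from your SOAP or REST login>"
COMETD = f"{INSTANCE}/cometd/54.0"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

s = requests.Session()

# 1. Handshake
handshake = s.post(COMETD, headers=HEADERS, json=[{
    "channel": "/meta/handshake", "version": "1.0",
    "supportedConnectionTypes": ["long-polling"]}]).json()[0]
client_id = handshake["clientId"]

# 2. Subscribe to all Change Data Capture events
s.post(COMETD, headers=HEADERS, json=[{
    "channel": "/meta/subscribe", "clientId": client_id,
    "subscription": "/data/ChangeEvents"}])

# 3. Long-poll for events
while True:
    messages = s.post(COMETD, headers=HEADERS, json=[{
        "channel": "/meta/connect", "clientId": client_id,
        "connectionType": "long-polling"}]).json()
    for message in messages:
        if message.get("channel") == "/data/ChangeEvents":
            print(message["data"])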
Picking the right messaging approach is an art; there's a free course that can help: https://trailhead.salesforce.com/en/content/learn/trails/architect-solutions-with-the-right-api
And a pretty awesome PDF if you want to study for the certification: https://resources.docs.salesforce.com/sfdc/pdf/integration_patterns_and_practices.pdf
I am trying to create an event using the Microsoft Graph SDK, following this document:
https://learn.microsoft.com/en-us/graph/api/user-post-events?view=graph-rest-beta&tabs=csharp
1. Created "authProvider"
2. Created GraphClient with the above AuthProvider
3. Creating Event using
The event is not being created, and no exception or error is thrown. Could anyone help me here?
This is happening because this call is being made frequently with the same transactionId, which the server uses to avoid unnecessary retries.
It is an optional parameter, so just comment out this property and try again. It should work.
Note: this identifier is specified by a client app so that the server can avoid redundant POST operations when the client retries creating the same event. It is also useful when low network connectivity causes the client to time out before receiving a response to its prior create-event request.
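For reference, here is the same create-event call stripped down to a raw REST request, sketched with Python requests (the token and dates are placeholders); the payload simply omits transactionId, so the server will not treat the call as a retry of an earlier request:

import requests

ACCESS_TOKEN = "<token obtained from your auth provider>"   # placeholder

event = {
    "subject": "Test event",
    "start": {"dateTime": "2024-01-15T10:00:00", "timeZone": "UTC"},
    "end": {"dateTime": "2024-01-15T11:00:00", "timeZone": "UTC"},
    # "transactionId": "..."  # omit it, or generate a fresh value per logical event
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/me/events",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=event,
)
resp.raise_for_status()
print(resp.json()["id"])   # id of the newly created event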
More info is required here, as the reply from Allen Wu stated. Without any details I would focus my efforts on the auth provider piece and the Azure app registration piece, as the rest of the example is just sending a POST request to the Graph API.
But what is recommended really depends on what type of application you are trying to build, e.g. a service daemon, a web app, a mobile app, a desktop app, a single-page app, etc.
I wish to implement sessions in webapp2. From research, I have found this code sample using webapp2_extras.sessions, and a few articles which mention deprecated or unmaintained session libraries.
I currently lack the knowledge of how sessions work conceptually. This is what I understand so far:
We can include a dispatch() method in a request handler which allows us to create/update a session object; it is during the login phase of the app that the session is created. (Question: how is the session stored? In the app's memory or in the datastore?)
When a user makes a request to the app, the dispatch() method checks whether a session already exists for the user. (Question: how exactly does this validation work? Is there a token inside the request body or a cookie that sessions look for?)
When a user logs out, the session is deleted.
Is my understanding correct? Or perhaps I am missing something important? There seems to be little guidance on this subject on the internet. Thank you for the assistance.
Technically the dispatch() method is not added, it just overrides the one that webapp2.RequestHandler already provides, extending it to add session support. If you take a closer look at that method you see that it still calls the original one to do the actual dispatching:
# Dispatch the request.
webapp2.RequestHandler.dispatch(self)
Which could be re-written, if you want, as:
super(BaseHandler, self).dispatch()
All that the extended dispatch() does is pick up the session info from the store, make it available to the handler code before dispatching the request (which, by the way, includes the request processing), and save it back afterwards, when the request processing completes (and changes to the session info may have been made). For every request! It is simply a way to persist info across requests.
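Put together, the usual pattern from the webapp2_extras.sessions docs looks roughly like this; a sketch in which the secret_key and routes are placeholders:

import webapp2
from webapp2_extras import sessions

class BaseHandler(webapp2.RequestHandler):
    def dispatch(self):
        # Get a session store for this request before doing anything else.
        self.session_store = sessions.get_store(request=self.request)
        try:
            # Dispatch the request (the original webapp2 behaviour).
            webapp2.RequestHandler.dispatch(self)
        finally:
            # Save all sessions after the handler has run.
            self.session_store.save_sessions(self.response)

    @webapp2.cached_property
    def session(self):
        # Returns a session using the default backend (secure cookies);
        # pass backend='memcache' or backend='datastore' for the other two.
        return self.session_store.get_session()

config = {
    'webapp2_extras.sessions': {
        'secret_key': 'replace-with-a-long-random-secret',   # placeholder
    },
}

app = webapp2.WSGIApplication([
    # ('/login', LoginHandler), ...   # your routes here
], config=config)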
The session support is simply that - support - your app is still the one responsible for controlling what info is stored in webapp2's session dictionary, when that info is added/modified/deleted and how that info is used.
In other words, webapp2 itself has no concept of login/logout/user sessions, etc. (so no, nothing that you mention in #1, #2 and #3 happens in webapp2 itself). It is your app's responsibility to:
set/delete inside the session dictionary the info that represents your "user session" (whatever that means for your app) - typically in the user login/logout request handlers, respectively (a small sketch follows this list)
use that info as it sees fit while handling incoming requests between the login and the logout one - when the info from the session dictionary represents the "current user session".
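To make that concrete, here is a minimal sketch of what the login/logout handlers might look like on top of the BaseHandler above; authenticate() is a hypothetical helper standing in for however your app verifies credentials:

class LoginHandler(BaseHandler):
    def post(self):
        # authenticate() is hypothetical - use whatever credential check your app has.
        user = authenticate(self.request.get('email'), self.request.get('password'))
        if user:
            self.session['user_id'] = user.key.id()   # this *is* your "user session"
            return self.redirect('/')
        self.abort(401)

class LogoutHandler(BaseHandler):
    def post(self):
        self.session.pop('user_id', None)             # drop the session info on logout
        self.redirect('/')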
For storing the session info webapp2 supports cookies (default), memcache and datastore (ndb). From Sessions:
It has three built-in backends: secure cookies, memcache and datastore. New backends can be added extending CustomBackendSessionFactory.
The session store can provide multiple sessions using different keys, even using different backends in the same request, through the method SessionStore.get_session(). By default it returns a session using the default key from configuration.
In the OSB layer, when the endpoint URI is changed, I need to alert the core group that the endpoint has changed so they can review it. I tried SLA alert rules but they do not have an option for this. My question is: the endpoint URI should be saved somewhere in the underlying database. If so, what are the schema and table name to query it?
The URI, or in fact any other part of an OSB artifact, is not stored in a relational database but rather kept in memory in its original XML structure. It can only be accessed through the dedicated session management API. The interfaces you will need are part of the com.bea.wli.sb.management.configuration and com.bea.wli.sb.management.query packages. Unfortunately it is not as straightforward as it sounds; in short, to extract the URI information you will need to:
Create a session instance (SessionManagementMBean)
Obtain an ALSBConfigurationMBean instance that operates on the SessionManagementMBean
Create a query object instance (BusinessServiceQuery) and run it on the ALSBConfigurationMBean to get a Ref object to the OSB artifact of your interest
Invoke getServiceDefinition on your Ref object to get the XML service definition
Extract the URI from the XML service definition with XPath (see the sketch after this list)
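The last step is plain XML processing; here is a rough Python sketch of just that extraction, with the caveat that the exact element name and namespace holding the URI depend on the transport, so inspect a real service definition from your domain first (matching on a local name of "URI" is an assumption for illustration):

import xml.etree.ElementTree as ET

def extract_uris(service_definition_xml):
    # Return the text of every element whose local name is 'URI', ignoring namespaces.
    root = ET.fromstring(service_definition_xml)
    return [el.text for el in root.iter() if el.tag.rpartition('}')[2] == 'URI']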
The downside of this approach is that you are basically polling the configuration each time you want to check whether anything has changed.
More information, including Java/WLST examples, can be found in the Oracle Fusion Middleware Java API Reference for Oracle Service Bus.
There is also a good blog post describing OSB customization with WLST: ALSB/OSB customization using WLST.
The information about services and all their properties can be obtained via the Java API. The API documentation contains sample code, so you can get it up and running quite quickly; see the "Querying resources" paragraph when following the given link.
We use the API to read the service (both proxy and business) configuration and for simple management.
As long as you only read the properties, you do not need to handle management sessions. Once you change values, you need to start a session and activate it once you are done -- a very similar approach to the Service Bus console.