Durable Function "monitor" in portal and orchestration replay events - Azure portal

I've set logReplayEvents to false just to be sure, even though that's the default, but I'm still seeing multiple entries for my orchestration function for a single invocation in the Monitor section of the Azure portal.
Any idea how, if it's possible at all, I can change this so it shows just one entry?

I hope you have set LogReplayEvents to false under the 'durableTask' key (i.e., disabled emitting verbose orchestration replay events) in the host.json file.
And yeah, by default all non-replay tracking events are emitted.
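For reference, on a Functions v2/v3 host this setting lives under the extensions section of host.json (on a v1 host, durableTask sits at the root); a minimal sketch would look like this:

{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "logReplayEvents": false
    }
  }
}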
Perhaps I am missing something, but looking at the screenshot you provided, I cannot really tell whether those logs are related to replay events. If you have checked each event and confirmed they are replay events, that's fine; otherwise, go to your Application Insights and run the query below to see whether any replay events are present.
traces
| extend isReplay = tobool(tolower(customDimensions["prop__isReplay"]))
| where isReplay == true
Hope this helps, at least to some extent! Cheers! :)

Related

GMB Pub/Sub trigger notifications delayed or not triggered

I created a topic.
Attached a push subscription with my server as the target.
Added the email mybusiness-api-pubsub@system.gserviceaccount.com as an administrator.
Updated the notification settings with a valid topic name and the following notificationTypes:
GOOGLE_UPDATE
NEW_REVIEW
UPDATED_REVIEW
NEW_QUESTION
UPDATED_QUESTION
NEW_ANSWER
UPDATED_ANSWER
I did a test by manually publishing a message and received the webhook successfully.
But when I edit a review reply, I receive only some of the events, with delays of up to 30 minutes; some notifications never arrive.
If I post a question or an answer, I don't receive any events at all.
Questions:
Why do events not arrive for questions?
Why are events delayed?
How can I fix the previous two points?
Concerning NEW_QUESTION, UPDATED_QUESTION, NEW_ANSWER and UPDATED_ANSWER, this is a bug (for more details, see Google My Business push notification is not sent for Question and Answers).
Concerning review updates, we are also seeing such delays. My guess is that this is due to Google's publishing/QA pipeline, where changes are first checked against the guidelines before they are published globally. You yourself may see your changes right after editing, but that does not mean the whole world will.
There's nothing you can do about either case, unfortunately.

Is there a way to tell if a release is fired by a schedule in octopus

We have a project in Octopus that has been configured to release to an environment on a schedule.
In the process definition we use a step template for Slack to send the team a notification when a release takes place. We would like to avoid sending this Slack message if the release was fired by the schedule rather than user-initiated.
I was hoping there would be a system variable that we could check before running the Slack step, but I can't seem to find anything documented as such, and Google didn't turn anything up.
TIA
If you are using Octopus 2019.5.0 or later, there are two variables that will be populated if the deployment was created by a trigger.
Octopus.Deployment.Trigger.Id
Octopus.Deployment.Trigger.Name
You can see the details at https://github.com/OctopusDeploy/Issues/issues/5462
For your Slack step, you can use this run condition to skip it if the trigger ID is populated.
#{unless Octopus.Deployment.Trigger.Id}True#{/unless}
I hope that helps!

Rollback a set of actions in Azure Logic Apps

I have a workflow like this as an Azure Logic App:
Read from Azure Table -> Process it in a Function -> Send Data to SQL Server -> Send an email
Currently we can check whether the previous action ended with an error and, based on that, not execute any further steps.
Is it possible in Logic Apps to perform a rollback of actions when one of the steps goes wrong? That is, can we undo all the steps back to the beginning when something goes wrong in step 3, for example?
Thanks in advance.
Regards.
Currently there is no support for rollback in Logic Apps (as they are not transactional).
Note that Logic Apps provide out-of-the-box resiliency against intermittent errors (retry strategies), which should minimize execution failures.
You can add custom handling of errors (e.g. following your example, if something goes wrong in step 3, you can explicitly handle the failure and add rollback steps). Take a look at https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-exception-handling for details.
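To make that concrete, here is a rough sketch of how a compensation scope can be wired up in the workflow definition's code view (the action names are hypothetical and the inner actions are omitted); the rollback scope's runAfter setting makes it run only when the preceding scope fails or times out:

"actions": {
    "Send_Data_To_SQL": {
        "type": "Scope",
        "actions": { },
        "runAfter": { }
    },
    "Rollback_SQL_Changes": {
        "type": "Scope",
        "actions": { },
        "runAfter": {
            "Send_Data_To_SQL": [ "Failed", "TimedOut" ]
        }
    }
}

In the designer this corresponds to opening "Configure run after" on the rollback action and ticking "has failed" / "has timed out".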
Depending on whether the steps in your logic app are idempotent, you can also make use of the resubmit feature. It allows you to re-trigger the run with the same trigger contents with which the original run instance was invoked. Take a look at https://www.codit.eu/blog/2017/02/logic-apps-resubmit-considerations/ for a good overview of this feature.

Salesforce Schedulable not working

I have several HTTP callouts that are in a Schedulable and set to run every hour or so. After I deployed the app on the AppExchange and had a Salesforce user download it to test, it seems the jobs are not executing.
I can see the jobs are being scheduled to run accordingly; however, the database never seems to change. Is there any reason this could be happening, or is there a good chance the flaw lies in my code?
I was thinking that it could be permissions, however I am not sure (it's the first app I am deploying).
Check if the organisation of your end user has added your endpoint to "remote site settings" in the setup. By endpoint I mean an address that's being called (or just the domain).
If the class is scheduled properly (which I believe would be a manual action, not just something that magically happens after installation... unless you've used a post-install script?) you could also examine Setup -> Apex Jobs and check if there are any errors. If I'm right, there will be an error about callout not allowed due to remote site settings. If not - there's still a chance you'll see something that will make you think. For example batch job has executed successfully but there were 0 iterations -> problem?
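If it turns out the class was never actually scheduled in the subscriber's org, it can be scheduled by hand from "Execute Anonymous" - a rough sketch, assuming the class is called MySchedulableClass and an hourly cadence is wanted:

// Schedule the job at the top of every hour (cron fields: seconds minutes hours day month weekday)
System.schedule('Hourly callout job', '0 0 * * * ?', new MySchedulableClass());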
Last but not least - you can always try the debug logs :) Enable them in Setup (or open the developer console), fire the scheduled class's execute() manually and observe the results. How to fire it manually? Something like this pasted into "Execute Anonymous":
MySchedulableClass sched = new MySchedulableClass();
sched.execute(null);
Or - since you know what's inside the scheduled class - simply experiment.
Please note that if the updates you might be performing somehow violate, for example, validation rules your client has - yes, the database will be unchanged. But in such a case you should still be able to see failures in Setup -> Apex Jobs.

dotnetnuke event logging - Synchronous? Potential speed issues?

I'd like to leverage the DotNetNuke logging system (both Event Logging and Exception Logging).
However, I'm concerned about potential performance issues if we begin logging too many Events.
Does the logging system write events to the database asynchronously? If not, is there an efficient way to make this happen? I'd hate to make an end-user wait on several database writes just to view a page.
Thanks
Short answer: It depends on your DNN configuration. By default, logging is synchronous.
Detailed answer
Event Logging uses LoggingProvider set up in the web.config.
DotNetNuke ships with DBLoggingProvider and XMLLoggingProvider.
By default, DotNetNuke uses DBLoggingProvider that writes into EventLog table.
How this is done depends on the Host settings and the site's Event Viewer settings. If "Enable Event Log Buffer" is checked in the Host settings, logging should be asynchronous.
Should be, since asynchronous logging uses the Scheduler, and if the scheduler is not enabled or is stopped, logging will be immediate.
Immediate logging can also be enforced with the LogInfo.BypassBuffering property.
Event log settings determine what is going to be logged on a per-Log-Type basis. If you are using Event Logging in your modules, you have to pass the log type to the EventLogController.AddLog method. I usually use EventLogType.HOST_ALERT, since these entries are easily recognizable in the Event Log view and are logged by default (unlike ADMIN_ALERT).
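As an illustration only (a sketch - the module name and message are made up, and it assumes using DotNetNuke.Services.Log.EventLog), logging a custom entry with an explicit log type could look roughly like this:

// Build a log entry with an explicit log type and write it through the configured provider
var log = new LogInfo { LogTypeKey = EventLogController.EventLogType.HOST_ALERT.ToString() };
log.AddProperty("MyModule", "Nightly import finished");
// log.BypassBuffering = true;   // would force an immediate (synchronous) write
new EventLogController().AddLog(log);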
For more details, check the AddLog implementation in the DNN source code:
DBLoggingProvider.AddLog(ByVal LogInfo As LogInfo)
See Also:
DotNetNuke Host Settings Explained
