I have a service that is being monitored during normal business hours but, occasionally, the service will go Critical at the last minute. This Critical status will carry on through non-business hours.
Is there a setting to change the value outside of this timeperiod? Or does anyone have scripts that I can throw into cron to mark certain services as OK when outside of their timeperiod?
Yes, it is possible.
In timeperiods.cfg, define your custom timeperiod.
Then use those defined timeperiods in your service template (in check_period and notification_period).
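For example, a minimal sketch of what that could look like (the timeperiod name, hours, and service values here are only placeholders):

define timeperiod {
    timeperiod_name  business-hours
    alias            Normal Business Hours
    monday           09:00-17:00
    tuesday          09:00-17:00
    wednesday        09:00-17:00
    thursday         09:00-17:00
    friday           09:00-17:00
}

define service {
    use                   generic-service
    host_name             myhost
    service_description   My Service
    check_command         check_my_service
    check_period          business-hours
    notification_period   business-hours
}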
Currently we are having an issue with the CPU limit. We have a lot of processes that are most likely not optimized; I have already combined some processes for the same object, but it is not enough. I am trying to understand the logs right now - as you can see in the screenshots, there is one process that is being called multiple times (I assume once for each created record). Even if I create, for example, 60 records in one operation/DML statement, the Process Builder still gets called 60 times? (This is what I think is happening.) Is that the problem we are having right now? If so, is there a better way to do it? Right now we need the updates from the Process Builder to run, but I expected it would get bulkified or something like that. I was also thinking there might be some looping between processes. If there is more information you need, please let me know. Thank you.
Well, yes, the process builder will be invoked 60 times, 1 record at a time. But that shouldn't be your problem. The final update / create child records / email send (or whatever your action is) will be bulkified; it won't save 1 record at a time. If the process calls some Apex actions, they're supposed to support passing a collection of records, not just a single record.
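If you do call Apex from the process, a rough sketch of a collection-friendly invocable action looks like this (the class, object, and Total__c field here are made-up examples, not your actual logic):

public class RecalculateTotals {
    // Flow / Process Builder passes all affected record Ids from the transaction in one call
    @InvocableMethod(label='Recalculate Totals')
    public static void recalculate(List<Id> accountIds) {
        List<Account> accounts = [SELECT Id, Total__c FROM Account WHERE Id IN :accountIds];
        for (Account a : accounts) {
            a.Total__c = 0; // placeholder calculation
        }
        update accounts; // one bulk DML for the whole batch, not 60 separate saves
    }
}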
You may be looking at the wrong place. CPU time suggests code problems, not config (flow, workflow, process builder... although if you're doing updates of fields on "this" record, it's possible you'd benefit from before-save flows). Try to compare timestamps related to METHOD_BEGIN, METHOD_END for triggers and code methods (including invocable action / process plugin interfaces).
Maybe there's code that doesn't need to run because key fields didn't change and there's nothing to recalculate or roll up. Hard to say without seeing the debug log.
Maybe the operation doesn't have to be immediate. Think about whether you can offload some work to "scheduled actions", "time-based workflows", or in Apex terms "@future, batchable, queueable". But they'd have to be relatively safe to run: if there's an error, it won't display to the user because the action runs in the background, so you'd need to handle the errors yourself (send an email, create a record, make a chatter post or bell notification).
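In Apex terms, a queueable sketch could look like this (RecalcJob and the recalculation itself are placeholders, not your actual logic):

public class RecalcJob implements Queueable {
    private List<Id> recordIds;
    public RecalcJob(List<Id> recordIds) { this.recordIds = recordIds; }
    public void execute(QueueableContext ctx) {
        // heavy recalculation runs here, in the background, outside the user's transaction;
        // remember to report errors yourself since the user won't see them
    }
}
// enqueued from a trigger or invocable action:
// System.enqueueJob(new RecalcJob(ids));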
You could try uploading the log to https://apextimeline.herokuapp.com/ and try to make sense of that Gantt-chart-like output. Or capture the log the "pro" way, with https://help.salesforce.com/s/articleView?id=sf.code_dev_console_solving_problems_using_system_log.htm&type=5 or https://marketplace.visualstudio.com/items?itemName=financialforce.lana (you'll likely need a developer's help to make sense of it).
I have a logic app which is triggered when a blob is added or modified. It checks every few minutes. Given that logic apps are charged for each run of the Trigger (I think), how can I stop the Trigger running at weekends? I can't see anything on here
You can create an Azure timer trigger function with a cron expression to schedule the function to run every Friday evening, and call this API in the timer trigger function to disable your logic app (a rough sketch of such a function is shown below the cron expressions).
For example, the cron expression could be:
59 59 23 * * Fri
Then create another timer trigger function with a cron expression to schedule the function to run every Monday morning, and call this API in the timer trigger function to enable your logic app.
For example, the cron expression could be:
0 0 0 * * Mon
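A rough sketch of what the Friday "disable" function could look like in Java (the subscription, resource group, workflow name, and token handling are assumptions on my part; verify the current api-version of the ARM disable operation):

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DisableLogicApp {
    @FunctionName("disableLogicApp")
    public void run(
            @TimerTrigger(name = "timer", schedule = "59 59 23 * * Fri") String timerInfo,
            ExecutionContext context) throws Exception {
        // ARM endpoint that disables a Logic App workflow; the ids below are placeholders
        String url = "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>"
                + "/providers/Microsoft.Logic/workflows/<workflowName>/disable?api-version=2016-06-01";
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + System.getenv("ARM_ACCESS_TOKEN")) // token acquisition not shown
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        context.getLogger().info("Disable call returned " + response.statusCode());
    }
}

The Monday "enable" function would have the same shape, with the Monday schedule and the corresponding enable operation.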
Another solution:
You can add a condition after the blob trigger (before the actions the logic app will run), as shown below:
The expression of "dayOfWeek()" is:
dayOfWeek(utcNow())
In the response of the dayOfWeek() method, Sunday --> 0 and Monday --> 1.
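So the weekday check in the condition could be an expression along these lines (assuming Sunday = 0 and Saturday = 6, evaluated in UTC):

@and(greaterOrEquals(dayOfWeek(utcNow()), 1), lessOrEquals(dayOfWeek(utcNow()), 5))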
So with the condition above, most of the actions will only run Monday to Friday. On Saturday and Sunday, you will just pay for the trigger but not for most of the actions in your logic app. But you need to pay attention to the time zone if you use this solution. You can find more information about the pricing of logic apps in this link.
By the way, I think the second solution may suit you better, because in the first solution we can't call the API easily in an Azure function; we have to get the access token (in implicit flow) before requesting the API.
You can also use PowerShell to disable your Logic App. One option is to run it from Azure Functions with a managed identity, which you can grant permissions on the required Logic Apps.
Set-AzLogicApp -ResourceGroupName "MyResourceGroup" -Name "MyLogicApp" -State Disabled -Force
And to enable, just switch the State option to "Enabled"
I have an always-on application listening to a Kafka stream and processing events. Events are part of a session, and I need to do calculations based on a session's data. I am running into a problem trying to correctly run my calculations due to the length of my sessions. 90% of my sessions are done after 5 minutes; 99% are done after 1 hour. Sessions may last more than a day, and because this is a real-time system, there is no determined end. Sessions are unique and should never collide.
I am looking for a way where I can process a window multiple times, either with an initial wait period and processing any later events after that, or a pure process-per-event structure. I will need to keep all previous events around (ListState), as well as previously processed values (ValueState).
I previously thought allowedLateness would allow me to do this, but it seems the lateness is only considered for when the event should have been processed; it does not extend an actual window. GlobalWindows may also work, but I am unsure if there is a way to process a window multiple times. I believe I can use an evictor with GlobalWindows to purge the windows after a period of inactivity (although admittedly, I did not research this yet, because I was unsure of how to trigger a GlobalWindow multiple times).
Any suggestions on how to achieve what I am looking to do would be greatly appreciated, I would also be happy to clarify any points needed.
If SessionWindows won't do the job, then you can use GlobalWindows with a custom Trigger and Evictor. The Trigger interface has onElement and timer-based callbacks that can fire whenever and as often as you like. If you go down this route, then yes, you'll also need to implement an Evictor to dispose of elements when they are no longer needed.
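As a rough sketch of the shape this can take (SessionEvent, getSessionId, SessionEvictor, and SessionCalculation are placeholders, and the inactivity handling is deliberately simplified):

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;

// Fires the window on every inactivity timer, but returns FIRE (not FIRE_AND_PURGE)
// so the accumulated elements stay in state and the window can be evaluated again
// when more events for the session arrive later.
public class InactivityTrigger extends Trigger<SessionEvent, GlobalWindow> {
    private final long inactivityGapMs;

    public InactivityTrigger(long inactivityGapMs) {
        this.inactivityGapMs = inactivityGapMs;
    }

    @Override
    public TriggerResult onElement(SessionEvent element, long timestamp,
                                   GlobalWindow window, TriggerContext ctx) {
        // re-arm an inactivity timer; stale timers just cause extra (harmless) evaluations
        ctx.registerProcessingTimeTimer(ctx.getCurrentProcessingTime() + inactivityGapMs);
        return TriggerResult.CONTINUE; // or FIRE if you want to re-evaluate per event
    }

    @Override
    public TriggerResult onProcessingTime(long time, GlobalWindow window, TriggerContext ctx) {
        return TriggerResult.FIRE;
    }

    @Override
    public TriggerResult onEventTime(long time, GlobalWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(GlobalWindow window, TriggerContext ctx) {
        // timer and custom state cleanup would go here
    }
}

Wired up, it would look something like:

events.keyBy(SessionEvent::getSessionId)
      .window(GlobalWindows.create())
      .trigger(new InactivityTrigger(5 * 60 * 1000L))
      .evictor(new SessionEvictor())   // custom Evictor that drops elements once the session is closed
      .process(new SessionCalculation());

Returning FIRE instead of FIRE_AND_PURGE is what lets the same window be evaluated repeatedly as more events for the session arrive.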
The documentation and the source code are helpful when trying to understand how this all fits together.
I have a scheduler that puts some value (N or Y) into a topic every 10 mins (usually 'N', unless something abnormal happens with the topic). When the topic goes down, the scheduler populates a property (a kind of inter-scheduler communication) so that it can be used during the scheduler's next cycle, as a way of telling the scheduler that something bad happened during the last cycle, so that it'll place a different value ('Y') in the topic in this cycle. But the problem here is that a normal exchange property isn't helping. The property is always null during every scheduler cycle.
When I went through http://camel.apache.org/schema/blueprint/camel-blueprint.xsd, looking for something similar to global properties, I found this one, "tns:properties",
which can be set at context level.
Can this be used as a global property?
Is there a way to read/write it in my scheduler route?
I'm also thinking about having a bean with an instance variable to hold this inter-scheduler-communication property.
Can anyone suggest the right option?
What you've described sounds to me like a means for maintaining state between processes, and using the properties for that will be problematic for a number of reasons.
I suggest breaking the app into a couple different pieces, and use a shared OSGi service to maintain the state.
public interface MyScheduleState {
    void setSomeValue(String x);
    String getSomeValue();
}
Route 1: Timer starts the task, checks the service for values, and sends the event. If an error occurs, it sends an error message to some queue://MY.ERRORS.
Route 2: Listens for errors on MY.ERRORS and updates the OSGi service with new values.
This gives you control over the behavior, and you can change how the "stateful service" stores its data (in memory, on disk as a file, or in a cache) and your routes will never know the specifics.
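A rough sketch of the two routes in Camel's Java DSL (the endpoint URIs and the myScheduleState bean name are placeholders; a bean implementing MyScheduleState is assumed to be registered in the blueprint/OSGi registry):

import org.apache.camel.builder.RouteBuilder;

public class SchedulerRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Route 1: every 10 minutes, read the shared state and publish it to the topic;
        // on failure, hand an error message to the error queue instead
        from("timer:statusScheduler?period=600000")
            .bean("myScheduleState", "getSomeValue")   // 'N' normally, 'Y' after a bad cycle
            .doTry()
                .to("jms:topic:status")
            .doCatch(Exception.class)
                .to("jms:queue:MY.ERRORS")
            .end();

        // Route 2: an error message means the next cycle should send 'Y'
        from("jms:queue:MY.ERRORS")
            .bean("myScheduleState", "setSomeValue('Y')");
    }
}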
Take a look at http://camel.apache.org/properties.html
It seems to be exactly what you are looking for - context properties. You can set a property value on each cycle and it will be available in the next cycle too.
I would like to block a specific path (e.g. https://myapp.appspot.com/foo/bar) from being accessed on the server, such that the caller gets a 404 or something to that extent. Please note that I have regex-based handlers installed (e.g. /foo/.* will trigger Handler), so by default /app/foo/bar is being directed to this Handler. I would like to add a specific handler for '/foo/bar' at a higher level, before the lower, more general handler.
One way to do this is to add a url handler and direct it to a not_found app handler, such as:
- url: /foo/bar.*
script: not_found.app
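The not_found module itself could be as small as this (a sketch assuming the Python 2.7 standard runtime with webapp2; the names only need to match the script: reference above):

import webapp2

class NotFoundHandler(webapp2.RequestHandler):
    def get(self):
        # answer everything routed here with a 404
        self.error(404)

app = webapp2.WSGIApplication([('/.*', NotFoundHandler)])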
If there is a better way to do this, please do share; it will be highly appreciated.
Essentially, I have a rogue client who is using a bot to hit my server continuously and is consuming undesired resources. The specific URL being called by this bot is one that I could completely disable. Any tips on how one could take such URLs and direct them to a lower-priority instance would also be very helpful.
Btw, I have already added a range of IPs being used by this bot to dos.yaml, but that has not helped since it keeps changing its IP address.
I am sure this is a pretty typical scenario which webmasters have expert advice on (any help/recommendation is highly welcome - pardon my pedestrian question).
You can force-route requests to any module of your choosing with dispatch.yaml:
dispatch:
- url: "*/foo.bar"
module: cheapmodule
and then in cheapmodule.yaml you make sure you have at most a single instance of the cheapest kind, say basic scaling with instance_class B1 and max_instances 1. (I'm not sure what happens if cheapmodule is specified to have zero instances, e.g. manual scaling with instances 0, or instances 1 to start but then in its _ah/start handler calling google.appengine.api.modules.modules.set_num_instances_async(instances, module='cheapmodule') -- perhaps worth experimenting with.)
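A rough sketch of what cheapmodule.yaml might look like under those assumptions (module name, runtime, and handler script are placeholders):

module: cheapmodule
runtime: python27
api_version: 1
threadsafe: true
instance_class: B1
basic_scaling:
  max_instances: 1
handlers:
- url: /.*
  script: not_found.app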