I currently run transaction log backups every 5 minutes and my error log is really busy with backup messages. I was looking to suppress these by using trace flag 3226, but I am still getting the messages when backing up to URL.
Is there any way to suppress these too?
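For context, this is roughly how the flag is being enabled (a sketch via pyodbc; the connection string is a placeholder, and the flag only lasts until the next restart unless it is also set as a -T3226 startup parameter):

    import pyodbc

    # Placeholder connection string; adjust the server and auth for your instance.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes;",
        autocommit=True,  # run DBCC TRACEON outside an explicit transaction
    )

    # Trace flag 3226 suppresses successful-backup entries in the error log;
    # -1 applies it globally rather than to this session only.
    conn.cursor().execute("DBCC TRACEON (3226, -1);")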
I have a flow which calls an Apex class, and here is the error from the execution logs. How do I resolve this error?
Error Occurred During Flow "ServiceAppointment_API": An Apex error occurred: System.AsyncException:
Warning: Approaching hourly email limit for this flow.
Each flow in your organization is limited to 100 error emails per hour. Since 12:00 PM, Salesforce has sent 99 error emails for ServiceAppointment_API flow. After 100 emails, Salesforce suppresses all error emails for this flow. The limit resets at 1:00 PM.
Error element myWaitEvent_myWait_myRule_3_event_0_SA1 (FlowActionCall).
An Apex error occurred: System.AsyncException: You've exceeded the limit of 100 jobs in the flex queue for org 00D36000000rhGR.
Wait for some of your batch jobs to finish before adding more. To monitor and reorder jobs, use the Apex Flex Queue page in Setup.
Your help is appreciated.
Regards,
Carolyn
The Apex class schedules some asynchronous (background) processing. It could be a batch job, a method annotated with @future, or something implementing Queueable. You can have up to 100 of these submitted (held in the flex queue) and up to 5 active (running) at a given time.
It's hard to say how to fix it without seeing the code.
Maybe the @future isn't needed; the developer meant well but something went wrong. Maybe it has to be async but could be done with a time-based workflow or a scheduled action in Process Builder?
Maybe it's legitimately your bug, and that code could be rewritten to work faster or to process more than one record at a time.
Maybe it's not your fault; maybe there's a managed package that schedules lots of jobs and the next ones fail to submit.
Maybe you'll need to consider a rewrite that decouples it even more. Say, instead of almost-instant processing, have code that runs every 5 minutes, checks whether something recently changed needs processing, and does it.
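If the polling route makes sense, here is a minimal sketch of the idea as an external scheduled script using the simple_salesforce Python library (the credentials, the 95-job safety margin, and the object/field choices are all assumptions; inside the org the same pattern would typically be an Apex Schedulable):

    from datetime import datetime, timedelta, timezone
    from simple_salesforce import Salesforce

    # Placeholder credentials.
    sf = Salesforce(username="user@example.com", password="...", security_token="...")

    # Look back 5 minutes for records that still need processing.
    since = (datetime.now(timezone.utc) - timedelta(minutes=5)).strftime("%Y-%m-%dT%H:%M:%SZ")

    # Before queueing more async work, check how deep the flex queue already is.
    holding = sf.query("SELECT COUNT() FROM AsyncApexJob WHERE Status = 'Holding'")["totalSize"]
    if holding >= 95:
        raise SystemExit("Flex queue nearly full; try again on the next run.")

    # Grab recently changed appointments and process them in one pass
    # instead of submitting one async job per record.
    changed = sf.query(f"SELECT Id FROM ServiceAppointment WHERE LastModifiedDate > {since}")
    for record in changed["records"]:
        ...  # process record["Id"]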
OK, not sure what is going on here. I have runaway queries that won't cancel. I have one query to select all rows from a table that only has 250 rows and is 1.5 KB in size. It's been running for 30 minutes right now, and it should only take a few ms.
I've tried canceling by hitting the abort button on the worksheet, going into history and selecting the query and hitting abort, aborting based on the query ID via SQL, and aborting based on the session ID via SQL.
Ironically, whenever I try to abort via SQL it shows that the queries have been terminated, yet they still show as running. I wait a few minutes and re-run the abort, and it again shows them as terminated, but they are still running.
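For reference, the abort-via-SQL attempts look roughly like this (a sketch through the Snowflake Python connector so it is self-contained; the query ID and session ID are placeholders copied from the history page):

    import snowflake.connector

    # Placeholder credentials/account.
    conn = snowflake.connector.connect(user="...", password="...", account="xy12345")
    cur = conn.cursor()

    # Cancel a single query by its query ID.
    cur.execute("SELECT SYSTEM$CANCEL_QUERY('01a2b3c4-0000-1111-2222-333344445555')")

    # Cancel everything running in a given session by its session ID.
    cur.execute("SELECT SYSTEM$CANCEL_ALL_QUERIES(1234567890)")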
I also tried logging out and logging back in again, and am seeing all kinds of weird errors:
Internal Error: Unable to retrieve the current roles.
Error
Problem with your MFA Enrollment: There was an issue with your enrollment process. Please try again.
Worksheet Not Loaded
I have no idea what is going on but it seems like everywhere I turn there is an issue. Any assistance would be greatly appreciated.
Try logging out completely, close the browser, reboot your machine, and start from there. Here's my guess:
Sometimes the query history (which I assume is where you were seeing things still running) needs a browser refresh, but based on the MFA errors, refreshing your browser appears to have logged you out of your SAML/MFA process.
Once you successfully login, you'll likely see that the query had completed already before you even tried to cancel it.
If that isn't the case and you are still seeing issues, then we'd probably need more information, or a quick call to Snowflake Support can walk you through things. My guess is this is all a display issue in your browser/UI rather than something going wonky with Snowflake itself.
I have recently changed to using a custom Go runtime on GAE, and noticed many errors like this in the logs:
internal.flushLog: Flush RPC: Call error 3: invalid security ticket: 6c8027dc99b3ed3e
internal.flushLog: Flush RPC: Canceled: (timeout)
The server is still running fine, but I have no idea what that error means or why it happens.
I'm using a custom Go runtime via a Dockerfile, and the App Engine release is 1.9.37.
Any help to clarify the error would be highly appreciated. Thanks.
This is a known issue with the Go runtime on App Engine Flexible. It tends to happen when a line is logged right before the end of a request/response.
What happens is that when a line is logged, it is actually put into a list of log lines to be batched together and sent to the application server as an RPC at periodic intervals. The security ticket is canceled at the end of a request/response, which can sometimes happen before the log lines have been flushed. It's harmless, except that you may lose a log line or two. :\
We're actively working on fixing it.
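To make the mechanism concrete, here is a toy model of the race in Python (illustration only, not the actual runtime code): log lines are queued, a periodic flush sends the batch using the request's security ticket, and if the ticket was invalidated at end-of-request before that flush, the batch fails much like the logged errors.

    import threading

    class Ticket:
        """Stand-in for the per-request security ticket."""
        def __init__(self):
            self.valid = True

    class BatchingLogger:
        """Toy model of the described log flusher; illustration only."""
        def __init__(self, ticket):
            self.ticket = ticket
            self.buffer = []              # log lines waiting to be flushed
            self.lock = threading.Lock()

        def log(self, line):
            with self.lock:
                self.buffer.append(line)  # logging just queues the line

        def flush(self):
            with self.lock:
                if not self.buffer:
                    return
                if not self.ticket.valid:
                    # The request already ended and invalidated the ticket,
                    # so this batch is rejected and the lines are lost.
                    raise RuntimeError("Flush RPC: invalid security ticket")
                # ... send the batch as a single RPC here ...
                self.buffer.clear()

    ticket = Ticket()
    logger = BatchingLogger(ticket)
    logger.log("handling request")   # queued, not yet flushed
    ticket.valid = False             # request/response ends first
    try:
        logger.flush()               # periodic flush fires after the ticket died
    except RuntimeError as err:
        print(err)                   # mirrors the "invalid security ticket" log line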
I know how to set up a job to alert when it's running.
But I'm writing a job which is meant to run many times a day, and I don't want to be bombarded by emails; rather, I'd like a solution where I get an alert when the job hasn't been executed for X minutes.
This can be achieved by setting the job to alert on execution, and then setting up some process which checks for these alerts and warns when no such alert is seen for X minutes.
I'm wondering if anyone's already implemented such a thing (or equivalent).
Supporting multiple jobs with different X values would be great.
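To make it concrete, something like the sketch below is what I have in mind (assuming SQL Server Agent and reading last run times from msdb via pyodbc; the job names and thresholds are placeholders):

    from datetime import datetime, timedelta
    import pyodbc

    # Per-job "must have succeeded within the last X minutes" thresholds (placeholders).
    THRESHOLDS = {"ImportJob": 15, "SyncJob": 60}

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes;"
    )
    rows = conn.cursor().execute("""
        SELECT j.name,
               MAX(msdb.dbo.agent_datetime(h.run_date, h.run_time)) AS last_run
        FROM msdb.dbo.sysjobs AS j
        JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
        WHERE h.step_id = 0          -- job outcome rows only
          AND h.run_status = 1       -- successful runs only
        GROUP BY j.name
    """).fetchall()

    last_run = {name: ts for name, ts in rows}
    now = datetime.now()
    for job, minutes in THRESHOLDS.items():
        ran = last_run.get(job)
        if ran is None or now - ran > timedelta(minutes=minutes):
            # Hook up email/paging here; print keeps the sketch self-contained.
            print(f"ALERT: {job} has not succeeded in the last {minutes} minutes")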
The danger of this approach is this: suppose you set this up. One day you receive no emails. What does this mean?
It could mean
the supposed-to-be-running job is running successfully (and silently), and so the absence-of-running monitor job has nothing to say
or alternatively
the supposed-to-be-running job is NOT running successfully, but the absence-of-running monitor job has ALSO failed
or even
your server has caught fire, and can't send any emails even if it wants to
Don't seek to avoid receiving success messages - instead devise a strategy for coping with them. The only way to know that a job is running successfully is to get a message which says precisely that.
I'm testing an import script on a shared web host I just got, but I found that transactions are blocked after it runs for 20 minutes or so. I assume this is to avoid overloading the database, but even when I import one item every second, I still run into the problem. To be specific, when I try to save an object I receive the error:
DatabaseError: current transaction is aborted, commands ignored until end of transaction block
I've tried waiting a few hours after this happens, but the block is still there. The only way to resume importing is to completely restart the importing program. Because of this, I reasoned that all I need to do is reconnect to the DB. This might not be true, but it's worth a try.
So my question is this: how can I disconnect and reconnect the DB connection in Django? Is this possible?
Most likely some other database error occurred before this one, but your code ignored it and went forward with the transaction in a broken state.
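To answer the reconnect part directly: closing the connection is enough, because Django opens a new one on the next query. But the aborted-transaction error usually goes away once each save runs in its own transaction, so one failure can't poison the rest of the import. A minimal sketch (the function and loop are made up for illustration):

    from django.db import DatabaseError, connection, transaction

    def import_items(items):
        for item in items:
            try:
                # Give every save its own transaction; if it fails, only
                # this row is rolled back and the import can continue.
                with transaction.atomic():
                    item.save()
            except DatabaseError:
                # Inspect/log the real error here instead of ignoring it.
                # If the connection itself is broken, drop it; Django will
                # reconnect automatically on the next query.
                connection.close()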