I need to open a transaction and wait for the result to decide whether to commit or roll back.
But how can I get the return value (true/false, not an exception or anything that breaks the process) of this query?
$this->query('update checks set discount = 100 where check_num = 3001');
In my tests, when an error occurs, it stops the whole process and throws an exception.
I don't want that; I just want to know whether it executed successfully or not (true/false).
What mark said in the comments seems accurate: an updateAll() call seems more appropriate for what you want to achieve. If you ever find yourself in a situation where you actually do need transactions, there are examples of how to achieve that in the documentation as well.
Currently we are having an issue with the CPU limit. We have a lot of processes that are most likely not optimized; I have already combined some processes for the same object, but it is not enough. I am trying to understand the logs right now. As you can see in the screenshots, there is one process that is being called multiple times (I assume once per created record). Even if I create, for example, 60 records in one operation/DML statement, the Process Builder still gets called 60 times? (This is what I think is happening.) Is that the problem we are having right now? If so, is there a better way to do it? Right now we need the updates from the PB to run, but I expected it to get bulkified or something like that. I was also thinking there might be some looping between processes. If you need more information, please let me know. Thank you.
Well, yes, the process builder will be invoked 60 times, one record at a time. But that shouldn't be your problem. The final update / child-record creation / email send (or whatever your action is) will be bulkified; it won't save one record at a time. If the process calls some Apex actions, they're supposed to support being passed a collection of records, not just a single record.
You may be looking in the wrong place. CPU time suggests code problems, not config (flow, workflow, process builder...), although if you're updating fields on "this" record you might benefit from before-save flows. Try comparing the timestamps of METHOD_BEGIN and METHOD_END entries for triggers and code methods (including invocable action / process plugin interfaces).
Maybe there's code that doesn't need to run because key fields didn't change and there's nothing to recalculate or roll up. Hard to say without seeing the debug log.
Maybe the operation doesn't have to be immediate. Think about whether you can offload some work to "scheduled actions", "time-based workflows", or in Apex terms @future, Batchable, Queueable. But it would have to be relatively safe to run: if there's an error, it won't be displayed to the user because the action runs in the background, so you'd need to handle errors manually (send an email, create a record, make a Chatter post or bell notification).
You could try uploading the log to https://apextimeline.herokuapp.com/ and try to make sense out of that Gantt-chart-like output. Or capture the log "pro" way, with https://help.salesforce.com/s/articleView?id=sf.code_dev_console_solving_problems_using_system_log.htm&type=5 or https://marketplace.visualstudio.com/items?itemName=financialforce.lana (you'll likely need developer's help to make sense out of it).
Given the For Loop retry scheme below (working), how can I make the package return success versus failure? I've seen some tantalizing clues, such as a task's or package's ForceExecutionResult = Success, but I'm not sure how to incorporate that into my process (I have many just like the one below). If setting ForceExecutionResult is indeed an answer, do I set it using a Script Task or an Expression? Is that property available in an obvious way, other than from the properties page? Thank you.
Additional note/explanation: I need to conditionally return success or failure based on the number of retries. A failure, a retry, and then a success is a success. In my For Loop, three retries is a failure. I can't arbitrarily set ForceExecutionResult = Success.
The resolution, or at least one approach, was to set the On Error handler's Propagate variable to True or False, using the user variables I was already using for the For Loop retry scheme.
@[System::Propagate] = @[User::CountryIncrement] == @[User::RetryCount] - 1
Put that (above) in an Expression Task in the task's OnError event handler. That way, when the error retry count has been reached, the error will propagate up to the parent and correctly report failure. If the task retried once, say, and subsequently succeeded, Propagate will be False and the error will not bubble up; the task's job step will report success.
I'm trying to catch the "nrpe unable to read output" error from a plugin and send an email when it occurs, and I'm a little bit stuck :) . The thing is, there are different return codes when this error occurs on different plugins:
Return code Service status
0 OK
1 WARNING
2 CRITICAL
3 UNKNOWN
Is there a way either to unify the return codes of all the plugins I use (so there will always be a 2 [CRITICAL] when this problem occurs), or any other way to catch those alerts? I want to keep the return codes for different situations as they are (i.e. filesystem /home will be warning (return code 1) at 95% and critical (return code 2) at 98%).
Most folks would rather not have this error send alert emails, because it does not represent an actual failed check. Basically it means nothing more than:
The command/plugin (local or remote) was run by NRPE, but
failed to return any usable status and/or text back to NRPE.
This most often means something went wrong with the command/plugin and it hasn't done the job it was expected to perform. You don't want alerts being thrown for checks that weren't actually performed, as that would be very misleading. It's also important to note that the return code is not even coming from the command/plugin.
In my experience, the number one cause of this error is a bad check. And as the docs for NRPE state, you should run the check (with all its options!) to make sure it runs correctly. Do yourself a favor and test both working AND not-working states. About 75% of the time, this happens because the check only works correctly when it has OK results, and blows up when something not-OK must be reported.
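As a sketch of that kind of manual test: the check_disk path and thresholds below are placeholders for whatever your real command is, and fake_plugin stands in for it so the script runs anywhere.

```shell
#!/bin/sh
# map_status translates a plugin exit code into the Nagios service status.
map_status() {
    case "$1" in
        0) echo OK ;;
        1) echo WARNING ;;
        2) echo CRITICAL ;;
        3) echo UNKNOWN ;;
        *) echo "INVALID ($1)" ;;  # anything else means the plugin itself misbehaved
    esac
}

# Stand-in for running the real check exactly as NRPE would, e.g.:
#   /usr/local/nagios/libexec/check_disk -w 95% -c 98% -p /home
# Here we simulate a CRITICAL result so the sketch is self-contained.
fake_plugin() { return 2; }

fake_plugin
rc=$?
echo "exit code $rc -> $(map_status "$rc")"
```

Run it once with your plugin in a healthy state and once in a forced-failure state; any exit code outside 0-3, or an empty output line, is the kind of misbehavior that surfaces as "unable to read output".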
Another issue that causes these are network glitches. NRPE connects and runs the check; but the connection is closed before any response is seen. Once again, not a true check result.
For a production Nagios monitoring system, these should be very rare errors. If they are happening frequently, then you likely have other issues that need to be fixed.
And as far as I can tell, all built-in Nagios plugins use the exact same set of return codes. Are you certain this isn't a 'custom' check?
Ok, I think I've found the solution to my problem: I will check nagios.log on each node for those errors.
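A minimal sketch of that log check, assuming the common default log location (adjust per node) and with the notification command commented out as an assumption:

```shell
#!/bin/sh
# count_nrpe_errors prints how many "unable to read output" lines a log contains.
count_nrpe_errors() {
    grep -c "unable to read output" "$1" 2>/dev/null || true
}

LOG=/usr/local/nagios/var/nagios.log   # assumed default path; adjust for your install
n=$(count_nrpe_errors "$LOG")
if [ "${n:-0}" -gt 0 ] 2>/dev/null; then
    echo "found $n 'unable to read output' line(s) in $LOG"
    # mail -s "NRPE errors on $(hostname)" admin@example.com < "$LOG"
fi
```

Dropped into cron on each node, something like this would fire only when the error actually appears, leaving the regular check alerting untouched.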
My application is running in a job. I want to get a handle to this job object using OpenJobObject so I can use the handle later. The problem is that I don't know the job's name, and passing NULL as the job name gives back error 87 (The parameter is incorrect).
This is how i tried it:
HANDLE handle = OpenJobObject( JOB_OBJECT_QUERY, FALSE, NULL );
if ( !handle ) printf( "\nError %d", GetLastError() );
else printf( "\nOK" );
I also found this on MSDN:
An application cannot obtain a handle to the job object in which it is running unless it has the name of the job object. However, an application can call the QueryInformationJobObject function with NULL to obtain information about the job object.
So my question is: is it somehow possible to get a handle to the job object in which my application is running? Or to get the name of the job my application is running in?
Thanks!
Update:
My code so far: http://pastebin.com/aJ7XMmci
Right now I'm getting error 87 (The parameter is incorrect) from SetInformation :(
OK, doesn't look like there's any supported method. That doesn't mean it can't be done! :-)
To enumerate all the handles in the system, see this question. The sample code here filters the handles and only looks for those belonging to a particular process, but that's easy to change. You might need to enable debug privilege first.
For each handle, duplicate it into your process, then call IsProcessInJob to find out whether it's the right handle or not.
Once you've got that working, check whether SYSTEM_HANDLE.ObjectTypeNumber is always the same for job objects. It probably is (on any given OS, at least) in which case you can drastically increase the efficiency of the code by only checking job object handles.
You could perhaps also filter to just the process running the Secondary Logon service, since this seems to be what creates the job objects for runas.
(If you do get this working, please post code - it could be very useful for future visitors.)
An ActiveX Script in a DTS package can return DTSTaskExecResult_Success or DTSTaskExecResult_Failure. Is there a way to indicate that it is not a success without failing the entire package? We'd like it to keep going, but in a different direction. The options for a path are Success, Failure, and Completion, but it appears the only return values for the ActiveX Script are Failure and Success. 'DTSTaskExecResult_Completion' isn't right. What is?
(The solution we're probably going to pursue is modifying this to SSIS, but I wanted to know if it was possible in DTS.)
Try setting the DTSTaskExecResult_Retry return value; it indicates that the step failed, but doesn't fail the package out completely.
Edit: sorry, it is DTSTaskExecResult_RetryStep.
FYI: ActiveScriptTask objects have full access to the GlobalVariables collection, which provides a way to share information across tasks.
As far as I know, when using the ActiveX DTS Component, your options are limited to Success/Failure. :(
-Shaun
Yes, the options are limited to Success/Failure.
You can also use the RetryStep option, but to what end? It just repeats the step, and the DTS never ends.
The option I use is to always return Success.
But before setting the result, I do my checking, and if I don't want the next step to execute, I set that step (the next task) to Completed.
That way, the next task is not executed and you achieve your purpose.
Sample:
Set oPackage = DTSGlobalVariables.Parent
If HasRows = 0 Then
    oPackage.Steps("DTSStep_DTSExecuteSQLTask_1").ExecutionStatus = DTSStepExecStat_Completed
End If
Main = DTSTaskExecResult_Success
Hope this helps anyone.
Regards,