I have to execute "Start" and "Finish" commands in sequential order in my program and synchronize everything at the end. So I insert the offline commands in order first and assume they will execute in the same order; I'm using a List with an Iterator for this.
I add the OfflineCommands to the List and save it in Storage. After that the user can perform delete operations in the app, so I retrieve the list and remove the commands that were deleted, which leaves me with a filtered list.
The problem is that in some strange scenarios a "Finish" command in the middle is skipped, so two "Start" commands execute next to each other, sending the wrong data and mapping it the wrong way.
Since an action only gets its ID when the command executes at the server, I keep temporary IDs (localID) in storage to map the offline commands. Would things get any better if I used another collection instead of a List? It is hard to reproduce this on the simulator. Please review both scenarios and advise where these approaches can go wrong. Thanks.
Don't synchronize.
That's nearly always a mistake in Codename One. Your code deals with the UI so it should be on the EDT and Display.getInstance().isEDT() should be true.
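If part of that work starts on a background thread, the usual fix is to hand it to the EDT rather than synchronize. A minimal sketch (the helper class and method names are invented for the example):

import com.codename1.ui.Display;

public class EdtHelper {
    // Moves work onto the EDT; callSerially is the standard Codename One
    // way to run a Runnable on the EDT from any thread.
    public static void runOnEdt(Runnable work) {
        if (Display.getInstance().isEDT()) {
            work.run();
        } else {
            Display.getInstance().callSerially(work);
        }
    }
}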
My guess is that one of the commands in the middle uses one of the following invokeAndBlock() derivatives:
addToQueueAndWait
Modal dialogs
Either of these can trigger a second round of synchronization while the first is still in progress.
You can trace this by reproducing the issue and checking which command is in the list at each point in time. Then fix that command so it doesn't block in this way.
Another approach is to remove the list from storage the moment you start processing it, which prevents duplicate execution of commands.
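Here is a minimal sketch of that idea, assuming the queue is kept under a storage key named "offlineCommands" and that your command class exposes an execute() method (both names come from the question's description, not from a real API):

import com.codename1.io.Storage;
import java.util.List;

public class OfflineQueue {
    private static final String QUEUE_KEY = "offlineCommands"; // assumed key name

    @SuppressWarnings("unchecked")
    public static void process() {
        // Snapshot the queue and delete it from storage immediately, so a
        // re-entrant call (e.g. one triggered from an invokeAndBlock()
        // derivative) finds nothing left to execute.
        List<OfflineCommand> queue = (List<OfflineCommand>) Storage.getInstance().readObject(QUEUE_KEY);
        Storage.getInstance().deleteStorageFile(QUEUE_KEY);
        if (queue == null) {
            return;
        }
        for (OfflineCommand cmd : queue) {
            cmd.execute(); // hypothetical method on the question's command class
        }
    }
}

Any commands queued while processing runs would land in a fresh list and be picked up by the next pass rather than executed twice.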
We are currently having an issue with the CPU limit. We have a lot of processes that are most likely not optimized; I have already combined some processes for the same object, but it is not enough. I am trying to understand the logs right now. As you can see on the screenshots, there is one process that is being called multiple times, I assume once per created record. Even if I create, for example, 60 records in one operation/DML statement, does Process Builder still get called 60 times? (This is what I think is happening.) Is that the problem we are having right now? If so, is there a better way to do it? Right now we need the updates from Process Builder to run, but I expected it to get bulkified or something like that. I was also thinking there might be some looping between processes. If you need more information, please let me know. Thank you.
Well, yes, the Process Builder will be invoked 60 times, one record at a time. But that shouldn't be your problem. The final update / child-record creation / email send (or whatever your action is) will be bulkified; it won't save one record at a time. If the process calls Apex actions, they're supposed to support receiving a collection of records, not just a single record.
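To illustrate the shape of a bulkified action, here is a conceptual sketch in Java rather than real Apex (in Apex the method would carry the @InvocableMethod annotation and take SObject types; the Record and Database names below are stand-ins invented for the example). The point is that the action receives the whole batch once and issues one bulk save, instead of doing per-record queries and saves:

import java.util.List;

public class BulkifiedAction {
    // Invoked once with all 60 records, not 60 times with one record each.
    public static void recalculate(List<Record> records) {
        for (Record r : records) {
            r.total = r.quantity * r.unitPrice; // in-memory work per record is cheap
        }
        Database.update(records); // one bulk save for the whole batch
    }

    // Minimal stand-ins so the sketch is self-contained.
    static class Record { int quantity; int unitPrice; int total; }
    static class Database { static void update(List<Record> rs) { /* single bulk DML */ } }
}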
You may be looking in the wrong place. CPU time suggests code problems, not config (flow, workflow, process builder... although if you're updating fields on "this" record, it's possible you'd benefit from before-save flows). Try to compare the timestamps of METHOD_BEGIN and METHOD_END entries for triggers and code methods (including invocable action / process plugin interfaces).
Maybe there's code that doesn't need to run because key fields didn't change and there's nothing to recalculate or roll up. Hard to say without seeing the debug log.
Maybe the operation doesn't have to be immediate. Think about whether you can offload some work to "scheduled actions", "time-based workflows" or, in Apex terms, @future, Batchable, Queueable. But it would have to be relatively safe to run in the background: if there's an error, it won't be displayed to the user, so you'd need to handle errors manually (send an email, create a record, make a Chatter post or bell notification).
You could try uploading the log to https://apextimeline.herokuapp.com/ and trying to make sense of the Gantt-chart-like output. Or capture the log the "pro" way, with https://help.salesforce.com/s/articleView?id=sf.code_dev_console_solving_problems_using_system_log.htm&type=5 or https://marketplace.visualstudio.com/items?itemName=financialforce.lana (you'll likely need a developer's help to make sense of it).
My project consists of creating multiple subdirectories and copying files to those subdirectories. I developed this part using a File System Task inside a Foreach Loop in SSIS.
The final part is inserting the status of the process into a SQL table. If the file was copied successfully, the Status column should be "Successful" and a Reason column should say "File was copied successfully" or something like that.
Is error-flow redirection (the red arrow) available for the File System Task or the Foreach Loop? I have read somewhere that you can build these status messages in event handlers and insert them into SQL. Could someone please provide or suggest a solution to this problem?
I would steer away from using event handlers. They are like hidden GOTOs: there is no indication in the control flow that they exist, and you have to go to another screen to see what they are doing.
It's much clearer to use the control flow to direct errors. Any arrow from any task or container can be double-clicked and configured; change the constraint option to Value=Failure to make the arrow go red.
I am working on a project where we were asked to "patch" a system implemented under ExtJS 4.1.0 (they don't want a lot of time spent on development, as they will soon replace the system).
That system is used over a very slow and unstable network connection, so sometimes the stores don't get the expected data.
The first two things that come to mind as patches are:
1. Every time a store is loaded for the first time, wait 5 seconds and try again. Most of the time, a page refresh fixes the problem of stores not loading.
2. Somehow detect that no data was received after loading a store, and try to get it again.
These patches should be executed only once, to avoid infinite loops or unnecessary recursion, given that it's OK if a store sometimes gets no data back.
I don't like this kind of solution, but the client requested it.
This link should help with your question.
One of the posters suggests adding the code below in an overrides.js file, which is loaded between the ExtJS source code and your application's code.
// Enable class-level observation so we can listen to every Connection instance.
Ext.util.Observable.observe(Ext.data.Connection);

// Fires whenever any AJAX request fails (HTTP error, timeout, etc.).
Ext.data.Connection.on('requestexception', function(dataconn, response, options){
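    // Hypothetical addition, not part of the linked post: re-issue the failed
    // request a capped number of times before falling through to the error
    // handling below. The retryCount property is invented for this sketch.
    options.retryCount = (options.retryCount || 0) + 1;
    if (options.retryCount <= 3) {
        dataconn.request(options); // retry the same request
        return;
    }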
    if (response.responseText != null) {
        window.document.body.innerHTML = response.responseText;
    }
});
Using this example, instead of echoing the error you could log the error details for later debugging and try the load again. I would suggest adding some additional logic (as sketched above) so that it only retries a certain number of times; otherwise it could run indefinitely while the browser window is open, and more than likely crash the browser and put additional load on your server.
Obviously the root cause of the issue is not the code itself but your slow connection. I'd try to address that rather than anything else.
I would like to make a trigger that only executes for a single user (myself), so that I don't "break the build".
Longer explanation: I'm trying to sandbox a ClearCase trigger that automatically applies an attribute to an element when it is checked in, and I don't want to accidentally create a trigger that applies to all developers and potentially ruin everybody's day with the prototype (what works on the first try?).
I see the -nusers option, which seems to exclude the users in the list. I suppose I could comma-separate a list of all users except myself. Is this what I'm looking for?
The best sources of information about triggers are listed here, and the EVs (environment variables) are described in the mktrtype man page.
Check, for instance:
CLEARCASE_USER
The user who issued the command that caused the trigger to fire; derived from the UNIX or Linux real user ID or the Windows user ID.
Your script can check whether the user id is yours and, if not, abort; a minimal sketch follows below.
If the user id somehow doesn't work, you could consider other environment variables, such as:
CLEARCASE_SNAPSHOT_PN
The path to the root of the snapshot view directory in which the operation that caused the trigger to fire took place.
If your script detects that the path isn't exactly the one expected (i.e. the snapshot view from which you triggered your script), the trigger script would abort.
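A minimal sketch of that user check, assuming the trigger's -exec action runs this program (real trigger scripts are more often Perl or shell, and "myuser" is a placeholder for your own id):

public class TriggerGate {
    public static void main(String[] args) {
        // CLEARCASE_USER is set by ClearCase for the user who fired the trigger.
        String user = System.getenv("CLEARCASE_USER");
        if (!"myuser".equals(user)) {
            System.exit(0); // someone else: do nothing, let the checkin proceed
        }
        // ... prototype logic goes here, e.g. shelling out to "cleartool mkattr" ...
    }
}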
In my project I want to remove some rows first and then insert new rows afterwards.
But sometimes the new rows are inserted first and the original rows are removed afterwards.
To solve this problem I need to run the operations in the proper sequence.
Please help me out.
This is a common pattern/problem with Silverlight as pretty much "everything" is asynchronous (for good reasons).
Depending on how your Adds and Removes are triggered, you could queue up tasks (e.g. a list of delegates) and have each task execute the next one off the list when it completes; a sketch of that idea follows.
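Here is a minimal sketch of that first idea, in Java rather than Silverlight C# (the class and method names are invented for the example): each queued task receives a completion callback, and the next task only starts once the previous one invokes it.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

public class SequentialTaskQueue {
    // Each task is handed a "done" callback to invoke when its async work finishes.
    private final Deque<Consumer<Runnable>> tasks = new ArrayDeque<>();
    private boolean running;

    public void enqueue(Consumer<Runnable> task) {
        tasks.add(task);
        if (!running) {
            running = true;
            runNext();
        }
    }

    private void runNext() {
        Consumer<Runnable> task = tasks.poll();
        if (task == null) {
            running = false;
            return;
        }
        task.accept(this::runNext); // the task calls the callback when it completes
    }
}

Queuing the remove first and the insert second, e.g. queue.enqueue(done -> removeRowsAsync(done)); queue.enqueue(done -> insertRowsAsync(done)); (both async methods are hypothetical), guarantees the remove finishes before the insert starts.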
The alternative is going to sound a little complex, but the solution we came up with was to create a SequentialAsynchronousTaskManager class that operates in a similar way to the SilverlightTest class, which uses EnqueueConditional() methods to add wait conditions and EnqueueCallback()s to execute code.
It basically holds a list of delegates (which can be simple lambda expressions) and either executes one repeatedly until it returns true (EnqueueConditional) or just executes some code (EnqueueCallback).