How can I make my U_store.load() wait indefinitely?
var U_store = new Ext.data.JsonStore({
    id: 'jfields',
    totalProperty: 'totalcount',
    root: 'rows',
    url: 'first-utility/index_json.php'
});
My index_json.php returns its result after about 10 minutes, but load() in ExtJS does not wait that long; it gives up and returns almost immediately. Can somebody help me get the result from index_json.php?
Your users are going to wait 10 mins for data to load?
You'd probably be better off with a solution based on periodic polling rather than "infinite" waiting. Maybe the initial call starts your long process, and a separate call checks for the results? Without knowing what you're doing it's hard to say what the best approach is.
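For example, a rough polling sketch (the start_job.php and check_json.php endpoints and the 30-second interval are made-up assumptions; the idea is that one call kicks off the 10-minute job and a lightweight call checks whether the rows are ready):

Ext.Ajax.request({
    url: 'first-utility/start_job.php', // assumed endpoint that starts the long-running job
    success: function () {
        var poller = setInterval(function () {
            Ext.Ajax.request({
                url: 'first-utility/check_json.php', // assumed endpoint: returns {"done":true,"totalcount":N,"rows":[...]} when finished
                success: function (response) {
                    var result = Ext.decode(response.responseText);
                    if (result.done) {
                        clearInterval(poller);
                        U_store.loadData(result); // same totalcount/rows shape the JsonStore expects
                    }
                }
            });
        }, 30000); // poll every 30 seconds
    }
});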
Why not gather all the information with a separate process and save it to a database or file? Then you can easily load the store from that saved data.
In fact, if it takes 10 minutes to load, it must be a huge amount of data. If possible, go for partial loading of the data based on client events/actions.
To answer the question as asked, I'd try to use the Ext.Ajax object, since you can define a timeout on it.
In the success handler of the Ajax call you can take the response object and populate the data store with something like:
var myResponseData = response.responseText;
myStore.loadData(myResponseData);
The drawback to going this route is that you cannot use Ext.Ajax again while it is processing the query, since it is a static member.
It might take some tweaking, but I hope the basic idea is sound (a fuller sketch is below). If anyone sees a problem with this idea, I'd like to know; this has me thinking.
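For what it's worth, a rough sketch of that approach might look like the following (the timeout is in milliseconds; note that responseText usually needs to be decoded before being handed to loadData, and myStore is assumed to be a JsonStore configured for the same totalcount/rows shape):

Ext.Ajax.request({
    url: 'first-utility/index_json.php',
    timeout: 600000, // 10 minutes, in milliseconds
    success: function (response) {
        var myResponseData = Ext.decode(response.responseText);
        myStore.loadData(myResponseData);
    },
    failure: function (response) {
        // handle the timeout or server error here
    }
});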
We are currently having an issue with the CPU limit. We have a lot of processes that are most likely not optimized; I have already combined some processes for the same object, but it is not enough. I am trying to understand the logs right now. As you can see in the screenshots, there is one process that is being called multiple times, I assume once for each created record. Even if I create, say, 60 records in one operation/DML statement, the Process Builder still gets called 60 times (this is what I think is happening). Is that the problem we are having right now? If so, is there a better way to do it? Right now we need the updates from the Process Builder to run, but I expected it to get bulkified or something like that. I was also thinking there might be some looping between processes. If you need more information, please let me know. Thank you.
Well, yes, the Process Builder will be invoked 60 times, one record at a time. But that shouldn't be your problem: the final update / child-record creation / email send (or whatever your action is) will be bulkified; it won't save one record at a time. If the process calls Apex actions, they're supposed to support being passed a collection of records rather than a single record, along these lines:
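For reference, a bulk-friendly invocable action takes a list and does its query and DML once for the whole batch, something like this minimal sketch (the class, label and Some_Field__c field are made-up names):

public class UpdateRecordsAction {
    @InvocableMethod(label='Update Records In Bulk')
    public static void updateRecords(List<Id> recordIds) {
        // one query and one DML statement for all records passed in by the process
        List<Account> accounts = [SELECT Id, Some_Field__c FROM Account WHERE Id IN :recordIds];
        for (Account a : accounts) {
            a.Some_Field__c = 'processed';
        }
        update accounts;
    }
}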
You may be looking in the wrong place. CPU time suggests code problems, not config (flow, workflow, process builder... although if you're updating fields on "this" record you might benefit from before-save flows). Try to compare the timestamps of METHOD_BEGIN and METHOD_END for triggers and code methods (including invocable actions / process plugin interfaces).
Maybe there's code that doesn't need to run because key fields didn't change and there's nothing to recalculate or roll up. It's hard to say without seeing the debug log.
Maybe the operation doesn't have to be immediate. Think about whether you can offload some of the work to "scheduled actions", "time-based workflows" or, in Apex terms, @future, Batchable or Queueable. But the work would have to be relatively safe to run in the background: if there's an error, it won't be displayed to the user, so you'd need to handle errors yourself (send an email, create a record, make a Chatter post or a bell notification). A Queueable sketch follows.
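A minimal Queueable sketch of that idea (the class name and the work inside execute are placeholders):

public class RecalculateJob implements Queueable {
    private List<Id> recordIds;

    public RecalculateJob(List<Id> recordIds) {
        this.recordIds = recordIds;
    }

    public void execute(QueueableContext ctx) {
        // do the heavy work here, outside the user's transaction;
        // handle errors yourself (email, error record, Chatter post, ...)
    }
}
// enqueued from trigger/invocable code with something like:
// System.enqueueJob(new RecalculateJob(recordIdList));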
You could try uploading the log to https://apextimeline.herokuapp.com/ and making sense of the Gantt-chart-like output. Or capture the log the "pro" way with https://help.salesforce.com/s/articleView?id=sf.code_dev_console_solving_problems_using_system_log.htm&type=5 or https://marketplace.visualstudio.com/items?itemName=financialforce.lana (you'll likely need a developer's help to make sense of it).
I am working on a project where we were asked to "patch" a system implemented in ExtJS 4.1.0 (they don't want a lot of time spent on development, as they will soon replace the system).
That system is used over a very slow and unstable network connection, so sometimes the stores don't get the expected data.
The first two things that come to mind as patches are:
1. Every time a store is loaded for the first time, wait 5 seconds and try again. Most of the time, a page refresh fixes the problem of stores not loading.
2. Somehow detect that no data was received after loading a store and try to get it again.
These patches should be executed only once, to avoid infinite loops or unnecessary recursion, given that it's acceptable for a store to occasionally get no data back.
I don't like this kind of solution, but it is what the client requested.
This link should help with your question.
One of the posters suggests adding the code below in an overrides.js file which is loaded between the ExtJS source code and your application's code.
Ext.util.Observable.observe(Ext.data.Connection);
Ext.data.Connection.on('requestexception', function (dataconn, response, options) {
    if (response.responseText != null) {
        window.document.body.innerHTML = response.responseText;
    }
});
Using this example, instead of echoing the error you could log the error details for later debugging and try the load again. I would suggest adding some additional logic so that it only retries a certain number of times; otherwise it could keep retrying for as long as the browser window is open, which would more than likely crash the browser and put additional load on your server. A sketch of one way to cap the retries follows.
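As a very rough sketch of a capped retry (myStore, the 5-second delay and the limit of 3 attempts are all assumptions; this uses the store's load callback rather than the exception listener, but the counting idea is the same):

var MAX_RETRIES = 3;

function loadWithRetry(store, attempt) {
    attempt = attempt || 0;
    store.load({
        callback: function (records, operation, success) {
            // if the request failed or came back empty, try again a limited number of times
            if ((!success || records.length === 0) && attempt < MAX_RETRIES) {
                Ext.defer(function () {
                    loadWithRetry(store, attempt + 1);
                }, 5000); // wait 5 seconds before retrying, as in patch #1
            }
        }
    });
}

loadWithRetry(myStore);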
Obviously the root cause of the issue is not the code itself but your slow connection; I'd try to address that rather than anything else.
What technique should one use to implement batch insert/update for Backbone.sync?
I guess it depends on your usage scenarios, and how much you want to change the calling code. I think you have two options:
Option 1: No Change to client (calling) code
Oddly enough the annotated source for Backbone.sync gives 'batching' as a possible reason for overriding the sync method:
Use setTimeout to batch rapid-fire updates into a single request.
Instead of actually saving on sync, add the request to a queue, and only batch-save every so often. _.throttle or _.delay might help you here.
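A rough sketch of Option 1, assuming you only need to batch 'update' calls and that your server exposes a (hypothetical) bulk endpoint at /api/batch; _.debounce stands in for the setTimeout idea from the quote, and per-model success callbacks are simply ignored here:

Backbone.originalSync = Backbone.sync;

var pendingModels = [];

// send everything queued so far in one request, 500 ms after the last save call
var flushQueue = _.debounce(function () {
    var batch = pendingModels;
    pendingModels = [];
    $.ajax({
        url: '/api/batch', // hypothetical bulk endpoint
        type: 'POST',
        contentType: 'application/json',
        data: JSON.stringify(_.invoke(batch, 'toJSON'))
    });
}, 500);

Backbone.sync = function (method, model, options) {
    if (method === 'update') {
        pendingModels.push(model);
        flushQueue();
        return; // skip the per-model request entirely
    }
    return Backbone.originalSync(method, model, options);
};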
Option 2: Change client code
Alternatively, instead of calling save on your individual models, you could add some sort of save method to your collections. You'd have to track which models were actually modified and hence need updating, since as far as I can tell Backbone only knows whether they're new or not (but I could be wrong about that). A sketch of that idea follows.
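A sketch of Option 2, tracking dirty models on the collection itself (saveChanged and the /api/batch endpoint are made-up names):

var BatchCollection = Backbone.Collection.extend({
    initialize: function () {
        this._dirty = {};
        // remember every model that changed since the last batch save
        this.on('change', function (model) {
            this._dirty[model.cid] = model;
        }, this);
    },

    saveChanged: function () {
        var models = _.values(this._dirty);
        this._dirty = {};
        if (models.length === 0) { return; }
        return $.ajax({
            url: '/api/batch', // hypothetical bulk endpoint
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify(_.invoke(models, 'toJSON'))
        });
    }
});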
Here's how I did it:
Backbone.originalSync = Backbone.sync;
Backbone.sync = function (method, model, options) {
    //
    // code to extend sync
    //
    // call the original sync and return its result (the jqXHR)
    return Backbone.originalSync(method, model, options);
};
It works fine for me, and I use it to control every AJAX request coming out of any model or collection.
I'm using Ext Direct to communicate with the server side. My server side takes more than 45 seconds to return all the data to ExtJS. I can see in the network tab (in the Chrome browser) that my request was cancelled because the operation took more than 30 seconds.
Where can I override this setting? Is it possible?
I understand that Leo's answer suggests editing the ExtJS code directly; I don't think that is good practice, all the more so as the parameter exists in the REMOTING_API:
Ext.app.REMOTING_API = {
    "url": "/usermanagement/extdirect/router",
    "actions": {"myService": [{"len": 0, "name": "myMethod"}]},
    "type": "remoting",
    "timeout": 120
};
I'm pretty sure it's a browser thing. It's not ExtJS breaking your connection attempt but the browser itself.
Update: I haven't tried using Ext Direct with huge amounts of data. And honestly speaking, you should not force your users to wait that long on a load; it's very bad design. If you have something that takes that long, you need to provide some kind of progress feedback and break the whole exchange into smaller pieces.
In your ext-all-debug.js, under
Ext.define('Ext.data.Connection', {
    timeout: 30000,
you can edit the timeout to a higher value; the default is 30000 ms (30 seconds).
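If you'd rather not edit ext-all-debug.js itself, the same effect can usually be achieved with an override in your own code, for example (120000 ms is just an example value):

Ext.override(Ext.data.Connection, {
    timeout: 120000 // raise the default 30000 ms (30 s) request timeout to 2 minutes
});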
This seriously is one of the biggest thorns in my side. SFDC does not allow you to use complex objects or collections of objects as parameters to a @future call. What is the best workaround for this?
Currently what I have done is pass in multiple parallel arrays of primitives which, taken together by index, form a complete object. For example, if I need to pass a collection of users, I might pass three string arrays, say Name[], Id[], and Role[]; Name[0], Id[0], and Role[0] describe the first user, and so on. This means I have to build all of these arrays and have the future method reconstruct the relevant objects on the other end.
Is there a better way to do this?
As to why: once an Apex "transaction" is complete, the VM is destroyed, and generally speaking Salesforce will not serialize your object graph so it can be resumed at a future time.
There may be a better way to get this task done. Can the future method query for the objects it needs to act on? Perhaps you can pass a List of Ids and have the future method use it in a WHERE clause, roughly as sketched below. If it's a large number of objects, Batch Apex may be useful to avoid governor limits.
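That might look roughly like this sketch (the class name and queried fields are placeholders):

public class UserCalloutService {
    @future(callout=true)
    public static void processUsers(List<Id> userIds) {
        // re-query inside the future context instead of trying to pass whole objects
        List<User> users = [SELECT Id, Name, Profile.Name FROM User WHERE Id IN :userIds];
        for (User u : users) {
            // make the callout / do the work for each user here
        }
    }
}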
I would suggest creating a new custom object specifically for storing the information your Apex code requires. You can then insert those records into the database and query for them in the @future method before using them for the callout.
Then, once the callout has completed successfully, you can delete those records to keep things nice and tidy. In outline, it might look like the sketch below.
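In outline (Callout_Queue__c and its fields are hypothetical names for such a custom object, and contactsToSend stands for whatever records you start from):

// before enqueuing the @future call: store what it will need
List<Callout_Queue__c> queue = new List<Callout_Queue__c>();
for (Contact c : contactsToSend) {
    queue.add(new Callout_Queue__c(Contact__c = c.Id, Payload__c = c.Email));
}
insert queue;

// inside the @future(callout=true) method: read the queue, call out, then clean up
List<Callout_Queue__c> work = [SELECT Id, Contact__c, Payload__c FROM Callout_Queue__c];
// ... perform the callout using the queued data ...
delete work;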
My answer is essentially the same. What I do is prepare a custom queue object with all relevant Ids (User/Contact/Lead/etc.) along with my custom data, which then gets handled from the @future call. This helps with governor limits, since you can pull from the queue only what your callout and future limitations permit you to handle in a single thread. For Facebook, for example, you can batch up 20 profile updates per callout. Each @future method allows 10 callouts and each thread permits 10 @future calls, which equals 2,000 individual Facebook profile updates, if you're handling your batches properly and if you have enough Salesforce seats to permit that number of @future calls. It's 200 @future calls per user per 24 hours, last I checked.
The road gets narrower when you're performing triggered callouts, which is what I assume you're trying to do given your need to call out from an @future method in the first place. If you're not in a trigger, you may be able to handle your callouts directly, as long as you make them before performing any DML; in other words, postpone any data saves in a given thread until you're done calling out.
But since it sounds like you need to call out from a trigger, batching the work up in sObjects really is the way to go. It's a bit of work, but essentially serializing your existing heap data is the road to travel here. Also consider doing this from an hourly scheduled Batch Apex job, since with the queue approach you'll eventually be able to process all of your callouts: if you run into governor limits (or rather, to avoid hitting them) in a particular thread, the job wakes up an hour later and finishes the work left in your queue. Launching that process looks something like this:
String jobId = System.schedule('YourScheduleName', '0 0 0-23 * * ?', new ScheduleableClass());
This will instantiate ScheduleableClass once an hour; that class pulls the work from your queue object and processes as many callouts as the limits allow, roughly along these lines:
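The scheduled class itself just needs to implement the Schedulable interface; a bare-bones sketch (the queue-processing body is whatever your implementation requires):

global class ScheduleableClass implements Schedulable {
    global void execute(SchedulableContext sc) {
        // pull the next chunk of work from the queue object here and hand it
        // off to @future / Batch Apex methods that perform the callouts
    }
}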
Good luck and sorry for the frustration.
I just wanted to give my answer on how I do this very easily, in case anyone else stumbles across this question. Apex has functions to easily serialize and deserialize objects to and from JSON. Let's say I have a list of cases that I need to do something with in a future call:
String jsonCaseList = '';
List<Case> caseList = [SELECT id, Other fields FROM Case WHERE some conditions];
//Populate the list
//Serialize your list
jsonCaseList = JSON.serialize(caseList);
//Pass jsonCaseList as a string parameter to your future call
futureCaseActivity(jsonCaseList);
@future
public static void futureCaseActivity(String jsonCases) {
    // Deserialize the string back into a list of cases
    List<Case> futureCaseList = (List<Case>) JSON.deserialize(jsonCases, List<Case>.class);
    // Do whatever you want with your cases
    for (Case c : futureCaseList) {
        // Stuff
    }
    update futureCaseList;
}
Anyway, this seems like a much better option than adding database clutter with a new custom object, and it avoids having to query the database again for information you already have, which just makes me hurt inside.
Almost forgot to add the link: https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_json_json.htm