Extend Store class to always execute a function after load in ExtJS

I am working on a project where we were asked to "patch" a system implemented in ExtJS 4.1.0 (the client doesn't want much time spent on development, since they will soon replace the system).
The system is used over a very slow and unstable network connection, so sometimes the stores don't receive the expected data.
The first two patches that come to mind are:
1. Every time a store is loaded for the first time, wait 5 seconds and try again. Most of the time, a page refresh fixes the problem of stores not loading.
2. Somehow detect that no data was received after loading a store and try to get it again.
These patches should be executed only once, to avoid infinite loops or unnecessary recursion, given that it is sometimes fine for a store to get no data back.
I don't like this kind of solution, but it was requested by the client.
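To make the idea concrete, this is roughly the kind of override I'm thinking of. It is only a sketch: the class name and the retriedOnce flag are placeholders of mine, not existing ExtJS API.
Ext.define('MyApp.overrides.StoreRetry', {
    override: 'Ext.data.Store',

    load: function (options) {
        var me = this;
        if (Ext.isFunction(options)) {
            options = { callback: options };
        }
        options = options || {};

        var userCallback = options.callback;
        options.callback = function (records, operation, success) {
            // Retry exactly once if the load failed or returned no records.
            if ((!success || me.getCount() === 0) && !me.retriedOnce) {
                me.retriedOnce = true;
                Ext.defer(me.load, 5000, me, [options]);
            }
            if (userCallback) {
                userCallback.call(options.scope || me, records, operation, success);
            }
        };

        return me.callParent([options]);
    }
});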

This link should help with your question.
One of the posters suggests adding the code below in an overrides.js file that is loaded between the ExtJS source code and your application's code.
// Make the Connection class observable so we can listen for events
// fired by every Ext.data.Connection instance (including Ext.Ajax).
Ext.util.Observable.observe(Ext.data.Connection);
Ext.data.Connection.on('requestexception', function (dataconn, response, options) {
    // Dump the server's error response so the failure is visible.
    if (response.responseText != null) {
        window.document.body.innerHTML = response.responseText;
    }
});
Using this example, instead of echoing the error on any failure, you could log the error details for later debugging and try to load the data again. I would suggest adding some extra logic so that it only retries a certain number of times; otherwise it could run indefinitely while the browser window is open, which would more than likely crash the browser and put additional load on your server.
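As a rough, untested sketch of that idea, built on the same requestexception hook: the retryCount counter, the 3-attempt limit and the 2-second delay are my own choices, and replaying the request via Ext.Ajax.request(options) assumes the original options object can simply be reused.
Ext.util.Observable.observe(Ext.data.Connection);
Ext.data.Connection.on('requestexception', function (conn, response, options) {
    // Count how many times this particular request has already been retried.
    options.retryCount = (options.retryCount || 0) + 1;

    if (options.retryCount <= 3) {
        // Re-issue the original request with the same options after a short delay.
        Ext.defer(function () {
            Ext.Ajax.request(options);
        }, 2000);
    } else if (window.console) {
        // Give up and record the failure for later debugging.
        console.error('Request failed after 3 retries:', options.url, response.status);
    }
});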
Obviously the root cause of the issue is not the code itself but your slow connection; I'd try to address that before anything else.

Related

Salesforce Apex CPU Limit

Currently we are having an issue with the CPU limit. We have a lot of processes that are most likely not optimized; I have already combined some processes for the same object, but it is not enough. I am trying to understand the logs right now. As you can see in the screenshots, there is one process that is being called multiple times (I assume once for each created record). Even if I create, for example, 60 records in one operation/DML statement, the Process Builder still gets called 60 times? (This is what I think is happening.) Is that the problem we are having right now? If so, is there a better way to do it? Right now we need the updates from the Process Builder to run, but I expected it to get bulkified or something like that. I was also thinking there might be some looping between processes. If you need more information, please let me know. Thank you.
Well, yes, the Process Builder will be invoked 60 times, one record at a time. But that shouldn't be your problem. The final update / child record creation / email send (or whatever your action is) will be bulkified; it won't save one record at a time. If the process calls some Apex actions, they're supposed to support being passed a collection of records, not just a single record.
You may be looking in the wrong place. CPU time suggests code problems, not config (flow, workflow, process builder... although if you're updating fields on "this" record, it's possible you'd benefit from before-save flows). Try to compare the timestamps around METHOD_BEGIN and METHOD_END for triggers and code methods (including invocable action / process plugin interfaces).
Maybe there's code that doesn't need to run because key fields didn't change and there's nothing to recalculate or roll up. Hard to say without seeing the debug log.
Maybe the operation doesn't have to be immediate. Think about whether you can offload some work to "scheduled actions", "time-based workflows" or, in Apex terms, "@future, batchable, queueable". But they'd have to be relatively safe to run: if there's an error, it won't be displayed to the user because the action runs in the background, so you'd need to handle errors manually (send an email, create a record, make a Chatter post or bell notification).
You could try uploading the log to https://apextimeline.herokuapp.com/ and trying to make sense of the Gantt-chart-like output. Or capture the log the "pro" way, with https://help.salesforce.com/s/articleView?id=sf.code_dev_console_solving_problems_using_system_log.htm&type=5 or https://marketplace.visualstudio.com/items?itemName=financialforce.lana (you'll likely need a developer's help to make sense of it).

huggingface load_dataset() function quits without finishing

I am trying to use datasets to get "wikipedia/20200501.en" with the code below. The progress bar shows that I only completed 11% of the total dataset, yet the script quit without any output on standard output. I checked the cache directory and found that the Arrow file is simply not complete.
wiki = load_dataset("wikipedia", "20200501.en", split="train", download_config=download_conf)
I tried several times and got a different completion ratio each time, but never succeeded in finishing the download. Could anyone help me?
Environment:
Python version: 3.7
datasets version: 1.1.2
macOS
This is caused by an unstable network connection. It can be solved by downloading the URL with a browser or with wget.

How to make JSON load faster with large data (over HTTP or on a web page)

Requesting the page (over HTTP or as a web page) is very slow or even crashes unless I load my JSON with less data. I really need to solve this since sooner or later I will be using large amounts of data frequently. Here is my JSON data. --->>>
Notes:
1. The JSON loads only Strings and Integers.
2. I view my JSON in JSONView, more like a tree view, using a plugin for Google Chrome.
I am using Angular and Node.js. Thanks.
A quick summary of all the things that come to my mind:
I had a similar issue once. My solutions may require changes to the UI.
Pagination
I doubt you can display that much data at one time, so the strategy should be to divide your data into small chunks and only load more when the client asks for it.
This way, the whole dataset is no longer held in RAM as it is currently. This is how forums work (only 20 topics at a time).
Just imagine if Stack Overflow made you load the entire history of questions on the main page; how many GB would your browser need just for that?
You can use pagination in the classic way (buttons with page numbers, like Google) or with infinite scroll, whichever you prefer.
For that you need to adapt your API and keep track, on the front end, of which pages you have already loaded. There are plenty of AngularJS examples, and a rough sketch follows below.
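A minimal AngularJS sketch of the idea. The /api/items endpoint and the page/limit parameter names are assumptions about your API, not something your code already has.
var app = angular.module('demo', []);

app.controller('ItemsCtrl', function ($scope, $http) {
    $scope.items = [];
    $scope.page = 0;
    var pageSize = 20;

    // Load one page at a time instead of the whole dataset.
    $scope.loadMore = function () {
        $http.get('/api/items', { params: { page: $scope.page, limit: pageSize } })
            .then(function (response) {
                // Append the new page to what is already displayed.
                $scope.items = $scope.items.concat(response.data);
                $scope.page++;
            });
    };

    $scope.loadMore(); // initial page
});
Hook loadMore() up to a "next page" button or a scroll listener for infinite scroll; the server simply slices the data according to page and limit.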
Only show the beginning of the data
When you look at Facebook comments, you may see a "show more" button. In their case it may be there to avoid breaking the UI, but it can also be used to avoid loading too much data.
You can display only the main lines of your data (titles or similar) and add a button so the user can load more details if they want.
In your data model, the cost seems to be in the second level of "C". Just load data up to that second level, and download the remaining part (for that object) only if the user asks for it.
Once again, there is no need to overload things; your client's RAM will be thankful, and so will their mobile 3G connection.
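A sketch of the "show more" idea, reusing the controller from the pagination sketch above. The /api/items/:id/details endpoint and the field names here are assumptions.
// Inside the same controller: fetch the heavy "C" part of one item only on demand.
$scope.showDetails = function (item) {
    if (item.details) {            // already fetched once, just toggle visibility
        item.expanded = !item.expanded;
        return;
    }
    $http.get('/api/items/' + item.id + '/details').then(function (response) {
        item.details = response.data;
        item.expanded = true;
    });
};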
Optimize your data structure
If this is still not enough:
As StefanArya said in a comment, remove the "I" attribute, which is redundant with the JSON key; you can use Object.keys() to get the key names instead (quick sketch below).
You also may not need that much precision on your floats.
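A quick sketch of what I mean; the shape of the objects here is only an assumption about your data.
// "I" repeats the key, so it can be dropped and recovered from the key itself.
var data = { A1: { I: 'A1', C: [] }, A2: { I: 'A2', C: [] } };

Object.keys(data).forEach(function (key) {
    console.log(key, data[key].C);   // the key already gives you the identifier
});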
If I see any other ideas, I'll edit this post later.

How to catch "Nrpe unable to read output" when occured?

I'm trying to catch the "NRPE unable to read output" output from a plugin and send an email when it occurs, and I'm a little bit stuck :). The thing is, different return codes show up when this error occurs on different plugins:
Return code    Service status
0              OK
1              WARNING
2              CRITICAL
3              UNKNOWN
Is there a way either to unify the return codes of all the plugins I use (so that it will always be 2 [CRITICAL] when this problem occurs), or any other way to catch those alerts? I want to keep the return codes for the normal situations as they are (i.e. filesystem /home will be WARNING (return code 1) at 95% and CRITICAL (return code 2) at 98%).
Most folks would rather not have this error sending alert emails, because it does not represent an actual failed check. Basically it means nothing more than:
The command/plugin (local or remote) was run by NRPE, but it failed to return any usable status and/or text back to NRPE.
This most often means something went wrong with the command/plugin and it hasn't done the job it was expected to perform. You don't want alerts being thrown for checks when the check wasn't actually performed, as this would be very misleading. It's also important to note that the return code is not even coming from the command/plugin.
In my experience, the number one cause of this error is a bad check. And as the docs for NRPE state, you should run the check manually (with all its options!) to make sure it runs correctly. Do yourself a favor and test both working AND non-working states. About 75% of the time, this happens because the check only works correctly when it has OK results and blows up when something not-OK must be reported.
Another thing that causes these is network glitches. NRPE connects and runs the check, but the connection is closed before any response is seen. Once again, not a true check result.
For a production Nagios monitoring system, these should be very rare errors. If they are happening frequently, then you likely have other issues that need to be fixed.
And as far as I can tell, all built-in Nagios plugins use the exact same set of return codes. Are you certain this isn't a 'custom' check?
OK, I think I've found the solution to my problem: I will try to check nagios.log on each node for those errors.

Is this a bug in the Log class in CodenameOne?

I've been trying to use the Log class to capture some strange device-specific failures using local storage. When I went into the Log class and traced the code, I noticed what seems to be a bug.
When I call the p(String) method, it calls getWriter() to get the 'output' instance of the Writer. It notices that output is null, so it calls createWriter() to create it. Since I haven't set a file URL, the following code gets executed:
if(getFileURL() == null) {
    return new OutputStreamWriter(Storage.getInstance().createOutputStream("CN1Log__$"));
}
On the Simulator, I notice this file is created and contains log info.
So in my app I want to display the logs after an error is detected (to debug). I call getLogContent() to retrieve them as a string, but it does some strange things:
if(instance.isFileWriteEnabled()) {
    if(instance.getFileURL() == null) {
        instance.setFileURL("file:///" + FileSystemStorage.getInstance().getRoots()[0] + "/codenameOne.log");
    }
    Reader r = new InputStreamReader(FileSystemStorage.getInstance().openInputStream(instance.getFileURL()));
The main problem I see is that it's using a different file URL than the default writer location, and since creating the Writer didn't set the file URL, the getLogContent() method will never see the logged data. (The other issue I have is a style one: a method that gets content shouldn't persistently set the location of that content on the instance, but that's another story.)
As a workaround I think I can just call getLogContent() at the beginning of the application, which should set the file URL correctly to a place it will later be retrieved from. I'll test that next.
In the meantime, is this a bug, or is it functionality I don't understand from my user perspective?
It's more like "unimplemented functionality". This specific API dates back to LWUIT.
The main problem with that method is that we are writing into a log file, and getting the contents of a file we might currently be in the middle of writing to can be a problem and might actually cause a failure. So this approach was mostly abandoned in favor of the more robust crash protection approach.
