Does anyone know what causes these huge blocking functions in Relay when making requests? I have a search box that calls setVariables and it's immensely noticeable.
It's much better in production. Still curious though
I've been scouring the internet for the last week trying to figure out how to spot and solve memory leaks in my React application, because, well, I think I have one. I've noticed that our application crashes more and more frequently lately, and I keep getting the same error from Node.js: API fatal error handler returned after process out of memory. I knew that the application I'm working on, developed by others before me, had some serious flaws, but I never knew they were this bad, so I decided to turn to the internet to try and solve this issue.
I looked at Chrome DevTools and took heap snapshots to see if there is an increase in memory, and it's apparent that there is when I see memory shoot from 123MB to 200+MB after a few actions within the application. This is a good tool for determining whether there might be a memory leak, but it's very hard to read and understand, which doesn't help me determine where the issues lie.
Our AWS instance is only 1GB in size, and a lot of the answers I see about this sort of issue are to just increase Node.js's max heap size, but that doesn't solve anything; it just throws a band-aid on it until the issue occurs again, which is not good practice in my opinion. I'm coming here in hopes that someone can help me understand what in the world I'm looking at when using Chrome DevTools, and/or if someone knows a better way of finding out where the issues are in my code, that would be very helpful as well. Thanks in advance.
EDIT
Out of all the most common causes of memory leaks in JavaScript that I've read about online, none stick out to me within our application, so I'm very confused about where the possible leak is coming from.
Another thing is that the application grabs a lot of data from our backend and keeps it in memory. Could minimizing the amount of data that is retrieved help, or would that only delay the issue instead of fixing it?
I had the same issue with the React application in my organization. The application received huge amounts of data from the API and stored it in state variables. Depending on the type of operation, the application would send that huge data back to the API.
Because of this, the application would break with an out-of-memory error.
There is no direct solution to this issue; it involves a lot of analysis of the code.
Try to see if the data size can be reduced.
Deep dive to see if you could use useMemo in the child components that are getting re-rendered every time a parent component re-renders.
If there is a need to modify a small part of the state in the application, try using Immutability Helper or Immer.
In my application, I reduced the size of the response wherever it was applicable and used Immer to modify the state. I no longer see the out-of-memory issue after the changes.
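Roughly, those two changes look like this (a sketch only; the component and field names are made up, not from my actual application):

```tsx
import { useMemo, useState } from "react";
import { produce } from "immer";

type Row = { id: string; status: "open" | "closed" };

// Illustrative component; names are placeholders.
function RowsPanel({ initialRows }: { initialRows: Row[] }) {
  const [rows, setRows] = useState<Row[]>(initialRows);

  // useMemo: recompute the derived list only when `rows` changes,
  // not on every re-render triggered higher up the tree.
  const openRows = useMemo(() => rows.filter((r) => r.status === "open"), [rows]);

  // Immer: modify one small part of the state without hand-written nested
  // copies; the untouched parts are structurally shared, not duplicated.
  const closeRow = (id: string) =>
    setRows((current) =>
      produce(current, (draft) => {
        const row = draft.find((r) => r.id === id);
        if (row) row.status = "closed";
      })
    );

  return (
    <ul>
      {openRows.map((r) => (
        <li key={r.id}>
          {r.id} <button onClick={() => closeRow(r.id)}>close</button>
        </li>
      ))}
    </ul>
  );
}
```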
I noticed my app becoming quite slow with increasing amounts of data.
I thought it was because of some filters in ng-repeat that were triggered too often, but I've already optimized them and there's still no sign of performance improvement.
Then I thought it was database slowness, but it turns out that wasn't the case either.
I'd like to stop guessing where the slowness is coming from and find the cause systematically.
Does anyone know how to do this?
I just see tons of optimization articles everywhere, but first I want to find out which part of the code needs the optimization.
I ran this Chrome analysis, and it seems to me that there are some AngularJS functions that take forever, but I don't know how to find out what's causing them.
No idea how this can help me. Looks like there's a function call that takes 4.5 seconds...
I'd like to get rid of it :D :D
I guess I was thinking too complicated...
I found the cause of the slowness simply by commenting everything out and then commenting it back in step by step until the slowness first appeared :D
Consider a basic WPF line-of-business application where the server and clients both run on the local network. The server simply exposes a Web API which the clients (running on desktop computers) call into. The UI would consist of CRUD-style screens with buttons to trigger calls to the server.
In my original version of the app, none of these UI operations were asynchronous; the UI would freeze for the duration of the call. But nobody complained about the UI becoming unresponsive, nor did anyone notice; the calls typically took less than a quarter of a second. On the rare occasion that the network connection was down, the UI would freeze for as long as it took the operation to time out, which was the only time that eyebrows were raised.
Now that I’ve begun implementing async/await for all server calls, it has quickly become apparent that I have a new issue on my hands: the complexities of dealing with re-entrancy and cancellation. Theoretically, now the user can click on any button while a call is already in progress. They can initiate operations that conflict with the pending one. They can inadvertently create invalid application states. They can navigate to a different screen or log out. Now all these previously impossible scenarios have to be accounted for.
It seems like I’ve opened up a Pandora’s Box.
I contrast this to my old non-async design, where the UI would lock-up for the duration of the server call, and the user could simply not click on anything. This guaranteed that they couldn’t foul anything up, and thus allowed the application code to remain at least 10x simpler.
So what is really gained by all this modern approach of async-everywhere? I bet if the user compared the sync and async versions side-by-side, they wouldn’t even notice any benefit from the async version; the calls are so quick that the busy indicator doesn’t even have time to render.
It just seems like a whole tonne of extra work, complexity, harder-to-maintain code, for very little benefit. I hear the KISS principle calling…
So what am I missing? In an LOB application scenario, what benefits of async warrant the extra work?
So what is really gained by all this modern approach of async-everywhere?
You already know the answer: the primary benefit of async for UI apps is responsiveness. (The primary benefit of async on the server side is scalability, but that doesn't come into play here).
If you don't need responsiveness, then you don't need async. In your scenario, it sounds like you may get away with that approach, and that's fine.
For software that is sold, though - and in particular, mobile applications - the standard is higher. Apps that freeze are so '90s. And mobile platforms really dislike apps that freeze, since you're freezing the entire screen instead of just a window - at least one platform I know of will run your application with network access dropped, and if it freezes, it's automatically rejected from the app store. Freezing simply isn't acceptable for modern applications.
But like I said, for your specific scenario, you may get away without going async.
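If you do end up going async, most of the re-entrancy worry boils down to one guard: know when a call is in flight and disable or ignore conflicting input until it finishes. A minimal sketch of the shape (in TypeScript rather than your WPF/C#, with made-up names; in WPF this usually ends up as a busy flag driving the buttons' enabled state):

```ts
// Made-up names: saveToServer and setButtonsEnabled stand in for the real
// server call and the real UI toggling.
let callInFlight = false;

async function onSaveClicked(
  saveToServer: () => Promise<void>,
  setButtonsEnabled: (enabled: boolean) => void
): Promise<void> {
  if (callInFlight) return;      // a second click while busy is simply ignored
  callInFlight = true;
  setButtonsEnabled(false);      // or show a busy indicator
  try {
    await saveToServer();
  } finally {
    callInFlight = false;
    setButtonsEnabled(true);     // restore the UI even if the call failed
  }
}
```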
I have a WPF application in which an operation runs very slowly the first time. The same operation runs quickly the second time. The operation uses third-party components; it seems to be loading some libraries or something similar. How can I find out what is happening so I can fix it?
The simplest possible thing you can do is watch the Output window while the application is running in the debugger. It writes a line for each assembly that is loaded, so if your theory is correct you will see lots of lines added while the slowness occurs.
In my experience this isn't the usual cause of delays such as this.
A far better solution is to get hold of a profiler; there are quite a few out there with trial periods so you can evaluate which best meets your needs - see ANTS from Redgate or dotTrace by JetBrains. These will let you find out exactly where the delays are occurring.
Note: This is not for unit testing or integration testing. This is for when the application is running.
I am working on a system which communicates with multiple back-end systems, which can be grouped into three types:
Relational database
SOAP or WCF service
File system (network share)
Due to the environment this will run in, there are no guarantees that any of those will be available at run time. In fact some of them seem pretty brittle and go down multiple times a day :(
The thinking is to have a small piece of test code which runs before the actual code. If there is a problem, persist the request and poll the target system until it is available. Tests could possibly be rerun within the code at logical points to check the target is still available. The ultimate goal is to have a very stable system, regardless of the stability (or lack thereof) of the systems it communicates with.
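Roughly, the idea looks like this (a sketch only; Backend and RequestStore are placeholders for the real database/WCF/file-share clients and whatever durable storage the requests would be persisted to):

```ts
// Sketch of the "check first, persist and poll" idea; all names are placeholders.
interface Backend {
  name: string;
  isAvailable(): Promise<boolean>;
  send(payload: unknown): Promise<void>;
}

interface RequestStore {
  persist(backend: string, payload: unknown): Promise<void>;
  replayAll(backend: string, send: (payload: unknown) => Promise<void>): Promise<void>;
}

async function sendOrQueue(backend: Backend, store: RequestStore, payload: unknown) {
  if (await backend.isAvailable()) {
    // The check can still race with the real call, so send() still needs
    // its own error handling.
    await backend.send(payload);
    return;
  }
  await store.persist(backend.name, payload);   // keep the request durable
  void pollUntilUp(backend, store);             // and poll in the background
}

async function pollUntilUp(backend: Backend, store: RequestStore, intervalMs = 30_000) {
  while (!(await backend.isAvailable())) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  await store.replayAll(backend.name, (p) => backend.send(p));   // replay persisted requests
}
```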
My questions around this design are:
Are there major issues with it? (small things like the fact it may fail between the test completing and the code running are understandable)
Are there better ways to implement this sort of design?
Would using traditional exception handling and/or transactions be better?
Updates
The system needs to talk to the back end systems in a coordinated way.
The system is very async in nature so using things like queuing technologies is fine.
The system must run even if one or more backend systems are down as others may be up and processing of some information is possible.
You will be needing that traditional exception handling no matter what, since as you point out there's always the chance that things'll fail between your last check and the actual request. So I really think any solution you find should try to interact smoothly with this.
You don't say whether these flaky resources need to interact in some kind of coordinated manner; if they do, you should probably be using a transaction manager of some sort. I don't believe you want to get into the footwork of transaction management in application code for most needs.
Sometimes I have also seen people use AOP to encapsulate retry logic for back-end systems that fail (for instance due to timeout issues). Used sparingly, this may be a decent solution.
In some cases you can also use message queuing technology to alleviate unstable back-ends. You could, for instance, commit to a message queue as part of a transaction, and only pop the message off the queue when the call succeeds. But this design is normally only possible when you can live with an asynchronous process.
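As a sketch, the encapsulated retry is nothing more than this (attempt count and delay are arbitrary, and the call being wrapped stands in for whatever flaky back-end operation you have):

```ts
// Minimal retry wrapper -- the sort of thing the AOP advice wraps around a
// back-end call. Attempt count and delay are arbitrary.
async function withRetry<T>(call: () => Promise<T>, attempts = 3, delayMs = 1_000): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await call();
    } catch (err) {
      lastError = err;                              // e.g. a timeout from a flaky back-end
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;                                  // give up and let normal handling take over
}

// Usage (hypothetical call): combined with the queue approach, the message
// stays on the queue until this succeeds, so a failure just means it gets
// picked up again later.
// await withRetry(() => submitToFlakyBackend(message));
```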
And as always, real stability can only be achieved by attacking the root cause of the problem. I had a 25-year-old bug in a mainframe TCP/IP stack fixed because we were overrunning it, so it is possible.
The Microsoft Smartclient framework provides a ConnectionMonitor class. Should be easy to use or duplicate.
Our approach to this kind of issue was to run a really basic 'sanity tester' before bringing up our main application. This was a thick client, so we could run the test every time the app started. This sanity test would go out and check things like database availability and external network (extranet) access, and it could have been extended to cover web services as well.
If there was a failure, the user was informed, and crucially an email was also sent to the support/dev team. These emails soon became unwieldy as so many were being created, but we then set up filters so we knew when something really bad was happening. Overall the approach worked pretty well; our biggest win was being able to tell users that the system was down before they had entered data and got part way through a long-winded process. They absolutely loved it.
At a technical level the sanity tester was written in C#; it used exception handling in a conventional way to find the problems it was looking for. The sanity program became a mini app in its own right, standalone from the main app. If I were doing it again I'd use a logging framework to capture issues, which is more flexible than our hard-coded approach.
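As a rough sketch of the shape it had (in TypeScript here rather than the C# we used, and with placeholder checks):

```ts
// Startup sanity tester, roughly. Each check is a placeholder for the real
// thing (ping the database, hit an extranet URL, call a web service, ...).
type Check = { name: string; run: () => Promise<void> };

async function runSanityChecks(
  checks: Check[],
  notifyUser: (failures: string[]) => void,
  emailSupport: (failures: string[]) => Promise<void>
): Promise<boolean> {
  const failures: string[] = [];
  for (const check of checks) {
    try {
      await check.run();                      // conventional exception handling:
    } catch (err) {                           // a throw means the check failed
      failures.push(`${check.name}: ${String(err)}`);
    }
  }
  if (failures.length > 0) {
    notifyUser(failures);                     // tell the user before they start a long process
    await emailSupport(failures);             // and alert the support/dev team
    return false;                             // don't bring up the main app
  }
  return true;
}
```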