ExtJS component cleanup

I read the following comment in ExtJs-in-action -
'Do not dismiss the destruction portion of a Component’s lifecycle if you plan on developing your own custom
Components. Many developers have gotten into trouble when they’ve ignored this crucial step and have code that
has left artifacts such as data Stores that continuously poll web servers...'
I have never explicitly called destroy on my containers/components in 3.4.x.
Though things seem to work fine, I am curious about:
1. What are some instances where implementing destructors becomes essential?
2. What is the proper convention for handling component destruction when the browser instance is closed?

This guide may be a good read.
You should always consider cleaning up your objects once they are no longer needed to free up memory, especially by unbinding event listeners and clearing any timers you've created with setInterval. Once the object reference is destroyed you cannot access it, but it could still be listening to or firing events and using up resources.
Generally in ExtJS, you free up resources in the destroy method, but remember to call callParent() too so that ExtJS does its own cleanup.
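For example, a custom component that starts a polling timer should stop it when it is destroyed. A minimal sketch in ExtJS 4+ style (since the answer mentions callParent(); the pollTimer/poll names are just illustrative):

Ext.define('MyApp.view.PollingPanel', {
    extend: 'Ext.panel.Panel',

    initComponent: function () {
        this.callParent(arguments);
        // The resource we must release: a timer polling the server.
        this.pollTimer = setInterval(Ext.bind(this.poll, this), 5000);
    },

    poll: function () {
        // e.g. reload a data store from the server
    },

    onDestroy: function () {
        // Stop the timer before the component goes away, otherwise it
        // keeps firing against a destroyed component.
        clearInterval(this.pollTimer);
        this.callParent(arguments);
    }
});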
Here is another article from IBM in 2012 that seems to go into more depth on the subject.

Related

Locking while front-end waits on response from backend

End users in our react application have the ability to make payments to loans via a pop up. The initial problem that we encountered is that users could click the pay button twice (or heaven forbid more than twice) and this would create multiple payments throwing our accounting into disarray. We thus implemented a sort of lock state that, when triggered to true, shows a loading gif displayed in a div with a simple tweak of the z-index. The state is passed down to the pop up from 2 components above. Every now and then I get an error message displaying that there is a possible memory leak. I assume this has something to do with my fix.
I'm just wondering, is there best practice on how to handle this sort of "locking" situation with react while waiting on some other external system to respond? I've tried to do this via the front-end but I'm not 100% convinced that it's the best and/or only solution.
If you need some code to better illustrate the scenario then let me know and I'll work on adding some examples.
Thanks in advance for your advice!
There are plenty of ways of doing this, and you could even add multiple layers to the process. On top of overlaying the page with a z-indexed loading screen, you could also disable the button based on some form of state change.
Also, the memory leak could come from not cleaning everything up when a particular hook's lifecycle ends. I would suggest looking at useEffect as a starting point. There's a good chance that either your modal or your loading indicator is causing this. Oftentimes this can be fixed by adding a dependency array to useEffect. Obviously, I am making a lot of assumptions here.
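As a rough sketch of both ideas, assuming a submitPayment function that calls your backend (all names here are illustrative):

import { useEffect, useRef, useState } from 'react';

function PayButton({ submitPayment }) {
  const [busy, setBusy] = useState(false);
  const mounted = useRef(true);

  // Cleanup on unmount: the flag prevents setState on an unmounted
  // component, a common source of React's memory-leak warning.
  useEffect(() => {
    return () => { mounted.current = false; };
  }, []);

  async function handleClick() {
    if (busy) return;               // lock: ignore repeat clicks
    setBusy(true);
    try {
      await submitPayment();        // only one payment request in flight
    } finally {
      if (mounted.current) setBusy(false);
    }
  }

  return (
    <button onClick={handleClick} disabled={busy}>
      {busy ? 'Processing...' : 'Pay'}
    </button>
  );
}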

Om, but in JavaScript

I'm getting to be a fan of David Nolen's Om library.
I want to build a not-too-big web app in our team, but I cannot really convince my teammates to switch to ClojureScript.
Is there a way I can use the principles used in Om while building the app in JavaScript?
I'm thinking something like:
1. immutable-js or mori for immutable data structures
2. js-csp for CSP
3. just a normal JavaScript object for the app-state atom
4. immutable-js for cursors
5. something for keeping track of the app-state and sending notifications based on cursors
I'm struggling with number 5 above.
Has anybody ventured into this territory or has any suggestions? Maybe someone has tried building a react.js app using immutable-js?
Edit July 2015: currently the most promising framework based on immutability is Redux! Take a look! It does not use cursors like Om (nor does Om Next).
Cursors are not really scalable: despite using the CQRS principles described below, they still create too much boilerplate in components, which is hard to maintain and adds friction when you want to move components around in an existing app.
Also, it's not clear to many devs when to use cursors and when not to, and I see devs using cursors in places they should not be used, making the components less reusable than components taking simple props.
Redux uses connect(), and clearly explains when to use it (container components) and when not to (stateless/reusable components). It solves the boilerplate problem of passing cursors down the tree, and performs well without too many compromises.
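A minimal sketch of that container/presentational split with react-redux (UserBadge and the state shape are made up for illustration):

import React from 'react';
import { connect } from 'react-redux';

// Reusable, stateless component: plain props, no knowledge of the store.
function UserBadge({ name }) {
  return <span>{name}</span>;
}

// Container: the only place that knows the store's shape.
const mapStateToProps = (state) => ({ name: state.user.name });
export default connect(mapStateToProps)(UserBadge);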
I've written about the drawbacks of not using connect() here.
Despite not using cursors anymore, most parts of my answer remain valid IMHO.
I have done it myself in our startup's internal framework, atom-react.
Some alternatives in JS are Morearty, React-cursors, Omniscient, or Baobab.
At that time there was no immutable-js yet, and I didn't do the migration; I'm still using plain JS objects (frozen).
I don't think using a persistent data structure lib is really required unless you have very large lists that you modify/copy often. You could adopt these projects as an optimization once you notice performance problems, but they don't seem to be required to implement Om's concepts or to leverage shouldComponentUpdate. One interesting part of immutable-js is its batching of mutations. But anyway, I still think that's an optimization, not a core prerequisite for very decent performance with React using Om's concepts.
You can find our opensource code here:
It has the concept of a Clojurescript Atom which is a swappable reference to an immutable object (frozen with DeepFreeze). It also has the concept of transaction, in case you want multiple parts of the state to be updated atomically. And you can listen to the Atom changes (end of transaction) to trigger the React rendering.
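A minimal sketch of such an Atom in plain JS (illustrative only, not the actual atom-react API):

function createAtom(initialState) {
  var state = Object.freeze(initialState);
  var listeners = [];
  return {
    get: function () { return state; },
    // swap applies a pure function to the current state and notifies
    // listeners; this is where the React re-render is triggered.
    swap: function (fn) {
      state = Object.freeze(fn(state));
      listeners.forEach(function (l) { l(state); });
    },
    listen: function (fn) { listeners.push(fn); }
  };
}

// Usage: re-render the whole app from the root on every change.
// var atom = createAtom({ user: null });
// atom.listen(render);
// atom.swap(function (s) { return Object.assign({}, s, { user: { id: 42 } }); });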
It has the concept of a cursor, like in Om (like a functional lens). It permits components to render the state but also to modify it easily. This is handy for forms, as you can link to cursors directly for two-way data binding:
<input type="text" valueLink={this.linkCursor(myCursor)}/>
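Building on the Atom sketch above, a cursor can be as simple as a get/set pair scoped to a path (again illustrative, not the real API):

function cursor(atom, path) {
  return {
    get: function () {
      return path.reduce(function (s, key) { return s[key]; }, atom.get());
    },
    set: function (value) {
      atom.swap(function (state) {
        // Non-mutating deep update along the path.
        function assoc(obj, keys, v) {
          if (keys.length === 0) return v;
          var copy = Object.assign({}, obj);
          copy[keys[0]] = assoc(obj[keys[0]], keys.slice(1), v);
          return copy;
        }
        return assoc(state, path, value);
      });
    }
  };
}

// var nameCursor = cursor(atom, ['form', 'name']);
// nameCursor.set('Ada'); // notifies Atom listeners -> re-render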
It has the concept of pure render, optimized out of the box, like in Om
Differences with Om:
No local state (this.setState(o) forbidden)
In Atom-React components, you can't have local component state. All the state is stored outside of React. Unless you have integration needs for existing JS libraries (you can still use regular React classes), you store all the state in the Atom (even for async/loading values) and the whole app rerenders itself from the main React component. React is then just a templating engine, very efficient, that transforms a JSON state into DOM. I find this very handy because I can log the current Atom state on every render, and then debugging the rendering code is easy. Thanks to the out-of-the-box shouldComponentUpdate it is fast enough that I can rerender the full app whenever a user presses a key in a text input or hovers a button with the mouse. Even on a mobile phone!
Opinionated way to manage state (inspired by CQRS/EventSourcing and Flux)
Atom-React has a very opinionated way to manage state, inspired by Flux and CQRS. Once you have all your state outside of React and an efficient way to transform that JSON state into DOM, you will find that the remaining difficulty is managing your JSON state.
Some of the difficulties encountered are:
How to handle asynchronous values
How to handle visual effects requiring DOM changes (mouse hover or focus, for example)
How to organise your state so that it scales in a large team
Where to fire the Ajax requests
So I ended up with the notion of a Store, inspired by the Facebook Flux architecture.
The point is that I really dislike the fact that a Flux store can actually depend on another, requiring you to orchestrate actions through a complex dispatcher. And you end up having to understand the state of multiple stores to be able to render them.
In Atom-React, the Store is just a "reserved namespace" inside the state held by the Atom.
So I prefer all stores to be updated from an event stream of what happened in the application. Each store is independent and does not access the data of other stores (exactly like in a CQRS architecture, where components receive exactly the same events, are hosted on different machines, and manage their own state however they want). This makes things easier to maintain, as when you are developing a new component you only have to understand the state of one store. It somehow leads to data duplication, because multiple stores may now have to keep the same data in some cases (for example, in a SPA it is probable you want the current user id in many places of your app). But if 2 stores put the same object in their state (coming from an event), this does not actually consume any additional memory, as it is still 1 object, referenced twice in the 2 different stores.
To understand the reasons behind this choice, you can read the blog posts of CQRS leader Udi Dahan, such as The Fallacy Of ReUse, and others about Autonomous Components.
So, a store is just a piece of code that receives events and updates its namespaced state in the Atom.
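A rough sketch of that idea (the event names and store shape are illustrative):

// A store owns one namespace of the Atom state and reduces events into it.
var userStore = {
  namespace: 'user',
  handleEvent: function (state, event) {
    switch (event.type) {
      case 'USER_LOGGED_IN':
        return Object.freeze({ id: event.userId, name: event.name });
      case 'USER_LOGGED_OUT':
        return Object.freeze({});
      default:
        return state; // ignore events this store doesn't care about
    }
  }
};

// The event stream hands every event to every store independently:
// atom.swap(function (appState) {
//   var next = Object.assign({}, appState);
//   next[userStore.namespace] =
//     userStore.handleEvent(appState[userStore.namespace], event);
//   return next;
// });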
This moves the complexity of state management to another layer. Now the hardest part is to define with precision what your application events are.
Note that this project is still very unstable, undocumented, and not well tested. But we already use it here with great success. If you want to discuss it or contribute, you can reach me on IRC: Sebastien-L in #reactjs.
This is what it feels like to develop a SPA with this framework. Every time it renders, with debug mode on, you have:
The time it took to transform the JSON to Virtual DOM and apply it to the real DOM.
The state logged to help you debug your app
Wasted time thanks to React.addons.Perf
A path diff compared to previous state to easily know what has changed
Check this screenshot:
Some advantages that this kind of framework can bring that I have not explored so much yet:
You really have undo/redo built in (this worked out of the box in my real production app, not just a TodoMVC). However, IMHO most actions in many apps actually produce side effects on a server, so it does not always make sense to revert the UI to a previous state, as the previous state would be stale.
You can record state snapshots and load them in another browser. CircleCI has shown this in action in this video.
You can record "videos" of user sessions in JSON format, send them to your backend server for debugging, or replay the video. You can live-stream a user session to another browser for user assistance (or to spy on the live UX behavior of your users). Sending states can be quite expensive, but formats like Avro can probably help. Or, if your app's event stream is serializable, you can simply stream those events. I already implemented that easily in the framework, and it works in my production app (just for fun; it does not transmit anything to the backend yet).
Time-traveling debugging can be made possible, like in Elm.
I've made a video of the "record user session in JSON" feature for those interested.
You can have Om-like app state without yet another React wrapper, using pure Flux. Check it here: https://github.com/steida/este. That's my very complete React starter kit.

Specific working example needed using IProgress interface described by Albahari

I am a neophyte at C# threading. I am trying to get my head around how to make 100K web requests with some degree of parallelism and report progress in real time to the GUI:
urls processed so far: ######
total moved so far: ######
timed out so far: ######
I am reading pages 596ff in C# 5.0 in a Nutshell by the Albahari brothers, the section on Progress Reporting. At this point, I don't see how the counters in the Progress instance would be incremented in a thread-safe manner, or exactly how/where the UI gets updated. Even in the example specifically discussing the differences between writing to the console and writing to the GUI, the book uses Console.WriteLine. I'd be grateful for an example showing exactly what occurs in the Progress instance -- incrementing some int variables and writing to a textbox, for example.
I have a walkthrough on my blog, in particular pointing out the caveats:
IProgress<T>.Report is asynchronous, so it works best if the object you send it is immutable.
By convention, the IProgress<T> progress passed into your TAP method may be null, so check for that before reporting progress.
Progress<T> works best with a UI or other single-threaded SynchronizationContext.
Exceptions raised from Progress<T>'s handler go directly to the SynchronizationContext, so avoid throwing exceptions from the progress report handler.
There's also a blog post here and the MSDN docs are quite good.
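To make this concrete, here is a minimal sketch putting those caveats together: counters incremented with Interlocked on worker threads, reported as an immutable snapshot, and rendered on the UI thread by Progress<T>. All the names (ProgressSnapshot, UrlProcessor, "moved" mapped to successful requests) are illustrative, not from the book:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Immutable snapshot, per the "report immutable objects" caveat.
public sealed class ProgressSnapshot
{
    public readonly int Processed, Moved, TimedOut;
    public ProgressSnapshot(int processed, int moved, int timedOut)
    { Processed = processed; Moved = moved; TimedOut = timedOut; }
}

public static class UrlProcessor
{
    static int _processed, _moved, _timedOut; // shared counters

    public static async Task ProcessAsync(IEnumerable<string> urls,
                                          IProgress<ProgressSnapshot> progress)
    {
        var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };
        var throttle = new SemaphoreSlim(20); // degree of parallelism

        var tasks = urls.Select(async url =>
        {
            await throttle.WaitAsync();
            try
            {
                await client.GetAsync(url);
                Interlocked.Increment(ref _moved);
            }
            catch (TaskCanceledException)
            {
                Interlocked.Increment(ref _timedOut);
            }
            finally
            {
                Interlocked.Increment(ref _processed);
                throttle.Release();
                // progress may be null by convention; reads of the counters
                // here are approximate, which is fine for a progress display.
                progress?.Report(new ProgressSnapshot(_processed, _moved, _timedOut));
            }
        });

        await Task.WhenAll(tasks);
    }
}

// On the UI thread (so Progress<T> captures the UI SynchronizationContext):
// var progress = new Progress<ProgressSnapshot>(s =>
//     statusTextBox.Text = string.Format(
//         "processed: {0}  moved: {1}  timed out: {2}",
//         s.Processed, s.Moved, s.TimedOut));
// await UrlProcessor.ProcessAsync(urls, progress);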

Advice needed for multi-threading strategy for WPF application

I'm building a single-window WPF application.
In the window is a list of items (which are persisted in a database, of course).
Periodically I need to start a background task that updates the database from an Atom feed. As each new item is added to the database, the list in the UI must also update to reflect this. I don't want this background task to slow down the UI, but at the same time it needs to interact with the UI.
Having read loads of articles and seen lots of simple examples, I am still unsure of the best way to implement this.
What I think maybe I could do is:
On the Window_Loaded event, create a DispatcherTimer.
When the Tick event fires, call the UpdateDb() method.
UpdateDb() will get the items from the Atom feed and add them to the database. As I iterate through each item, I will call another method to rebind the list to the database so that it "refreshes".
When all the tasks are finished, reset the DispatcherTimer??? (not sure if this can, or needs to, be done)
Remember, this is background task so a user could be using the UI at the same time.
How does this sound?
Thanks.
This sounds suboptimal because you're doing database connectivity on the UI thread. When the Tick event fires on the DispatcherTimer, handlers will execute on the UI thread. You need to minimize the amount of work you do on this thread to keep the UI responsive, and you definitely shouldn't be doing IO-bound work on this thread.
I would probably have a data service whose responsibility is to update the database and raise events as changes are made. Your UI layer can attach to these events and marshal to the UI thread to apply changes. To marshal to the UI thread, you just need to call Dispatcher.Invoke.
Regardless of your specific approach, the key is to do as much as you can (including any database access) on a separate thread. Marshal back to the UI thread as late as possible and do as little work as possible on the UI thread.
One other thing to note is that WPF automatically marshals changes to scalar values for you. You only need to marshal changes to collections (adding/removing/replacing items).
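A rough sketch of that data-service approach (FeedService, FeedItem, and the fetch/save methods are illustrative placeholders):

using System;
using System.Collections.Generic;
using System.Threading;

public class FeedItem { }

public class FeedService
{
    public event Action<FeedItem> ItemAdded;
    private Timer _timer;

    public void Start(TimeSpan interval)
    {
        // Poll on a threadpool timer so no work touches the UI thread.
        _timer = new Timer(_ => Poll(), null, TimeSpan.Zero, interval);
    }

    private void Poll()
    {
        // Network and database work happens here, off the UI thread.
        foreach (FeedItem item in FetchNewItems())
        {
            SaveToDatabase(item);
            var handler = ItemAdded;
            if (handler != null) handler(item); // raised on a worker thread
        }
    }

    private IEnumerable<FeedItem> FetchNewItems() { yield break; } // read Atom feed
    private void SaveToDatabase(FeedItem item) { }                 // insert row
}

// In the window: marshal only the collection change back to the UI thread.
// _feedService.ItemAdded += item =>
//     Dispatcher.Invoke(new Action(() => Items.Add(item)));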
Your approach would work.
You'd start the timer when the app loads. For each tick of the timer, you start a thread to update the database. Once the database update has happened, you can call .BeginInvoke() on your UI objects to update the UI on the presentation thread (that will be the only time your UI should be affected).
I'd use a System.Threading.Timer, which will call a specified method at a specified interval on a threadpool thread, so there's no need to create an additional thread. Do your DB work with that and marshal back to the UI thread as needed.
WPF Multithreading with BackgroundWorker by Pavan Podila:
The good news is that you really don’t have to write such a component since one is available already: the BackgroundWorker class introduced in .Net Framework 2.0. Programmers who are familiar with WinForms 2.0 may have already used this component. But BackgroundWorker works equally well with WPF because it is completely agnostic to the threading model.
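For completeness, a hedged sketch of the BackgroundWorker approach (FetchNewItemsFromAtomFeed, SaveToDatabase, Items, and FeedItem are illustrative placeholders); create the worker on the UI thread so ProgressChanged is marshalled back to it:

var worker = new System.ComponentModel.BackgroundWorker
{
    WorkerReportsProgress = true
};
worker.DoWork += (s, e) =>
{
    // Runs on a threadpool thread.
    foreach (var item in FetchNewItemsFromAtomFeed())
    {
        SaveToDatabase(item);
        worker.ReportProgress(0, item); // hand the item to the UI thread
    }
};
worker.ProgressChanged += (s, e) =>
    Items.Add((FeedItem)e.UserState);   // runs on the UI thread
worker.RunWorkerAsync();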

Detecting GDI/USER handle leaks in WinForms

I wrote a nice WinForms 2.0 application and it's working great and customers are still happy, but unfortunately I cannot solve one issue. The problem is that after using the app for a couple of hours, the GDI/USER handle count keeps rising until the process cannot allocate more objects and the app crashes...
I'm not doing anything fancy; it's a regular app: a few forms, a few more modal forms, a few DataGridViews, and a lot of TableLayoutPanels to which I'm adding a lot of labels and textboxes.
My questions are:
1. Are there any recommended practices concerning adding/removing regular system controls on forms at runtime (DGV/TLP)?
2. How do I detect system handle leaks, preferably using Visual Studio and some kind of free plugin (profiler)?
Detecting graphics and window handle leaks is very difficult. As to a particular strategy for finding them at runtime, I can't suggest anything (though I'd love to hear someone else's!).
As for preventing them, here are a couple of reminders:
While the Control class's finalizer will call Dispose(), this is non-deterministic. You are not guaranteed that ANY object will EVER get finalized by the garbage collector. It's likely that it will, but it's not a guarantee.
That said, Forms are an exception. When a Form is shown NON-MODALLY (meaning through Show(), NOT ShowDialog()), it will deterministically call Dispose() on itself when it closes. Forms shown through ShowDialog() must have Dispose() called manually in order to deterministically clean up the handle.
Keeping those two things in mind, the most important thing that you can do is to ensure that you always call Dispose() on any object that you explicitly create that implements IDisposable. This INCLUDES Forms, Controls, Graphics objects, even the graphics helper classes like Pen and Brush. All of those classes implement IDisposable, and all of them need to be disposed of as soon as you no longer need them.
Try to cache your graphics utility classes, assuming you're using some. While a Pen and a Brush are fairly lightweight to create, they do take up handles and need to be disposed of when you're finished. Rather than creating them all the time, create a cache manager that allows you to pass in the parameters that you would use in the constructor for those objects and keep that object around. Repeated calls with the same parameters should still only use one single instance. You can then flush your cache on a periodic basis or at specific places in your application if you know where those would be.
Following those guidelines will greatly reduce--if not eliminate--your handle leaks.
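A short sketch of those rules (OptionsForm is an illustrative placeholder):

using System.Drawing;
using System.Windows.Forms;

public class OptionsForm : Form { }

public class MainForm : Form
{
    private void ShowOptions()
    {
        // Modal forms are NOT disposed automatically; wrap them in using.
        using (var dialog = new OptionsForm())
        {
            if (dialog.ShowDialog(this) == DialogResult.OK)
            {
                // Read values from the dialog before it is disposed.
            }
        } // window handle released deterministically here
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        // Pen holds a GDI handle; release it as soon as drawing is done
        // (or fetch it from a cache as suggested above).
        using (var pen = new Pen(Color.Red, 2f))
        {
            e.Graphics.DrawRectangle(pen, 10, 10, 100, 50);
        }
    }
}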
I find using the Task Manager with the GDI Objects column visible essential for finding such leaks. You can target specific areas by breaking before a call, noting the GDI object count, then breaking after the suspect call to determine whether the objects are being released properly.
The source code for two useful GDI leak tracking tools can be found here: link text
I have used them successfully on many Visual Studio C++ projects. I am not sure whether they work with .NET as well.
