Question using Ext's update() instead of dom.innerHTML - extjs

I have a question concerning the performance, reliability, and best-practice implications of using ExtJS's update() method versus directly updating the innerHTML of the dom of an Ext element.
Consider the two statements:
Ext.fly('element-id').dom.innerHTML = 'Welcome, Dude!';
and
Ext.fly('element-id').update('Welcome, Dude!', false);
I don't need to eval() any script, and I'm certain that update() takes into consideration any browser quirks.
Also, does anyone know if using:
Ext.fly('element-id').dom.innerHTML
is the same as
document.getElementById('element-id').innerHTML
?
Browser and platform compatibility are important, and if the two are fundamentally the same, then ditching Ext.element.dom.innerHTML altogether for update() would probably be my best solution.
Thanks in advance for your help,
Brian

If you do not need to load scripts dynamically into your updated html or process a callback after the update, then the two methods you've outlined are equivalent. The bulk of the code in update() adds the script loading and callback capabilities. Internally, it simply sets the innerHTML to do the content replacement.
Ext.fly().dom returns a plain DOM node, so yes, it is equivalent to the result of getElementById() in terms of the node it points to. The only subtlety to understand is the difference between Ext.fly() and Ext.get(). Ext.fly() returns a shared instance of the node wrapper object (a flyweight). As such, that instance might later point to a different node behind the scenes if any other code calls Ext.fly(), including internal Ext code. For that reason, the result of a call to Ext.fly() should only be used for atomic operations and not reused as a long-lived object. Ext.get().dom, on the other hand, returns a new, unique object instance, and in that sense is more like getElementById().
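A minimal sketch (with hypothetical element ids) of why a flyweight must not be held onto:
var el = Ext.fly('first-id');    // the shared flyweight now wraps #first-id
Ext.fly('second-id');            // any later call re-targets that same shared instance
el.update('Oops');               // updates #second-id, not #first-id!
var kept = Ext.get('first-id');  // unique wrapper; safe to keep a reference
kept.update('Welcome, Dude!');   // always targets #first-id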

I think you answered your own question: "Browser and platform compatibility are important, and if the two are fundamentally the same, then ditching Ext.element.dom.innerHTML altogether for update() would probably be my best solution." JS libraries are intended (in part) to abstract browser differences; update is an example.
As @bmoeskau wrote above, update() provides additional functionality that you don't need for your current problem. Nevertheless, update() is a good choice.

Related

React Simple Global Entity Cache instead of Flux/Redux/etc

I am writing a little "fun" Scala/Scala.js project.
On my server I have Entities which are referenced by uuid-s (inside Ref-s).
For the sake of "fun", I don't want to use flux/redux architecture but still use React on the client (with ScalaJS-React).
What I am trying to do instead is to have a simple cache, for example:
when a React UserDisplayComponent wants to display the Entity User with uuid=0003
then the render() method calls into the Cache (which is passed in as a prop)
let's assume that this is the first time that the UserDisplayComponent asks for this particular User (with uuid=0003) and the Cache does not have it yet
then the Cache makes an AjaxCall to fetch the User from the server
when the AjaxCall returns, the Cache triggers a re-render
BUT! Now, when the component asks the Cache for the User, it gets the User Entity from the Cache immediately, without triggering an AjaxCall
The way I would like to implement this is the following:
I start a render()
"stuff" inside render() asks the Cache for all sorts of Entities
Cache returns either Loading or the Entity itself.
at the end of render() the Cache sends all the AjaxRequest-s to the server and waits for all of them to return
once all AjaxRequests have returned (let's assume that they do, for the sake of simplicity), the Cache triggers a "re-render()", and now all entities that were requested before are provided by the Cache right away.
of course it can happen that the newly arrived Entity-s will trigger render() to fetch more Entity-s, for example if I load an Entity of type case class UserList(ul: List[Ref[User]]). But let's not worry about this now; a sketch of the cache idea follows below.
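Here is a minimal Scala sketch of the flow above (hypothetical names, with String standing in for the Entity type):
sealed trait Cached[+A]
case object Loading extends Cached[Nothing]
final case class Ready[A](value: A) extends Cached[A]

// enqueueFetch queues an AjaxCall; the queue is flushed at the end of render()
class SimpleCache(enqueueFetch: String => Unit) {
  private var entries = Map.empty[String, Cached[String]]

  def get(uuid: String): Cached[String] =
    entries.get(uuid) match {
      case Some(cached) => cached    // later renders: served immediately
      case None =>
        entries += uuid -> Loading   // first render: mark as requested
        enqueueFetch(uuid)
        Loading
    }

  // called when an AjaxCall returns; the caller then triggers a re-render
  def put(uuid: String, value: String): Unit =
    entries += uuid -> Ready(value)
}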
QUESTIONS:
1) Am I doing something really wrong if I am doing the state handling this way?
2) Is there an already existing solution for this?
I looked around, but everything was FLUX/REDUX etc., along these lines, which I want to AVOID for the sake of:
"fun"
curiosity
exploration
playing around
I think this simple cache will be simpler for my use-case, because I want to take the "REF"-based "domain model" over to the client in a simple way: as if the client were on the server and the network were infinitely fast with zero latency (this is what the cache would simulate).
Consider what issues you need to address to build a rich dynamic web UI, and what libraries / layers typically handle those issues for you.
1. DOM Events (clicks etc.) need to trigger changes in State
This is relatively easy. DOM nodes expose a callback-based listener API that is straightforward to adapt to any architecture.
2. Changes in State need to trigger updates to DOM nodes
This is trickier because it needs to be done efficiently and in a maintainable manner. You don't want to re-render your whole component from scratch whenever its state changes, and you don't want to write tons of jQuery-style spaghetti code to manually update the DOM, as that would be too error-prone even if efficient at runtime.
This problem is the main reason libraries like React exist: they abstract it away behind a virtual DOM. But you can also abstract it away without a virtual DOM, as my own Laminar library does.
Forgoing a library solution to this problem is only workable for simpler apps.
3. Components should be able to read / write Global State
This is the part that flux / redux solve. Specifically, these are issues #1 and #2 all over again, except as applied to global state as opposed to component state.
4. Caching
Caching is hard because the cache needs to be invalidated at some point, on top of everything else above.
Flux / redux do not help with this at all. One of the libraries that does help is Relay, which works much like your proposed solution, except it is far more elaborate and built on top of React and GraphQL. Reading its documentation will help you with your problem. You can definitely implement a small subset of Relay's functionality in plain Scala.js if you don't need the whole React / GraphQL baggage, but you need to know the prior art.
5. Serialization and type safety
This is the only issue on this list that relates to Scala.js as opposed to Javascript and SPAs in general.
Scala objects need to be serialized to travel over the network: into JSON, protobufs, or whatever else, but you need a system for this that does not involve error-prone manual work. There are many Scala.js libraries that address this issue, such as upickle, Autowire, endpoints, sloth, etc. Key words: "Scala JSON library" or "Scala type-safe RPC", depending on what kind of solution you want.
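For instance, here is a minimal upickle sketch (the User case class is hypothetical) of a type-safe JSON round trip:
import upickle.default._

case class User(uuid: String, name: String)
object User { implicit val rw: ReadWriter[User] = macroRW }

val json: String = write(User("0003", "Ada"))  // serialize to JSON
val user: User = read[User](json)              // parse it back, type-safely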
I hope these principles suffice as an answer. When you understand these issues, it should be obvious whether your solution will work for a given use case or not. As it is, you didn't describe how your solution addresses issues 2, 4, and 5. You can use some of the libraries I mentioned or implement your own solutions with similar ideas / algorithms.
On a minor technical note, consider implementing an async, Future-based API for your cache layer, so that it returns Future[Entity] instead of Loading | Entity.
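For example, a minimal sketch with hypothetical names and String again standing in for Entity; note that caching the in-flight Future itself removes the need for a Loading sentinel:
import scala.collection.mutable
import scala.concurrent.Future

// fetch performs the AjaxCall for a uuid
class FutureCache(fetch: String => Future[String]) {
  private val entries = mutable.Map.empty[String, Future[String]]

  // starts the fetch at most once per uuid; concurrent callers
  // share the same in-flight Future
  def get(uuid: String): Future[String] =
    entries.getOrElseUpdate(uuid, fetch(uuid))
}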

What is safe method to perform `Object.assign` when using AngularJS?

Basically I would like to perform Object.assign to get a copy of my data, without anything that AngularJS attached in the past, attaches now, or may attach in future versions.
Sure, I can delete a property such as $$hashKey after the assignment, but that approach is totally fragile; on the other hand, I could manually construct the object with only the fields I want, but that is tiresome (and also fragile if I change the definition of my source object).
Is there something solid in between?
There are no other properties like $$hashKey; it is one of a kind.
All of Angular's object helpers are aware of this property and remove it at the end of the operation. angular.extend is the direct Angular counterpart of Object.assign and should be used instead.
angular.copy seems to be helpful in this case
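For example (a minimal sketch, assuming some decoratedItem that Angular has stamped with $$hashKey):
// both helpers strip Angular's internal $$hashKey from the result
var shallow = angular.extend({}, decoratedItem); // like Object.assign, minus $$hashKey
var deep = angular.copy(decoratedItem);          // deep copy, also without $$hashKey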

In an AngularJS expression is there a way to compare a scope value to a value in another library?

I'm creating a directive around a third party library, to go in a form, where the option chosen in a select drop-down will bring up a different set of form elements.
In the parent element of each subset of form elements I'm trying to use an expression similar to this: ng-if="myScopeObj.val === ThirdParty.CONSTANT_VAL". I came to realize it's not working because the "ThirdParty" library isn't on the scope.
Should I just assign the library to a variable on the scope, or is there some pattern that can address this? It seems like creating isThis() or isThat() functions for every constant in the library wouldn't be a great solution.
Should I create a service to wrap the third party library and then inject it into the directive? Though I'd still need to put the injected service on the scope. Would that be overkill for a library that doesn't access remote APIs? I don't think it'd need to be mocked for testing, anyway.
You're correct that you do need to get the value onto the $scope somehow in order for it to be usable. And you're correct that one of the primary benefits of wrapping in a service is that you can mock the library. Another benefit of wrapping in a service is self-documentation. Someone else (or you, at a later time) looking at your code could be confused as to where ThirdParty is coming from. Working in Angular, the assumption is that all dependencies are injected, and breaking convention comes at a cognitive cost. Having a service also makes it easier to swap out the underlying library later for a different implementation. Anyway, your simplest fix is:
$scope.ThirdParty = ThirdParty;
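If you do go the service route, a minimal sketch (the module and directive names here are hypothetical) might look like this:
angular.module('app').factory('ThirdParty', function ($window) {
  return $window.ThirdParty; // the global exposed by the vendor script
});

angular.module('app').directive('myWidget', function (ThirdParty) {
  return {
    link: function (scope) {
      scope.ThirdParty = ThirdParty; // now usable in ng-if expressions
    }
  };
});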

Global property for DB access rather than passing DB around everywhere? Advice anyone?

Globals are evil, right? At least everything I read says so, because something might alter the state of the global at any point.
However, I have a DB object that's a bit of a tramp when it comes to class parameters. The property below is an instance of a wrapper class that automatically works with MS Access or SQL, which is why it's not EF or some other ORM.
Public Property db As New DBI.DBI(DBI.DBI.modeenum.access, String.Format("Provider=Microsoft.ACE.OLEDB.12.0;Data Source={0} ;Persist Security Info=True;Jet OLEDB:Database Password=""lkjhgfds8928""", GetRpcd("c:\cms")))
The code itself does have PostSharp for exception handling, so I'm thinking that I can conditionally handle OleDb errors by logging them and re-initialising the DB if it is Null.
Up till now, the solution has been to continually pass the db around as a parameter to every single class that needs it. Most of the data classes have a shared ObservableCollection that is built from structures that individually implement INotifyPropertyChanged. One of these is built asynchronously: the collection property checks whether it's empty before firing off the private Async buildCollection sub.
Given that we don't use dependency injection (yet), as I still need to learn it: is the global property all that bad? Db is needed everywhere that data is pulled in or saved. The only places I don't need it at all are the View and its code-behind.
It's not a customer facing project but it does need to be solid.
Any advice gratefully received!!
Passing the DB connection as a parameter into your classes IS using dependency injection; perhaps you just didn't recognize it as such. Hard-coding the connection string in the callers is still code that is not free of dependencies, but at least your database accessors themselves are free of the dependency upon a global connection.
Globals aren't just evil because they change without notice - that's just one effect you see resulting from the bad design choice. They're evil because a design using them is brittle. Code that depends upon globals requires invisible stuff to be set correctly before calling it, and that leads to inter-dependencies between unrelated code. The invisible stuff becomes critically important stuff. Reading just the interface of a module that internally uses globals, how would I know that I have to call the SetupGlobalThing() method before calling it? What happens if I call IncrementGlobalThing() and DecrementGlobalThing() and MultiplyGlobalThing() in varying orders, depending on the function the user selects?
Instead, prefer stateless methods where you pass in all the stuff to be changed and used: IncrementThing(Integer thing) doesn't rely on hidden setup steps. It clearly does one thing: it increments the thing passed in.
It may help to think about it from a unit testing viewpoint. If you were to write a unit test to prove a specific module of code works, would you need to pass in a real database connection (hard*), or would you be able to pass in a fake database reference that meets your testing needs easily?
The best way to test your logic is to unit test it. The best way to test your class interfaces and method structure is to write unit tests that call them. If the class is hard to test, it's likely due to dependencies upon external things (globals, singletons, databases, inappropriate member variables, etc.)
The reason I called using a real database "hard" is that a unit test needs to be easy and fast to run. It shouldn't rely on slow or breakable or complex external things. Think about unit testing your software on the bus, with no network connection. Think about how much work it is to create a dummy database: you have to add users, you have to have the right version of schema in it, it has to be installed, it has to be filled with the right kind of testing data, you need network connectivity to it, all those things can make your testing unreliable. Instead, in a unit test you pass in a mock database, which simply returns values that exercise your code being tested.
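As a minimal VB.NET sketch (the interface and class names here are hypothetical) of injecting a fakeable dependency instead of reaching for a global:
Imports System.Collections.Generic

' depend on an interface rather than a global DB property
Public Interface IDataStore
    Function GetNames() As List(Of String)
End Interface

Public Class ReportBuilder
    Private ReadOnly _db As IDataStore

    ' the dependency is injected; no hidden global setup required
    Public Sub New(db As IDataStore)
        _db = db
    End Sub

    Public Function BuildHeadline() As String
        Return String.Join(", ", _db.GetNames())
    End Function
End Class
In production you would pass the real DBI-backed implementation; in a unit test you pass a fake IDataStore that returns canned values.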

Is it bad practice to "go deep" with your application of callbacks?

Weird question, but I'm not sure whether it's an anti-pattern or not.
Say I have a web app that will render 1000 records to an HTML table.
The typical approach I've seen is to send a query down to the database, translate the records in some way into some abstract state (be it an array, an object, etc.), and place the translated records into a collection that is then iterated over in the view.
As the number of records grows, this approach uses up more and more memory.
Why not send, along with the query, a callback that performs an operation on each of the translated rows as they are read from the database? This would mean that you don't need to collect the data for further iteration in the view, so the memory footprint shrinks, and you're not iterating over the data twice.
There must be something implicitly wrong with this approach, because I rarely see it used anywhere. What's wrong with this approach?
Thanks.
Actually, this is exactly how a well-developed application should behave.
There is nothing wrong with this approach, except that not all database interfaces allow you to do this easily.
If we are talking about tabulating 10 records for yet another social network, there is no need to mess with callbacks if you can get an array of hashes (or whatever) with a single call that is already implemented for you.
There must be something implicitly wrong with this approach, because I rarely see it used anywhere.
I use it. Frequently. Even when I wouldn't use too much memory by repeatedly copying the data, using a callback just seems cleaner. In languages with closures, it also lets you keep related code together while factoring out the messy DB stuff.
This is a "limited by your tools" class of problem: most programming languages don't allow you to say "do something around this code". This was solved in recent years with the advent of closures. Think of a closure as a way to pass code into another method, which is then executed in that method's context. For example, in Groovy's GSQL, you can write:
def l = []
sql.eachRow("select id from table where time > ?", [time]) { row ->
    l << row[0]
}
This will open a connection to the database, create a statement and a result set, and then run l << row[0] for each row the DB returns. Note that the code runs inside of sql.eachRow(), yet it can access both local variables (l) and the closure parameter supplied by eachRow (row).
With this kind of code, you can even generate the response to an HTTP request on the fly without keeping much of the page in RAM at any time. In my case, I could stream a 2 MB document to the browser using only a few KB of RAM, and the browser would then chew for 83s parsing it.
This is roughly what the iterator pattern allows you to do. In many cases this breaks down at the interface between your application and the database. Technologies like LINQ even have solutions that can send code back to the database.
I've found it easier to use an interface resolver than a deep callback chain hooked up through several classes. MS has a much fancier version than mine, called Unity. It provides a much cleaner way of accessing classes that should not be tightly coupled:
http://www.codeplex.com/unity
