I came across a few projects where AngularJS is used along with Sencha Touch (e.g. https://github.com/tigbro/sencha-touch-angular-adapter). Is there a benefit to using both together? If so, for what? I was under the impression that both are full-fledged frameworks and you wouldn't have to mix and match them.
IMHO, it is not worth the trouble.
Actually, it is very easy to mix other frameworks/libraries into Sencha Touch/ExtJS, and (I assume) people's motivation for doing so is mainly to bring the benefit of two-way data binding into ST/Ext.
Things will be fine if you only touch the surface. Your HTML becomes cleaner and more maintainable, there are no more "weird" <tpl> tags floating around inside your JavaScript code, and so on. The UI part of your project becomes beautiful.
You can even make data binding sync flawlessly with a simple Ext.data.Model.
However, if you're using ST/Ext to handle data communication with the backend, you're dealing with Ext.data.Store most of the time. Your collections of data come back from the backend into stores, and your models' hasMany associations are exposed as stores, just to name a few examples.
Now, how do you sync stores with the plain JavaScript arrays (or observable arrays of some kind) used in two-way data binding? What needs to be done to the stores when the bound arrays change? What needs to be done to the bound arrays when the stores change? Adding, removing, and inserting are fairly simple operations to deal with, but what about sorting and filtering?
Therefore, if you can afford to give up Ext.data.Store, mixing Angular with ST/Ext is a fairly easy task; otherwise, just stick with Sencha.
I am building a CRUD application using AngularJS. Currently, I am using the JSON models returned from the back-end directly in my controllers. These models have a 3-4 level deep hierarchy, so my controller code looks like:
$scope.prop1 = object1.object2.object3
...
I am wondering whether I should decouple my controllers from these back-end models. So instead of using the model objects directly, I would create new (flattened) models and then use them in the controller. Is that a recommended practice?
What are the advantages/disadvantages of this?
My advice would be to check the domain of the objects you're passing from the backend. Does object1 really contain object2? Are those objects actually connected, or is it just handy to return them together?
As far as AngularJS is concerned, there is no real difference. You can $watch('object1.object2.object3') with roughly the same performance impact as $watch('object3'), and there will be no error if object2 does not contain object3. There is a very small difference: $parse will parse your expression into an AST, and evaluating it will take slightly longer to traverse to the third object. But that difference is so small it would be extremely hard to notice.
So I would advise not to "flatten" everything or "normalise" it into a strict hierarchy, but to try to figure out the real relations between the objects. Even if you don't see any difference at first, it will pay you back later with much higher maintainability.
I had a talk with my team lead about this topic, and from his point of view we can just use bindings and commands and omit the ViewModel, because we can test UI behaviour without a VM using Automation or our own UI testing mechanisms (based on automated clicks on Views). So, if there are no real benefits, why should I spawn "redundant" entities? Besides, automated integration tests look much more indicative than VM tests. So, it seems that we can mix VMs and Models.
Update:
I agree that mixing VMs and Models puts both the data model and the rules for transforming that data for presentation in a View into a single .cs file. But if that is the only benefit of separating them, I don't want to create a VM for each View.
So what pros of the VM do you know of?
The VM is the logic behind your UI. Combining the UI code with that logic ends up in a mess, at least in my experience. Your View should define what you see - buttons, menus, etc. Your VM is in charge of the bindings and of handling events caused by the user.
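As a rough, minimal sketch of that split (all class, property, and command names below are invented for illustration, not taken from the question), the VM exposes state and commands and the View only binds to them:

    // Minimal sketch of a ViewModel that owns the UI logic while the View stays declarative.
    // All names here are hypothetical.
    using System;
    using System.ComponentModel;
    using System.Windows.Input;

    public class LoginViewModel : INotifyPropertyChanged
    {
        private string _userName;

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                OnPropertyChanged(nameof(UserName));   // the View updates via binding, no code-behind
            }
        }

        // The View binds a Button.Command to this; the click handling lives here, not in the View.
        public ICommand LoginCommand { get; }

        public LoginViewModel()
        {
            LoginCommand = new DelegateCommand(() => { /* call into the Model/services here */ });
        }

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string name)
        {
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
        }
    }

    // Tiny ICommand helper so the sketch is self-contained.
    public class DelegateCommand : ICommand
    {
        private readonly Action _execute;
        public DelegateCommand(Action execute) { _execute = execute; }
        public bool CanExecute(object parameter) { return true; }
        public void Execute(object parameter) { _execute(); }
        public event EventHandler CanExecuteChanged { add { } remove { } }
    }

The View's XAML would then bind a TextBox to UserName and a Button to LoginCommand, leaving no event-handler logic in the code-behind.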
Edit:
Not wanting to create a VM for each View doesn't sound like a software-design reason. Doing so will keep your View clean of logic and leave your Model free to be the connecting layer between the core layer and your app layer.
I like the following example regarding the Model and its role (and why it shouldn't be combined with the VM): imagine you're developing an Android app using Google Maps. Google Maps is your core. Then one fine day you really need the option to, say, color parts of the map in pink - bright pink. An email to Google asking for colorPink(Map) will probably get you nowhere. That's where your Model steps in and provides the map wrapper you need to define your pinky method.
Without a separate Model, you'd have to go through every VM that uses the map and update it.
So, the View has a role, the Model has a role, and the VM is the logic between them.
Edit 2:
There are some cases where there's no need for a Model layer - I tended to disagree at first but was convinced after a discussion: in relatively small applications, the Model can end up being a redundant wrapper with no added functionality over the core. In such cases, you can use the core objects directly.
What is he binding to? Whatever he is binding to is effectively a view model, whether you call it that or not. If it also doubles as a data model, then you're violating the Single Responsibility Principle (SRP). Essentially, the problem is that you're lumping together code that services different parts of the application architecture, which will lead to a convoluted mess.
UI testing is a pain, not just because you need to accommodate changes in the View, which can occur many times, but also because the UI tends to interfere with its own testing. Depending on the tests needed, you'll have to keep track of focus, resizing, and possibly other (mouse) events, which can make it extremely hard to build a representative test.
It is much easier to scope the tests to the logic in the ViewModel and leave the UI testing to humans. The human testers do not need to test the logic; their only concern should be the UI.
By breaking a ViewModel down into a hierarchy of ViewModels, you might be able to reuse a ViewModel multiple times. There is no rule that states you should have a ViewModel for each View. (After a release or two I end up there anyway, but that's beside the point :) )
It depends on the nature of your Models - sometimes they are simple and can serve as both, as you are suggesting. However, depending on the framework, you'll need to dirty up your Model with PropertyChanged event code, which can get messy and distracting. In more complex cases, ViewModels serve as a mediator between the View and the Model. Suppose you're creating a client app that monitors a remote process or database entries - creating ViewModels for these entities lets you surface your data in a way that is convenient to bind to from a UI framework (but would be silly in a DB framework), encapsulate operations as commands, perform validation, and so on.
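As a hedged sketch of that mediator role (the RemoteProcess entity and its properties are hypothetical, chosen to echo the monitoring example above):

    // Sketch: the Model stays a plain data class; the ViewModel wraps it and adds the
    // INotifyPropertyChanged plumbing that WPF binding needs. Names are hypothetical.
    using System.ComponentModel;

    public class RemoteProcess          // Model: no UI concerns, no PropertyChanged noise
    {
        public string Name { get; set; }
        public int CpuUsage { get; set; }
    }

    public class RemoteProcessViewModel : INotifyPropertyChanged
    {
        private readonly RemoteProcess _model;

        public RemoteProcessViewModel(RemoteProcess model) { _model = model; }

        public string Name => _model.Name;

        public int CpuUsage
        {
            get { return _model.CpuUsage; }
            set
            {
                _model.CpuUsage = value;
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(CpuUsage)));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }

The Model stays a plain class the rest of the system can use, while the binding plumbing lives only in the wrapper.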
I am confused. Please guide me, anyone.
Is it mandatory to use an ORM tool (EF or Linq2SQL) when building an application with the MVVM pattern?
Right now my application returns a DataSet using straight queries like "select * from table".
Can I go from a DataSet/DataTable to a List and then to an ObservableCollection, or do we need EF or L2S?
I am confused about how to kick-start MVVM.
There's no reason you can't build your own Model layer, if that's what you want to do. The nice thing about modern design patterns is that they are generally agnostic toward what you use to fill each part.
I would build specific, separated classes for all your data access code, to keep that first M separate.
An overarching principle of patterns like MVVM and MVC is to separate your various concerns. This helps in many ways - including, specifically, supporting your ability to use your own data access (the Model) while still following the general pattern.
Ideally, you would write your code such that if you decided to move to Entity Framework in the future, you could do so without much change in the rest of the code. Better still, without any change in the rest of the code.
You can implement your data access using the Repository pattern, with your own classes executing your hand-written SQL and producing classes that your View and ViewModel can deal with. With the Repository being the main place the rest of your code interacts with, if you switch to EF or anything else in the future, you know you won't have to change any of your View or ViewModel code.
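A minimal sketch of what such a hand-rolled repository might look like, assuming SQL Server via ADO.NET; the Customer class, table, and column names are invented for illustration:

    // Hedged sketch of a hand-rolled repository: plain SQL in, plain POCOs out.
    // The connection string, table, and class names are assumptions for illustration.
    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class Customer               // simple POCO the ViewModel can bind to
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerRepository
    {
        private readonly string _connectionString;

        public CustomerRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public List<Customer> GetAll()
        {
            var result = new List<Customer>();
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand("select Id, Name from Customers", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        result.Add(new Customer
                        {
                            Id = reader.GetInt32(0),
                            Name = reader.GetString(1)
                        });
                    }
                }
            }
            return result;
        }
    }

A ViewModel can copy the returned List into an ObservableCollection for binding; if the project later moves to EF, only the repository internals need to change.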
So I am currently working on a UI written in WPF. One thing I really like about WPF is the way it leads you to write more decoupled, isolated UI components. One pain point for me in WPF is that it leads you to write more decoupled, isolated UI components that sometimes need to communicate with one another :). This is probably due to my relative lack of UI experience, especially in WPF (I'm not a novice, but most of my work is far more low level than UI design).
Anyway, here is the situation:
At any one time, the central area of the UI displays one of three views implemented as UserControls, let's call them Views A, B, and C.
The user will be switching between these views at various times, and there is more than one way to switch views (this works well for the customer, causes some pain in code design currently).
Right now each view switching mechanism does its own thing to transition to another view. A certain singleton class takes care of storing data and communicating between the views. I don't like this, it's messy, error prone, and the singleton class knows way too much about the details of the UI. I want to eliminate it as much as is possible.
I ran into a bug today that had to do with the timing of switching between views. To make it simple, one view needs to perform some cleanup when it is unloaded, but that cleanup erases some data that is needed by another view. If the cleanup runs after the other view is loaded, problems ensue. See what I mean? Messy.
I am trying to take a step back and imagine a different way to get these views loaded with the data they need to do their job. Some of you more experienced UI / WPF people out there must have come across a similar issue. I have a couple of ideas, but I am hoping someone will present a cleaner approach to me here. I don't like depending upon order of operations (at a high level) for my code to work properly. Thanks in advance for any help you may be able to offer.
I would recommend some kind of parent ViewModel that handles the CurrentView. I wrote an example here a while back if you're interested.
Basically, the parent ViewModel will have a List<ViewModelBase> AvailablePages, a ViewModelBase CurrentPage, and an ICommand ChangePageCommand.
How you choose to display these is up to you. My preferred method is a ContentControl with its Content bound to CurrentPage, and DataTemplates to determine which View should be displayed based on the ViewModel stored in CurrentPage.
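A rough sketch of that parent ViewModel, under the assumption of a hand-rolled ViewModelBase and ICommand helper (both defined inline here so the sample stands alone; in a real project these usually come from your MVVM library):

    // Sketch of a parent ViewModel that owns the available pages and the current page.
    // The answer mentions List<ViewModelBase>; an ObservableCollection is used here so the
    // View is also notified if pages are added later.
    using System;
    using System.Collections.ObjectModel;
    using System.ComponentModel;
    using System.Windows.Input;

    public abstract class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        protected void OnPropertyChanged(string name) =>
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }

    public class MainViewModel : ViewModelBase
    {
        private ViewModelBase _currentPage;

        public ObservableCollection<ViewModelBase> AvailablePages { get; } =
            new ObservableCollection<ViewModelBase>();

        public ViewModelBase CurrentPage
        {
            get { return _currentPage; }
            set { _currentPage = value; OnPropertyChanged(nameof(CurrentPage)); }
        }

        // Any UI gesture (menu, button, shortcut) can invoke this with the target page.
        public ICommand ChangePageCommand { get; }

        public MainViewModel()
        {
            ChangePageCommand = new RelayCommand(p => CurrentPage = (ViewModelBase)p);
        }
    }

    // Minimal ICommand helper so the sketch compiles on its own.
    public class RelayCommand : ICommand
    {
        private readonly Action<object> _execute;
        public RelayCommand(Action<object> execute) { _execute = execute; }
        public bool CanExecute(object parameter) => true;
        public void Execute(object parameter) => _execute(parameter);
        public event EventHandler CanExecuteChanged { add { } remove { } }
    }

In XAML, a ContentControl bound to CurrentPage, together with one DataTemplate per page ViewModel type, then decides which View is rendered.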
Rachel's post sums up my basic approach to this quite well. However, I would like to add a few things based on your comments that you may want to consider here.
Note that this is all assuming a ViewModel-first approach, as mentioned in comments.
The user will be switching between these views at various times, and there is more than one way to switch views (this works well for the customer, causes some pain in code design currently).
This shouldn't cause pain in the design. The key here is to have a single, consistent way to request a "current ViewModel" change, and the View will follow suit automatically. The actual mechanism used in the View can be anything - changing the VM should be consistent.
Done correctly, there should be little pain in the design, and a lot of flexibility in terms of how the View actually operates.
Right now each view switching mechanism does its own thing to transition to another view. A certain singleton class takes care of storing data and communicating between the views. I don't like this, it's messy, error prone, and the singleton class knows way too much about the details of the UI. I want to eliminate it as much as is possible.
This is where a coordinating ViewModel can really ease things. It does not require a singleton, as it effectively "owns" the individual ViewModels of the views. One fairly simple option is to implement an interface on the child ViewModels that includes an event - a child VM can raise the event (which I would name based on what the intent is, not on the "view change"). The coordinating VM subscribes to each child VM and, based on the event, changes its "CurrentItem" property (the active VM) to the content appropriate to the request. No UI details at all are required.
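One hedged way to picture that event-based hand-off (the interface, event, and ViewModel names below are all invented):

    // Sketch: child ViewModels raise an intent-named event; the coordinating ViewModel
    // listens and swaps its CurrentItem. All names are hypothetical.
    using System;

    public interface IRequestsCheckout            // intent-based name, not "view change"
    {
        event EventHandler CheckoutRequested;
    }

    public class CartViewModel : IRequestsCheckout
    {
        public event EventHandler CheckoutRequested;

        public void OnCheckoutClicked() =>
            CheckoutRequested?.Invoke(this, EventArgs.Empty);
    }

    public class ShellViewModel
    {
        private readonly CartViewModel _cart = new CartViewModel();
        private readonly object _checkoutViewModel = new object(); // placeholder for a real VM

        public object CurrentItem { get; private set; }

        public ShellViewModel()
        {
            // The coordinator owns the children and reacts to their intent, not to UI events.
            _cart.CheckoutRequested += (s, e) => CurrentItem = _checkoutViewModel;
            CurrentItem = _cart;
        }
    }

In a real implementation CurrentItem would raise PropertyChanged so the bound ContentControl/ContentPresenter swaps the View.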
I ran into a bug today that had to do with the timing of switching between views. To make it simple, one view needs to perform some cleanup when it is unloaded, but that cleanup erases some data that is needed by another view. If the cleanup runs after the other view is loaded, problems ensue. See what I mean? Messy.
This is screaming out for refactoring. A ViewModel should never clean up data it doesn't own. If this is occurring, it means a VM is cleaning up data that really should be managed separately. Again, a coordinating VM could be one way to handle this, though it's hard to be specific without more information.
I don't like depending upon order of operations (at a high level) for my code to work properly.
This is the right way to think here. There should be no dependencies on order within your code if it can be avoided, as it will make life much simpler over time.
I am trying to take a step back and imagine a different way to get these views loaded with the data they need to do their job.
The approach Rachel and I are espousing here is effectively the same approach I used in my series on MVVM to implement the master-detail View. The nice thing here is that the "detail" portion of the View does not always have to be the same type of ViewModel or View - if you use a ContentPresenter bound to a property that's just an Object (or an interface that the VMs share), you can easily switch out the Views with completely different Views merely by changing the property value at runtime.
My suggestion for this is to have one main view model that coordinates everything (not static or a singleton) and then use sub view models to transfer data around. This keeps the decoupling you are looking for, provides testability, and allows you to control when the data for each object is changed.
I had a client ask for advice on building a simple WPF LOB application the other day. They basically want a CRUD application for a database, with the main purpose being a WPF training project.
I also showed them Linq To SQL and they were impressed.
I then explained that it's probably not a good idea to use the L2S entities directly from their BLL or UI code. They should consider something like the Repository pattern instead.
At which point I could already feel the over-engineering alarm bells going off in their heads (and also in mine to some extent). Do they really need all that complexity for a simple CRUD application? (OK, it's effectively functioning as a WPF training project for them, but let's pretend it turns into a "real" app.)
Do you ever think it's acceptable to use L2S entities all through your application?
How difficult is it (from experience) to refactor to use another persistence framework later?
The way I see it, if the UI layer uses the L2S entities as simple POCOs (without touching any L2S-specific methods), then it should be really easy to refactor later on if need be.
They do need a way to centralize the L2S queries, so some way to manage that is needed even if they do use L2S entities directly. So in a way we are already pushing toward some aspects of a DAL/DAO/Repository.
The main problem I can see with a Repository is the pain of mapping between L2S entities and some domain model. And is it really worth it? You get quite a lot "for free" with L2S entities, which I think would be hard to keep using once you map to another model.
Thoughts?
The only major drawback to using L2S entities all the way through is that your UI needs to know about and bind to the concrete entities. That means your UI knows your data layer. Not usually a good idea. You really want a layered approach for anything that has the potential to be serious.
That said, it's perfectly possible to use the LINQ-to-SQL entities themselves in a layered architecture without knowledge of the data layer: extract interfaces for the entities and bind to them instead.
Keep in mind that all L2S entities are partial classes. Create interfaces that reflect your entities (Refactor => Extract Interface is your friend here) and create partial class definitions of your entities that implement the interfaces. Put the interfaces themselves (and only the interfaces) in a separate DLL that your UI and Business Layer reference. Have your Business Layer and Data Layer accept and emit these interfaces rather than the concrete versions. You can even have the interfaces include INotifyPropertyChanging and INotifyPropertyChanged, since the L2S entities themselves implement those interfaces. And it all works peachy.
Then, when/if you need a different persistence framework, you have no pain at all in the BL or UI, only in the data layer -- which is where you want it.
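A hedged sketch of what that looks like in code; Customer/ICustomer are illustrative names, and the "generated half" below is a simplified stand-in for what the L2S designer file would actually contain:

    // In the interfaces-only assembly that the UI and Business Layer reference:
    using System.ComponentModel;

    public interface ICustomer : INotifyPropertyChanging, INotifyPropertyChanged
    {
        int Id { get; set; }
        string Name { get; set; }
    }

    // In the data layer: the hand-written half of the partial class just declares the interface;
    // the designer-generated half already provides the members that satisfy it.
    public partial class Customer : ICustomer
    {
    }

    // Simplified stand-in for the designer-generated half (normally in the .designer.cs file):
    public partial class Customer : INotifyPropertyChanging, INotifyPropertyChanged
    {
        private int _id;
        private string _name;

        public event PropertyChangingEventHandler PropertyChanging;
        public event PropertyChangedEventHandler PropertyChanged;

        public int Id
        {
            get { return _id; }
            set { OnChanging(nameof(Id)); _id = value; OnChanged(nameof(Id)); }
        }

        public string Name
        {
            get { return _name; }
            set { OnChanging(nameof(Name)); _name = value; OnChanged(nameof(Name)); }
        }

        private void OnChanging(string name) =>
            PropertyChanging?.Invoke(this, new PropertyChangingEventArgs(name));

        private void OnChanged(string name) =>
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }

    // The Business Layer then works only against the interface:
    public interface ICustomerRepository
    {
        ICustomer GetById(int id);
        void Save(ICustomer customer);
    }

If a different persistence framework comes along later, you regenerate or rewrite the entities behind the same interfaces and the BL and UI never notice.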
Repositories are not a big deal. See here for a pretty good treatment of their use in ASP.NET MVC:
http://nerddinnerbook.s3.amazonaws.com/Part3.htm
Basically, what we did for one project was have a business layer that did all of the "LINQing" to L2S objects... in essence centralizing all querying in one layer via "Manager Objects" (I guess these are somewhat akin to repositories).
We did not use DTOs to map the L2S entities, as we didn't feel it was worth the effort in this project. Part of our logic was that as more and more ORMs support IQueryable and syntax similar to L2S (e.g. Entity Framework), it probably wouldn't be THAT bad to switch to a different ORM, so it's not that bad of a sin, IMO.
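For what it's worth, a minimal sketch of such a "Manager Object" might look like the following; the Customer type is a stand-in for a designer-generated L2S entity, and the manager is fed an IQueryable so the sample stands alone:

    using System.Collections.Generic;
    using System.Linq;

    // Stand-in for the designer-generated L2S entity (the real one comes from the .dbml).
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool IsActive { get; set; }
    }

    // "Manager Object": the only place that composes LINQ queries over the entities.
    public class CustomerManager
    {
        private readonly IQueryable<Customer> _customers;

        // In the real project this would come from a DataContext's Customers table.
        public CustomerManager(IQueryable<Customer> customers) { _customers = customers; }

        public List<Customer> GetActiveCustomers()
        {
            return _customers
                .Where(c => c.IsActive)
                .OrderBy(c => c.Name)
                .ToList();
        }
    }

Because the manager only composes LINQ over an IQueryable, pointing it at a different ORM's query source later would mostly be a change inside this one layer, which is the switching argument made above.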
Do you ever think it's acceptable to use L2S entities all through your application?
That depends. If you are building a short-lived product, you can get things wired up quickly with L2S. But if you are building a long-lived product that you might have to maintain for a long time, then you had better think about a proper persistence layer.
How difficult is it (from experience) to refactor to use another persistence framework later?
If you use L2S in all your layers, then you can pretty much forget about refactoring them to use another persistence framework later. That is exactly the advantage of using a framework like NHibernate or Entity Framework in your persistence layer: though it requires a bit of extra effort upfront, it will be easy to maintain in the long run.
It sounds like you should be considering the ViewModel pattern for WPF:
http://msdn.microsoft.com/en-us/magazine/dd419663.aspx