Right now I have controllers/actions that do standard retrieval of model/associated model data. My actions currently just pass the variables to the views, which pick and choose which values to display to the user via HTML.
I want to extend and reuse these actions for the case where a mobile device makes a call to grab the JSON version of the data. I am using Router::parseExtensions('json') and that all works fine.
My main question is how to handle data size. Right now even a User model has many, many associated models and recursive relationships. At the moment I am not using Containable to cut out the unnecessary data before I pass it to the view, because the view will take the elements it wants and it won't affect the HTML size.
But for my JSON views, I just format the whole thing and return it, which makes it extremely large. My current thought is that I just need to special-case the JSON requests to use Containable, but I was hoping there was a more elegant solution? Or is this the cakey way to do it?
Thanks!
Actually, using Containable and fine-tuning your query is a very elegant solution. Even if your view does not use the actual data, you put unnecessary load on your database by adding data/joins you don't need.
Try to limit your queries and relations by using Containable and by fine-tuning the relationships in your models and paginator calls.
It is also recommended that you move most of your find calls to the model layer. They will be reusable, easier to test and overall more Cake-ish.
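For example, here is a rough sketch of what that could look like in CakePHP 2.x, assuming the RequestHandler component is enabled; the findForApi() method and the Profile association are made up for illustration:

// app/Model/User.php
class User extends AppModel {
    public $actsAs = array('Containable');

    // Reusable, model-layer finder: pull only what the JSON clients need.
    public function findForApi($id) {
        return $this->find('first', array(
            'conditions' => array('User.id' => $id),
            'contain'    => array('Profile'),   // limit the associated data
        ));
    }
}

// app/Controller/UsersController.php
class UsersController extends AppController {
    public $components = array('RequestHandler');

    public function view($id) {
        if ($this->RequestHandler->prefers('json')) {
            // Lean query for mobile/JSON clients
            $user = $this->User->findForApi($id);
        } else {
            // The HTML view can keep the richer (or default) query
            $user = $this->User->findById($id);
        }
        $this->set(compact('user'));
        $this->set('_serialize', array('user')); // used by JsonView
    }
}

The HTML action keeps whatever query it already uses, while the JSON branch only ever loads what the mobile client needs.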
New to Backbone/Marionette, but I believe that I understand how to use Backbone when dealing with CRUD/REST; however, consider something like results from a search query. How should one model this? Of course the results likely relate to some model(s), but they are not meant to be tied to said model(s).
Part of me thinks that I should use a collection of models that don't actually sync with a data store through the server, but instead just exist as a means of modeling a search result object.
Another solution could be to have a collection with no models and just override parse.
I assume that the former is preferred, but again I have no experience with the framework. If there's an alternative/better solution than those listed above, please advise.
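For concreteness, I imagine the first option looking something like the sketch below; the endpoint and the response shape are just guesses:

// A plain model that only holds search-hit attributes; it is never saved
// individually, so it has no url/urlRoot of its own.
var SearchResult = Backbone.Model.extend({});

var SearchResults = Backbone.Collection.extend({
    model: SearchResult,
    url: '/api/search',              // assumed endpoint

    // Map the server's envelope to an array of attribute hashes.
    parse: function (response) {
        return response.results;     // assumed response shape
    },

    // Convenience method: run a search by refetching with query params.
    search: function (query) {
        return this.fetch({ data: { q: query }, reset: true });
    }
});

// Usage:
var results = new SearchResults();
results.search('backbone').done(function () {
    console.log(results.length + ' hits');
});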
I prefer having one object that is responsible for both making the request and parsing the response. It parses the response into the appropriate models and nothing more. If some of those parsed models are needed somewhere in your page, something keeps a reference to this wrapper object and pulls the models it requires from the response via the wrapper's methods.
Another option is to use Radio (https://github.com/marionettejs/backbone.radio) with this wrapper: you won't have to keep a reference to the wrapper object in different places, but can instead ask for the data via Radio.
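A rough sketch of that wrapper idea with Backbone.Radio; the channel name, endpoint and response shape are placeholders:

// Wrapper object that owns both the request and the response parsing.
var SearchService = {
    channel: Backbone.Radio.channel('search'),   // assumed channel name

    initialize: function () {
        // Other components ask for the parsed models over the channel
        // instead of holding a reference to this wrapper.
        this.channel.reply('results', this.getResults, this);
    },

    run: function (query) {
        var self = this;
        return Backbone.ajax({ url: '/api/search', data: { q: query } })
            .then(function (response) {
                // Parse the response into models and nothing more.
                self.results = new Backbone.Collection(response.results);
                return self.results;
            });
    },

    getResults: function () {
        return this.results;
    }
};

SearchService.initialize();
// Elsewhere: Backbone.Radio.channel('search').request('results');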
With Backbone, I do a somewhat expensive calculation for each Model in my Collection and there can be a lot of Models. I'm thinking I'd like to store the result in each Model with set(), but I don't want to save it to the server. Is this generally a bad idea?
If that's not good, is the better practice to keep it in an array variable or a Model (a calculations results model separate from the cached server data model)?
Why do I think this might be a good idea?
I wouldn't ever have to give thought to the array variable's scope/context.
No looking up the array contents once I have the relevant Model.
Data is more encapsulated.
Why do I think this might be a bad idea?
Mixes cached server data with calculated local data.
Probably have to write sync code so that save() only saves the attributes the server should get.
Thanks!
EDIT
Found someone exploring a similar issue, with good discussion: Custom Model Property in Template.
This seems to have a pretty thorough answer that I am exploring: Backbone Computed Properties.
One solution might be to override the toJSON function of your Model.
This function is called by the save function to get the attributes to be sent back to the server.
Looking at the docs, the toJSON function basically says you could use it for your specific purpose:
Return a copy of the model's attributes for JSON stringification. This can be used for
persistence, serialization, or for augmentation before being sent to the server.
I would personally not consider it bad practice, but it all depends on the amount of data and on the calculations themselves, so it comes down to your specific use case.
Also, you could choose not to store the calculated object in your model.attributes but somewhere else on your model instance. That way it would be hidden from the model attributes you synchronize back and forth with your server.
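A quick sketch of both ideas; the attribute names (expensiveResult, a, b) are invented for illustration:

var Item = Backbone.Model.extend({
    // Option 1: keep the computed value in attributes for convenience,
    // but strip it before anything is sent to the server.
    toJSON: function (options) {
        var attrs = _.clone(this.attributes);
        delete attrs.expensiveResult;      // computed locally, never persisted
        return attrs;
    },

    // Option 2: keep the result off the attributes hash entirely,
    // so save()/sync never see it in the first place.
    computeExpensiveResult: function () {
        // stand-in for the real calculation
        this._expensiveResult = this.get('a') * this.get('b');
        return this._expensiveResult;
    }
});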
Wijgrid will not work with Breeze and Angular. To save me repeating myself, please have a look at this post:
http://wijmo.com/topic/wijgrid-will-not-play-with-angular-and-breeze-together/
Your model, like most models, has circular references. We are not wijgrid experts (at all!) but we suspect that the wijgrid component - at least as you have it configured - is unprepared for objects with circular references.
Objects with circular references are not bad. In fact, they are normal and expected in entity models. A customer has orders and each order has a property that returns its parent customer. If you don't account for those cycles, a component that naively walks or serializes the object graph will run into trouble.
We suggest you consult the vendor on this topic.
If you are using the grid read-only, you can write a simple function to copy just the data you want to display from the queried entities into special purpose "ItemViewModel" objects; then load those into the grid. It's even easier to query with a Breeze projection because that returns raw data, not entities; just remember that Breeze isn't caching those non-entity results either.
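For instance, a read-only grid could be fed either from copies of already-queried entities or from a projection query. In the sketch below, manager is assumed to be your existing Breeze EntityManager, the Customer type and its properties are illustrative, and bindGrid stands in for whatever loads wijgrid:

// Option 1: copy just the display fields from already-cached entities.
function toItemViewModel(customer) {
    return {
        name: customer.getProperty('name'),
        city: customer.getProperty('city')
        // a plain object: no circular references for the grid to trip over
    };
}
var gridData = manager.getEntities('Customer').map(toItemViewModel);

// Option 2: a Breeze projection returns raw objects, not entities,
// so there is nothing circular to begin with (and nothing is cached).
var query = breeze.EntityQuery.from('Customers').select('name, city');
manager.executeQuery(query).then(function (data) {
    bindGrid(data.results);
});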
The situation is more complicated if you want to edit the grid.
I want to reiterate that this is not a Breeze problem. It is a problem with the grid control.
I'm working on a personal project using WPF with Entity Framework and Self-Tracking Entities. I have a WCF web service which exposes some methods for the CRUD operations. Today I decided to do some tests and see what actually travels over this service, and even though I expected something like this, I got really disappointed. The problem is that for a simple update (or delete) operation on just one object, let's say a Category, I send to the server the whole object graph, including all of its parent categories, their items, child categories and their items, etc. In my case it was a 170 KB XML payload on a really small database (2 main categories, about 20 categories in total and about 60 items). I can't imagine what will happen with a really big database.
I tried to google for some articles concerning traffic optimization with STE, but with no success, so I decided to ask here if somebody has done something similar, knows some good practices, etc.
One of the possible ways I came up with is to get the data I need per object type with more service calls:
return context.Categories.ToList(); // only the categories
...
return context.Items.ToList(); // only the items
Instead of:
return context.Categories.Include("Items").ToList();
This way the categories and the items will be separated, and when making changes or deleting objects, less data will be sent over the wire.
Has any of you faced a similar problem, and if so, how did you solve it?
We've encountered similar challenges. First of all, as you already mentioned, keep the entities as small as possible (as dictated by the desired client functionality). Second, when sending entities back over the wire to be persisted, strip all navigation properties (nested objects) that haven't changed. This sounds very simple but is not at all trivial. What we do is recursively dig into the entities present in the trackable collections of, say, the "topmost" entity (and their trackable collections, and theirs, and so on) and remove them when their change-tracking state is "Unchanged". But be careful with this, because in some cases you still need those entities, for example when they have been removed from or added to the trackable collections of their parent entity (in which case you shouldn't remove them).
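A very rough sketch of that pruning idea, assuming the standard STE template (ChangeTracker.State, ObjectState, StartTracking/StopTracking) and the Category/Items/ChildCategories shape from the question; treat it as a starting point, not working code:

// Requires System.Linq for ToList().
static void StripUnchanged(Category category)
{
    // Pause tracking so the pruning itself is not recorded as deletions.
    category.StopTracking();

    foreach (var item in category.Items.ToList())
    {
        // Only prune untouched children; added/modified/deleted ones
        // still need to reach the server.
        if (item.ChangeTracker.State == ObjectState.Unchanged)
        {
            // Depending on your generated collections, relationship fixup
            // may also record changes on the child, so you may need to
            // pause tracking on the item as well.
            item.StopTracking();
            category.Items.Remove(item);
        }
    }

    foreach (var child in category.ChildCategories.ToList())
    {
        StripUnchanged(child);
        if (child.ChangeTracker.State == ObjectState.Unchanged
            && child.Items.Count == 0
            && child.ChildCategories.Count == 0)
        {
            category.ChildCategories.Remove(child);
        }
    }

    category.StartTracking();
}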
This, which we call "StripEntity", is also mentioned (though without any code sample) in Julie Lerman's Programming Entity Framework.
And although it might not be as efficient as a more purist approach, using STEs saves a lot of code for querying the database. We don't need optimal performance in a high-traffic situation, so STEs suit our needs and take away a lot of the code needed to communicate with the database. You have to decide for your situation what the "best" solution is. Good luck!
You can find an Entity Framework project item at http://selftrackingentity.codeplex.com/. With version 0.9.8, I added a method called GetObjectGraphChanges() that returns an optimized entity object graph with only objects that have changes.
Also, there are two helper methods: EstimateObjectGraphSize() and EstimateObjectGraphChangeSize(). The first returns the estimated size of the whole entity object along with its object graph; the latter returns the estimated size of the optimized entity object graph with only the objects that have changes. With these two helper methods, you can decide whether it makes sense to call GetObjectGraphChanges() or not.
I've run into a recurring problem for which I haven't found any good examples or patterns.
I have one core service that performs all heavy database operations and sends results to different front ends (HTML, Silverlight/Flash, web services, etc.).
One of the service operations is "GetDocuments", which provides a list of documents based on different filter criteria. If I only had one front end, I would package the result in a list of Document DTOs (data transfer objects) that just contain the data. However, different front ends need different amounts of "metadata". The simplest client just needs the document headline and a link reference. Other clients want a short text snippet of the document, another one also wants a thumbnail, and a third wants the name of the author. It's basically all up to the implementation of the GUI what needs to be displayed.
What's the best way to model this:
As a lot of different DTOs (Document, DocumentWithThumbnail, DocumentWithTextSnippet): tends to become a lot of classes.
As one DTO containing all the data, where the client chooses what to display: lots of unnecessary data sent.
As one DTO where certain fields are populated based on what the client requested: tends to become a very large class that needs to be extended over time.
As one DTO with some kind of generic "Metadata" field containing the requested metadata.
Or are there other options?
Since I want a high performance service, I need to think about both network load and caching strategies.
Does anyone have any good patterns or practices that might help me?
What I would do is give the front end the ability to request exactly the metadata it wants (say getDocument( WITH_THUMBNAILS | WITH_TEXT_SNIPPET )).
The DTO is then built with only the requested information.
Adding all the possible metadata every time is, as you said, unacceptable.
I would stay with one class defining all the possible methods (getTitle(), getThumbnail()) and, if possible, have it return a placeholder such as "Image not available" when the thumbnail was not requested.
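A bare-bones illustration of that flag approach; every name below (DocumentParts, DocumentDto, the Load* helpers) is made up for the example:

using System;

[Flags]
public enum DocumentParts
{
    None        = 0,
    TextSnippet = 1,
    Thumbnail   = 2,
    Author      = 4
}

public class DocumentDto
{
    public string Headline { get; set; }
    public string Link { get; set; }
    public string TextSnippet { get; set; }   // only filled when requested
    public byte[] Thumbnail { get; set; }     // only filled when requested
    public string Author { get; set; }        // only filled when requested
}

public class DocumentService
{
    // Callers ask only for what they need, e.g.
    //   GetDocument(id, DocumentParts.Thumbnail | DocumentParts.TextSnippet)
    public DocumentDto GetDocument(int id, DocumentParts parts)
    {
        var dto = LoadCore(id);   // headline + link are always loaded
        if (parts.HasFlag(DocumentParts.TextSnippet)) dto.TextSnippet = LoadSnippet(id);
        if (parts.HasFlag(DocumentParts.Thumbnail))   dto.Thumbnail   = LoadThumbnail(id);
        if (parts.HasFlag(DocumentParts.Author))      dto.Author      = LoadAuthor(id);
        return dto;
    }

    // Placeholders for whatever data access you already have.
    private DocumentDto LoadCore(int id)   { return new DocumentDto(); }
    private string LoadSnippet(int id)     { return null; }
    private byte[] LoadThumbnail(int id)   { return null; }
    private string LoadAuthor(int id)      { return null; }
}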
If you want to model this like a pattern, take a look at the factory patterns.
Hope this helps you.
Is there any noticeable cost to creating a DTO that has all the data any of your views could need and using it everywhere? I would do that, especially since it insulates you from a requirement change down the line where one of the views needs to incorporate data one of the other views already uses.
For example, maybe your Silverlight/Flash view doesn't show the title itself because it's in the thumbnail now, but later they decide they want to sort by it.
To clarify, I don't necessarily think you need to pass down all of the data every time, but I think your DTO class should define all of it. Just don't fall into the pits of premature optimization or analysis paralysis. Do the simplest thing first, then justify added complexity. Throw it all in and profile it. If the performance is unacceptable, optimize and try again.