I have about 100 entities, and each entity has a custom byteArray property holding the raw bytes of an image. My question is: should I fetch the byte arrays along with the entities, or should I make 100 requests (one per entity) so that the user can at least see something while the remaining byte arrays are being downloaded?
Thanks in advance :)
Think through the application requirements and make a decision based on that. How big are the byte[]s? What will the receiving app actually do with these? Display them all at once or on-demand? Will they be displayed initially just as thumbnails?
Your decision isn't just whether or not to include the byte[]... depending on the actual requirements, you could put the byte[]s in separate entities which can be retrieved by making paged requests to that collection as the user scrolls through some list. Or you could decide to create separate entities to represent thumbnails of the images. Lots of options...
I wouldn't recommend loading the images along with the data. You should break it apart into multiple loads. Display the information first and then have the images load asynchronously as they are needed.
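For example, a minimal browser-side sketch of that split (the /api/entities endpoints, renderList(), and the img-<id> elements are all assumptions):

    // 1. One light request for the entity data, without the image bytes.
    async function loadEntities() {
      const entities = await fetch('/api/entities').then(r => r.json());
      renderList(entities); // the user sees the list immediately

      // 2. Fetch each image asynchronously and fill it in as it arrives.
      for (const e of entities) {
        fetch('/api/entities/' + + '/image')
          .then(r => r.blob())
          .then(blob => {
            document.getElementById('img-' + = URL.createObjectURL(blob);
          });
      }
    }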
Requesting the page (over HTTP, in a web page) is very slow, or even crashes, unless I load my JSON with less data. I really need to solve this, since sooner or later I will be using large amounts of data frequently. Here is my JSON data. --->>>
Notes:
1. The JSON contains only strings and integers.
2. I view my JSON as a tree using the JSONView plugin for Google Chrome.
I am using Angular and Node.js. Thanks.
A quick summary of everything that comes to mind:
I had a similar issue once. My solutions may require UI changes.
Pagination
I doubt you can display that much data at one time, so the strategy should be dividing your data into small amounts and then only loading more when the client asks for it.
This way, the whole data set is no longer stored in RAM as it is currently. This is how forums work (only 20 topics at a time).
Just imagine if StackOverflow made you load the whole history of questions on the main page; how many GB would your browser need just for that?
You can use pagination in a classic way (a button with page numbers, like Google) or with infinite scroll, as you prefer.
For that you need to adapt your API and keep track, on the front end, of which pages you have already loaded at any moment. There are plenty of examples in AngularJS.
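A minimal AngularJS sketch of the idea (the /api/items endpoint, its parameter names, and the page size are assumptions):

    var app = angular.module('app', []);

    app.controller('ListCtrl', function ($scope, $http) {
      $scope.items = [];
      var page = 0, pageSize = 20, loading = false;

      // Called by a "load more" button or an infinite-scroll directive.
      $scope.loadMore = function () {
        if (loading) return; // avoid duplicate requests
        loading = true;
        $http.get('/api/items', { params: { page: page, size: pageSize } })
          .then(function (res) {
            $scope.items = $scope.items.concat(;
            page++;
            loading = false;
          });
      };

      $scope.loadMore(); // load the first page
    });

The front end only ever holds the pages the user has actually asked for, instead of the whole dataset.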
Only show the beginning of the data
When you look at Facebook comments, you may see a "show more" button. In their case, maybe it's there to not break the UI, but it can also be used to avoid loading too much data.
You can display only the main lines of your data (titles or similar) and add a button so the user can load more details if they want.
In your data model, the cost seems to be on the second level of "C". Just load data until this second level, and download the remaining part (for a given object) only if the user asks for it.
Once again, no need to overload: your client's RAM will be thankful, and so will your client's mobile 3G.
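A sketch of that idea, assuming a hypothetical /api/items/:id/details endpoint that serves the heavy second level, running inside an Angular controller with $http injected:

    // Load only summaries up front; fetch the heavy part when the user asks.
    $scope.showDetails = function (item) {
      if (item.details) return; // already loaded once
      $http.get('/api/items/' + + '/details').then(function (res) {
        item.details =; // the template renders it once present
      });
    };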
Optimize your data structure
If this is still not enough:
As StefanArya said in a comment, remove the "I" attribute: it is redundant with the JSON key, which you can already read with Object.keys().
You also may not need that much precision on your floats.
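For instance, assuming your objects look roughly like the description (the exact shape is an assumption):

    // Before: every object repeats its own key in an "I" attribute.
    var data = { "abc": { "I": "abc", "value": 3.14159265358979 } };

    Object.keys(data).forEach(function (key) {
      var obj = data[key];
      delete obj.I; // redundant: the key is already available via Object.keys()
      obj.value = Math.round(obj.value * 100) / 100; // two decimals often suffice
    });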
If I think of any other ideas, I'll edit this post later.
Many examples on the net show you how to use ng-repeat with in-memory data, but in my case I have a long table with infinite scroll that gets data by sending requests to a REST API (scroll down, fetch some data; scroll down again, fetch some more data; and so on). It does work, but I'm wondering how I can integrate that with filters.
Right now I have to call a specific method of the API service that makes a request based on the text in the "search" input box, and then the controller updates $scope.data.
Is it possible to build a custom filter that would do that? Then my view would be utterly decoupled from the service, and I could declaratively tell it how to group, order, and filter data, regardless of whether it's in memory or comes from a remote server that can serve only a limited number of records at a time.
Later I'm going to need grouping and ordering as well. I'm tempted to download the entire dataset and lock the parts of the app responsible for grouping, searching, and ordering until all data is on the client, but:
a) that dataset is huge (hundreds of thousands of records)
b) nobody wants to deal with cache invalidation headaches
c) doing so feels so damn wrong; you don't really expect me to keep all that data in memory, right?
Can you guys point me to some open-source examples where I can steal some ideas from?
Basically I need to build a service and filters that let me work with my "pageable" data that comes from the API as if it were in-memory data.
Regardless of how you choose to solve it (there are many ways to do infinite scroll with Angular; here is one: http://binarymuse.github.io/ngInfiniteScroll/), as of its current beta version ng-repeat performs really badly with large amounts of data, and so do filters. The reason is obvious: pulling so much information for changes is a tough job. Moreover, ng-repeat by default will re-draw your complete list every time something changes.
There are many solutions you can explore in this area, here are the ones I found productive:
http://kamilkp.github.io/angular-vs-repeat/#?tab=8
http://www.williambrownstreet.net/blog/2013/07/angularjs-my-solution-to-the-ng-repeat-performance-problem/
https://github.com/allaud/quick-ng-repeat
You should also consider the following, which really helps with large amounts of data.
https://github.com/Pasvaz/bindonce
Updated
I guess you can't really control your server output, because filtering and ordering large amounts of data are better done on the server side.
I was pointing out the links above because even if you write your own filters and order-bys, which is quite simple to do (http://jsfiddle.net/gdefpfqL/ : filter by some company name and then click the "Add More" button to add more items), ordering is virtually impossible if you can't control the data coming from the server; the only option is getting it all, ordering it, and then lazy-loading it from the client's memory. So if each of your list items doesn't have many bindings by itself (as in the example I've added), that is, the list item is a fairly simple one (for instance, you simply present the result as plain text in <li>{{item.name}}</li>), then Angular's ng-repeat might work for you. In this case, filters will work as expected; say you filter by searched text:
<li ng-repeat="item in items | filter:searchedText"></li>
Even for new items added after the user has searched for a text, it will still work, thanks to the magic of binding.
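If the dataset is too big for that, the filter has to move to the server. A rough AngularJS sketch of that variant (endpoint and parameter names are assumptions):

    var app = angular.module('app', []);

    app.factory('ItemService', function ($http) {
      return {
        // Ask the server to filter, instead of filtering in memory.
        search: function (text, page) {
          return $http.get('/api/items', { params: { q: text, page: page } })
            .then(function (res) { return; });
        }
      };
    });

    app.controller('SearchCtrl', function ($scope, ItemService) {
      $scope.searchedText = '';
      $scope.$watch('searchedText', function (text) {
        ItemService.search(text, 0).then(function (items) {
          $scope.items = items; // replaces the client-side filter
        });
      });
    });

In practice you would also debounce the watch so you don't fire a request per keystroke.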
I'm planning to use Leaflet Draw as part of a special wiki with an embedded map. Users should be able to draw geo-objects that are related to one or more pages in the wiki. Like the wiki pages, the objects are saved in a database and can be modified by every user.
Problems:
How can I limit the number of editable objects to only one at a time?
How do I keep the database consistent if two users edit the same object at the same time?
How can I generate multi-objects/combine several objects (e.g. polygons) into a super-object (multi-polygon)?
Does anybody know of any approaches similar to my idea?
Thanks.
You will have a single FeatureGroup for Leaflet.draw's objects that can be edited. Simply figure out which objects will be editable and which won't, and add them to separate FeatureGroups.
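A minimal sketch (assuming an existing map object; only the layers in editableItems get edit handles):

    var editableItems = new L.FeatureGroup().addTo(map);
    var readOnlyItems = new L.FeatureGroup().addTo(map);

    var drawControl = new L.Control.Draw({
      edit: { featureGroup: editableItems } // Leaflet.draw only edits this group
    });
    map.addControl(drawControl);

    // When the user picks one object to edit, move whatever was editable
    // before back out, then move the chosen layer in: one editable at a time.
    function makeEditable(layer) {
      editableItems.eachLayer(function (l) { readOnlyItems.addLayer(l); });
      editableItems.clearLayers();
      readOnlyItems.removeLayer(layer);
      editableItems.addLayer(layer);
    }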
This can be handled in a few ways; have a look around at general database consistency techniques for this (optimistic locking with a version number is one common approach).
I'm not sure what you mean; maybe have a look at Well-Known Text (WKT), it might help you with storage here.
I have a WCF server and a Silverlight client. The client calls the server to get a list of items.
In some cases the item list is very big, and I want the ability to get the items in more than one call:
Call 1 => get items 1-100
Call 2 (if the user clicks the 'more' button) => get items 101-200
...
Call N => get items 100*(N-1)+1 to 100*N
How can I do it?
Is there some 'easy' pattern for it?
Thanks.
If you have a standard page size of 100, then have the client pass the page they want to the service. Or get the client to tell the service how big its pages are and which page it wants.
You could hold in memory on the service which page the client has and then have the client say "next", but holding in-memory state in the service on behalf of the client reduces scalability and increases fragility (if that state is lost, the client has to start paging again).
Making the client explicitly say what it wants is a more robust and scalable solution, and it has an easy LINQ implementation with Skip and Take.
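Since the exact WCF contract isn't shown, here is the Skip/Take idea sketched as a Node/Express handler instead (the endpoint, parameter names, and allItems are assumptions; in the C# service it would literally be items.Skip(page * size).Take(size)):

    var express = require('express');
    var app = express();
    var allItems = []; // filled from the database elsewhere

    app.get('/api/items', function (req, res) {
      var page = parseInt(req.query.page, 10) || 0;
      var size = parseInt(req.query.size, 10) || 100;
      // "Skip" page*size items, then "take" size items.
      res.json(allItems.slice(page * size, page * size + size));
    });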
As Richard mentions, paging is a common option. Also, returning the results as a stream (not a buffered byte[] array but an actual stream; WCF has some caveats around the use of streams) is generally going to be the most efficient. And as marc_s noted, Silverlight local storage isn't huge, so keep that handicap in mind.
The chance of a user 'consuming' more than 100 items in one go is very small, even if the items have very little detail. Maybe add navigation (categories etc.) as filters to the data, so the user only gets the 20 or so items they are actually interested in. Tree views can be quite handy for breaking lists up into smaller lists that are more relevant to users, but there are many ways of doing this...
I've run into a recurring problem for which I haven't found any good examples or patterns.
I have one core service that performs all heavy database operations and sends results to different front ends (HTML, Silverlight/Flash, web services, etc.).
One of the service operations is "GetDocuments", which provides a list of documents based on different filter criteria. If I only had one front end, I would package the result in a list of Document DTOs (data transfer objects) that just contain the data. However, different front ends need different amounts of "metadata". The simplest client just needs the document headline and a link reference. Other clients want a short text snippet of the document, another one also wants a thumbnail, and a third wants the name of the author. It's basically all up to the GUI implementation what needs to be displayed.
What's the best way to model this?
1. A lot of different DTOs (Document, DocumentWithThumbnail, DocumentWithTextSnippet): tends to become a lot of classes.
2. One DTO containing all the data, where the client chooses what to display: lots of unnecessary data sent.
3. One DTO where certain fields are populated based on what the client requested: tends to become a very large class that needs to be extended over time.
4. One DTO with some kind of generic "metadata" field containing the requested metadata.
Or are there other options?
Since I want a high-performance service, I need to think about both network load and caching strategies.
Does anyone have any good patterns or practices that might help me?
What I would do is give the front end the ability to request the presence of the wanted metadata (say, getDocument(WITH_THUMBNAILS | WITH_TEXT_SNIPPET)).
Then the DTO is built with only the requested information.
Adding all the possible metadata is, as you said, unacceptable.
I would stay with one class defining all the possible methods (getTitle(), getThumbnail()), and if possible have it return a placeholder when the thumbnail was not requested, something like "Image not available".
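A rough sketch of that flag approach, in JavaScript for brevity (the flag names and load* helpers are made up for illustration):

    var WITH_THUMBNAILS = 1, WITH_TEXT_SNIPPET = 2; // bit flags

    function getDocument(id, flags) {
      var doc = loadHeadlineAndLink(id); // always included
      doc.thumbnail = (flags & WITH_THUMBNAILS)
        ? loadThumbnail(id)
        : 'Image not available'; // placeholder when not requested
      if (flags & WITH_TEXT_SNIPPET) doc.snippet = loadTextSnippet(id);
      return doc;
    }

    // A client that wants both extras:
    var doc = getDocument(42, WITH_THUMBNAILS | WITH_TEXT_SNIPPET);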
If you want to model this like a pattern, take a look at the factory patterns.
Hope this helps you.
Is there any noticeable cost to creating a DTO that has all the data any of your views could need and using it everywhere? I would do that, especially since it insulates you from a requirement change down the line where one view needs to incorporate data another view already uses.
For example, maybe your Silverlight/Flash view doesn't show the title itself because it's in the thumbnail now, but they decide they want to sort by it later.
To clarify, I don't necessarily think you need to pass down all of the data every time, but I think your DTO class should define all of it. Just don't fall into the pits of premature optimization or analysis paralysis. Do the simplest thing first, then justify added complexity. Throw it all in and profile it. If the perf is unacceptable, optimize and try again.