Silverlight - Unable to load a lot of data

I have a Silverlight application that has a DataGrid and a DataPager. The data source for these controls comes from a database. I am accessing this database through RIA Services.
When I try to load all of the records, I receive an error that says:
"Load operation failed for query 'GetData'. The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error."
By gradually restricting the size of the result set on the server side, I have come to the conclusion that I am getting this error because my data set is too large. My question is, how do I elegantly load large data sets into a DataGrid? I am open to approaches outside of RIA Services.
Thank you!

First off, if you have the means and aren't required to write this code yourself, consider buying a UI component that solves your problem (or finding an open source solution). For these types of tasks, there's a good chance that someone else has put a lot of effort into solving problems like this one. For reference, there's a Telerik grid control for Silverlight with some demos.
If you can't buy a component, here are some approaches I've seen:
Set up a paging system where the data for the current page is loaded, and new data isn't loaded until the pages are switched. You could probably cache previous results to make this work more smoothly (a sketch of this approach follows after these suggestions).
Load data on demand: when the user scrolls down or sideways, load data only once cells are reached that haven't been populated yet.
One last idea that comes to mind is to gzip the data on the server before sending it. If your bottleneck is transmission time, compression may speed things up, depending on the type of data you're working with.
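For the paging approach, here is a minimal sketch using WCF RIA Services, assuming an Entity Framework model behind the domain service; the names (MyEntities, Record, RecordDomainService, RecordDomainContext, myDataGrid) are illustrative placeholders, not from the original question:

// Server side (requires System.Linq, System.ServiceModel.DomainServices.Hosting
// and System.ServiceModel.DomainServices.EntityFramework).
[EnableClientAccess]
public class RecordDomainService : LinqToEntitiesDomainService<MyEntities>
{
    // Returning IQueryable with a stable ordering lets a DomainDataSource /
    // DataPager (or a hand-rolled pager) compose Skip/Take into the query,
    // so only the current page is read from the database and sent down.
    public IQueryable<Record> GetRecords()
    {
        return this.ObjectContext.Records.OrderBy(r => r.Id);
    }
}

// Client side (e.g. in the page's code-behind): request one page at a time.
private void LoadPage(int pageIndex)
{
    const int pageSize = 50;
    var ctx = new RecordDomainContext();
    var query = ctx.GetRecordsQuery()
                   .OrderBy(r => r.Id)
                   .Skip(pageIndex * pageSize)
                   .Take(pageSize);

    ctx.Load(query, op =>
    {
        if (!op.HasError)
            myDataGrid.ItemsSource = op.Entities;   // bind just this page
    }, null);
}

If you bind a DomainDataSource to the DataPager instead, the same Skip/Take composition happens for you, as long as the query method returns IQueryable.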

You should take into consideration that you are possibly exceeding the command timeout on your data source. The default for LINQ to SQL, for example, is 30 seconds. If you want to increase this, one option is to go to the constructor and change it as follows:
public SomeDataClassesDataContext() :
    base(global::System.Configuration.ConfigurationManager.ConnectionStrings["SomeConnectionString"].ConnectionString, mappingSource)
{
    // 1200 seconds = 20 minutes (the default is 30 seconds)
    this.CommandTimeout = 1200;
    OnCreated();
}
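Note that the designer will overwrite that constructor when it regenerates the DataContext. If you would rather keep the change out of the generated file, the same setting can go in the OnCreated partial method in a partial class of your own (a sketch; SomeDataClassesDataContext is the name from the snippet above):

// In a separate file, so the change survives designer regeneration.
public partial class SomeDataClassesDataContext
{
    partial void OnCreated()
    {
        // 1200 seconds = 20 minutes
        this.CommandTimeout = 1200;
    }
}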

Related

App Engine backup never finishes, only clue is failure in map reduce worker_callback

Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time, we are attempting to back up to the Blobstore, and what occurs is that the process never finishes. We see the backup in our Pending Backups list but it never actually completes. We only have a total of 43 MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, it shows that we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and another is a call to /_ah/mapreduce/worker_callback
The worker_callback racks up its retry count, and the only error clue we have is on the Previous Run tab, which shows the last HTTP response code to be 500. There is no error message and nothing shows up in our error logs; it just keeps retrying over and over again.
We've been able to narrow the backup problems to a specific entity kind for a particular namespace, but we can't figure out why that entity kind is failing whereas the others are not. The major difference is that the entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it seems to have problems backing them up. The particular namespace that the error occurs in has the largest amount of data stored for that entity kind compared to the other namespaces we have set up.
We think that if we can see what error is occurring in the worker_callback, we may be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to set up or enable through settings or configuration files to give us more detailed information on the backup? Or is there some other avenue we should explore to figure out how to investigate and fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the data store. We are also backing up data to the Blobstore.
Thank you.
Well, with the App Engine team's help we figured out what the problem was and worked around the issue. I want to give the details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that, from their logs, they could see that the MapReduce job failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties that generated errors when MapReduce tried to write out a schema. They indicated that the solution on their end was to ignore entities like this in the backup so that the backup could complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap class member field. Since @Embedded breaks classes down into individual properties, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to make it use the new serialized property. This made backup/restore work again.
You can read more about the differences between embedded and serialized on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your application ID so we can further debug this specific scenario.
Thanks!

Show images from InputStream in xhtml page

I need to show some images that are queried from the database and placed into an InputStream. My framework is JSF, and I know that by using a servlet I can show them. But the problem is that there are many images on my page that are stored in the database; if I want to select each image from the database and show it in my xhtml page, a lot of queries are needed. In one managed bean, all of the images are placed into a List of InputStreams, and I want to show each element as an image on the page. In fact my requirement is to read each image from its InputStream and show it in the xhtml page. Can anybody guide me?
If you're using RichFaces, you can use <ui:repeat> to iterate over your list of images and use <a4j:mediaOutput> to show them in your xhtml; see this example, and also How to use a4j:mediaOutput correctly
and another example
if I want to select each image from the database and show it in my xhtml page, a lot of queries are needed
How exactly does that pose a problem? Have you measured the performance? Is the bottleneck really in "a lot of queries"? I really don't understand why that would be a bottleneck. It should be blazing fast with a properly designed data model; a self-respecting SQL database is designed for exactly this purpose.
Isn't your bottleneck actually the step of making a DB connection, and aren't you doing that on every single query because you aren't using a connection pool? If so, then yes, it would be understandable that it performs very slowly. Making a DB connection can take as long as 100~500 ms. That's exactly why connection pools were invented a long time ago. Connections are then only initialized and cleaned up during "idle time" and shared/reused in a threadsafe manner, so getting a connection from the pool should take no more than 10 ms or so.
If you fix your data layer to utilize a decent connection pool, then you can keep using your servlet, which is already the right tool for this particular job.

Silverlight 4.0 - Memory crash with 20,000 records to display

I have a RadiantQ Gantt control and 20,000 records received from a WCF service to display. With less data to display it works like a charm, but I need to display 20,000 records and all browsers crash/freeze. Is there any way to solve this problem? Would increasing isolated storage help?
thanks.
@halil ibrahim,
Please contact RadiantQ Technical Support and we will provide you with hints on how to optimize Gantt usage with large amounts of data.
Does the application crash while rendering (when you display the data in the UI)? You can use virtualization. You can also try creating another thread (a task or a BackgroundWorker) and displaying parts of the data in the UI incrementally; do not load all the data in the first step. It should work. I have tried it: loading 1000+ data items from a service and displaying them in the UI.
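For the incremental approach, here is a rough sketch of loading the rows in chunks, assuming the service can expose a paged method; GetTasks(skip, take), GanttDataServiceClient and GanttTask are placeholders, not RadiantQ's or your service's actual API. In Silverlight the Completed callback runs on the UI thread, so adding to the bound collection directly is safe:

// requires: using System.Collections.ObjectModel;
private readonly ObservableCollection<GanttTask> _items =
    new ObservableCollection<GanttTask>();
private const int BatchSize = 500;

private void LoadBatch(int skip)
{
    var client = new GanttDataServiceClient();
    client.GetTasksCompleted += (s, e) =>
    {
        if (e.Error != null)
            return;                          // log/report the error in real code

        foreach (var task in e.Result)
            _items.Add(task);                // the bound control picks rows up incrementally

        if (e.Result.Count == BatchSize)     // more rows left, request the next chunk
            LoadBatch(skip + BatchSize);
    };
    client.GetTasksAsync(skip, BatchSize);
}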
Or does the application crash while receiving data from the service (when you call a service method)? Then you should configure your service: increase the maximum receive message size, and so on. It depends on the service you use.
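On the client side the message size limit lives on the binding. A sketch (the service name and address are placeholders, and the server's web.config needs a matching maxReceivedMessageSize increase):

// requires: using System.ServiceModel;
private GanttDataServiceClient CreateClient()
{
    var binding = new BasicHttpBinding
    {
        MaxReceivedMessageSize = int.MaxValue,   // default is 65536 bytes
        MaxBufferSize = int.MaxValue
    };
    var address = new EndpointAddress("http://example.com/GanttDataService.svc");
    return new GanttDataServiceClient(binding, address);
}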

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, providing a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do this would be to retrieve the original schedule info from the external site's server (via an API or some other means) and retain the info in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how and whether this can be done, but this has proven to be problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping", but that sounds like it would be complicated at best: downloading the web page(s) and filtering through the HTML for the relevant/necessary data to put into the database. My worry is that the info on these mostly static sites is so static that the data isn't even kept in a database to build the page, and the web page itself is updated (hard-coded) when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you wanted to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job to periodically check whether the page has changed from the snapshot. When it does, it sends you an email so you can update the data.
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a case of how much effort (cost) your client is willing to bear for accuracy.
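A minimal sketch of that snapshot-and-compare job, written as a console program you could schedule; the URL, file name and the notification step are placeholders, not details from the question:

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class ScheduleSnapshotChecker
{
    const string Url = "http://example.com/train-schedule";  // placeholder
    const string SnapshotFile = "schedule.sha256";

    static void Main()
    {
        string html;
        using (var client = new WebClient())
            html = client.DownloadString(Url);

        string currentHash;
        using (var sha = SHA256.Create())
            currentHash = Convert.ToBase64String(
                sha.ComputeHash(Encoding.UTF8.GetBytes(html)));

        string previousHash = File.Exists(SnapshotFile)
            ? File.ReadAllText(SnapshotFile)
            : null;

        if (previousHash != null && previousHash != currentHash)
        {
            // The page changed since it was last transcribed;
            // send the "please re-check the schedule" email here.
            Console.WriteLine("Schedule page changed.");
        }

        File.WriteAllText(SnapshotFile, currentHash);
    }
}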
I have done this for the following site: http://www.buscatchers.com/ so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On the site, I use a two-day window so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for some examples. There is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamically generated; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
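As a rough illustration of the XPath approach, here is a sketch using the HtmlAgilityPack library; the URL and XPath expressions are placeholders, since the real site's markup would dictate them:

using System;
using System.Linq;
using HtmlAgilityPack;   // third-party HTML parser with XPath support

class ScheduleScraper
{
    static void Main()
    {
        var doc = new HtmlWeb().Load("http://example.com/train-schedule");  // placeholder URL
        var rows = doc.DocumentNode.SelectNodes("//table[1]//tr");          // placeholder XPath
        if (rows == null) return;            // SelectNodes returns null when nothing matches

        foreach (var row in rows)
        {
            var cells = row.SelectNodes("td|th");
            if (cells != null)
                Console.WriteLine(string.Join(" | ", cells.Select(c => c.InnerText.Trim())));
        }
    }
}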

Silverlight3: What to use: WebClient or database with RIA

I'm asking for advice. I'm working on a Silverlight 3 application, and now I have to choose how to save the information and get it back. I could save the necessary info in files (from 1 to 300 KB in size), or I could save it in a database. If I use WebClient to access a separate file, there is very little load on the server. If I get the data from a database, I think the load on the server would be much higher, as would the amount of code on the server.
Please correct me if I'm wrong.
I'm looking forward to hearing from you!
Thanks
There are additional considerations if you use a file that is local to the user's machine. If you wish to save data without any user intervention, then you are limited to using isolated storage, which has constraints on the size of your data. Otherwise, you have to ask the user where to save/load the file. This is due to the security model used by Silverlight.
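For what it's worth, the isolated storage quota can be raised, but only from a user-initiated event such as a button click; a sketch (the 5 MB figure is arbitrary):

// requires: using System.IO.IsolatedStorage;
private void IncreaseQuota_Click(object sender, RoutedEventArgs e)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        const long needed = 5 * 1024 * 1024;   // 5 MB, arbitrary
        if (store.AvailableFreeSpace < needed)
        {
            // Shows the browser's consent prompt; returns false if the user declines.
            if (!store.IncreaseQuotaTo(store.Quota + needed))
                return;   // user said no; fall back to asking for a save location
        }
    }
}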
I am thinking that a database and the RIA framework might be the way to go.
Just my 2¢.
If you are saving and loading the entire file at a time, then it might be okay to use a WebClient. This might take a little coding to handle errors that may result in incomplete saves.
If you're serializing some objects or xml data and storing that in a file, then you should probably be using a database instead.
Edit: It can be a pain to get WebClient or HttpWebRequest working correctly with GET/POST, but WCF can also be a pain to configure if you haven't done it before. WCF is probably better style, and you'll want to use a binary binding and send the file across as a byte[].
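A sketch of that setup, assuming a simple file service contract; the names and endpoint address are placeholders, and the binary encoding is wired up with a CustomBinding on the Silverlight side:

// Server-side contract (requires: using System.ServiceModel;)
[ServiceContract]
public interface IFileService
{
    [OperationContract]
    byte[] LoadFile(string name);

    [OperationContract]
    void SaveFile(string name, byte[] contents);
}

// Silverlight client: binary message encoding over HTTP
// (requires: using System.ServiceModel; using System.ServiceModel.Channels;)
private FileServiceClient CreateClient()
{
    var binding = new CustomBinding(
        new BinaryMessageEncodingBindingElement(),
        new HttpTransportBindingElement { MaxReceivedMessageSize = 5 * 1024 * 1024 });

    return new FileServiceClient(binding,
        new EndpointAddress("http://example.com/FileService.svc"));
}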
