Web app returning data but too slow - sql-server

I have created a data-driven web app that uses MVC 4.0, web services, REST, SQL Server 2008 R2, jQuery Mobile (single-page app), and an AJAX architecture.
When a user clicks 'Search by name' > 'Browse names', this brings in all the data from a users table (and a couple of other data lookup tables).
The app has been designed to be used on Retina displays as well as normal ones (serving one large image size only) and resizes the images in the HTML.
The problem is this search can sometimes take 10 seconds to return around 400 records, but requesting the URI directly takes only around two. Is this normal, and can lazy loading help? How could I implement this in the AJAX below?
AJAX:
function getConsultants() {
    $.ajax({
        type: 'GET',
        async: false, // synchronous XHR blocks the UI thread while waiting
        url: 'http://31.222.187.42/hca-consulting/Farm/users',
        //url: 'json/get_consultants.txt',
        dataType: 'json',
        success: function (users) {
            hcaConsultants = users;
        }
    });
}

Here is what I believe your issues are:
Initial page load amounts to 13.1MB (easy, tiger!). Obviously this needs to be addressed. Run all of your PNGs and JPEGs through an optimizer; here is one that is an add-on for Visual Studio. In light of this, on many PCs the browser process's RAM use will hit the roof: Chrome hit 250MB for me, and Opera 650MB. It's a heavy, heavy page, with 171 requests on initial page load. That is really high. Consider using sprites for your images as well, and consolidate as many as you can; the performance difference is stunning, since the higher the number of requests, the less efficient a page becomes.
Do you really need to preload all those images? Definitely consider the lazy loader you mention, or another image loader as appropriate.
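As a minimal sketch of deferring offscreen images, assuming an img tag convention where the real URL sits in a data-src attribute (that attribute name is my assumption, not something from your markup), using the modern IntersectionObserver API in place of a plugin:

    // Swap in the real image only when it scrolls near the viewport.
    var imageObserver = new IntersectionObserver(function (entries) {
        entries.forEach(function (entry) {
            if (entry.isIntersecting) {
                var img = entry.target;
                img.src = img.getAttribute('data-src'); // load the real image
                imageObserver.unobserve(img);           // done with this one
            }
        });
    });

    document.querySelectorAll('img[data-src]').forEach(function (img) {
        imageObserver.observe(img);
    });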
When 'Browse names' is invoked, the number of requests goes up to 548 individual items on the HTTP stack, and the payload to 14.3MB (but all of the image links are broken, so I would suggest that once you get those in place it would be 30+MB). That is obviously unacceptable for a potentially mobile (web or installed?) application.
You may also wish to consider jQuery templating for speed of delivery of those records to the DOM. In addition, you could implement a scroll load with something like the Waypoints plugin, a beautifully minimized 8kb, so more JSON results appear in the browser as the user scrolls down; the fewer records you deal with at any one time, the better your in-browser efficiency and responsiveness.
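A hedged sketch of what that could look like against your getConsultants call, made asynchronous and paged. The ?page= query parameter and the idea that the service returns records in slices are assumptions - the API would need to support paging server-side:

    var currentPage = 0;
    var loading = false;
    var hcaConsultants = [];

    function loadNextPage() {
        loading = true;
        $.ajax({
            type: 'GET',
            url: 'http://31.222.187.42/hca-consulting/Farm/users?page=' + (++currentPage),
            dataType: 'json',
            success: function (users) {
                hcaConsultants = hcaConsultants.concat(users); // append, don't replace
                loading = false;
            }
        });
    }

    $(window).on('scroll', function () {
        // Within 200px of the bottom of the page? Fetch the next slice.
        if (!loading &&
                $(window).scrollTop() + $(window).height() >= $(document).height() - 200) {
            loadNextPage();
        }
    });

    loadNextPage(); // fetch the first page up front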
You will need heavy, heavy caching here if you are delivering this over the web rather than as an installed app; embrace it with open arms. Also consider bundling and minifying all of your own JavaScript into one file at build time, with a VS post-build event and my hero Douglas Crockford's JSMin, or by using the bundling built into the framework. Also, use CDNs for your libraries.
Really, that is it: the reason your page is loading so slowly is that it's groaning under the weight of its resources, which is why your API's JSON comes back so quickly when requested directly.
This is a great website for comparing your page speed and YSlow scores, and here is the report with recommendations for your site, for additional tips and tricks. Responsiveness and perceived speed are everything for these kinds of apps, so I am rooting for you to make it world class.
I hope this helps and good luck with your app.

Related

React App - creating a "sub app" to only load a subsection of a large primary React application

How can I deploy secondary "partial" React Apps using a subset of the same files?
A bit of background:
We have a fairly large React web app (around 75 primary pages/views, along with tons of modals, dynamic content, etc.) with about 5MB of JavaScript once it's been bundled for production.
It works well in normal browsers. But we also display certain views on TV screens, cycling through several views. In a case like this, the "digital signage computer" (in some cases the onboard CPU/browser of a smart TV) is loading a "cycle screen" along with perhaps 4 pages contained in iframes. (Complex setup, but we want the pages rotating at a few-second interval while only querying the database for updates every 10 minutes or so, and this achieves that.)
However, from my understanding, this means the smart TV has to load ~25MB of code (it might be cached, so it may not have to download it all 5 times, and very little of it actually gets rendered, but each frame still has to be processed individually). And occasionally, it appears that the TV gets overwhelmed while rendering and just displays a blank page.
I'm thinking perhaps there is a good way to create lightweight secondary apps with just those pages (and associated provider contexts), so that these views could be trimmed down to < 1MB each.
But I don't want to maintain separate page/view files in different codebases, with different deployment processes, etc.
Is there a good method to do this? Where you can specify sub-apps which get separately bundled, so that going to a certain route (e.g. server/subapp1) will only load that page's worth of code, while it remains predominantly the same code?
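Something like route-level code splitting is what I imagine - sketched here assuming React Router and a bundler that splits on dynamic import() (webpack, Vite, etc.); the view paths and the /subapp1 route are placeholders. With React.lazy, visiting /subapp1 downloads only that view's chunk plus the shared runtime:

    import React, { Suspense, lazy } from 'react';
    import { BrowserRouter, Routes, Route } from 'react-router-dom';

    // Each lazy() view is emitted as its own chunk, fetched on first navigation.
    const SignageCycle = lazy(() => import('./views/SignageCycle')); // placeholder path
    const MainApp = lazy(() => import('./views/MainApp'));           // placeholder path

    export default function App() {
      return (
        <BrowserRouter>
          <Suspense fallback={<div>Loading…</div>}>
            <Routes>
              <Route path="/subapp1" element={<SignageCycle />} />
              <Route path="/*" element={<MainApp />} />
            </Routes>
          </Suspense>
        </BrowserRouter>
      );
    }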

Codename One - Reduce the loading time of a web-app

I have a Codename One web app that, after showing the logo, remains completely blank and white for a variable time (from a few seconds to more than ten seconds). My Internet connection is very fast (optical fiber).
Is there any tip to reduce the loading time of a Codename One web app? The build size is 663kb and the generated application is 10.5MB (unzipped).
Chrome has some really nice benchmarking tools that help point out the time spent in each stage. You should run these and make sure that the downloaded binaries are gzipped, so the download isn't the bottleneck.
Also make sure to run your tests against a deployed app and not against the preview, which might exhibit different behavior.
In terms of the app, try to show a form quickly, without any server requests or IO. Once you do that, defer the heavier code to the actual loading block later. If you trigger a server call, it will significantly slow down loading.

Feasibility of using angular js for web app with over 200 medium to complex screens

My team was considering AngularJS for web app UI development. But knowing at a high level how single-page apps work, I had a question as to how feasible it is to use the AngularJS framework for a large web application. This would have at least 200 screens, each screen containing an average of 30 UI controls like text boxes, calendar controls, drop-downs, etc.
The user will be accessing the web app on desktop or laptop and will be using the application in the browser throughout an 8 hour day, without ever closing the browser.
Given the above usage, would AngularJS memory usage / performance be an issue?
Are there web apps with huge and complex UIs, built using AngularJS, that are in production and used every day?
You can have not just 200 but thousands of screens; the number does not matter as long as you address the core and fundamental questions below. They all boil down to memory and performance.
At a given point of time, how many $watches will be active?
At a given point of time, how many listeners are created?
At a given point of time, what is the complexity of the DOM, i.e. the number of DOM elements and their nesting/depth?
At a given point of time, how many JavaScript modules (services, controllers, etc.) will be loaded in memory? Even though these are singletons, they consume memory.
For each such screen, how much memory will be consumed? (If you use jQuery UI etc., your memory grows much more rapidly than with pure Angular + HTML components.)
This brings us to what you can do to control the above factors, which, if not watched, will make your application sink/crash.
1) How do you break up the run-time complexity of your "big" application? Think of routers or dialogs. Each of your screens will come and go, i.e. you will never display 200 screens at the same time.
2) Make sure that when a screen is gone there is no "leftover". Don't use jQuery; if you do, make sure you clean up on $scope.$destroy (see the sketch after this list).
3) Multitudes of directives: directives are powerful, but don't overuse them, and try not to nest too many of them deeply. templateUrl is "tempting", but sometimes in-lining a template is the best choice. Use build tools that will inline the templates.
4) Load modules on demand using RequireJS. This will force you to modularize your application and think hard about your concatenation strategy (combining JS files). It will force you to write independent modules.
5) Browser-cache your assets and use a good cache-busting mechanism. Grunt-based plugins come to your rescue. Minify your assets.
6) Compress the assets for efficient network utilization and faster transmission.
7) Keep your app encapsulated in Angular. Don't create any global objects. Chances are they hold a closure over, or refer to, something within an Angular $scope, and those $scopes then hang around or end up on a difficult trajectory to ever get garbage collected.
8) Use one-time-bind or bind-once model binding as much as possible.
9) A splash screen is an excellent weapon - use it. It helps you load some stuff upfront while the user watches a smooth/jazzy picture :)
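A minimal sketch of point 2's cleanup advice; the directive name and the fancyPlugin call are hypothetical stand-ins for whatever jQuery plugin you attach:

    app.directive('myWidget', function () {
        return {
            link: function (scope, element) {
                element.fancyPlugin(); // hypothetical jQuery plugin attaching handlers
                scope.$on('$destroy', function () {
                    // Tear down anything Angular doesn't know about, or it leaks.
                    element.off();        // unbind event handlers
                    element.removeData(); // drop plugin data cached on the node
                });
            }
        };
    });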
Regarding 8 hours a day of constant use:
Unless there is a leak of one of the following kinds, you should be fine:
1) Listeners leaking, i.e. hanging around.
2) DOM leaks. The detached-DOM issue.
3) The size of JavaScript objects. Objects coded in a certain way create repeated instances of functions, as sketched below.
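To illustrate that third point with a small sketch:

    // Methods defined inside the constructor are duplicated per instance;
    // prototype methods are shared by all instances.
    function BadWidget(name) {
        this.name = name;
        this.render = function () { return this.name; }; // one copy per widget
    }

    function GoodWidget(name) {
        this.name = name;
    }
    GoodWidget.prototype.render = function () { return this.name; }; // one copy total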
(I am developing an AngularJS app with more than 450 screens - an MDI-like app. The app is not yet in production. The above points come from my burnouts in the functionality we have developed so far.)
I've worked on multiple projects that are extremely large single-page applications in Angular and Ember.js at companies like McKesson and Netflix.
I can tell you for a fact that it's completely "feasible" to build "huge, complex" applications with frameworks such as Angular. In fact, I believe Angular is uniquely well suited to this because of its modular structure.
One thing to consider when creating your Angular application is whether or not every user will benefit from all "200 pages" of it. That is to say, is it necessary for all 200 views to be part of the same single-page application? Or should you break it into a few single-page applications with views that share concerns?
A few tips:
Watch out for name collisions in the IoC container: if you create enough services and controllers, even if you break them into separate modules, it's very easy to create two services with the same name. This may or may not result in an outright error. What happens when you register two "fooService" services? The last one wins.
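A hedged illustration of that collision (module and service names made up): registering two services called fooService in different modules produces no error, and whichever module registers last silently wins.

    angular.module('a', []).factory('fooService', function () {
        return { from: 'module a' };
    });
    angular.module('b', []).factory('fooService', function () {
        return { from: 'module b' };
    });
    angular.module('app', ['a', 'b']).run(function (fooService) {
        console.log(fooService.from); // "module b" - the last registration wins
    });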
If you decide to break your app into more than one single-page app, you have to be sure you have solid boundaries of functionality between the two. They're not going to be able to share state easily, other than with something like cookies or local storage.
If you decide to keep everything in one single-page app, you're going to want to watch your initial download time keenly. Always check how long it takes to start your app "cold" over a slow-ish connection. If the entire app is in one bundle, then depending on how you structure things (are you going to use AMD?) you might see a long initial load time.
AVOID rendering HTML on your server. I can't stress this enough. Let Angular do that work for you. The only rendering your server should be doing is rendering JSON to be returned from some RESTful service.
Flesh out your JS build process early on. Look into Node-based tools like Grunt, Gulp, and Broccoli for linting/concatenating/minifying your JS files. Check out Karma for running unit tests, and look into Istanbul for code coverage. Protractor is a great tool as well, but I recommend strong unit tests in Karma over integration tests with Protractor, just because WebDriver-based tests tend to be brittle.
Other than that, I think you'll find a single-page app written in any modern framework to be extremely snappy after the initial load is done. In fact, it will make any "old" PHP/ASP.NET-style app that renders the entire page on the server seem slow as dirt in comparison. This is because you'll only be loading data and the occasional .html file over HTTP.

Why HTML/Web UI response slower than Native UI?

I can't understand why HTML/Web UI responds more slowly than WinForms/WPF/Android Views/native UI.
A native UI also has styles, element nesting, and events, corresponding to the CSS, DOM, and JavaScript events of a Web UI.
Event response time includes: focus changes, drop-downs, scrolling, animated moving, animated resizing, etc.
DOM tree insertion/replacement is also slow: inserting 10,000 characters of HTML costs about 100 ms in Google Chrome on Android 4.0, while parsing its template costs only about 20 ms (jQuery micro-template).
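A rough sketch of how to reproduce that kind of measurement, separating parse time from attach-and-layout time (absolute numbers will vary wildly by device and browser):

    var html = new Array(436).join('<span>0123456789</span>'); // ~10,000 chars of markup

    var t0 = performance.now();
    var container = document.createElement('div');
    container.innerHTML = html;            // parse into a detached subtree
    var t1 = performance.now();

    document.body.appendChild(container);
    void container.offsetHeight;           // force a synchronous layout
    var t2 = performance.now();

    console.log('parse: ' + (t1 - t0).toFixed(1) + ' ms, ' +
                'insert+layout: ' + (t2 - t1).toFixed(1) + ' ms');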
I realized that maybe the biggest factors that slow down event response are:
1. The UI locking between parallel JavaScript processes.
2. The rendering engine being too slow to process new UI-change messages from JavaScript workers, especially when it is still busy with the last UI update (because of point 3).
3. The HTML layout method (for example: CSS cascading, inline flow layout, responsive layout, etc.), which may slow down partial UI updates.
4. Parsing HTML/XML takes a long time. A hint: Android view inflation relies heavily on pre-processing of XML files done at build time (http://developer.android.com/reference/android/view/LayoutInflater.html).
A subset of the HTML and CSS standards may be the future solution for webview app development:
http://www.silexlabs.org/haxe/cocktail/
http://www.terrainformatica.com/htmlayout/
http://www.nativecss.com/
http://www.pixate.com/
https://github.com/tombenner/nui
http://steelratstory.com/steelrat-products/wrathwebkit
http://trac.webkit.org/wiki/EFLWebKit
https://github.com/WebKitNix/webkitnix
http://qt-project.org/doc/qt-4.8/richtext-html-subset.html
http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/
A pile of native UI markup languages: http://en.wikipedia.org/wiki/User_interface_markup_language
Why is there not a simplified HTML standard and a simplified WebCore layout engine to replace these native UI markup languages?
Maybe we could realize a subset of HTML in the kivy.org project.
PC and Android browsers = application thread + UI thread
iOS browsers = application thread + UI data thread + UI hardware thread (CoreAnimation/OpenGL ES)
In an iOS browser, the application thread can directly call the UI hardware thread.
If a Web UI were completely implemented by JavaScript on the client side, the difference from a WinForms/native UI would be trivial.
However, in most cases the Web UI triggers a Web request to the Web server, and then has to go through the following steps to achieve the same effect as a WinForms/native app:
1. Send an HTTP request (GET/POST/...) to the Web server.
2. The Web server - an executable (an external app or a service) listening on one or more ports - receives the request, parses it, and finds the Web application.
3. The Web server executes the backend (server-side) logic within the application. (A Web application such as ASP.NET is pre-compiled, so the time complexity of this step can be very close to that of a Windows app.)
4. The Web server renders the result into markup and sends it back to the client.
5. The client (browser) parses the result and updates the UI if necessary. (Controls/images/other resources in a Web page normally take a little longer to render in a browser than a Windows app takes to render its display.)
Even if the Web server is local, the cost generated by data parsing/formatting/transfer cannot simply be ignored.
On the other hand, an application with a WinForms/native UI typically maintains a message loop, which is active and hosted in machine code. A UI request normally just triggers a lookup in the message table and then executes the backend logic (step 3 above).
When it returns the result and updates the UI, the result can simply be a binary data structure (it doesn't need to be in markup), and it doesn't rely on another application (a browser) to render to the screen.
Lastly, a WinForms/native application normally has full control to maintain multiple threads to update the UI gradually, while a Web application has no direct control over those server-side resources.
UPDATE: When we compare a Web application and a Windows/WPF (or native) application consuming the same Web service to partially update their UIs:
The two UIs should respond and refresh with a negligible speed difference. The implementation difference between binary and scripted execution in responding and refreshing the UI is almost nothing.
Neither UI needs to reconstruct its control tree or refresh its entire appearance. Given the same conditions, they could have the same CPU priority, memory/virtual-memory caching, and the same (or a close) number of kernel object and GDI handles at the process/thread level.
In this case, as you described, there should be almost no visual difference.
UPDATE 2:
Actually, the event-handling mechanisms in Web and Windows apps are similar. The DOM has event bubbling. Similarly, MFC has command routing, WinForms has its event flow, WPF has event bubbling and tunnelling, and so on. The idea is that a UI event might not strictly belong to one control, and a control has some way to claim that an event has been "handled". For standard controls, focus-change, text-change, dropdown, and scrolling events should have similar client-side response times for both Web and Windows apps.
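For instance, a small sketch of that "claiming" idea in the DOM, where stopPropagation plays the role of WPF's Handled flag (the element ids are hypothetical):

    document.getElementById('child').addEventListener('click', function (e) {
        console.log('child handled the click');
        e.stopPropagation(); // "claim" the event: ancestors never see it
    });
    document.getElementById('parent').addEventListener('click', function () {
        console.log('parent handles clicks the child did not claim');
    });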
Performance-wise, rendering is the biggest difference. Web apps have limited control of the "device context" because a Web page is hosted by an external application - the Web browser. Windows applications can implement animation effects using GPU resources (as WPF does) and speed up rendering by refreshing the "device context" partially. That's why the HTML5 canvas makes Web developers excited, while Windows game developers have been using OpenGL/DirectX for over 10 years.
UPDATE 3:
Each Web browser engine (http://en.wikipedia.org/wiki/Layout_engine) has its own implementation of DOM and CSS rendering, and of (CSS) selectors. Moving and resizing elements within a Web page changes the DOM/CSS tree setup. Selector and rendering performance depends heavily on the Web browser engine.
UI operations can make selectors go through unnecessary steps to update the UI, and a Web page has no way to tell the browser to do partial rendering, which is why fancy JavaScript controls (some jQuery UI, Dojo, Ext JS) cannot be real-time fast and are usually slower than Flash controls.
The time spent on the client is negligible compared to the time the data spends travelling over the network. The actual render time of a Windows form or a webpage in a browser is measured in (tens or maybe hundreds of) microseconds. Sending a request to a server and getting the result back is measured in milliseconds.
You can confirm this quite easily:
1. Create a simple WinForms application and time it.
2. Create a similar Web-based application, run it on a webserver on your own PC (i.e. http://localhost/myapp.asp), and time it.
3. Run it on a remote webserver and time it.
You'll see that 1 is fastest, followed closely by 2 (a little slower, due to interpreting the HTML, the CSS, etc.), while 3 is vastly slower because of the network time.
To answer your question: the difference is due almost entirely to network delays, which are an order of magnitude greater than local processing time.
EDIT: It would be kind of the downvoters to add a comment explaining why.
3 big differences
Web UI apps run within a browser, so performance depends on how well the browser is optimized.
The browser also has its own JavaScript VM: another process that has to run and interpret the code before it executes.
All of this is an extra layer on top of the native OS. If you bring up your computer's activity monitor and load a web page in your browser, you will notice what resource hogs web browsers are.
Native UI elements have graphics-acceleration support, and depending on the OS, native UI templates are compiled to a native format that does not have to be parsed at render time.
One thing to keep in mind is that the browser itself is a native application, so anything built for the browser to run is inherently written with (at least) one additional layer of abstraction, versus something written directly for native execution.
It's also worth noting such dynamics as this:
300ms tap delay, gone away
http://updates.html5rocks.com/2013/12/300ms-tap-delay-gone-away
The initial impetus for this artificial delay was to support pinch-zooming vs other touch interactions -- that is, slower responsiveness in this case was a deliberate way to disambiguate different user actions.
Granted, while this is a rather specific use-case, the general concept does serve as an example of the different considerations for browser-based vs native implementations. That is, browser-based experiences include some of the usual framework cost of solving for a wide variety of interactions and content, whereas native experiences are naturally tailored more specifically to only listen for / respond to the desired interaction models.
Throughout the implementation, many tiny parts (such as this) are slimmer and more focused in a raw native version, which can contribute to the general effect of better responsiveness.
Only in substandard browsers (this includes all Android browsers, all Mac OS browsers, all Linux browsers, and worst of all every version of Google Chrome). These are badly written, unoptimised browsers with no concern for touchscreen latency, UI responsiveness and smooth scrolling. They lock up and stutter during any kind of CPU activity, disk or network I/O and user input.
Superior browsers such as Internet Explorer 11 or iOS Safari are sometimes even more responsive than unoptimised native apps.
Basically only Windows 8.1 and iOS have responsive browsers. All other browsers are inferior as far as UI responsiveness is concerned. The difference is really huge. IE11 and iOS Safari obliterate other browsers in UI latency and smoothness.

Question on serving Images on App Engine ( 2 Alternatives )

Planning to launch a comic site which serves comic strips (images).
I have little prior experience with serving/caching images,
so these are the 2 methods I'm considering:
1. Using LinkProperty
class Comic(db.Model):
    image_link = db.LinkProperty()
    timestamp = db.DateTimeProperty(auto_now=True)
Advantages:
The images are fetched from the disk space itself (and disk space is cheap, I take it?)
I can easily set up app.yaml with an expiration date to cache the content in the user's browser
I can set up memcache to retrieve the entities faster (for high traffic)
2. Using BlobProperty
I used this tutorial; it worked pretty neatly: http://code.google.com/appengine/articles/images.html
Side question: can I say that using BlobProperty sort of "protects" my images from outside linkage, i.e. that people can't just link directly to the comic strips?
I have a few worries for method 2.
I can obviously memcache these entities for faster reads.
But then:
Is memcaching images a good thing? My images are large (100-200kb per image). I think memcache allows only up to 4 GB of cached data? Or is it 1 MB per memcached entity, with unlimited entities?
What if App Engine's memcache fails? -> Solution: I'd have to go back to the datastore.
How do I cache these images in the user's browser? If I were using method 1, I could just easily add the expiration date for the content to my app.yaml, and the pictures would get cached user-side.
I would like to hear your thoughts.
Should I use method 1 or 2? Method 1 sounds dead simple and straightforward; should I be wary of it?
[EDITED]
How do I solve this dilemma?
Dilemma: the last thing I want is for people to get the direct link to an image and put it up on bit.ly, because users will then be directed only to the image on my server
(and not the advertising/content around it, as they would be if they had accessed it from the main page itself).
You're going to be using a lot of bandwidth to transfer all these images from the server to the clients (browsers). Remember App Engine has a maximum number of files you can upload; I think it is 1000, but it may have increased recently. And if you want to control access to the files, I do not think you can use option #1.
Option #2 is good, but your bandwidth and storage costs are going to be high if you have a lot of content. To solve this problem people usually turn to Content Delivery Networks (CDNs). Amazon S3 and edgecast.com are two such CDNs that support token-based access URLs. Meaning, you can generate a token in your App Engine app that is good for an IP address, time window, geography, and some other criteria, and then give out your CDN URL with this token to the requestor. The CDN serves your images and does the access checks based on the token. This will help you control access, but remember: if there is a will, there is a way, and you can't 100% secure anything - though you can probably get reasonably close.
So instead of storing the content in App Engine, you would store it on the CDN and use App Engine to create URLs with tokens pointing to the content on the CDN.
Here are some links about signed URLs; I've used both of these (a generic sketch follows the links):
http://jets3t.s3.amazonaws.com/toolkit/code-samples.html#signed-urls
http://www.edgecast.com/edgecast_difference.htm - look at 'Content Security'
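To make the token idea concrete, here is a generic sketch of an expiring, HMAC-signed URL in Node.js; the CDN host, query parameter names, and secret are invented for illustration - real CDNs (S3, EdgeCast) each define their own signing scheme, documented at the links above:

    const crypto = require('crypto');

    function signedUrl(path, secret, ttlSeconds) {
        const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
        const token = crypto.createHmac('sha256', secret)
            .update(path + expires)
            .digest('hex');
        return 'https://cdn.example.com' + path +
               '?expires=' + expires + '&token=' + token;
    }

    // The CDN (or an edge function) recomputes the HMAC and rejects the
    // request if the token doesn't match or the expiry time has passed.
    console.log(signedUrl('/comics/strip-001.png', 'my-secret', 300));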
In terms of solving your dilemma, I think that there are a couple of alternatives:
1. You could cause the images to be rendered in a Flash object that would download the images from your server in some kind of encrypted format that it would know how to decode. This would involve quite a bit of up-front work.
2. You could have a valid-one-time link for the image. Each time you generated the surrounding web page, the link to the image would be generated randomly, and the image-serving code would invalidate that link after allowing it one use. If you have a high-traffic web site, this would be a very resource-intensive scheme.
Really, though, you want to consider just how much work it is worth to force people to see ads, especially when a goodly number of them will be coming to your site via Firefox, and there's almost nothing that you can do to circumvent AdBlock.
In terms of choosing between your two methods, there are a couple of things to think about. With option one, where you are storing the images as static files, you will only be able to add new images by doing an appcfg.py update. Since App Engine applications do not allow you to write to the filesystem, you will need to add new images to your development code and do a code deployment. This might be difficult from a site-management perspective. Also, serving the images from memcache would likely not offer you a performance improvement over having them served as static files.
Your second option, putting the images in the datastore, protects your images from linking only to the extent that you have some power to control, through logic, whether they are served or not. The problem you will encounter is that making that decision is difficult. Remember that HTTP is stateless, so finding a way to distinguish a request from a link that is external to your application from one that is internal is going to require trickery.
My personal feeling is that jumping through hoops to make sure that people can't see your comics without seeing ads is solving the problem the wrong way. If the content that you are publishing is worth protecting, people will flock to your website to enjoy it anyway. Through high volumes of traffic, you will more than make up for anyone who directly links to your image, thus circumventing a few ad serves. Don't try to outsmart your consumers. Deliver outstanding content, and you will make plenty of money.
Your method #1 isn't practical: You'd need to upload a new version of your app for each new comic strip.
Your method #2 should work fine. It doesn't automatically "protect" your images from being hotlinked - they're still served up on a URL like any other image - but you can write whatever code you want in the image serving handler to try and prevent abuse.
A third option, and a variant of #2, is to use the new Blobstore API. Instead of storing the image itself in the datastore, you can store the blob key, and your image handler just instructs the blobstore infrastructure which image to serve.
