Kik requirements for application load time

Our application is a full-featured game that has to load ~15 MB of assets, mainly graphics and sounds (the PNGs are already optimized with TinyPNG; the sounds are MP3/OGG at average bitrates).
So the game's load time is far greater than 4000 ms on a first visit. The game does, however, show a nice (honest) loading progress bar.
Will it be indexed and pass Kik QA?
If not, what are the recommendations for such "heavy" applications?
Thanks,
Sergey.
From Kik requirements:
"
The webpage should always be fast to load.
Use minification to accomplish this; aim to be under 4000 ms on first visit and 700 ms on repeat visits.
Webpage should not consume excessive amounts of data on load.
Loading the webpage with no interaction should never go above 2 MB.
"

The load time recommendations refer to how long it takes for the window.onload event to fire.
Based on what you said, your game will be fine, because it loads quickly and then implements its own in-game loading state to fetch assets.
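A minimal sketch of that pattern, assuming a hypothetical asset manifest, a #progress element, and a startGame entry point: the page shell stays tiny so window.onload fires well inside the 4000 ms budget, and the heavy assets stream in afterwards behind the progress bar.

```typescript
// Hypothetical asset manifest; only the tiny page shell loads up front.
const ASSET_URLS = ["img/sprites.png", "audio/music.ogg"];

// startGame is your game's (hypothetical) entry point.
declare function startGame(assets: Blob[]): void;

async function loadAssets(
  onProgress: (done: number, total: number) => void
): Promise<Blob[]> {
  let done = 0;
  return Promise.all(
    ASSET_URLS.map(async (url) => {
      const blob = await (await fetch(url)).blob();
      onProgress(++done, ASSET_URLS.length); // drive the in-game progress bar
      return blob;
    })
  );
}

window.addEventListener("load", () => {
  // onload has already fired with only the shell downloaded, which is what
  // the requirement measures; the ~15 MB of game assets arrive afterwards.
  const bar = document.querySelector<HTMLProgressElement>("#progress")!;
  loadAssets((done, total) => {
    bar.max = total;
    bar.value = done;
  }).then(startGame);
});
```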

Related

How to know when React component is ready for interaction

Just curious what can be done to measure TTI (a web vital) for a React app. We are only interested in the TTI of a specific component, so Lighthouse won't be able to help here.
Lighthouse's definition of Time to Interactive (TTI) isn't the ideal metric to use when trying to measure component interactivity, for a few reasons:
TTI is a lab metric, whereas First Input Delay (FID) is a better proxy for real-world data, though only at the page level
TTI approximates when a page becomes idle by finding the first 5 s "quiet" window of network activity (<=2 in-flight network requests) and CPU activity (no long tasks on the main thread). This approximation isn't the best approach when observing things at a component level.
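As an aside, FID can be captured in the field with the standard PerformanceObserver API; a minimal, framework-agnostic sketch:

```typescript
// Capture First Input Delay in the field as a real-world proxy for the
// lab-only TTI metric. Uses only the standard PerformanceObserver API.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    const fid = entry.processingStart - entry.startTime;
    console.log(`FID: ${fid.toFixed(1)} ms (first input: ${entry.name})`);
  }
}).observe({ type: "first-input", buffered: true });
```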
When trying to measure component-level performance in React, I would suggest either of the following:
Use the official React Developer Tools to get a high-level overview of your entire application's performance sliced in different ways (component profiling, interactions, scheduling)
Use the Profiler API to measure how long any component tree in your application takes to render. You can also use the User Timing API in conjunction to add marks and measures directly in the onRender callback, which will then automatically show up in Chrome DevTools' Performance panel and in the User Timings section of Lighthouse. This brings you full circle to being able to measure component interactivity directly in Lighthouse :)
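A minimal sketch of that combination, where Widget is a stand-in component and the measure name is arbitrary:

```typescript
import React, { Profiler, ProfilerOnRenderCallback } from "react";

// Mirror every profiled render into the User Timing API so it shows up in
// the DevTools Performance panel and in Lighthouse's User Timings section.
const onRender: ProfilerOnRenderCallback = (
  id, phase, actualDuration, baseDuration, startTime, commitTime
) => {
  performance.measure(`${id}-${phase}`, {
    start: startTime,          // when React began rendering this tree
    duration: actualDuration,  // how long the render actually took
  });
};

// Stand-in component; wrap any tree you care about the same way.
const Widget = () => <button>Click me</button>;

export const ProfiledWidget = () => (
  <Profiler id="Widget" onRender={onRender}>
    <Widget />
  </Profiler>
);
```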

Salesforce Low bandwidth tools

Does anyone know a good mechanism for measuring or reporting on page sizes?
I have a low-bandwidth use case (a humanitarian client) and am trying to evaluate my pages for hi-res imagery and other page-size issues across the org. As an example, even a standard Lightning page view seems to come in at around 700 KB, which seems high.
If there's something on the AppExchange that would be great, but otherwise any direction on reporting, API tools, or building this through other mechanisms would be really helpful.
I have searched the Salesforce AppExchange and the available metadata and other APIs, and so far haven't found anything. Event Monitoring has logs that help with general page-load performance, and I found an article on improving performance, but I haven't found a way to identify SIZE, as needed for low-bandwidth scenarios.
I don't know where to start yet, unfortunately. This could be a programmatic solution, in which case I'd love some direction, but it could also be tooling available elsewhere that I'm not aware of.
In Chrome DevTools (F12), the Network tab lets you simulate a low-bandwidth, high-latency connection in order to measure the download time of a web page or web application.
You can also visualize the size and download time of every resource downloaded, to identify the biggest images and the most time-consuming requests.
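If you want the same numbers programmatically, here is a sketch using the standard Resource Timing API (note that transferSize reports 0 for cross-origin resources unless they send a Timing-Allow-Origin header):

```typescript
// Total the transfer size of everything the page downloaded, then list the
// ten heaviest resources: a quick page-weight audit for low-bandwidth work.
const entries = performance.getEntriesByType(
  "resource"
) as PerformanceResourceTiming[];

const totalBytes = entries.reduce((sum, e) => sum + e.transferSize, 0);
console.log(`Total transferred: ${(totalBytes / 1024).toFixed(1)} KB`);

entries
  .sort((a, b) => b.transferSize - a.transferSize)
  .slice(0, 10)
  .forEach((e) =>
    console.log(`${(e.transferSize / 1024).toFixed(1)} KB  ${e.name}`)
  );
```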
In Salesforce, there's an administrative tool called Lightning Usage that can be activated. It generates different dashboards and performance stats per page. You can find some screenshots in this Salesforce description of the service: https://developer.salesforce.com/blogs/2018/10/understanding-experienced-page-time.html. The EPT (Experienced Page Time) metric could meet your needs.

AngularJS Performance vs Page Size

My site is ~500 KB gzipped, including JS, CSS, and images. It is built on AngularJS. A lot of people in my company are complaining that the site is slow on lower bandwidths. There are a few questions I would like answered:
Is 500 KB gzipped too high for lower bandwidths? People claim it takes 20 seconds to load on their machines, which I believe is an exaggeration. Is it really due to AngularJS and its evaluation time?
How much does the size of the app matter on lower bandwidths? If my site is 500 KB and I reduce it to 150 KB by building a custom framework, would that really help on lower bandwidths? If so, by how much?
It's all subjective, and the definition of "low bandwidth" is rather wide. However, using https://www.download-time.com/ you can get a rough idea of how long it would take to download 500 KB on different bandwidths: 500 KB is 4,000 kilobits, so a 512 Kbps link needs roughly 8 seconds, while a 5 Mbps link takes well under a second.
So, on any connection above 512 Kbps (the minimum ADSL speed; most are now better than 5 Mbps, and 3G mobile is around the same mark), it's unlikely that the file size is the problem.
If "low bandwidth" also implies "limited hardware" (RAM, CPU), it's possible the performance problem lies in unzipping and processing your application. Angular is pretty responsive, but low-end hardware may struggle.
The above root causes would justify rewriting the application using your own custom framework.
The most likely problem, however, is any assets/resources/templates your angular app requires on initialization - images, JSON files etc. This is hard to figure out without specific details - each app is different. The good news is that you shouldn't need to rewrite your application - you should be able to use your existing application and tweak it. I'm assuming the 500Kb application can't be significantly reduced in size without a rewrite, and that the speed problem is down to loading additional assets as part of start-up.
I'd use Google Chrome's Developer Tools to see what's going on. The Performance tab has an option to simulate various network conditions, and the Network tab lets you see which assets are loaded and how long they take. I'd look at which assets take time and see which of those can be lazy-loaded. For instance, if the application loads a very large image file on start-up, perhaps that could be lazy-loaded, allowing the application to appear responsive to the end user more quickly.
A common way to improve perceived performance is to use lazy loading.
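A tiny sketch of one form of lazy loading, assuming images are marked up with a hypothetical data-src attribute holding the deferred URL:

```typescript
// Swap in the real source and let native lazy loading defer offscreen
// images until they approach the viewport.
document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
  img.loading = "lazy";
  img.src = img.dataset.src!; // data-src: hypothetical deferred-source attribute
});
```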
To decrease your load time, sort out your caching, and find the right tool to calculate the download time of your files; you can use https://downloadtime.org for reference. If you have any issues, let me know.
As angular.js itself has a gzipped size of 57 KB, it seems much more is being loaded in this initial page call: roughly 10 times the size of angular.js.
To decrease the page load time, try to split your JavaScript into chunks that contain only the functionality needed for, e.g., the index page.
For example, when you're using Webpack, the recommended default maximum file size is around 244 KB (see Webpack's performance-hints documentation).
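A minimal sketch of that chunking with a dynamic import(), which Webpack emits as a separate chunk; ./reports is a hypothetical feature module that stays out of the initial bundle:

```typescript
// The reports code downloads only when the user first opens that page,
// keeping the initial bundle (and first load on low bandwidth) small.
async function openReports(): Promise<void> {
  const { renderReports } = await import(
    /* webpackChunkName: "reports" */ "./reports"
  );
  renderReports(document.getElementById("app")!);
}

document
  .querySelector("#reports-link")
  ?.addEventListener("click", () => void openReports());
```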

WPF vs Win App battery usage

We want to run a WPF application on a tablet and are looking for the difference in battery-usage impact between a Windows app and a WPF application.
Is there any comparison of battery usage, or any documentation?
I doubt there is any documentation on what you want, but as suggested above, running your own tests shouldn't be too hard. I don't recall the APIs, but on any mobile device there will be battery-state objects you can access that give, at the very least, the remaining battery energy. Write two test apps, each using one of the two paradigms. Run each, one at a time, for a long duration, and check the energy usage at the beginning and the end.
This is late for an answer but one aspect to remember about battery consumption is the use of the radios (Bluetooth and WiFi).
For tablet apps, try to manage your app by stepping back and analyzing what data you'll need from the database, and try to get that data in one shot so the OS can turn the radio off. If you make an SQL call every time the user presses a button, the radio is on more and drains the battery. The OS might also leave the radio on "a little longer" in case you make another query.
For the rest of the app's UI, you can safely count on an 8-hour shift, after which the device is docked for recharging.
You can also watch for battery notifications, so you can save the app's state before the OS shuts you down.
Other than that, each app is unique and you'll need to run these tests during your QA cycle.

Profiling and output caching in ASP.NET MVC

So I was recently hired by a big department of a Fortune 50 company, straight out of college. I'll be supporting a brand new ASP.NET MVC app - over a million lines of code written by contractors over 4 years. The system works great with up to 3 or 4 simultaneous requests, but becomes very slow with more. It's supposed to go live in 2 weeks ... I'm looking for practical advice on how to drastically improve the scalability.
The advice I was given in Uni is to always run a profiler first. I've already secured a sizeable tools budget with my manager, so price wouldn't be a problem. What is a good or even the best profiler for ASP.NET MVC?
I'm also looking at adding caching. There is currently no second-level or query cache configured for NHibernate. My current thinking is to use Redis for that purpose. I'm also looking at output caching, but unfortunately the majority of users will be logged in to the site. Is there a way to still cache parts of the pages served by MVC?
Do you have any monitoring or instrumentation setup for the application? If not, I would highly recommend starting there. I've been using New Relic for a few years with ASP.NET apps and been very happy with it.
Right off the bat you get a nice graph of request response times, broken down into the 3 kinds of tasks that contribute to the response time:
.NET CLR - Time spent running .NET code
Database - Time spent waiting on SQL requests
Request Queue - Time spent waiting for application workers to become available
It also breaks down performance by MVC action so you can see which ones are the slowest. You also get a breakdown of performance per database query. I've used this many times to detect procedures that were way too slow for heavy production loads.
If you want to, you can have New Relic add some unobtrusive JavaScript to your page that instruments browser load times. This helps you figure out things like "my users outside North America spend on average 500 ms loading images; I need to move my images to a CDN!"
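If you just want the raw numbers without a vendor, here is a generic sketch (not New Relic's API) that captures real-user load timing with the Navigation Timing API and beacons it to a hypothetical /rum endpoint:

```typescript
// Navigation-entry times are relative to the start of the navigation, so
// these values are already "time since the user requested the page".
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];
  navigator.sendBeacon(
    "/rum", // hypothetical collection endpoint
    JSON.stringify({
      ttfb: nav.responseStart,                  // time to first byte
      domReady: nav.domContentLoadedEventStart, // DOM parsed
      loadTime: nav.loadEventStart,             // full page load
    })
  );
});
```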
I would highly recommend you use some instrumentation software like this. It will definitely get you pointed in the right direction and help you keep your app available and healthy.
Profiler is a handy tool for watching how apps communicate with your database and for debugging odd behaviour. It's not a long-term solution for performance instrumentation, though, given that it puts load on your server and the results require quite a bit of laborious processing and digestion to paint a clear picture.
Random thought: check your application pool configuration and keep an eye out in the event log for too many recycling events. When an application pool recycles, it takes a long time to become responsive again. It's one of those things that can kill performance while you rip your hair out trying to track it down. Improper recycling settings bit me recently, which is why I mention it.
For NHibernate analysis (session queries, caching, execution time) you could use the HibernatingRhinos NHibernate Profiler. It's developed by the people behind NHibernate, so you know it will work really well with it.
Here is the URL for it:
http://hibernatingrhinos.com/products/nhprof
You could give it a try and decide if it helps you or not.

Resources