Does anyone know a good mechanism for measuring or reporting on page sizes?
I have a low-bandwidth (humanitarian client) use case and am trying to evaluate my pages for hi-res imagery and other page-size issues across the org. As an example, even a standard Lightning page view seems to come in at around 700 KB, which seems high.
If there’s something on the AppExchange that would be great, but otherwise any direction on reporting, API tools, or building this through other mechanisms would be really helpful.
I have searched the Salesforce AppExchange and the available metadata and other APIs, and so far haven't found anything. Event Monitoring has logs that help with general page-load performance, and I found an article on improving performance, but I haven't found a way to identify SIZE, as would be needed for low-bandwidth scenarios.
I don't know where to start yet, unfortunately. This could be a programmatic solution, in which case I'd love some direction, but it could also be tools available elsewhere that I'm not aware of.
In Chrome DevTools (F12), the Network tab lets you simulate a low-bandwidth, high-latency connection in order to measure the download time of a web page or web application.
You can also see the size and download time of every resource downloaded, which helps identify the biggest images and the most time-consuming requests.
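If you want a number rather than eyeballing the waterfall, the Resource Timing API exposes per-resource transfer sizes. Here is a minimal sketch (standard browser API, nothing Salesforce-specific) you can paste into the DevTools console on a loaded page:

```typescript
// List the ten largest resources and the total compressed bytes transferred.
// Note: transferSize reports 0 for cross-origin resources served without a
// Timing-Allow-Origin header, so treat the total as a lower bound.
const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
const totalKb = resources.reduce((sum, r) => sum + r.transferSize, 0) / 1024;

console.table(
  [...resources]
    .sort((a, b) => b.transferSize - a.transferSize)
    .slice(0, 10)
    .map((r) => ({ url: r.name, kB: Math.round(r.transferSize / 1024) }))
);
console.log(`Total transferred: ${totalKb.toFixed(1)} kB (excluding the HTML document itself)`);
```

You could presumably also bake something like this into a page to report sizes from real users' sessions, though that part is speculation on my end.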
In Salesforce, there's an administrative tool called Lightning Usage that can be activated. It generates different dashboards and performance stats per page. You can find some screenshots in this Salesforce description of the service: https://developer.salesforce.com/blogs/2018/10/understanding-experienced-page-time.html. The EPT (Experienced Page Time) metric could meet your needs.
Related
About two weeks ago, a Chrome update crippled users of my Angular app. I load a lot of data, but the entire single-page application used to load in under 4 seconds; after the Chrome update, every single user went to over 40 seconds. I did not experience the problem at first, but when I upgraded Chrome from 63.0.3239.132 to 64.0.3282.167, it began for me too.
Somewhere between Chrome 63.0.3239.132 and 64.0.3282.167, a change basically slowed my Angular app to a crawl. It affects loading and rendering across the board and has made the entire app almost unusable. I've been looking into the issue for a few days with no joy.
Does anyone have any insight or recommendation on what could cause such a performance degradation?
Here is a screenshot of my network tab. All of this used to be very fast before the Chrome update and now it just crawls.
If I set `$httpProvider.useApplyAsync(true)`, it alleviates the problem, but my application is huge and this causes a lot of erratic behavior in a five-year-old application.
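For reference, a minimal sketch of where that setting lives in an AngularJS 1.x config block (TypeScript with @types/angular; the module name "app" is a placeholder). Batching $http responses into fewer $digest cycles reduces rendering work per response, which is also why code that assumed one digest per response can start misbehaving:

```typescript
// Enable batched $http response processing: responses that arrive close
// together are applied in a single $digest cycle instead of one each.
angular.module("app").config(["$httpProvider", ($httpProvider: ng.IHttpProvider) => {
  $httpProvider.useApplyAsync(true);
}]);
```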
I'm not sure if this is still an issue, but Google has continued to ramp up security measures in Chrome, especially around HTTPS, and I believe Google is pushing for everything to move to HTTPS. Certificates that are not clean (there are several criteria for this) present problems and may require extra processing. I believe there is an add-on for Chrome DevTools (or a built-in feature) that can break out the TLS processing to show you more detail.
A high TTFB reveals one of two primary issues: either bad network conditions between client and server, or a slowly responding server application.
To address a high TTFB, first cut out as much network as possible. Ideally, host the application locally and see if there is still a big TTFB. If there is, then the application needs to be optimized for response speed. This could mean optimizing database queries, implementing a cache for certain portions of content, or modifying your web server configuration. There are many reasons a backend can be slow. You will need to do research into your software and figure out what is not meeting your performance budget.
If the TTFB is low locally then the networks between your client and the server are the problem. The network traversal could be hindered by any number of things. There are a lot of points between clients and servers and each one has its own connection limitations and could cause a problem. The simplest method to test reducing this is to put your application on another host and see if the TTFB improves.
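If you want to confirm where the time goes from the client side, the Navigation Timing API gives a quick breakdown (a sketch using standard browser APIs; run it in the console after a page load):

```typescript
// Break the page load into phases; all values are in milliseconds.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
console.log({
  dns: nav.domainLookupEnd - nav.domainLookupStart,
  connect: nav.connectEnd - nav.connectStart,   // includes the TLS handshake, if any
  ttfb: nav.responseStart - nav.requestStart,   // network round trip + server think time
  download: nav.responseEnd - nav.responseStart,
});
```

A high `ttfb` alongside small `dns`/`connect` numbers points at the server application; high values across the board point at the network.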
So I was recently hired by a big department of a Fortune 50 company, straight out of college. I'll be supporting a brand new ASP.NET MVC app - over a million lines of code written by contractors over 4 years. The system works great with up to 3 or 4 simultaneous requests, but becomes very slow with more. It's supposed to go live in 2 weeks ... I'm looking for practical advice on how to drastically improve the scalability.
The advice I was given in Uni is to always run a profiler first. I've already secured a sizeable tools budget with my manager, so price wouldn't be a problem. What is a good or even the best profiler for ASP.NET MVC?
I'm also looking at adding caching. There is currently no second-level or query cache configured for NHibernate. My current thinking is to use Redis for that purpose. I'm also looking at output caching, but unfortunately the majority of users will be logged in to the site. Is there a way to still cache parts of the pages served by MVC?
Do you have any monitoring or instrumentation set up for the application? If not, I would highly recommend starting there. I've been using New Relic for a few years with ASP.NET apps and have been very happy with it.
Right off the bat you get a nice graph of request response times, broken down into three kinds of tasks that contribute to the response time:
.NET CLR - Time spent running .NET code
Database - Time spent waiting on SQL requests
Request Queue - Time spent waiting for application workers to become available
It also breaks down performance by MVC action so you can see which ones are the slowest. You also get a breakdown of performance per database query. I've used this many times to detect procedures that were way too slow for heavy production loads.
If you want to, you can have New Relic add some unobtrusive JavaScript to your pages to instrument browser load times. This helps you figure out things like "my users outside North America spend on average 500 ms loading images; I need to move my images to a CDN!"
I would highly recommend you use some instrumentation software like this. It will definitely get you pointed in the right direction and help you keep your app available and healthy.
SQL Server Profiler is a handy tool for watching how apps communicate with your database and debugging odd behaviour. It's not a long-term solution for performance instrumentation, though, given that it puts a load on your server and the results require quite a bit of laborious processing and digestion to paint a clear picture.
Random thought: check your application pool configuration and keep an eye out in the event log for too many recycling events. When an application pool recycles, it takes a long time to become responsive again. It's one of those things that can kill performance while you rip your hair out trying to track it down. Improper recycling settings bit me recently, which is why I mention it.
For NHibernate analysis (session queries, caching, execution time) you could use the HibernatingRhinos NHibernate Profiler. It's developed by people who work on NHibernate itself, so you know it will work really well with it.
Here is the URL for it:
http://hibernatingrhinos.com/products/nhprof
You could give it a try and decide if it helps you or not.
My problem is this...
I have two sites, one acting as an "Admin" site, the other as general "User" site. I need to broadcast live audio from the "Admin" site to all clients of the "User" site. I need to do this with <1 sec of latency.
Some restrictions include:
No install on "User" machines (the idea being the whole thing sits on the web)
If there needs to be a 3rd party plugin then Silverlight is preferred*
Any help much appreciated here
*I have tried IceCast with a flash client, IIS Smooth Streaming, Internet radio, all of which give us a latency of >5 secs.
Have you tried Flash with a server like Red5? You're generally going to get subsecond latency (though not much less than that), as it's designed for realtime communications. There's a learning curve with Flex and ActionScript, but if you're at all familiar with XAML, you can pick it up from the sample apps that come with Red5 pretty quickly.
Failing that, if there aren't too many clients, you can use one of the two real-time peer-to-peer solutions out there, namely Flash over RTMFP or WebRTC over JSEP/ICE/RTP. If you can ensure that all the clients are using Chrome, then WebRTC is probably your best bet. If you can ensure that they're not using Chrome, then Flash is a good choice.

The current Flash Pepper client on Chrome is buggy up the wazoo when it comes to audio processing, with no sign of a fix in sight. (It doesn't support echo cancellation, and the volume of the audio goes up and down horribly.) So if you're using Flash, steer clear of recording and broadcasting your audio on Chrome.

And I wouldn't recommend either peer-to-peer approach if you have more than half a dozen clients; the number of audio streams will overwhelm your "Admin" browser pretty quickly, I think. Better to push that out to something like a Red5 server.
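To give a feel for the WebRTC path on the "Admin" side, here's a minimal sketch using the current RTCPeerConnection API (the JSEP-era API mentioned above has since evolved). The `sendToPeer`/`onPeerMessage` pair is a hypothetical stand-in for whatever signaling channel (e.g. a WebSocket) you use to exchange the offer/answer and ICE candidates:

```typescript
// One-way audio broadcast: capture the admin microphone, stream it to a peer.
async function startBroadcast(
  sendToPeer: (msg: any) => void,                      // hypothetical signaling send
  onPeerMessage: (handler: (msg: any) => void) => void // hypothetical signaling receive
): Promise<void> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Attach the microphone's audio tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  stream.getAudioTracks().forEach((track) => pc.addTrack(track, stream));

  // Trickle our ICE candidates to the peer; apply its answer and candidates.
  pc.onicecandidate = (e) => {
    if (e.candidate) sendToPeer({ candidate: e.candidate });
  };
  onPeerMessage(async (msg) => {
    if (msg.answer) await pc.setRemoteDescription(msg.answer);
    if (msg.candidate) await pc.addIceCandidate(msg.candidate);
  });

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ offer });
}
```

Note this sketch handles a single listener; needing one connection per listener is exactly why the stream count overwhelms a browser, and why a media server scales better.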
Silverlight is a bad choice for more reasons than I can count. I'm saying this as a guy who spent several years trying to implement a realtime communication solution on Silverlight. Don't do it.
I have an idea for a web application and I am currently researching different platforms. I am really interested in Google App Engine, but it looks like it works pretty well for certain application types and is less suitable for others (there are horror stories as well as success stories, e.g. "Goodbye Google App Engine" vs. "Why we are really happy with Google App Engine").
There is also a similar negative story in a thread from a year ago, concluding that GAE was not ready as a commercial production platform: GAE as Production Platform. There are also other threads from 2009 discussing the data select limit (1000 rows) that has since been lifted.
My app will essentially perform some mathematical analysis based on data pulled from external data feeds (possibly a substantial amount of data). The analysis would run in real time only the first time data is downloaded for a specific item; from then on, the data would be stored in and retrieved from the local database. There would also be additional external data pulls at scheduled intervals.
Based on this brief description, should I even bother starting on GAE? In general, what are the rules of thumb for deciding whether GAE is suitable for the problem at hand? Also, what are good examples of production apps that use GAE? It looks like the GAE App Gallery is not around anymore, but I would definitely appreciate any examples of Web 2.0 apps running on App Engine.
In your specific case I would double check these factors:
a. Is the mathematical analysis a long running CPU intensive job?
GAE is not designed for long-running, CPU-intensive computational jobs; they would lead to a high billing cost and force you to design your application around several GAE limitations (10 minutes max per job, limited soft memory, CPU quota, etc.). A common pattern is to split the work into slices that each finish well within the limit; see the sketch after point (b) below.
b. Are you planning to retrieve external data using a mainstream API (Twitter, Yahoo, Facebook)?
Your application shares the same pool of IPs with other applications; if the API you want to adopt does not allow authenticated requests, your application will suffer hiccups caused by throttling and quota-limit errors. I faced this problem here.
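Regarding point (a), here is a hedged sketch of the slicing pattern (`processItems` and `enqueueTask` are hypothetical stand-ins; on GAE the re-enqueueing would go through the Task Queue service):

```typescript
// Process a bounded batch, then hand the remainder to a fresh task with a
// fresh deadline, so no single task ever approaches the time limit.
interface AnalysisTask {
  datasetId: string;
  offset: number;
}

const BATCH_SIZE = 500; // tuned so one batch finishes well under the limit

async function runAnalysisSlice(
  task: AnalysisTask,
  processItems: (datasetId: string, offset: number, limit: number) => Promise<number>,
  enqueueTask: (next: AnalysisTask) => Promise<void>
): Promise<void> {
  const processed = await processItems(task.datasetId, task.offset, BATCH_SIZE);
  if (processed === BATCH_SIZE) {
    // A full batch means more work probably remains; chain the next slice.
    await enqueueTask({ datasetId: task.datasetId, offset: task.offset + BATCH_SIZE });
  }
}
```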
App Engine should work fine for your application. It's generally designed to serve, and to scale, sites that serve mostly user-facing traffic. Applications that it's not suitable for are things such as video transcoding, which rely heavily on backend processing, or things that have to shell out to native code, such as 3D graphics, etcetera.
It depends on what type of mathematical analysis you're doing. If your application is heavy on I/O, I would give it some pause. On GAE you're quite limited in your I/O options. You basically have the following:
RAM: I can't recall exactly, but GAE imposes a hard limit of around 200MB of RAM.
Datastore: You get plenty of space here, but it's slow compared to a cached local file system.
Memcache: Faster than datastore, but not nearly as fast as a cached disk. And worse, it's a cache, so there's no guarantee that it won't get wiped out.
External sources: These include calling out to external web-pages. Lots of flexibility, but very slow.
In sum, I would look at other options if you're doing heavy I/O on a medium-size dataset (roughly between 20 MB and 2 GB). These are probably non-issues for 90% of web apps, but you should be aware of them.
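The memcache caveat above shapes how you write I/O code: correctness must never depend on a cache entry surviving. A generic cache-aside sketch (the `KVStore` interface and both stores are hypothetical stand-ins, not a real GAE client API):

```typescript
// Read-through cache: try the fast-but-evictable tier first, then fall back
// to the durable datastore and repopulate the cache on a miss.
interface KVStore {
  get(key: string): Promise<Uint8Array | null>;
  set(key: string, value: Uint8Array): Promise<void>;
}

async function fetchWithCache(
  key: string,
  cache: KVStore,     // e.g. memcache: fast, but entries can vanish at any time
  datastore: KVStore  // e.g. the datastore: durable, but slower
): Promise<Uint8Array | null> {
  const cached = await cache.get(key);
  if (cached !== null) return cached;

  const stored = await datastore.get(key);
  if (stored !== null) {
    await cache.set(key, stored); // best-effort repopulation
  }
  return stored;
}
```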
All the negatives aside, working on GAE is a joyous experience. You spend more time programming and less time configuring. And it's really cheap.
Here's the requirement at a very high level.
We are going to distribute desktop agents (or browser plugins) to collect certain information from tons of users (in thousands or possibly millions down the road).
These agents collect data and periodically upload it to a server app.
The server app will allow for analyzing the collected data (filtering, sorting, etc. based on 4-5 attributes) and summarizing it in the form of charts, etc.
We should also be able to export some of the collected data (CSV or PDF).
We are looking for a platform to host the server app. GAE seems attractive because of its low administrative cost and scalability (as the user base increases, the platform will handle the scale... hopefully!).
Is GAE a viable option for us?
One important consideration is that the volume of uploads from the agents can sometimes exceed 50 MB per upload cycle. We will also have users in places where Internet connections can be very slow. Apparently GAE has a limit on how long a request can last, and the upload volume may cause a request (transferring data from an agent to the server) to last longer than 30 seconds. How would one handle such a situation?
Thanks!
The time of the upload is not considered part of the script execution time, so no worries there.
Google App Engine is very good at performing a vast number of smaller jobs, but not so good at complex, long-running background jobs (because of the 30-second request limit, plus an even smaller database connection time limit). So GAE would probably be a very good platform for GATHERING the data, but not for actually ANALYZING it. You would probably want to separate those two concerns.
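If the request deadline does become an issue for the uploads themselves, one common workaround is to have the agents split a large payload into several smaller requests. A minimal client-side sketch (the `/upload-chunk` and `/upload-complete` endpoints and the session-id scheme are assumptions for illustration, not anything GAE provides):

```typescript
// Upload a large payload in fixed-size chunks so each request stays short.
async function uploadInChunks(
  data: Blob,
  sessionId: string,
  chunkBytes = 1024 * 1024 // 1 MB per request
): Promise<void> {
  for (let offset = 0; offset < data.size; offset += chunkBytes) {
    const chunk = data.slice(offset, offset + chunkBytes);
    const res = await fetch(`/upload-chunk?session=${sessionId}&offset=${offset}`, {
      method: "POST",
      headers: { "Content-Type": "application/octet-stream" },
      body: chunk,
    });
    if (!res.ok) throw new Error(`Chunk at offset ${offset} failed: ${res.status}`);
  }
  // Signal that all chunks have arrived so the server can reassemble them.
  await fetch(`/upload-complete?session=${sessionId}`, { method: "POST" });
}
```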
We went ahead and implemented the first version on GAE anyway. The experience has been very much what is described here: http://www.carlosble.com/?p=719
For a proof-of-concept prototype, what we have built so far is acceptable. However, we have decided not to go with GAE (at least in its current shape) for the production version. The pains somewhat outweigh the benefits in our case.
The problems we faced were numerous. Unlike my experience with J2EE stacks, when you run into an issue, it is often a dead end. Workarounds, if you can find one at all, are very complicated and ugly.
By writing good prototypes, one can figure out whether GAE is right for the solution being built. However, the hype is a problem: many newcomers will get overly excited about GAE because of it and end up failing badly, choosing GAE for all kinds of purposes it is not suitable for.