I want my Smooth Streaming video to play well on low-end devices. However, the default settings seem very optimistic and keep retrying a quality level that is too high, resulting in a very bad playback experience.
I know that the Silverlight Smooth Streaming media engine is fairly configurable. At the moment, I can only go at it with trial and error. Therefore, I figured I should ask for existing knowledge. Does anyone have any recommendations for me on this front - what sort of configuration to use?
My goal is to make the CPU heuristics very paranoid, so it will rarely try to upgrade the quality level. Even when it does, it should only upgrade by one step (however, I am not sure if there's any setting for that... it appears to upgrade in very large jumps right now - occasionally from 500 kbps straight to 3 Mbps).
Take a look at http://forums.iis.net/t/1172146.aspx to get an idea of which settings can be tweaked.
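If tuning the heuristics alone doesn't make the engine conservative enough, a blunt but effective complement is to remove the top bitrates so it can never jump to them. Here is a minimal sketch, assuming the IIS Smooth Streaming Client SDK's ManifestReady event and StreamInfo.RestrictTracks method; the 1 Mbps cap and the ssme field name are illustrative, so verify the API against the SDK version you ship.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Web.Media.SmoothStreaming;

    // Wire-up, e.g. in the page constructor:
    //   ssme.ManifestReady += Ssme_ManifestReady;
    private void Ssme_ManifestReady(object sender, EventArgs e)
    {
        foreach (SegmentInfo segment in ssme.ManifestInfo.Segments)
        {
            foreach (StreamInfo stream in segment.AvailableStreams)
            {
                // Only restrict the video stream; leave audio alone.
                if (stream.Type != System.Windows.Media.MediaStreamType.Video)
                    continue;

                // Keep only tracks at or below 1 Mbps so the heuristics can
                // never jump straight to a 3 Mbps level.
                List<TrackInfo> lowTracks = stream.AvailableTracks
                    .Where(t => t.Bitrate <= 1000000)
                    .ToList();

                if (lowTracks.Count > 0)
                    stream.RestrictTracks(lowTracks);
            }
        }
    }

This doesn't make the upgrade step size smaller, but it caps how far a wrong upgrade decision can overshoot.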
Thanks,
Ez.
http://blogs.southworks.net/ejadib
Does anyone know a good mechanism for measuring or reporting on page sizes?
I have a low-bandwidth (humanitarian client) use case and am trying to evaluate my pages for hi-res imagery or other page-size issues across the org. As an example, even a standard Lightning page view seems to be coming in at around 700 KB, which seems high.
If there's something on the AppExchange that would be great, but otherwise any direction in reporting, API tools, or creating this through other mechanisms would be really helpful.
I have searched the Salesforce AppExchange and the available metadata/other APIs, and so far haven't found anything. Event Monitoring has logs that help with general page-load performance, and I found an article about improving performance, but I haven't found ways to identify SIZE as would be needed for low-bandwidth scenarios.
I don't know where to start yet, unfortunately. This could be a programmatic solution, in which case I'd love some direction, but there could also be tools available elsewhere that I'm not aware of.
In Chrome dev tools (F12), the Network tab lets you simulate a low-bandwidth, high-latency connection in order to measure the download time of a web page or web application.
You can also see the size and download time of every resource downloaded, to identify the biggest images and the most time-consuming requests.
In Salesforce, there's an administrative tool called Lightning Usage that can be activated. It generates different dashboards and performance stats per page. You can find some screenshots in this Salesforce description of the service: https://developer.salesforce.com/blogs/2018/10/understanding-experienced-page-time.html. The EPT (Experienced Page Time) metric could meet your needs.
My problem is this...
I have two sites, one acting as an "Admin" site, the other as general "User" site. I need to broadcast live audio from the "Admin" site to all clients of the "User" site. I need to do this with <1 sec of latency.
Some restrictions include:
No install on "User" machines (the idea being the whole thing sits on the web)
If there needs to be a 3rd party plugin then Silverlight is preferred*
Any help much appreciated here
*I have tried IceCast with a flash client, IIS Smooth Streaming, Internet radio, all of which give us a latency of >5 secs.
Have you tried Flash with a server like Red5? You're generally going to get subsecond latency (though not much less than that), as it's designed for realtime communications. There's a learning curve with Flex and ActionScript, but if you're at all familiar with XAML, you can pick it up from the sample apps that come with Red5 pretty quickly.
Failing that, if there aren't too many clients, you can use one of the two real-time peer-to-peer solutions out there, namely Flash over RTMFP or WebRTC over JSEP/ICE/RTP. If you can ensure that all the clients are using Chrome, then WebRTC is probably your best bet. If you can ensure that they're not using Chrome, then Flash is a good choice.
The current Flash Pepper client on Chrome is buggy up the wazoo when it comes to audio processing, with no sign of a fix in sight. (It doesn't support echo cancellation, and the volume of the audio goes up and down horribly.) So if you're using Flash, steer clear of recording and broadcasting your audio on Chrome.
And I wouldn't recommend either approach if you have more than half a dozen clients - the number of audio streams is gonna overwhelm your "Admin" browser pretty quickly, I think. Better to push that out to something like a Red5 server.
Silverlight is a bad choice for more reasons than I can count. I'm saying this as a guy who spent several years trying to implement a realtime communication solution on Silverlight. Don't do it.
I understand that polling will inevitably have delays in getting real-time updates. Currently I am using WebSync from Frozen Mountain for this, and it works very well. But I would still like to know whether PollingDuplexHttpBinding is worthwhile.
Has someone used this in critical systems?
Is its performance better than Comet? When I say performance, I mean: how many simultaneous client connections can it handle?
Is it possible to configure the polling interval? I mean once every 30 seconds, 60 seconds, etc.?
We're currently using this for our Alanta web conferencing platform. Our default binding is Net.TCP, because it's faster and more performant. But not everyone lets Net.TCP through on the right ports, so if we can't connect over Net.TCP, we fall back to the PollingDuplexHttpBinding. And it seems to work reasonably well -- no major complaints so far, at least.
With respect to performance, the PollingDuplex binding is roughly similar to what you'll find in other long-poll based systems. You can find more details on its performance on Tomek's blog.
And yes, it's possible to configure almost every aspect of the binding. The particular properties that control the polling interval are ClientPollTimeout and ServerPollTimeout. See here for more details.
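For illustration, here is a rough server-side sketch. PollingDuplexHttpBinding comes from the System.ServiceModel.PollingDuplex server assembly in the Silverlight SDK; the two poll-timeout property names follow the answer above but have shifted between SDK versions, so check yours. ChatService and IChatContract are hypothetical placeholders.

    using System;
    using System.ServiceModel;
    // Reference: System.ServiceModel.PollingDuplex.dll (server build, Silverlight SDK)

    class Server
    {
        static void Main()
        {
            var binding = new PollingDuplexHttpBinding
            {
                // How long the server holds each client poll open before
                // replying "no messages" (names per the answer above; some
                // SDK versions expose a single PollTimeout property instead).
                ServerPollTimeout = TimeSpan.FromSeconds(60),
                ClientPollTimeout = TimeSpan.FromSeconds(30),
                // Tear down sessions that have been idle this long.
                InactivityTimeout = TimeSpan.FromMinutes(10)
            };

            // Host a duplex service over the polling binding.
            var host = new ServiceHost(typeof(ChatService));
            host.AddServiceEndpoint(typeof(IChatContract), binding,
                "http://localhost:8000/chat");
            host.Open();
            Console.ReadLine();
        }
    }

Note that with long polling the interval isn't a fixed 30- or 60-second tick: the server holds each poll open and answers as soon as a message arrives, so these timeouts bound the worst case rather than adding latency to every message.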
I want to scale an e-commerce portal based on LAMP. Recently we've seen a huge traffic surge.
What would be the steps (please mention them in order) in scaling it:
Should I consider moving to Amazon EC2 or similar? What could be the potential problems in switching servers?
Do we need to redesign the database? I read that Facebook switched from MySQL to Cassandra. What kind of code changes are required if we switch to Cassandra? Would Cassandra be a better option than MySQL?
Is Hadoop a possibility? I'm not even sure.
Any other things which need to be thought of?
I found this post helpful. This blog has nice articles as well. What I want to know is the list of steps I should consider in scaling this app.
First, I would suggest making sure every resource served by your server sets appropriate cache control headers. The goal is to make sure truly dynamic content gets served fresh every time and any stable or static content gets served from somebody else's cache as much as possible. Why deliver a product image to every AOL customer when you can deliver it to the first and let AOL deliver it to all the others?
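As a concrete illustration, a stable resource such as a product image could be served with a response header along these lines (the 24-hour lifetime is only an example; match it to how often the resource actually changes):

    Cache-Control: public, max-age=86400

The public directive explicitly allows intermediaries such as an ISP's proxy to cache the image, which is exactly the AOL scenario above.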
If you currently run your webserver and dbms on the same box, you can look into moving the dbms onto a dedicated database server.
Once you have done the above, you need to start measuring the specifics. What resource will hit its capacity first?
For example, if the webserver is running at or near capacity while the database server sits mostly idle, it makes no sense to switch databases or to implement replication etc.
If the webserver sits mostly idle while the dbms chugs away constantly, it makes no sense to look into switching to a cluster of load-balanced webservers.
Take care of the simple things first.
If the dbms is the likely bottleneck, make sure your database has the right indexes so that it gets fast access times during lookups and doesn't waste unnecessary time during updates. Make sure the dbms logs to a different physical medium from the tables themselves. Make sure the application isn't issuing any wasteful queries, etc. Make sure you do not run any expensive analytical queries against your transactional database.
If the webserver is the likely bottleneck, profile it to see where it spends most of its time and reduce the work by changing your application or implementing new caching strategies, etc. Make sure you are not doing anything that will prevent you from moving from a single server to multiple servers with a load balancer.
If you have taken care of the above, you will be much better prepared for making the move to multiple webservers or database servers. You will be much better informed for deciding whether to scale your database with replication or to switch to a completely different data model etc.
1) First, measure how many requests per second your most-visited pages can serve. For well-written PHP sites on average hardware, this should be in the 200-400 requests-per-second range. If you are not there, you have to optimize the code by reducing the number of database requests, caching rarely changed data in memcached/shared memory, and using a PHP accelerator. If you are at some 10-20 requests per second, you need to get rid of your bulky framework.
2) Second - if you are still on Apache2, you have to switch to lighttpd or nginx+apache2. Personally, I like the second option.
3) Then move all your static data to a separate server or CDN. Make sure it is served with "expires" headers of at least 24 hours (see the sketch after this list).
4) Only after all these things might you start thinking about going to EC2/Hadoop, building multiple servers, and balancing the load (nginx would also help you there).
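For items 2) and 3), here is a rough nginx sketch: nginx terminates client connections, serves static assets directly with a long expiry, and proxies dynamic requests through to Apache2 on a local port. The hostname, paths, and ports are illustrative.

    # nginx in front of Apache2 (step 2), long-lived static assets (step 3)
    server {
        listen 80;
        server_name shop.example.com;

        # Serve static assets straight from disk with a 24h expiry.
        location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
            root /var/www/static;
            expires 24h;
        }

        # Everything dynamic goes to Apache2 + mod_php on a local port.
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }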
After steps 1-3 you should be able to serve some 10,000,000 hits per day easily.
If you need just 1.5-3 times more than that, I would go for a single more powerful server (8-16 cores, lots of RAM for caching & the database).
With step 4 and multiple servers you are on your way to 0.1-1 billion hits per day (but with significantly larger hardware & support expenses).
Find out where issues are happening (or are likely to happen if you don't have them now). Knowing where your biggest resource usage is matters when evaluating any solution. Stick to solutions that will give you the biggest improvement.
Consider:
- Higher-than-needed bandwidth use per user is something you want to address regardless of moving to EC2. It will cost you money either way, so it's worth taking a look at things like this: http://developer.yahoo.com/yslow/
- Don't invest in changing databases if that's a non-issue. Find out first whether that's really the problem; even if you are having issues with the database, it might be a code issue, i.e. hitting the database lots of times per request.
- Unless we are talking about very big numbers, you shouldn't have high CPU usage issues. If you do, find out where they are happening; optimization is worth it where specific code has a high impact on your overall resource usage.
- After making sure the above is reasonable, you might get big improvements with caching: in bandwidth (making sure browsers/proxies can play their part in caching) and in local resource usage (avoiding re-processing/re-retrieving the same info all the time).
I'm not saying you should go all out with the above, just far enough to make sure you won't hit the same issues elsewhere in a few months. It's also enough to find out where your biggest gains are, and whether you will get enough value from any scaling options. This will also allow you to come back and ask questions about specific problems, and about how these scaling options relate to those.
You should prepare by choosing a flexible framework and by accepting that things are going to change along the way. In some situations it's difficult to predict your users' behavior.
If you have seen an explosion of traffic recently, analyze which pages are the slowest.
You can move to the cloud, but EC2 is not the best-performing option. Again, be sure there's no other optimization you can do first.
The database might need to be redesigned, but I doubt all of it does. Again, look at the problem points.
Both Hadoop and Cassandra are pretty nifty, but they might be overkill.
So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations.
I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus.
I opened the Task Manager Performance window and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another application (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so I didn't do a lot of detailed testing.
The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window.
I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool:
"For applications without animation, this value should be near 0."
The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested on have column headers in a grid that "highlight" and buttons that change color and appearance when moused over. However, even moving the mouse over blank areas of the windows causes the same frame rate and CPU usage, so it doesn't seem to be related to these minor animations.
(Also, I am unable to figure out how to get anything but the two default tools--Perforator and Visual Profiler--installed into the WPF performance tool. That is probably a separate question).
I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance.
So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are:
- What are some general things to look for (or avoid) in the code to improve performance?
- What steps can I take using the WPF performance tool to narrow down the problem?
- Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?
Are you applying any BitmapEffects to your UI elements?
They are not handled by the GPU, so the CPU takes care of rendering them. If not used properly (e.g. an OuterGlowBitmapEffect applied to a large, complex element) they can have a terrible impact on performance.
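For what it's worth, here is a sketch of the usual fix: swapping a software-rendered BitmapEffect for a GPU-rendered Effect (available from WPF 3.5 SP1 on). The myPanel element name and the property values are illustrative.

    using System.Windows.Media;
    using System.Windows.Media.Effects;

    // Legacy, CPU-rendered (deprecated as of 3.5 SP1):
    // myPanel.BitmapEffect = new OuterGlowBitmapEffect { GlowSize = 10 };

    // Hardware-accelerated replacement; ShadowDepth = 0 approximates
    // an outer glow rather than a directional shadow.
    myPanel.Effect = new DropShadowEffect
    {
        BlurRadius = 10,
        ShadowDepth = 0,
        Color = Colors.Gold
    };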
Also, you might still want to try profiling your app with a performance profiler, just to see whether it's your own code causing this.
This is not normal for WPF - I'd suspect one of your developers has written code that runs a timer in the background (or more likely given your description, a mouse move handler) which is affecting the UI in some way.
If you have ANTS performance profiler (it's really nice) I'd run that over your app and reproduce the problem.
Once you've done that, ANTS should tell you fairly quickly what the problem is.
If ANTS doesn't reveal anything at all, and shows you that in fact none of your code is running during this time, then I'd suspect buggy graphics card drivers.
You can test for this by disabling hardware acceleration via the following registry value, and trying again:

    HKEY_CURRENT_USER\Software\Microsoft\Avalon.Graphics\DisableHWAcceleration = 1

Note: the DisableHWAcceleration value should be a DWORD.
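For instance, from a command prompt (a sketch - reg add creates the key if it doesn't already exist, and you can delete the value afterwards to re-enable hardware acceleration):

    reg add "HKCU\Software\Microsoft\Avalon.Graphics" /v DisableHWAcceleration /t REG_DWORD /d 1 /f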