My team is using Ruby with Watir WebDriver to create automation scripts for our applications. IE11 is the browser we must script against. We have noticed that the same script runs 30% faster in Firefox than it does in IE. This difference in speed affects our scripts' ability to "see" elements fast enough. Are there any add-ins that increase IE's speed? We are using IEDriverServer version 2.48.0.
Surely 30% is a tolerable difference between two completely different browsers? I'm very surprised the difference is so small.
As these posts from Jim Evans of Selenium show, working with the IE Driver hasn't always been easy:
http://jimevansmusic.blogspot.co.uk/2014/09/screenshots-sendkeys-and-sixty-four.html
http://jimevansmusic.blogspot.co.uk/2014/09/using-internet-explorer-webdriver.html
http://jimevansmusic.blogspot.co.uk/2012/06/whats-wrong-with-internet-explorer.html
Perhaps this is a legitimate reason for upping the hardware spec for your IE test environment, so that the cost is mitigated. Or else concentrating your tests so that only the IE-specific ones need IE, while the uncontroversial bulk can run on faster environments.
Speed should never be allowed to influence test reliability or variability, but provided you are using proper waits, that shouldn't be an issue. It wouldn't be fair to blame the IE Driver for that.
I doubt that the use of any particular testing framework (Watir) will make much difference.
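The "proper waits" mentioned above are explicit waits, which Watir provides out of the box. The pattern behind them is simple polling with a timeout; here is a minimal browser-free sketch (the Watir calls in the comments are from the watir-webdriver API of that era, and the element ids are placeholders):

```ruby
# Browser-free sketch of the explicit-wait pattern behind Watir's
# when_present / Wait.until: poll a condition until a deadline passes.
WaitTimeout = Class.new(StandardError)

def wait_until(timeout: 30, interval: 0.5)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise WaitTimeout, "condition not met within #{timeout}s" if Time.now >= deadline
    sleep interval
  end
end

# In an actual Watir script the equivalent calls are roughly:
#   browser.button(id: 'save').when_present.click            # watir-webdriver era
#   Watir::Wait.until { browser.div(id: 'status').present? }
# (element ids here are placeholders)
```

With waits like these, a slower browser just waits a little longer for each element instead of failing to "see" it.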
Related
I am building a server to run my Selenium/Appium automation scripts. I was told I would need a more sophisticated server, like one built for machine learning, because WebDrivers require more processing power. I'm just wondering what everyone uses and whether it works.
Best
A server with more processing power - say 4-8 cores, or a GPU - will help with parallel testing. A sufficient amount of RAM is also needed: at least 4-8 GB.
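As a rough sizing rule, cap the number of parallel WebDriver sessions near the core count, since each session owns a full browser. A sketch in Ruby (the one-browser-per-core ceiling is a rule of thumb, not a hard limit):

```ruby
require 'etc'

# One browser per core is a reasonable ceiling for parallel WebDriver
# sessions; keep one core free for the OS and the test runner itself.
cores = Etc.nprocessors
pool_size = [cores - 1, 1].max
puts "run up to #{pool_size} parallel sessions on #{cores} cores"
```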
JMeter has a WebDriver sampler, but we have to write the scripts ourselves. We already have scripts written in OpKey, a Selenium-based tool. Can we integrate the two, so that we don't have to rewrite the scripts in JMeter?
Not familiar with opkey, but Selenium is a bad idea for any load tests beyond very trivial loads:
Selenium was never intended for large-scale performance testing. Selenium and its newer avatar webdriver, launch a browser engine per user and then replay all the user interactions inside it. This is great for functional testing because you are executing client side code inside a real browser engine - but at the same time this is very bad news for performance testing. Browser instances are resource intensive, and scaling becomes hard and expensive.
Even using something like Selenium Grid is really meant for cutting your test execution time by running in parallel, but not really for generating any sort of loads. They say this at the very top of their FAQ.
It is not just about scale: when your load driver is itself very resource-intensive, the applied load becomes inconsistent. If you see a drop in performance, it could well be that your load driver is the bottleneck, not the application under test.
Having said that, you can definitely use the JMeter WebDriver sampler, Selenium Grid, or something else to drive your performance test, as long as you stay in the range of tens of parallel users. Again, quoting from the Grid FAQ:
To simulate 200 concurrent users for instance, you would need 200 concurrent browsers with a load testing framework based on Selenium Grid. Even if you use Firefox on Linux (so the most efficient setup) you will probably need at least 10 machines to generate that kind of load. Quite insane when JMeter/Grinder/httperf can generate the same kind of load with a single machine.
Note that when they say JMeter, they are referring to the HTTP sampler or one of the other simple, efficient samplers - because even the WebDriver sampler documentation says this:
JMeter allows the creation of multiple threads, and each thread is responsible for creating load on the server. However, for the Web Driver use case, the reader should be prudent in the number of threads they will create as each thread will have a single browser instance associated with it. Each browser consumes a significant amount of resources, and a limit should be placed on how many browsers the reader should create.
and then goes on to recommend using at most one less than the number of processor cores - which is a very small number for most non-elastic setups.
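To make the contrast concrete, here is a toy protocol-level load generator in Ruby: each simulated user is just a thread issuing HTTP requests, which is why a single machine can sustain loads that would need a rack of browser-running boxes. The tiny in-process TCP server stands in for the application under test; this is an illustration, not a real load tool.

```ruby
require 'socket'
require 'net/http'

# Tiny in-process HTTP server standing in for the application under test.
server = TCPServer.new('127.0.0.1', 0)
server.listen(128)
port = server.addr[1]
acceptor = Thread.new do
  loop do
    client = begin
      server.accept
    rescue IOError
      break # server closed, stop accepting
    end
    begin
      request = +''
      request << client.readpartial(4096) until request.include?("\r\n\r\n")
    rescue EOFError
    end
    client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok")
    client.close
  end
end

# Protocol-level "users": each is just a thread issuing HTTP requests,
# costing kilobytes apiece instead of a whole browser instance.
USERS = 20
REQUESTS_PER_USER = 10
codes = Queue.new
USERS.times.map do
  Thread.new do
    REQUESTS_PER_USER.times do
      codes << Net::HTTP.get_response('127.0.0.1', '/', port).code
    end
  end
end.each(&:join)

server.close
acceptor.join
puts codes.size # 200 responses from one process, no browsers involved
```

Scaling the same scenario with one real browser per user is where the "10 machines for 200 users" estimate in the Grid FAQ comes from.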
I have come across a strange situation and do not know what or how to look for.
We have a Silverlight project hosted in a web project. The Silverlight project communicates using REST services hosted by the web project.
When we run this in debug mode, everything runs fine, as expected. So I decided to profile it and check where I might be losing performance. Here is the interesting part.
I ran the VS2012 profiler, and it collected all the information about methods executed, timings and so on. But this time my project was lightning fast. Queries which normally take about 1 second under a debug run now took less than 200 ms. One very intensive query takes about 20 seconds in normal mode, but under profiling it took less than 600 ms.
So what I make of this is that my code and project are capable of running this fast, but for some reason they are not that fast under normal debug scenarios.
Can somebody shed light on what is happening under the hood, and how I can achieve this performance in normal scenarios?
I should also mention that I have tried release mode and publishing to IIS, but neither gives performance as good as profiling mode.
What I expected was the opposite: under profiling, performance should be worse than normal, since VS2012 is collecting extra data at the same time.
I am confused. Please help.
Thanks
I know you probably don't need help at this point, but for anyone else who stumbles upon this post, I'll give my two cents.
I had this same problem with an XNA project I'm working on. Debug and Release modes both saw massive slowdowns in certain situations - it pulled me down to less than 1 FPS. I was trying to profile the problem to solve it, but the issue never occurred while profiling.
I finally discovered the slowdowns were caused by a Console.WriteLine() I was calling in that situation. Commenting it out solved the issue in both Debug and Release builds. Apparently, Console.WriteLine is just incredibly slow.
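For anyone hitting the same thing: rather than deleting the diagnostics, the usual fix is to gate per-frame console output behind a flag - in C#, System.Diagnostics.Debug.WriteLine calls are stripped entirely from builds where DEBUG isn't defined. A language-neutral sketch of the gating pattern, in Ruby (the GAME_DEBUG environment variable name is made up):

```ruby
# Gate per-frame diagnostics behind a flag so normal runs skip the
# (surprisingly expensive) console write entirely.
DEBUG_LOGGING = ENV['GAME_DEBUG'] == '1' # env var name is made up

def log_frame(message)
  warn(message) if DEBUG_LOGGING
end

log_frame('player position: 10,20') # no-op unless GAME_DEBUG=1
```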
CouchDB is great, and I like its peer-to-peer replication functionality, but it's a bit large (because we have to install Erlang) and slow when used in a desktop application.
In my tests on an Intel Core Duo CPU:
12 seconds to load 10,000 docs
10 seconds to insert 10,000 docs, plus 20 seconds to update the view, so 30 seconds total
Is there any NoSQL implementation with the same peer-to-peer replication functionality, but with a very small footprint like SQLite and good speed (say, 1 second to load 10,000 docs)?
Have you tried using Hovercraft and/or the Erlang view server? I had a similar problem and found that staying within the Erlang VM (thereby avoiding excursions to SpiderMonkey) gave me the boost I needed. I did three things...
Boosting Queries: Porting your map/reduce functions from JavaScript to "native" Erlang usually gives a tremendous performance boost when querying Couch (http://wiki.apache.org/couchdb/EnableErlangViews). Managing views is also easier, because you can call external libraries or your own compiled modules (just add them to your ebin dir), reducing the number of uploads you need to do during development.
Boosting Inserts: Using Hovercraft for inserts gives up to a 100x increase in performance (https://github.com/jchris/hovercraft). This is mentioned in the CouchDB book (http://guide.couchdb.org/draft/performance.html).
Pre-Run Views: The last thing you can do for desktop apps is run your views during application startup (say, while the splash screen is showing). The first run of a view is always the slowest; subsequent runs are faster.
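On the insert side, even without Hovercraft you can cut per-document HTTP overhead by batching through CouchDB's _bulk_docs endpoint, and you can warm a view at startup with a limit=0 query. A sketch in Ruby (the URL, database name, design-doc name and doc shape are placeholders; the actual requests are left commented out since they need a running Couch):

```ruby
require 'json'
require 'net/http'

# Batch 10,000 docs into a single _bulk_docs request instead of
# issuing 10,000 individual POSTs.
docs = (1..10_000).map { |i| { _id: format('doc-%05d', i), value: i } }
payload = JSON.generate(docs: docs)

# uri = URI('http://127.0.0.1:5984/mydb/_bulk_docs')
# Net::HTTP.post(uri, payload, 'Content-Type' => 'application/json')

# Warm a view at startup (the index is built on first access;
# limit=0 triggers the build without transferring rows):
# Net::HTTP.get(URI('http://127.0.0.1:5984/mydb/_design/app/_view/by_value?limit=0'))

puts JSON.parse(payload)['docs'].length
```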
These helped me a lot.
Edmond -
Unfortunately the question doesn't offer enough detail about your app's requirements, so it's difficult to offer advice. Anyway, I'm not aware of any other storage solution offering similar or more advanced P2P replication.
A couple of questions/comments about your requirements:
what kind of desktop app requires 10000 inserts/second?
when you say size what exactly are you referring to?
You might want to take a look at:
Redis
RavenDB
Also check some of the other NoSQL-solutions listed on http://nosql.mypopescu.com against your app requirements.
So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations.
I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus.
I opened the Task Manager Performance window, and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another application (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so didn't do a lot of detailed testing.
The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window.
I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool:
For applications without animation, this value should be near 0.
The app is not doing any heavy animation, but the frame rate stays near 50 whenever the cursor is over any window. The screens I tested have grid column headers that highlight and buttons that change color and appearance on mouse-over. But even moving the mouse over blank areas of the windows causes the same frame rate and CPU usage, so it doesn't seem to be related to these minor animations.
(Also, I am unable to figure out how to get anything but the two default tools--Perforator and Visual Profiler--installed into the WPF performance tool. That is probably a separate question).
I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance.
So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are:
What are some general things to look for (or avoid) in the code to improve performance?
What steps can I take using the WPF performance tool to narrow down the problem?
Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?
Are you applying any BitmapEffects to your UI elements?
They are not handled by the GPU, so the CPU has to render them. Used improperly (e.g. an OuterGlowBitmapEffect applied to a large, complex element), they can have a terrible impact on performance.
Also, you might still want to profile your app with a performance profiler, just to check that it isn't your own code causing this.
This is not normal for WPF - I'd suspect one of your developers has written code that runs a timer in the background (or more likely given your description, a mouse move handler) which is affecting the UI in some way.
If you have ANTS performance profiler (it's really nice) I'd run that over your app and reproduce the problem.
Once you've done that, ANTS should tell you fairly quickly what the problem is.
If ANTS doesn't reveal anything at all, and shows you that in fact none of your code is running during this time, then I'd suspect buggy graphics card drivers.
You can test for this by disabling hardware acceleration by setting the following registry key, and trying again:
HKEY_CURRENT_USER\Software\Microsoft\Avalon.Graphics\DisableHWAcceleration to 1
Note: the DisableHWAcceleration value should be a DWORD
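Saved as a .reg file, that change looks like the fragment below. Remember to set it back to 0 afterwards, since it disables hardware acceleration for every WPF application running under that user account:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Avalon.Graphics]
"DisableHWAcceleration"=dword:00000001
```

If the choppiness disappears with this set, the graphics driver is the prime suspect.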