Identifying Elements Effectively - AngularJS Automation with Ranorex

I have a relatively big web app written in AngularJS. Our automation uses Ranorex to run BDD tests.
We are in a constant dilemma over how to let the Ranorex tests find UI elements effectively. Currently we mainly use a custom attribute for it, testid (e.g. <div testid="done_btn">), but in some cases it performs very poorly; I'm not sure whether that's because the querying XPaths aren't optimized or because of some other Ranorex configuration.
Using the id attribute is way faster, but it just doesn't feel right to use it for automation. Besides, keeping it unique might require some coding in some cases (e.g. when using ng-repeat).
Is using id just for automation purposes OK or is it a bad practice? Why?
Any way to make Ranorex relate to custom attributes such as testid differently so that it'll be found faster? I've read about Ranorex Weight Rule but couldn't find a way to define custom attributes.

The quick answer: YES! Use the ids.
I'm not actually familiar with the dilemma you are facing. Can you please explain further the reasoning behind not wanting to use the ids?
And if you face issues with ids not being unique in some cases, you can combine different attributes in the path (e.g. "@id='save_btn' and @innerText='Save'").
Usually if you can improve the speed and stability of your automation with no visual or functional expense - do it.
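If ng-repeat is the main thing keeping you from unique ids, a small directive can stamp one onto each repeated element. This is only a sketch - the module name, directive name and id scheme are made up for illustration:
angular.module('app').directive('autoTestId', function () {
  return {
    restrict: 'A',
    link: function (scope, element, attrs) {
      // inside an ng-repeat, scope.$index makes the generated id unique per row,
      // e.g. <button auto-test-id="done_btn" ng-repeat="task in tasks">Done</button>
      element.attr('id', attrs.autoTestId + '_' + scope.$index);
    }
  };
});
The Ranorex path can then target either the stable testid-style attribute or the generated unique id.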

Related

finding xpath for elements in applications built on angularjs

The application under test is built on AngularJS, because of which the XPaths are failing to locate the elements. I am using Selenium WebDriver for automating the tests and the Google Chrome browser.
Can anybody kindly explain how to break down AngularJS components to basic HTML elements while creating the XPath, or any other way I can adapt to find the exact element on the page?
Any link or path or tips to follow.
I have searched a lot but no luck till now.
Let's start by covering some general concerns.
When testing AngularJS applications, timing issues are usually the most common problem - this is partially why we have tools like Protractor, built mainly to test AngularJS applications. What makes it unique is that it works in sync with Angular, knowing when the page is ready to be interacted with. It also provides AngularJS-specific locators which make locating elements much easier, for example:
element.all(by.repeater("item in items"));
element(by.binding("mybinding"));
element(by.model("mymodel"));
If you can, you should consider switching to Protractor - the test flow is natural, with almost no explicit waits, a lot of convenient syntactic sugar and many more element-locating options.
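For illustration, a minimal Protractor spec could look like the sketch below - the URL, repeater expression, attribute and binding names are made up, and a configured baseUrl is assumed:
describe('todo list', function () {
  it('marks an item as done', function () {
    browser.get('/#/todos'); // Protractor waits for Angular to settle before interacting
    var items = element.all(by.repeater('item in items'));
    items.first().element(by.css('[testid="done_btn"]')).click();
    expect(element(by.binding('doneCount')).getText()).toEqual('1');
  });
});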
If not, you can still use Angular-specific attributes to locate elements, examples:
driver.findElements(By.xpath("//*[@ng-repeat='item in items']"));
driver.findElement(By.xpath("//*[@ng-binding='mybinding']"));
driver.findElement(By.xpath("//*[@ng-model='mymodel']"));
Though, as usual, you should prefer to use ids, classes and other "data-oriented" attributes.
Things can easily go wrong with the above sample expressions - imagine the web developers add "tracking" to the repeater, or rename ng-repeat to data-ng-repeat.
As a side note, using Angular-specific things like ng-* attributes in your tests would tie the test codebase to this specific technology used to build the application under test. It is not necessarily a bad thing, but you should keep it in mind.

Why would you use lodash in AngularJS?

Simple enough question I think. I see people raving about it but I haven't seen anything on the "why" use it. It doesn't seem to me (from my naïve outside perspective) to do anything I couldn't do with ng-repeat or, for nested data, with an ng-repeat inside another. It looks like it doesn't add functionality that Angular doesn't already have (I'm sure I'm wrong).
I see the term "lazy loading" being used with it, and it doesn't seem like it's that much easier after seeing their docs. What are some things lodash makes significantly easier in AngularJS specifically that would make it worth adding another lib to my project? And what can you do with it that you cannot with Angular out of the box?
They're just not the same thing, and they exist for distinct reasons. I think you already know what AngularJS is for, so on to your questions:
What can you do with it that you cannot with angular out of the box?
Well, if you need to deal with a lot of data in structures like arrays, objects or mixed/nested shapes, lodash will save you a lot of time and effort.
Maybe there are a lot of items in collections which should be presented to your client application in some particular way; without the aid of lodash you would have to write a lot of JS code in your Angular controllers or services.
If there's a lot of logic tied to your data structures and/or complex algorithms and coding workflow, go for lodash.
Lodash is a great tool; you can get some intro here and, of course, just check out the API reference. You can use it everywhere, whether with Angular or any other framework, and of course also in Node!
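For a concrete (hypothetical) example, suppose a controller needs order totals grouped by status. The module, service and field names below are made up, and a reasonably recent lodash (2.4+, for _.mapValues) is assumed:
angular.module('app').controller('OrdersCtrl', function ($scope, ordersService) {
  ordersService.getAll().then(function (orders) {
    // group the raw records by status, then reduce each group to a total amount
    $scope.totalsByStatus = _.mapValues(_.groupBy(orders, 'status'), function (group) {
      return _.reduce(group, function (sum, order) { return sum + order.amount; }, 0);
    });
  });
});
Doing the same in plain controller code means a couple of nested loops plus bookkeeping; with lodash it stays a short, declarative transformation.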

When will AngularJS get faster?

I've been developing in Angular for a bit now and loving it. I do understand the performance implications of data binding and use the bind-once plugin and other means of minimizing watches, etc.
However, I'm wondering if, how, and when Angular applications will become significantly faster? By that I mean, all things being equal, performance almost or fully on par with non-declarative frameworks.
Is this a matter of browsers catching up to the declarative model?
Do any browser vendors currently have plans for that?
Is it a matter of ECMAScript 6 or beyond eventually supporting native "watching" of variables? (Any hope?)
Or will that take too long, and does Angular itself need to improve in some clever way (any plans from the Angular team)?
Of course eventually browsers and computers in general get faster but is there anything on the horizon for Angular performance?
The way I see it, regarding bindings / change listeners etc., there are 3 possible areas of optimization:
Tweaking special cases of scope watches / bindings:
This includes, for example, bindonce or the optimization of specific data structures, such as $watchGroup, which is like one watch for two or more properties (https://github.com/angular/angular.js/pull/7158); a minimal sketch follows this answer. These are changes that basically anybody can implement and propose, but they don't have very high priority for the core team (although they are working on a bindonce-like feature).
Improving the general change detection / application architecture
In AngularJS 2.0, the scope change detection algorithm will be much faster than it is now (http://blog.angularjs.org/2014/03/angular-20.html). Another major improvement in the architecture, as I see it, would be batched DOM updates.
Browser features
Object.observe() is a proposed standard way to listen for changes in objects. It's currently not supported by all major browsers, so this can't be used yet.
The most promising area is currently (2), as the general architecture changes will incorporate or obsolete the changes in (1). I think a public alpha of Angular 2.0 will be available at the earliest at the end of the year.
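To make area (1) concrete, here is a minimal sketch of a grouped watch - Angular 1.3+ exposes this as $scope.$watchGroup; the property names are made up:
// one combined watch instead of two separate ones
$scope.$watchGroup(['user.first', 'user.last'], function (newValues) {
  $scope.fullName = newValues[0] + ' ' + newValues[1];
});
// in templates, one-time bindings such as {{::user.first}} drop their watch once the value stabilizes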
You are asking a lot of what-if / theory questions. Angular is as fast as you make it. Performance is directly tied to how your application was developed. I am working on a project with 50+ controllers that had serious performance issues, as we were dealing with 100,000+ records in a table. We were able to get sorts and filters from multi-second transactions down to 60ms just by being smart about our decisions. Bind once is only one piece of the performance puzzle. You really need to look at your watches. The flame chart is a great tool for this. I wrote up a pro tip on Coderwall.com about it:
https://coderwall.com/p/nsanaa?i=1&p=1&q=author%3Abreck421&t%5B%5D=breck421
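As a rough companion to the flame chart, a console snippet along these lines can give a ballpark watcher count. It relies on Angular's debug info and the internal $$watchers array, so treat it as a diagnostic hack rather than an API:
function countWatchers() {
  var total = 0, seen = {};
  angular.forEach(document.querySelectorAll('.ng-scope, .ng-isolate-scope'), function (el) {
    var scope = angular.element(el).isolateScope() || angular.element(el).scope();
    if (scope && !seen[scope.$id]) { // count each scope only once
      seen[scope.$id] = true;
      total += (scope.$$watchers || []).length;
    }
  });
  return total;
}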
Also you need to be mindful and purposeful about your DOM manipulations (ng-repeat is slow).
Brian Ford just did a lightning talk about this topic:
https://github.com/breck421/brian-talks-about-angular-with-lots-of-data
Hope this helps,
Jordan

How to improve Silverlight development process productivity of UI & MVVM in particular

I am currently developing for Silverlight 4.0, and after mostly creating class libraries with TDD in usual C# (before SL) I can say that my current process is way slower than what I am used to. (I think this can be said about any UI code compared to library classes, but here I think it's a really serious issue for me.)
I am wondering what techniques can be recommended to increase SL development performance.
I am mainly concerned about hard-to-test code (from my POV) - MVVM & UI. What can be done to improve productivity here? I am thinking maybe there's a way to use a smaller sandbox somehow and test/debug control behaviour outside of the scope of the whole application. It's pretty clear to me that running the whole application to check whether a new dialog box works correctly isn't the fastest way, and I could work faster if I had a way to test this dialog alone, for example; there are probably other approaches I cannot think of that could be a solution too.
EDIT: 1) Here is something that I found useful: for TDD there is now a project, LightHouse, that provides a console runner for tests, so you don't have to run Silverlight tests in the browser and can integrate them into your build process.
2) I found the following page; it provides some idea about a possible approach one could use to test the view:
http://fohjin.blogspot.com/2008/09/how-to-test-your-xaml-behavior-using.html
There is no magic beautiful way, and this one can be utilized, but having to name all controls, for example, is a must to get it to work, which often isn't very good.
Statlight for the build server.
AgUnit to allow resharper to run silverlight tests.
WebAii for automation testing.
I'm not a fan of SLUT, as to run an individual test you have to cut and paste its name, and it doesn't remember it until you let it run all the way through, which I rarely do if I'm debugging.
Have you tried to use slut?
http://archive.msdn.microsoft.com/silverlightut
she will do what you want and pretend she enjoys it

Google Optimizer - server-side/client side dilemma

I'm hoping to test different versions of a form with Google Optimizer (multivariate testing). The form is in an ASP include, but the server-side code will load before Google's JS does. Any ideas about how to approach this? Thanks.
I think I understand your problem. The trouble is that Google's multivariate test is designed to allow you to test two or more snippets of HTML and the multivariate test itself is controlled via Javascript.
So I'm afraid the only way this will work is if you can specify the HTML for Google Optimizer to use for the two versions of the form. This could be made to work with basic forms, but if the form itself is dynamically created in ASP due to some other application requirement then I don't think Google Optimizer will work so well for you.
I think this will help you:
GWO Tricks
My answer is probably off-topic, but I guess you should implement the multivariate testing yourself in the ASP.
In the simple case, it is quite simple to implement on top of your existing application, so maybe this can do the trick in your case.
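As a rough illustration of what rolling it yourself can look like (generic JavaScript rather than ASP; the variant names and cookie handling are made up): assign each visitor to a variant once, persist it in a cookie, and render the matching form include server-side.
function chooseVariant(existingCookieValue) {
  // returning visitors keep the variant they were first assigned
  if (existingCookieValue === 'A' || existingCookieValue === 'B') {
    return existingCookieValue;
  }
  // new visitors get a 50/50 split
  return Math.random() < 0.5 ? 'A' : 'B';
}
// the server then sets the cookie (e.g. form_variant=A) and records conversions per variant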
I've made progress on a similar problem. Basically, we want to A/B test 2 different page layout headers. There are some structural differences (it's not just switching out the CSS). We also don't want to wait for Google to flip the coin to determine which variation the visitor should see; instead, we want to choose the variation server-side and avoid a page redirect. I'm now using Google Analytics Content Experiments instead of the deprecated Google Website Optimizer. I don't know whether it's completely working yet, but my code is here:
Google Analytics Content Experiments A/B testing server-side code without page refresh
