After changing test case, test case in test run not changed. Does it work as designed? - kiwi-tcms

I added a test case to a test run, then changed the test case, e.g. its steps or expected results. But the test case in the test run still displays the text from before the change.
At first I thought it might be a bug. I discussed it with a workmate, and he thinks it is correct: the test run keeps a copy of the test case from when it was added to the run.
That seems reasonable. Is this working as designed?

This is working as designed. A TestRun keeps a copy of the TestCase text from the time when the TR was created. For build 1 I may want a very basic test for, say, the login screen, but for builds 2, 3, etc. I want to improve it.
This is why such functionality exists.
On the TestRun page there is the Cases -> Update menu, which will refresh the TestCase text for IDLE entries.
For the ones that have already been executed, either change them back to IDLE before updating or just create a new TestRun.

Related

How can I optimize Robot Framework to speed up testing of an Angular application?

I know the basics of optimizing Robot Framework for speed on normal applications, but this is not a normal application. It's not a question of going as fast as possible, because if the code executes too fast on an Angular application, it'll try to click an element that isn't enabled or visible yet, or an element that doesn't exist yet. Timing issues abound, and the result is that I'm using a keyword (below) to slow down my program universally. The problem is that it's hard-coded, and I'm looking for a more programmatic solution that will wait for an element to be clickable and then click it as soon as it is.
This is the keyword I use after every single click (${SLOW_TIME} is a global variable set to 0.5s):
Slow Down
    # EXAMPLE USAGE
    # Slow Down    ${SLOW_TIME}
    [Arguments]    ${SLOW_TIME}
    Sleep    ${SLOW_TIME}
This is my current solution, which was written to verify that an element is ready to be clicked for test-verification purposes, not speed. It's not complete (it needs an "Is Clickable" check) and occasionally makes the program wait longer than it has to:
Verify Element Is Ready
    # EXAMPLE USAGE
    # Verify Element Is Ready    id=myElementId
    # Click Element    id=myElementId
    [Arguments]    ${element}
    Variable Should Exist    ${element}
    Wait Until Element Is Visible    ${element}
    Wait Until Element Is Enabled    ${element}
I'm aware that Robot Framework isn't built for speed, but for long tests I'm tired of doing nothing for 10 minutes waiting for it to finish, only to see that I have an incorrect [Fail]. If the solution involves Python, JavaScript, or Java, I can work that in.
EDIT: I'm currently using ExtendedSelenium2Library, but its implicit waits don't always work, so I wanted a second layer of waiting, but only as long as necessary.
The first solution to explore would be to use libraries specifically designed for Angular-based web applications, such as AngularJsLibrary or ExtendedSelenium2Library. As far as I know, ExtendedSelenium2Library is the one that works best (though perhaps not without issues; I think it has a few).
The next thing to know is that even if your element is visible, that doesn't necessarily mean it's ready to be clicked. There are quite a few ways to get around this kind of issue.
One way is to put a sleep in your test setup, to give the page some time to fully initialize. I'm personally not a fan of this solution, and it also doesn't work well for pages that load new content dynamically after the initial document has initialized.
Another way is to wrap your click in a wait, either by writing your own in Python or by using something like Wait Until Keyword Succeeds, as sketched below.
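For example, a minimal sketch of such a wrapper, assuming the usual Selenium2Library keywords are available (the keyword names Click When Ready and Verify And Click are illustrative, not part of any library):

Click When Ready
    # Retry the whole verify-and-click sequence until it succeeds
    # or the timeout expires, polling every 0.2 seconds.
    [Arguments]    ${element}    ${timeout}=10s
    Wait Until Keyword Succeeds    ${timeout}    0.2s    Verify And Click    ${element}

Verify And Click
    [Arguments]    ${element}
    Wait Until Element Is Visible    ${element}
    Wait Until Element Is Enabled    ${element}
    Click Element    ${element}

Because the retry wraps the whole sequence, a click that lands on a not-yet-enabled element simply fails that attempt and is retried, instead of failing the test.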

protractor sendKeys sends keys in the wrong order

I am trying to test a form by adding appropriate text to my input fields, but for some reason I am not getting what I'm sending. The problem is with the [name=type_input] field and sendKeys('Ciężarówka'). When I run the test the input field gets filled with the letters in the wrong order, for example "kaCiężaów" or "aCiężarówk", and that causes my entire test to fail. Sometimes the order is correct and then the test passes. Can someone explain what is happening?
it('should add vehicle', function() {
    element(by.css('[name=type_input]')).sendKeys('Ciężarówka').sendKeys(protractor.Key.TAB);
    element(by.css('[name=name]')).sendKeys('Nie Super Auto 555').sendKeys(protractor.Key.TAB);
    element(by.model('model.carId')).sendKeys('54536');
    element(by.css('[name=numberPlate]')).sendKeys('KU PAA').sendKeys(protractor.Key.TAB);
    helpers.selectAnyFromKendoComboBox('vehicle', 'haulier');
    helpers.save('vehicle');
    alertify.expectSuccessMessage('Zapisano');
});
First of all, isolate the problem:
Is this input the only one with this behavior in your application? Does it have input validation or any animations tied to typing?
Would the test fail if you typed, say, test instead of Ciężarówka?
Can you reproduce the problem in other browsers or on other machines?
Here are a couple of things to try:
Use "slow typing". This has helped me a couple of times to make typing work reliably across all the target browsers; see the sketch after this list.
Disable all Angular animations.
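A minimal sketch of "slow typing", assuming the WebDriver control flow is enabled (the default in older Protractor); slowType is an illustrative helper name, not a Protractor API:

// Type a string one character at a time, pausing between keystrokes,
// so the application's input handlers can keep up. Each sendKeys and
// sleep is queued on the control flow, preserving order.
function slowType(el, text, delayMs) {
    text.split('').forEach(function(ch) {
        el.sendKeys(ch);
        browser.sleep(delayMs);
    });
}

// Usage inside the test:
// slowType(element(by.css('[name=type_input]')), 'Ciężarówka', 100);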

Is it a good idea to iterate through several browsers in one test using Selenium WebDriver?

I am trying to run a simple test on multiple browsers; here is a mock-up of the code I've got:
String url = "http://www.anyURL.com";
WebDriver[] drivers = { new FirefoxDriver(), new InternetExplorerDriver(),
        new ChromeDriver() };

@Test
public void testTitle() {
    for (int i = 0; i < drivers.length; i++) {
        // navigate to the desired url
        drivers[i].get(url);
        // assert that the page title starts with foo
        assertTrue(drivers[i].getTitle().startsWith("foo"));
        // close current browser session
        drivers[i].quit();
    } // end for
} // end test
For some reason this code opens all the browsers seemingly before the first iteration of the loop has completed.
What is actually happening here? And what is a good/better way to do this?
Please understand that I am by no means a professional programmer, and I am also brand new to using Selenium, so if what I am attempting is generally bad practice please let me know, but please don't be rude about it. I will respect your opinion much more if you are respectful in your answers.
No, it's not.
What you are seeing is the array initializer at work: the drivers array is built when the test class is instantiated, so all three driver constructors run, and all three browsers open, before the loop body ever executes. In fact, most test frameworks have convenient ways to handle sequential/parallel execution of tests, and you can parametrize a test class to run the same tests on multiple browsers. TestNG has a Parameters annotation which can be used with testng.xml for cross-browser testing without duplicating the code, as sketched below.
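A minimal sketch of that TestNG approach, assuming TestNG and the Selenium Java bindings are on the classpath (the class name, parameter name and browser choices are illustrative):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class TitleTest {
    private WebDriver driver;

    // testng.xml supplies one browser per <test> block, e.g.
    // <parameter name="browser" value="chrome"/>
    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) {
        driver = browser.equalsIgnoreCase("chrome")
                ? new ChromeDriver()
                : new FirefoxDriver();
    }

    @Test
    public void testTitle() {
        driver.get("http://www.anyURL.com");
        Assert.assertTrue(driver.getTitle().startsWith("foo"));
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}

Each browser gets its own <test> entry in testng.xml, so the same test code runs sequentially or in parallel across browsers without duplication.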
I would not do that.
Most of the time it is pointless to immediately run your test against multiple browsers. Most of the problems you run into as you are developing new code or changing old code are not due to browser incompatibilities. Sure, those happen, but most of the time a test will fail because, well, your logic is wrong, and it will not fail on just one browser but on all of them. What do you gain from being told X times rather than just once that your code is buggy? You've just wasted your time. I typically get the code working on Chrome and then run it against the other browsers.
(By the way, I run my tests against about 10 different combinations of OS, browser and browser version. 3 combinations is definitely not good enough for good coverage. IE 11 does not behave the same as IE 10, for instance. I know from experience.)
Moreover, the interleaving of tests from multiple browsers just seems generally confusing to me. I like one test report to cover only one configuration (OS, browser, browser version) so that if there is a problem I know exactly which configuration is problematic, without having to untangle what failed on which browser.

Mercurial pre-merge hook

Is there a way to do some checks before allowing a merge in Mercurial?
I have found the pre-update hook and have a script that runs before an update is allowed, by adding the following to ~/.hg/hgrc:
[hooks]
pre-update = ~/hg_pre_update.sh
But I'd like to run the check before allowing a merge as well, and currently it just allows the merge to go through without running my checks.
Background
In case there are alternative ways to solve the problem...
We have been having a number of problems with 'lost' edits under Mercurial. I've now tracked most of them down to the same underlying cause: someone has a vim edit session open while either they or someone else does an hg update or merge. The editor warns that the file has changed externally, the user ignores the warning and saves their changes.
When these changes are committed, there is nothing controversial as far as Mercurial is concerned. The user has simply reverted all the changes brought in with the last update and put in their own changes.
Some time later, we notice the code has gone walkabout. Cue assorted insults flung Mercurial's way...
Set vim to autoreload changes if no local changes were made (otherwise ask, or force a merge).
That's how I avoid such issues in any editor...
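A minimal sketch of that vim setup; note that autoread only takes effect when vim checks the file, so the autocmd below triggers the check:

" Reload the buffer automatically when the file changes on disk
" and there are no unsaved local edits; vim still prompts otherwise.
set autoread
" Check for external changes when the cursor is idle or focus returns.
autocmd CursorHold,FocusGained * checktime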
Sorry, I've just worked out that there is a pre-merge hook that works just the same as pre-update. I tried it before asking the question, but looking at my hgrc now I realise I had pointed that hook at ~/hg_pre_merge.sh, which doesn't exist.
I can't find pre-merge explicitly documented anywhere, but I'm still feeling like a bit of a muppet now.
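For the record, this works because Mercurial runs a generic pre-<command> hook before executing any command, so the wiring mirrors the pre-update one (the script path is illustrative):

[hooks]
pre-update = ~/hg_pre_update.sh
pre-merge = ~/hg_pre_merge.sh

A non-zero exit status from the script makes the command fail, aborting the merge before it starts.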

xdebug code coverage analysis with simpletest framework

I am doing unit testing with the SimpleTest framework and using Xdebug for code coverage reports. Let me explain my problem:
I have a class which I want to test; let's assume it lives in pagination.php.
I wrote another class for testing, with two test cases that test the pagination class.
There are around 12 assertions in the two test cases, all of which give the correct result, "Pass".
Now I want to generate a code coverage report to show whether my test cases cover all the code. I use the xdebug_start_code_coverage() function to start collecting and the xdebug_get_code_coverage() function to show the result.
The problem is that when I print xdebug_get_code_coverage(), it gives me a two-dimensional associative array with filename, line number and execution count. The result is like this:
array
  'path/to/file/pagination.php' =>
    array
      11 => int 1
      113 => int 1
Line 11 is the start of the class and line 113 is the end of the class. I don't know why it is not going inside the class and why it is not giving statement coverage for the class's functions. My test cases look OK to me and I know all the conditions and branches are exercised.
I would really appreciate it if you could help me here and guide me on how to solve this problem.
Maybe I missed something. If you need anything more, please let me know.
I implemented Xdebug code coverage for a class with invoked methods and it works fine. Though I have to say I am a bit confused about how this tool defines "executable code", it definitely takes account of methods.
You might check the placement of your xdebug_start_code_coverage() and xdebug_get_code_coverage() calls, as those have to be invoked at the very beginning and the very end.
Also check your Xdebug version, as there have been some accuracy improvements since the feature was released.
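A minimal sketch of that placement, assuming the coverage calls wrap the whole test run (the path and the suite invocation are illustrative):

<?php
// Start collecting coverage before any code under test executes.
xdebug_start_code_coverage(XDEBUG_CC_UNUSED | XDEBUG_CC_DEAD_CODE);

require_once 'path/to/file/pagination.php';
// ... run the SimpleTest suite that exercises the pagination class ...

// Read the per-file, per-line hit counts at the very end.
$coverage = xdebug_get_code_coverage();
xdebug_stop_code_coverage();
var_dump($coverage);

If coverage starts only after the class file has been included and its methods run, only the lines executed afterwards show up, which could match the symptom above.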
Best
Raffael
SimpleTest has a coverage extension that is fairly easy to set up. IIRC it is only in svn and not in the normal packaged downloads (normally in simpletest/extensions/coverage/).
You can see articles for examples of how to implement it:
http://www.acquia.com/blog/calculating-test-coverage
http://drupal.org/node/1208382
