Anyone else having this issue?
I have used [Scope(Feature = "FeatureName")] at the top of my steps so that only those steps are usable for that feature. I have then used a [BeforeScenario("taggedScenario")] hook. Multiple tests have the same tag, so I would expect this to run once before each tagged test. However, when I run a single test, it runs through the before-scenario hook multiple times before starting. I would expect it to run once and then run my test. If I ran two tests with the same tag, I would expect the before-scenario hook to run once before each test.
Has anyone else come across this issue, and if so, has anyone managed to resolve it?
I'm automating Selenium tests using TestNG's data-provider-thread-count, which runs tests in parallel. However, I can see that the username and password of one test (i.e. of one driver) are being passed to another test.
I analyzed my code: each thread has its own WebDriver instance and there are no objects shared across threads, so I would not expect test data from one test to leak into another.
Could you please let me know what other possible reasons there could be, so I can analyze my code accordingly and try to find the root cause?
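For reference, the standard way to guarantee one driver per thread under TestNG parallelism is a ThreadLocal wrapper; a minimal sketch (class and field names are illustrative, not taken from the original code):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class DriverManager {

    // One WebDriver per thread. A plain static field here would be
    // shared by every thread and could explain credentials from one
    // test leaking into another.
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    public static WebDriver driver() {
        return DRIVER.get();
    }

    @BeforeMethod
    public void startDriver() {
        DRIVER.set(new ChromeDriver());
    }

    @AfterMethod(alwaysRun = true)
    public void stopDriver() {
        // Quit and clear so the worker thread can be reused safely.
        WebDriver d = DRIVER.get();
        if (d != null) {
            d.quit();
            DRIVER.remove();
        }
    }
}
```

If any page object or test utility caches a WebDriver (or data derived from it) in a static field, that would produce exactly this kind of cross-test leakage.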
For a website built with AngularJS, our organization used Protractor as the tool to automate test cases.
Our organization has now adopted a new tool named 'HipTest' to manage test case automation.
How do I integrate Protractor test cases with HipTest? I went to the following links but was unable to find much useful information.
https://docs.hiptest.net/automate-your-tests/
https://github.com/hiptest/hiptest-publisher
Can anyone help me get started?
I'm one of the main contributors to hiptest-publisher, so I should be able to help you.
The quick way to start with hiptest-publisher is to download the bootstrap of the tests from Hiptest (under the automation tab, you will have a "Javascript/Protractor" link).
You will get a zip file with four files (you should add all of them to your version control system, alongside the code of the application you are testing):
- one for the configuration of hiptest-publisher to use the command-line tool
- one for all the tests (you can split them later on, using the --with-folders option in the config file)
- one for the action words: that's the place where you will do the automation
- one for storing the status of the action words you exported (hiptest-publisher uses it to see which action words have changed since your last export)
Once the action words are implemented, the test files generated can be integrated in your test suite like any other Protractor test.
On the Hiptest side itself, the only requirement is that your tests are written using action words only. From what I understand from your post, you do not work directly in Hiptest yourself and you only manage the automation part (or did I get that wrong?)
For pushing the execution results back to Hiptest, the principle is pretty simple:
- create a test run dedicated to the CI
- run the command "hiptest-publisher --config-file <config file> --test-run-id <test run id>" before the tests (so only the tests inside the test run are executed; of course, you do not want a test that someone is currently writing to be executed and fail)
- run your tests
- run the command "hiptest-publisher --config-file <config file> --push <results file>" to push the results back to Hiptest.
Note that those two commands (including the test run ID) can be found directly inside Hiptest, from the "Automate" button in the test run.
If you have a Hiptest account, you can contact us directly on the chat; that might make it easier to help you through the process.
Oh, and I have a recording of the last webinar I did about automation; I guess you could find some useful information there too :)
My test suite fails intermittently with the error 'Element not found', even after putting in all sorts of waits (waitForPageLoad, Thread.sleep and an explicit wait for the element to be loaded on the page).
The same test case runs fine individually, and sometimes it also works fine alongside any number of other test cases. There is no consistency in which test case will fail when I run the full automation suite of 30 to 40 test cases.
Can anyone please help me find the root cause?
I am running the test suite through maven in following phase:
<executions>
  <execution>
    <phase>site</phase>
    <goals>
      <goal>send-mail</goal>
    </goals>
  </execution>
</executions>
Does this have anything to do with the Maven phase I am running in?
I don't know about Maven, but you are waiting for the page to load, and that doesn't necessarily mean that the element you want has loaded.
I think you need to be certain that the element you wanted was actually present, and if it was present, whether it was interactable.
You say you are using Thread.sleep(), but this only waits for a fixed period of time, with no intelligence regarding the presence of elements on the page.
What you should be doing is polling for an element to become available. Try looking at Explicit Waits: http://www.seleniumhq.org/docs/04_webdriver_advanced.jsp
Here you combine two pieces, WebDriverWait and an ExpectedCondition, so that you wait until the element is clickable for up to, say, 30 seconds, and then fail.
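A minimal sketch of that pattern (the locator and timeout are illustrative; note that in Selenium 4 the WebDriverWait constructor takes a Duration instead of seconds):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {
    // Polls the DOM until the element is clickable, or throws
    // TimeoutException after 30 seconds.
    public static WebElement waitForClickable(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, 30);
        return wait.until(ExpectedConditions.elementToBeClickable(locator));
    }
}
```

Call it right before interacting, e.g. `waitForClickable(driver, By.id("submit")).click();` (the id is hypothetical).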
Also, you say that the test runs fine in isolation but poorly in a full run. This tells me that you aren't resetting your environment to a known good state. A simple example would be closing your browser after each test; a better way would be to fire up an immutable VM (or Docker container) for each test case, so that you get the same environment over and over again.
Although there are cases where you would deliberately not do that, to see how your tests handle a soak.
Getting stable and reproducible tests should be your main concern, though. Have a look at when the test fails and confirm that the element you wanted was actually available at that particular time. Memory leaks etc. can slow the browser down and make a sleep of 3 seconds behave more like 20. If you are going to wait to cover a defect, use huge sleeps (remember, polls are better) to cover it and keep you stable while the defect is worked on.
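As a starting point for that reset, a per-test teardown sketch (assuming TestNG; the class and field names are illustrative):

```java
import org.openqa.selenium.WebDriver;
import org.testng.annotations.AfterMethod;

public abstract class BaseTest {
    protected WebDriver driver;

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        // Quit the browser after every test so each case starts from a
        // known good state instead of inheriting cookies, sessions or
        // a slowed-down browser from the previous one.
        if (driver != null) {
            driver.quit();
            driver = null;
        }
    }
}
```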
I am using a Selenium framework for my test case execution.
I need an instant report of the test cases that have passed while the full suite is still executing.
For example: there are 100 test cases in the suite and five have run so far, of which 3 passed and 2 failed; I need this report instantly, while the suite is in progress. Can you please help me with this task?
You can use ExtentReports.
You can use it to log your test steps, and once it's done it will generate a report showing your results.
For what you're looking for, ExtentReports uses a "flush".
If you call this flush after each test step, it will append the step and update the report.
This is something I'm looking into myself at the moment, so I wouldn't consider this a definitive answer, just something I've stumbled across myself; hope it helps.
Here is how to set up ExtentReports in your project, with examples - http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/
You must use it in conjunction with a test runner, e.g. TestNG or JUnit.
What you are trying to achieve is slightly different from the example: you need to call a flush after every test step so it appends to the report after the step is completed, rather than when all the tests are completed. It's not something I have done before, but it was explained to me like the following:
Just call .flush() after every test instead of once at the end of your test run. BUT you need to make sure the ExtentReports object itself is only initialized once, instead of being reinitialized at the start of every test. For example, I used TestNG: the ExtentReports object is created once using @BeforeSuite, but .flush() is called after every test using @AfterMethod. I hope this makes sense.
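A minimal sketch of that setup, assuming the version 2 ExtentReports API (com.relevantcodes); the report path and test name are illustrative:

```java
import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Test;

public class LiveReportTest {

    // Created once for the whole suite, never per test.
    private static ExtentReports extent;
    private ExtentTest test;

    @BeforeSuite
    public void setUpReport() {
        extent = new ExtentReports("target/extent-report.html", true);
    }

    @Test
    public void sampleTest() {
        test = extent.startTest("sampleTest");
        test.log(LogStatus.PASS, "Step completed");
    }

    @AfterMethod
    public void flushReport() {
        // Flushing after every test writes results to the HTML file
        // immediately, so the report can be read mid-suite.
        extent.endTest(test);
        extent.flush();
    }
}
```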
The only thing that can't be solved via code is the HTML refresh, as this is outside the control of the ExtentReports library (it doesn't know where you've opened the actual HTML file). But this can be taken care of with a simple browser plugin, as I said. At least for Chrome there are a lot of them; just do a Google search for 'chrome auto refresh'.
Hope this helps. If you need any more advice, don't hesitate to contact me.
Currently, when I run my Selenium WebDriver Java scripts, there is a strange issue cropping up. My scripts run absolutely fine, and then when I re-run them, sometimes sendKeys() enters the values into other fields, as a result of which my entire script fails.
I don't know the real reason behind it. The scripts I am running are pretty simple, straightforward flows. Is this because of my application's response time? I have added wait commands to tackle that, but when I re-run the same scripts, they still enter the values into irrelevant fields.
Note: I don't change any of my code between reruns, which makes it all the more frustrating.
Is this normal when you run Selenium WebDriver Java scripts?
Please advise me on how to tackle this issue, because I am not sure how to deal with it.
You might need to figure out whether your element locators are changing dynamically each time you run, and then adjust your locators to handle the change more appropriately. Other than that, I see no reason why the elements being interacted with would change randomly.
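One way to defend against that is to avoid brittle, auto-generated locators and to wait for the specific field before typing into it; a sketch (the method and locator below are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class StableInput {
    // Prefer a stable attribute (name, a data-* attribute, a label)
    // over positional XPaths or generated ids that can differ between runs.
    public static void typeInto(WebDriver driver, By locator, String text) {
        WebDriverWait wait = new WebDriverWait(driver, 10);
        WebElement field = wait.until(
                ExpectedConditions.visibilityOfElementLocated(locator));
        field.clear();
        field.sendKeys(text);
    }
}
```

Usage might look like `StableInput.typeInto(driver, By.cssSelector("input[name='username']"), "user1");`, where the selector is illustrative.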