How to execute tests in TeamCity one by one - selenium-webdriver

I have 17 tests executed in one job. On each run, 3-4 of them randomly fail with a timeout exception. This happens because they are all started at once and don't get enough time/resources to finish.
How can I make sure the tests are executed one at a time?

The first option: if you are on TeamCity 2022.04+, you can enable the 'Parallel tests' build feature.
Read more here: https://www.jetbrains.com/help/teamcity/parallel-tests.html
The second option is to control parallel execution through NUnit itself. You can do that by adding the attribute
[assembly: LevelOfParallelism(5)]
to the Properties/AssemblyInfo.cs file (which is autogenerated).
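For reference, a minimal AssemblyInfo.cs sketch with this attribute could look like the following (the value 5 is the example from the answer; a value of 1 would limit NUnit to a single worker thread if the goal is strictly one test at a time):

    using NUnit.Framework;

    // Caps the number of worker threads NUnit uses for parallel test execution.
    // A value of 1 effectively runs tests one at a time.
    [assembly: LevelOfParallelism(5)]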

Related

TestNG - Retry test when configuration fails

I have several hundred Selenium automated tests that (obviously) run in a browser, and sometimes they fail for no good reason. I have a retry analyzer that will retry a test when the @Test method fails, but is there any way to retry the test if the @BeforeMethod or @AfterMethod fails? I have an account creation method that runs in the @BeforeMethod and might fail randomly (but would pass if run again), but since the failure happened in the @BeforeMethod, the test itself isn't retried. I do have configfailurepolicy="continue" set in the XML file, so at least the rest of the tests continue to run.
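For context, configfailurepolicy is a suite-level attribute in testng.xml; a minimal sketch (suite, test and class names are placeholders) looks like this:

    <suite name="regression-suite" configfailurepolicy="continue">
      <test name="account-tests">
        <classes>
          <class name="com.example.tests.AccountCreationTest"/>
        </classes>
      </test>
    </suite>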
I think you should delete and re-add the library to the project.
Note: Make sure you selected the correct path to the project directory which contains the library for that project.

Protractor application get paused when system sleeps

I am working on a large project. There are at least 160 specs to be executed with Protractor, and it takes more than an hour to finish the automation testing. The issue is that my system goes to sleep in between when no action is performed. Is there any way to keep the system awake until Protractor finishes its execution, so that I can run the suite without the system sleeping? I can't just increase the sleep timeout manually because the suite has to run on many systems. Please let me know how I could handle this.
I am using Chrome for running the automation.
Cheers.
Would it be possible to let me know what OS you are using, so that I can give step-by-step instructions? For example, if you are using a Windows machine:
Go to Control Panel\Hardware and Sound\Power Options --> change the Sleep setting to "Never".
The machine will then stay awake until you switch off your computer.
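If you prefer the command line, the same effect can usually be achieved with powercfg (the value is in minutes and 0 means "never"; an elevated prompt may be required):

    powercfg /change standby-timeout-ac 0
    powercfg /change monitor-timeout-ac 0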
By the way, this question has no real relation to Protractor.

Setup priority on Azkaban parallel flows/dependencies

I'm using Azkaban 3.4.1 and one of my flows has more than 30 dependencies. Some dependencies take much longer than others, so I want these flows to be prioritized and started before the others (because the number of running threads is limited).
Currently the number of parallel executions is limited by flow.num.job.threads, which is 10 by default. I tried increasing that property to make sure the long-running jobs start right away, but CPU usage gets very high, so I am not sure that is a good option.
Using this fork, https://github.com/hanip-ss/azkaban/releases/tag/3.4.2, I can now add a job.priority value in the job properties file.
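For illustration, a job properties file using that fork might look like the sketch below; the job name, command and dependencies are hypothetical, and job.priority only exists in the fork, not in stock Azkaban 3.4.1:

    # long_etl.job -- hypothetical example
    type=command
    command=bash ./run_long_etl.sh
    dependencies=prepare_data
    # scheduling hint added by the fork; exact semantics depend on its implementation
    job.priority=10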

Is there a way to stop protractor after it throws a timeout exception?

Quite often there is a chance that Protractor test specs throw a timeout exception.
To make debugging and troubleshooting easier, I would like to stop Protractor right after a timeout exception and prevent it from continuing to run tests.
But trying to catch the timeout exception at each promise looks quite ugly.
Is there any other way to stop Protractor when it throws a timeout exception?
Another option would be protractor-fail-fast, in case jasmine-fail-fast doesn't work for you.
This Protractor plugin is essentially a wrapper around jasmine-fail-fast, solving the problem of halting multiple Protractor instances once one of them has failed. Otherwise, a multi-capability Protractor run will take as long as the longest-running test instance, potentially as long as if jasmine-fail-fast wasn't applied at all.
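A minimal sketch of wiring the plugin into a Protractor config, based on its documented usage (assumes protractor-fail-fast is installed; adapt to your own conf file):

    // protractor.conf.js (sketch)
    var failFast = require('protractor-fail-fast');

    exports.config = {
      plugins: [{ package: 'protractor-fail-fast' }],
      onPrepare: function() {
        // stop this instance's specs after the first failure
        jasmine.getEnv().addReporter(failFast.init());
      },
      afterLaunch: function() {
        // clean up the fail-fast marker file after all instances have finished
        failFast.clean();
      }
    };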
One option would be to let Jasmine exit on the first failure via jasmine-fail-fast:
Allow Jasmine tests to "fail-fast", exiting on the first failure instead of running all tests no matter what. This can save a great deal of time running slow, expensive tests, such as Protractor e2e tests.
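For the plain jasmine-fail-fast route, the usual pattern is to register it as a Jasmine reporter in onPrepare (sketch, assumes the package is installed; this only halts the current Protractor instance):

    // in protractor.conf.js (sketch)
    var failFast = require('jasmine-fail-fast');

    exports.config = {
      onPrepare: function() {
        // bail out of the remaining specs after the first failure
        jasmine.getEnv().addReporter(failFast.init());
      }
    };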

Lighthouse (Silverlight Unit Test Runner) hangs then performs no tests -- why?

We are using Lighthouse to run unit tests on Silverlight 4 .xap files.
Regularly, but seemingly randomly, on our build server it does the following:
10:18:08 C:\Program Files (x86)\Jenkins\jobs\******\workspace>Lighthouse.exe "******\Bin\Release\******.xap" "TestResults\******.xml"
10:18:10 Test Results file name: TestResults\******.xml
10:18:10 Sending signal to Lighthouse Test Executor to start executing tests.
10:21:54 Lighthouse v1.01 (c) 2011 - Remote Unit Test Run Started.
10:21:54 Total Test Assemblies: 1 Total Test Methods: 61.
10:21:55 Testing results saved to file: TestResults\******.xml
10:21:55 Total Tests: 61 | Tests Passed: 0. | Tests Failed: 0
10:21:55 Exiting (-1) because no Unit Tests were executed - this can't be right, right?
So it hangs for about 4 minutes, says the run has started, then runs no tests and immediately stops.
I cannot find any clue as to what is going wrong; this also occurs when no other build is running in parallel, and on developers' machines the tests run fine.
Update: After a reboot of our build server, the first Lighthouse run failed, and from then on all the others seem to succeed. This feeds my suspicion that some process is holding on to a resource that Lighthouse needs.
Update: For completeness: without making any changes to the code or the tests, Lighthouse sometimes succeeds and sometimes fails for me. As can be seen from the console output, it is very likely that Lighthouse does not even start any tests: "Tests Passed" and "Tests Failed" are both 0.
Does anyone have any clue where to start looking for a possible cause?
Thanks!!
(I'm not tagging this question with lighthouse to prevent confusion with more well-known tools of the same name.)
To determine whether it is an environmental issue or a code issue, check out your source code from last month and run Lighthouse multiple times to see how often the failure occurs.
Perhaps some faulty unit-test logic has been checked in?
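If you want to quantify how often it fails, a quick Windows cmd loop can repeat the run; the paths below are placeholders, and %i becomes %%i inside a .bat file:

    for /L %i in (1,1,20) do Lighthouse.exe "Bin\Release\Tests.xap" "TestResults\run%i.xml"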
