I have several hundred Selenium automated tests that (obviously) run in a browser, and sometimes they fail for no good reason. I have a retry analyzer that will retry the test when the @Test fails, but is there any way to retry the test if the @BeforeMethod or @AfterMethod fails? I may have an account creation method that runs in the @BeforeMethod that might fail randomly (but will pass if run again), but since it ran in the @BeforeMethod, the entire test isn't retried. I do have configfailurepolicy="continue" set in the XML file, so at least the rest of the tests continue to run.
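Since the retry analyzer only applies to @Test methods, one hedged workaround (not from the original post) is to retry the flaky step inside the @BeforeMethod itself; the createAccount() helper below is a hypothetical stand-in for the account-creation logic:

```java
import org.testng.annotations.BeforeMethod;

public class AccountSetupBase {

    @BeforeMethod
    public void setUpAccount() {
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                createAccount();   // flaky step that usually passes on a second try
                return;            // success: proceed to the @Test method
            } catch (RuntimeException e) {
                lastFailure = e;
            }
        }
        // Re-throw so TestNG still reports the configuration failure after the retries.
        throw lastFailure;
    }

    private void createAccount() {
        // ... Selenium steps that create the test account ...
    }
}
```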
I think you should delete and re-add the library to the project.
Note: Make sure you select the correct path to the project directory that contains the library for that project.
I have 17 tests executed in one job. 3-4 tests randomly fail in each execution with a timeout exception. It happens because they are all started at once and don't get enough time/resources to execute fully.
How can I make sure the tests are executed one by one?
The first option: you can enable the 'Parallel tests' feature if you have TeamCity 2022.04+.
Read more here: https://www.jetbrains.com/help/teamcity/parallel-tests.html
The second option is to handle parallel execution through NUnit itself. You can do that by adding the attribute
[assembly: LevelOfParallelism(5)]
to the Properties/AssemblyInfo.cs file (which is auto-generated).
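For context, a minimal sketch of that assembly-level attribute in place, assuming NUnit 3's NUnit.Framework namespace; a value of 1 would effectively serialize the tests, which is what the question asks for:

```csharp
// Properties/AssemblyInfo.cs
using NUnit.Framework;

// Maximum number of worker threads NUnit may use to run tests in parallel.
// Setting this to 1 effectively runs the tests one by one.
[assembly: LevelOfParallelism(5)]
```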
This is the first time I've used multithreading in PowerShell and runspaces. My first foray into this is via a form builder and pre-built runspace code from PoshGUI.com.
I have a form where a button calls a Microsoft.Graph cmdlet requiring the Microsoft.Graph.Identity.SignIns module. This cmdlet is wrapped in an Async { } code block (per PoshGUI's implementation of runspaces), and all seems to work fine.
However, I added code to handle module checks and to install/import the necessary modules when the script runs. Adding this code to my script seems to break it: the first run works fine, but subsequent runs fail, with the Async code hanging and never returning. I have to open a new PS console to run the script successfully again, and after each run I have to repeat closing/reopening the session.
I've since found that if I remove the import code from the script, it works as expected again. However, once I close the script, if I simply import that module manually in the PS console I initially ran the script in, the script breaks again until I forcibly unload the module.
In short: forcibly loading a needed module within my script (or the session launching the script) breaks the code running within a runspace that requires that module.
I found a post that suggests there can be issues with reusing runspaces, specifically with certain functions, including importing modules, but no further detail.
Anyone have a clue what is going on here? Even though the app closes, at which point I assume all the runspaces are destroyed, something seems to stick around.
UPDATE: For now I've fixed the issue by not importing the modules at all. My module check just installs them if they're not already installed. When the cmdlets requiring the modules are run, they appear to automatically load the modules those commands belong to (I'm not sure how that works), but the script now runs fine, including on subsequent runs.
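A rough sketch of that workaround, assuming the module name from the question and illustrative variable names: install anything that's missing, but skip Import-Module and let module auto-loading pull the module in when one of its cmdlets is first called:

```powershell
$requiredModules = @('Microsoft.Graph.Identity.SignIns')

foreach ($name in $requiredModules) {
    # Install the module if it isn't present on the machine.
    if (-not (Get-Module -ListAvailable -Name $name)) {
        Install-Module -Name $name -Scope CurrentUser -Force
    }
    # Deliberately no Import-Module here: importing into the launching session is
    # what broke the runspace code on subsequent runs. PowerShell's module
    # auto-loading imports the module the first time one of its cmdlets is called.
}
```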
I still don't understand why it's an issue; furthermore, I thought importing non-default modules was required. This is Windows PowerShell 5.1, FYI.
Quite often Protractor test specs throw a timeout exception.
To make debugging and troubleshooting easier, I would like to stop Protractor right after a timeout exception and prevent it from continuing to run tests.
But trying to catch the timeout exception at each promise looks quite ugly.
Is there any other way to stop Protractor when it throws a timeout exception?
Another option would be protractor-fail-fast, in case jasmine-fail-fast doesn't work for you.
This Protractor plugin is essentially a wrapper around jasmine-fail-fast, solving the problem of halting multiple Protractor instances once one of them has failed. Otherwise, a multi-capability Protractor run will take as long as the longest-running test instance, potentially as long as if jasmine-fail-fast weren't applied at all.
One option would be to let jasmine exit on the first failure via jasmine-fail-fast:
Allow Jasmine tests to "fail-fast", exiting on the first failure instead of running all tests no matter what. This can save a great deal of time running slow, expensive tests, such as Protractor e2e tests.
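For reference, a minimal sketch of wiring jasmine-fail-fast into a Protractor config, following the package's documented reporter usage (the specs glob is illustrative):

```js
// protractor.conf.js
var failFast = require('jasmine-fail-fast');

exports.config = {
  framework: 'jasmine',
  specs: ['specs/**/*.spec.js'], // illustrative glob
  onPrepare: function () {
    // Register the reporter so the first failing spec (e.g. a timeout) aborts the rest.
    jasmine.getEnv().addReporter(failFast.init());
  }
};
```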
I'm trying to write some integration tests in NodeUnit. My tests work fine, but the test runner hangs because knex is keeping a PostgreSQL DB connection open.
I can get it to release by calling knex.destroy() in my tearDown, but unfortunately then the DB is no longer available for the rest of my test suite (and tests in other files as well).
Is there a way to implement a tearDown that runs only once, after ALL tests have run?
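nodeunit has no built-in suite-level tearDown, so one hedged workaround (not from the original question) is a final exported "test" whose only job is to destroy the pool; the require path and names below are assumptions, and the file has to sort last in the run for this to work:

```js
// zz-teardown.test.js
var knex = require('../db/knex'); // hypothetical shared knex instance

exports.destroyDbPool = function (test) {
  // Destroy the connection pool once, after the other tests have run,
  // so the runner can exit instead of hanging on the open PostgreSQL connection.
  knex.destroy().then(function () {
    test.done();
  });
};
```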
We are using Lighthouse to run unit tests on Silverlight 4 .xap files.
Regularly, but seemingly randomly, on our build server it does the following:
10:18:08 C:\Program Files (x86)\Jenkins\jobs\******\workspace>Lighthouse.exe "******\Bin\Release\******.xap" "TestResults\******.xml"
10:18:10 Test Results file name: TestResults\******.xml
10:18:10 Sending signal to Lighthouse Test Executor to start executing tests.
10:21:54 Lighthouse v1.01 (c) 2011 - Remote Unit Test Run Started.
10:21:54 Total Test Assemblies: 1 Total Test Methods: 61.
10:21:55 Testing results saved to file: TestResults\******.xml
10:21:55 Total Tests: 61 | Tests Passed: 0. | Tests Failed: 0
10:21:55 Exiting (-1) because no Unit Tests were executed - this can't be right, right?
So it hangs for about 4 minutes, says the run has started, then runs no tests and immediately stops.
I cannot find any clue as to what is going wrong; this also occurs when no other build is running in parallel, and on developers' machines the tests run fine.
Update: After a reboot of our build server, the first Lighthouse test run failed, and from then on all others seem to succeed. This feeds my suspicion that some process is hanging on to a resource that Lighthouse needs.
Update: For completeness: without making any changes to the code or the tests, Lighthouse sometimes succeeds and sometimes fails for me. As can be seen from the console output, it is very likely that Lighthouse does not even start any tests: "Tests Passed" and "Tests Failed" are both 0.
Does anyone have any clue where to start looking for a possible cause?
Thanks!!
(I'm not tagging this question with lighthouse to prevent confusion with more well-known tools of the same name.)
To determine whether it is an environmental issue or a code issue, check out your source code from last month and run Lighthouse multiple times to see how often the failure occurs.
Perhaps some faulty unit test logic has been checked in?