The Check docs explain how to selectively run test suites or test cases, but not how to selectively run individual tests. My test cases can contain dozens of tests, so when debugging with printf statements that just produces a mess.
I need to be able to run specific tests, preferably by name, e.g.:
CK_RUN_TEST=check_readbuf_allocation make check
The hypothetical CK_RUN_TEST environment variable would cause only the test function check_readbuf_allocation to be executed.
Each individual test is completely separate from all other tests by design, so I see no reason why this shouldn't be possible.
Is there any way to run individual tests short of creating a whole new executable with all the Check boilerplate code for every test function?
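For comparison, the suite- and case-level selection that the docs do describe is driven by environment variables, something like this (the suite and case names here are just examples):
CK_RUN_SUITE="Core" CK_RUN_CASE="ReadBuffer" make check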
I'm automating Selenium tests using TestNG's data-provider-thread-count, which runs tests in parallel. However, I can see that the username and password of one test (i.e., one driver) are being passed to another test.
I analyzed my code: each thread has its own WebDriver instance and there are no objects shared across threads, so I don't see how test data from one test could leak into another.
Could you please let me know what other possible reasons there might be, so I can analyze my code accordingly and find the root cause?
We have an integration test suite that contains numerous tests executing repository classes.
The objective is to have a [OneTimeSetUp] method in the BaseTestFixture that creates/populates each target database (Postgres/SQL Server) only once before all tests and tears it down after all tests.
I got this error from NUnit:
OneTimeSetUp: SetUp and TearDown methods must not have parameters
How can we run the entire test suite against Postgres, SQL Server, or both, without duplicating tests?
Thanks.
Interesting question. I can't really think of an 'out-of-the-box' solution myself.
One simple workaround would be to do two separate console runs and use the --params flag. That way, you could run a different setup for each database type, dependent on the TestParameters value passed in.
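For example (the assembly name and parameter name here are hypothetical), with the NUnit 3 console runner that could look something like the two runs below, reading the value back with TestContext.Parameters inside your one-time setup:
nunit3-console IntegrationTests.dll --params:Database=Postgres
nunit3-console IntegrationTests.dll --params:Database=SqlServer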
A nicer alternative may be to implement a custom attribute, which would allow you to parameterize SetUpFixtures. (There's an existing discussion on adding this feature here, although it hasn't garnered much interest since 2016.) I think it would be reasonably possible to do this as a custom attribute without modifying NUnit itself, however.
Take a look at how SetUpFixtureAttribute is implemented. I would think you'd want to create your own IFixtureBuilder attribute that works in a similar way, except that it can be parameterised and returns two suites, each with a different setup for one of the databases. I think that would work, although it's not functionality I'm totally familiar with myself.
For a website built with AngularJS, our organization used Protractor as the tool to automate test cases.
Our organization has now introduced a new tool named 'HipTest' to manage test case automation.
How do I integrate Protractor test cases with HipTest? I went through the following links but was unable to find the information I needed:
https://docs.hiptest.net/automate-your-tests/
https://github.com/hiptest/hiptest-publisher
Can anyone help me get started?
I'm one of the main contributors to hiptest-publisher, so I should be able to help you.
The quickest way to start with hiptest-publisher is to download the test bootstrap from Hiptest (under the automation tab, you will find a "Javascript/Protractor" link).
You will get a zip file with four files (you should add all of them to your version control system, alongside the code of the application you are testing):
- one with the hiptest-publisher configuration, used by the command-line tool
- one for all the tests (you can split them later on, using the --with-folders option in the config file)
- one for the action words: that's the place where you will do the automation
- one for storing the status of the action words you exported (which is used with hiptest-publisher to see which action words have been updated since the last update)
Once the action words are implemented, the generated test files can be integrated into your test suite like any other Protractor tests.
On the Hiptest side itself, the only requirement is that your tests are written using action words only. From what I understand from your post, you do not work directly in Hiptest yourself and only manage the automation part (or did I get that wrong?)
For pushing the execution results back to Hiptest, the principle is pretty simple:
- create a test run dedicated to the CI
- run the command "hiptest-publisher --config-file --test-run-id " before the tests (so only the tests inside the test run are executed, you do not want to run a test that someone is currently writing to be executed on fail of course)
- run your tests
- run the command "hiptest-publisher --config-file --push " to push the results back to hiptest.
Note that those two commands (including the test run ID) can be found directly inside Hiptest, from the "Automate" button in the test run.
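As a rough sketch of that CI sequence (the config file, results file, and test run ID below are placeholders; take the real commands from the "Automate" button):
# generate only the tests that belong to the CI test run
hiptest-publisher --config-file=hiptest-publisher.conf --test-run-id=1234
# run the generated Protractor tests, producing a results file
protractor protractor.conf.js
# push the execution results back to Hiptest
hiptest-publisher --config-file=hiptest-publisher.conf --push=report.xml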
If you have a Hiptest account, you can contact us directly on the chat; that might make it easier to help you through the process.
Oh, and I have a recording of the last webinar I did about automation; I guess you could find some useful information there too :)
I am trying to test some legacy database code. I have many situations where I want to:
1. Set the database to an empty state
2. For each group of tests:
- set the DB to an initial test state for that group
- run those tests
- return the DB to the empty state
In NUnit, when using a TestFixture, am I guaranteed that all of the fixture's tests will be run together, and its TestFixtureTearDown executed, before the next TestFixture gets processed? I've tried this using the Visual Studio test tooling, and that doesn't appear to be the case.
The main reason for trying to do this is that getting the DB into the state in step 2 can sometimes be expensive, and I don't want to have to repeat it for each individual test case.
TestFixtureTearDown will be executed once the tests within the TestFixture are completed. Coupling this with TestFixtureSetUp should provide the behavior you are seeking on a per-TestFixture basis.
The unit testing framework in Visual Studio does not use the same syntax as NUnit. While the testing approach is the same for all xUnit-based frameworks, the syntax will vary.
Does anyone know of a tool that can help determine which unit tests should be run based on the diffs from a commit?
For example, assume a developer commits something that only changes one line of code. Now, assume that I have 1000 unit tests, with code coverage data for each unit test (or maybe just for each test suite). It is unlikely that the developer's one-line change will need to run all 1000 test cases. Instead, maybe only a few of those unit tests actually come into contact with this one-line change. Is there a tool out there that can help determine which test cases are relevant to a developer's code changes?
Thanks!
As far as I understand, the key purpose of unit testing is to cover the entire code base. When you make a small change to one file, all tests have to be executed to make sure your micro-change doesn't break the product. If you break this principle, there is little point to your unit testing.
P.S. I would suggest splitting the project into independent modules/services and creating new "integration unit tests" that validate the interfaces between them. But within one module/service, all unit tests should be executed as "all or nothing".
You could probably use make or similar tools to do this by generating a results file for each test, and making the results file dependent on the source files that it uses (as well as the unit test code).
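A minimal sketch of that idea (all file and target names here are hypothetical): each .result file depends on the test's own sources plus the production sources it exercises, so make re-runs only the tests whose inputs changed. Recipe lines must be indented with tabs.
# drop a .result file if its recipe fails, so that test is retried on the next run
.DELETE_ON_ERROR:

test-results: test_readbuf.result test_parser.result

test_readbuf.result: test_readbuf.c readbuf.c readbuf.h
	cc -o test_readbuf test_readbuf.c readbuf.c
	./test_readbuf > $@

test_parser.result: test_parser.c parser.c parser.h
	cc -o test_parser test_parser.c parser.c
	./test_parser > $@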
Our family of Test Coverage tools can tell you which tests exercise which parts of the code, which is the basis for an answer here.
They can also tell you which tests need to be re-run when you re-instrument the code base. In effect, the tools compute a diff on source files they have already instrumented, rather than using commit diffs, but that achieves the effect you are looking for, IMHO.
You might try running them with 'prove', which has a 'fresh' state option based on file modification times. Check the prove manpage for details.
Disclaimer: I'm new to C unit testing and haven't used prove myself, but I have read about this option in my research.
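For what it's worth, my reading of the prove manpage is that the pattern (the t/ directory below is just an example) is to save run state and then ask for only the test scripts that have changed since the last run:
prove --state=save t/
prove --state=fresh,save t/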