How to make a scenario that depends on another scenario using cucumber, selenium and java - selenium-webdriver

I am learning Cucumber through some tutorials and there is something that I don't know how to do: I need to make a scenario that depends on another scenario (for example, in a logout scenario I have to be logged in first before logging out). So what should I do? Should I write the login steps in the logout scenario (in the feature file), or is there a way to call the whole login scenario from the logout scenario?
Also, should I set up the driver before each scenario and quit the driver after each scenario?

There is no support for creating a scenario that depends on another scenario in Cucumber-JVM. I believe it is still supported in the Ruby implementation of Cucumber. It is, however, a dangerous practice, and calling a scenario from another scenario will not be supported in future versions of Cucumber.
That said, how do you solve your problem when you want to reuse functionality? You mention logout; how do you handle that when many scenarios require a user to be in the logged-out state?
The solution is to implement the functionality in a helper method or helper class that each step requiring a logged-out user calls.
This allows each scenario to be independent of all other scenarios, which in turn lets you run the scenarios in any order. I don't think the execution order of scenarios is guaranteed, and I know there has been discussion of having the JUnit runner execute scenarios in random order, precisely to enforce the habit of not letting scenarios depend on each other.
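A minimal sketch of that helper-method pattern in Java (the LoginHelper and Session names are made up for illustration; a real helper would drive WebDriver through the login and logout pages):

```java
// Hypothetical sketch: a shared helper that step definitions call so every
// scenario can establish the "logged in" state on its own, instead of
// depending on a previous login scenario having run.
public class LoginHelper {

    // Stand-in for whatever holds your browser/session state.
    public static class Session {
        public String currentUser; // null means logged out
    }

    // Each step definition that needs a logged-in user calls this first.
    public static void ensureLoggedIn(Session session, String user) {
        if (session.currentUser == null) {
            // In a real suite this would drive WebDriver through the login page.
            session.currentUser = user;
        }
    }

    public static void logout(Session session) {
        session.currentUser = null;
    }

    public static void main(String[] args) {
        Session session = new Session();
        // The "logout" scenario is self-contained: it logs in first via the helper.
        ensureLoggedIn(session, "alice");
        logout(session);
        System.out.println(session.currentUser == null); // true
    }
}
```

Because every scenario establishes its own state through the helper, no scenario needs another scenario to have run first.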
Your other question, how to set up WebDriver before a scenario and tear it down afterwards, is solved using Cucumber's Before and After hooks. When using them, be careful not to import the JUnit versions of Before and After by mistake.

Take a look at Cucumber hooks. They allow you to set up global 'before' and 'after' steps which run for every scenario, without having to specify them in your feature files.
Because they run for every scenario, they're ideal for something like initialising the driver at the start of each test. They may also suit running your logon, but if there's a chance you'll have a scenario which doesn't involve logging on, then that wouldn't be the way to go (an alternative is further down). The same applies for the after-scenario hook, which is where you could perform the log off and shut down your driver. As an example:
/**
 * Before each scenario, initialise the web driver.
 */
@Before
public void beforeScenario() {
    this.application.initialiseWebDriver();
}

/**
 * After each scenario, quit the web driver.
 */
@After
public void afterScenario() {
    this.log.trace("afterScenario");
    this.application.quitBrowser();
}
In my example, I'm simply starting the driver in the before hook and closing it in the after hook, but in theory these methods could contain anything. You just need to have them in your step definitions class and annotate them with '@Before' and '@After' as shown.
As well as these, you can also have multiple tagged before and after hooks, which you invoke by tagging the scenario. As an example:
/**
 * Something to do after certain scenarios.
 */
@After("@doAfterMethod")
public void afterMethod() {
    this.application.afterThing();
}
You can set up something like this in your step defs, and by default it won't run. However, if you tag a scenario with '@doAfterMethod', it will run for that scenario, which makes this useful for a step that many tests need at the end, but not all of them. The same works for methods to run before a scenario; just change '@After' to '@Before'.
Bear in mind that if you do use these, the global Before and After hooks (in this example, the driver initialisation and quitting) will always be the first and last things to run, with any other before/after hooks in between them and the scenario.
Further Reading:
https://github.com/cucumber/cucumber/wiki/Hooks
https://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/

You can set up test dependencies with QAF BDD. You can use dependsOnMethods or dependsOnGroups in the scenario meta-data to declare a dependency, just as in TestNG, because QAF BDD is a TestNG-based BDD implementation.

Related

Determine Minimum Tests To Be Run For Salesforce Deploy

I have set up a GitHub action that validates code changes upon a pull request. I am using the Salesforce CLI to validate (on PR) or deploy (on main merge).
The documentation gives me several options to determine testing for this deploy. These options are NoTestRun, RunSpecifiedTests, RunLocalTests, and RunAllTestsInOrg. I am currently using RunLocalTests, like so:
sfdx force:source:deploy -x output/package/package.xml --testlevel=RunLocalTests --checkonly
We work with some big orgs whose full tests take quite a while to complete. I would like to only RunSpecifiedTests for validation but am not sure how to set up my GitHub action to dynamically know which tests to pull in. I haven't seen anything in the CLI docs to determine this.
There really isn't a way to do this with 100% reliability. Any change in Apex code has the potential to impact any other Apex code. A wide variety of declarative metadata changes, including for example Validation Rules, Lookup Field Filters, Processes and Flows, Workflow Rules, and schema changes, can impact the execution of Apex code.
If you want to reduce your test and deployment runtime, some key strategies are:
Ensure your tests can run in parallel, which is typically orders of magnitude faster.
Remove any tests that are not providing meaningful validation of your application.
Modularize your application into packages, which can be meaningfully tested in isolation. Then, use integration tests (whether written in Apex or in some other tooling, such as Robot Framework) to validate the interaction between new package versions.
Only this last strategy can give you a real ability to establish a boundary around specific code behavior and test it in isolation, although you'll always still need integration tests as well.
At best, you can establish some sort of naming convention that maps Apex classes to related test classes, but per the point above, using such a strategy to limit test runs has a very real possibility of letting bugs slip through (the validation passes when it shouldn't).

How to run NUnit OneTimeSetUp multiple times for different databases

We have an integration test suite that contains numerous tests executing repository classes.
The objective is to have a [OneTimeSetUp] method in the BaseTestFixture that will create/populate each target database (Postgres/SQL Server) only once before all tests, and tear down after all tests.
Got this error:
nunit OneTimeSetUp: SetUp and TearDown methods must not have parameters
How can we run the entire test suite against Postgres, SQL server and both without duplicating tests?
Thanks.
Interesting question. I can't really think of an 'out-of-the-box' solution myself.
One simple workaround would be to do two separate console runs and use the --params flag. That way, you could run a different setup for each database type, dependent on the TestParameters value passed in.
A nicer alternative may be to implement a custom attribute, which would allow you to parameterize SetUpFixtures. (There's an existing discussion on adding this feature here, although it hasn't garnered much interest since 2016.) I think it would be reasonably possible to do this as a custom attribute without modifying NUnit, however.
Take a look at how SetUpFixtureAttribute is implemented. I would think you'd want to create your own IFixtureBuilder attribute which works in a similar way, except that it can be parameterised and returns two suites, with a different setup for each database. I think that would work, although it's not functionality I'm totally familiar with myself.

Unit tests in a database driven CodeIgniter web-application

CodeIgniter comes with a Unit Testing class built in, and I would very much like to use it. However, almost all functions I would want to test interact with the database by adding records, deleting records, etc. How would I, for example, write tests for the 'create user' function without actually creating users every time I run the test?
Upon some further research, it seems I need to be using Mock objects for external services like the database, etc. I haven't been able to find much in the way of docs on how to do that besides this one forum thread:
http://codeigniter.com/forums/viewthread/106737
Is there any actual documentation?
If your database driver allows transactions, use them: do whatever needs to be tested, then roll back (on success or failure).
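As a rough sketch of that transaction trick (the FakeTable class here is just an in-memory stand-in for a real transactional connection, e.g. JDBC with auto-commit turned off):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the rollback pattern: do the test work inside a
// transaction, then roll back in a finally block so the real data is untouched.
public class RollbackPattern {

    // Invented stand-in for a transactional table; real code would use a
    // database connection with begin/rollback semantics instead.
    static class FakeTable {
        final List<String> committed = new ArrayList<>();
        List<String> pending;

        void begin() { pending = new ArrayList<>(committed); }
        void insert(String row) { pending.add(row); }
        void rollback() { pending = null; }
        int size() { return committed.size(); }
    }

    public static void main(String[] args) {
        FakeTable users = new FakeTable();
        users.begin();
        try {
            users.insert("test-user");   // exercise the code under test
            // ... assertions against the pending state would go here ...
        } finally {
            users.rollback();            // success or failure, leave no trace
        }
        System.out.println(users.size()); // 0: nothing leaked into the table
    }
}
```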
I've found that it's hard to run unit tests with controller actions. If you find a good way of doing that, let us know!

How to Test Web Code?

Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state?
Specifically, I want to write tests for code that retrieve records from the database, but the answers will depend on the data in the database (which may change over time).
Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set?
I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience.
Are there good articles out there that discuss this issue of web-based development in general?
I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
You should look into DbUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data which represents your test data, so each test no longer depends on the database being in some pre-existing state. This way, each test is self-contained and will not break as the database continues to be used.
Update: A quick google search showed a DB unit extension for PHPUnit.
If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc.
I guess it depends what database you're using, but Red Gate (www.red-gate.com) make a tool called SQL Data Generator. This can be configured to fill your database with sensible looking test data. You can also tell it to always use the same seed in its random number generator so your 'random' data is the same every time.
You can then write your unit tests to make use of this reliable, repeatable data.
As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!
We use an in-memory database (HSQL: http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing DB, with the added bonus that they run as quick as lightning.
I have the exact same problem with my work, and I find that the best idea is to have a PHP script re-create the database and then a separate script where I throw crazy data at it to see if it breaks.
I have never used unit testing or the like, so I cannot say whether it works, sorry.
If you can setup the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with.
Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database.
It's definitely worth setting up either a test version of the database - or make your test scripts populate the database with known data as part of the tests.
You could try http://selenium.openqa.org/. It is more for GUI testing than a data-layer testing application, but it does record your actions, which can then be played back to automate tests across different platforms.
Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):
I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
I would propose to use three databases. One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed).
A way to test database code is:
Insert a few rows (using SQL) to initialize state
Run the function that you want to test
Compare expected with actual results. Here you could use your normal unit testing framework
Clean up the rows that were changed (so the next run won't see the previous run)
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table.
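The four steps above can be sketched as follows; a HashMap stands in for the testing database, and in a real suite the initialization and cleanup would be SQL INSERT and DELETE statements:

```java
import java.util.HashMap;
import java.util.Map;

// Runnable sketch of the insert / run / compare / clean-up recipe.
// The userTable map and findUser function are invented for illustration.
public class DbTestRecipe {

    static final Map<Integer, String> userTable = new HashMap<>();

    // The "function under test": reads state from the table.
    static String findUser(int id) {
        return userTable.get(id);
    }

    public static void main(String[] args) {
        // 1. Insert a few rows to initialize state.
        userTable.put(1, "alice");
        // 2. Run the function that you want to test.
        String actual = findUser(1);
        // 3. Compare expected with actual results.
        System.out.println("alice".equals(actual));
        // 4. Clean up so the next run won't see this run's data.
        userTable.clear();
    }
}
```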
In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use some CRUD API that is used in the product itself, to create data as similar to production as possible...

How do I unit test persistence?

As a novice in practicing test-driven development, I often end up in a quandary as to how to unit test persistence to a database.
I know that technically this would be an integration test (not a unit test), but I want to find out the best strategies for the following:
Testing queries.
Testing inserts. How do I know that an insert has gone wrong if it fails? I can test it by inserting and then querying, but how can I know that the query wasn't wrong?
Testing updates and deletes -- same as testing inserts
What are the best practices for doing these?
Regarding testing SQL: I am aware that this could be done, but if I use an O/R Mapper like NHibernate, it attaches some naming warts in the aliases used for the output queries, and as that is somewhat unpredictable I'm not sure I could test for that.
Should I just abandon everything and simply trust NHibernate? I'm not sure that's prudent.
Look into DbUnit. It is a Java library, but there must be a C# equivalent. It lets you prepare the database with a set of data so that you know what is in the database; then you can interface with DbUnit to see what is in the database. It can run against many database systems, so you can use your actual database setup, or something else like HSQL (a Java database implementation with an in-memory option).
If you want to test that your code is using the database properly (which you most likely should be doing), then this is the way to go to isolate each test and ensure the database has expected data prepared.
As Mike Stone said, DbUnit is great for getting the database into a known state before running your tests. When your tests are finished, DbUnit can put the database back into the state it was in before you ran the tests.
DbUnit (Java)
DbUnit.NET
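As a rough illustration of the "known state" DbUnit prepares, a FlatXML dataset file lists the rows to load before each test (the users table and its columns are invented for this example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<dataset>
    <!-- Each element names a table; attributes become column values. -->
    <users id="1" name="alice"/>
    <users id="2" name="bob"/>
</dataset>
```

DbUnit inserts these rows during test setup, so every test starts from the same data, and it can restore the previous database state on teardown.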
You do the unit testing by mocking out the database connection. This way, you can build scenarios where specific queries in the flow of a method call succeed or fail. I usually build my mock expectations so that the actual query text is ignored, because I really want to test the fault tolerance of the method and how it handles itself -- the specifics of the SQL are irrelevant to that end.
Obviously this means your test won't actually verify that the method works, because the SQL may be wrong. This is where integration tests kick in. For that, I expect someone else will have a more thorough answer, as I'm just beginning to get to grips with those myself.
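A minimal sketch of that approach, hand-rolled rather than using a mocking library (QueryRunner and UserService are illustrative names, not a real API); the stub ignores the SQL text entirely and simply fails, so the test exercises only the fault handling:

```java
import java.sql.SQLException;

// Sketch: stub out the database access so a query failure can be forced,
// then verify how the method under test handles it.
public class FaultToleranceTest {

    // Narrow seam around the database; real code might wrap a JDBC connection.
    interface QueryRunner {
        String runQuery(String sql) throws SQLException;
    }

    // Code under test: should swallow the failure and fall back to a default.
    static class UserService {
        private final QueryRunner db;
        UserService(QueryRunner db) { this.db = db; }

        String displayName(String sql) {
            try {
                return db.runQuery(sql);
            } catch (SQLException e) {
                return "<unknown>"; // the behavior we actually want to verify
            }
        }
    }

    public static void main(String[] args) {
        // The stub fails no matter what SQL it receives.
        QueryRunner failing = sql -> { throw new SQLException("boom"); };
        UserService service = new UserService(failing);
        System.out.println(service.displayName("SELECT ...")); // <unknown>
    }
}
```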
I have written a post here concerning unit testing the data layer which covers this exact problem. Apologies for the (shameful) plug, but the article is too long to post here.
I hope that helps you - it has worked very well for me over the last 6 months on 3 active projects.
Regards,
Rob G
The problem I experienced when unit testing persistence, especially without an ORM and thus mocking your database (connection), is that you don't really know whether your queries succeed. It could be that your queries are specifically designed for a particular database version and only succeed with that version. You'll never find that out if you mock your database. So in my opinion, unit testing persistence is only of limited use. You should always add tests running against the targeted database.
For NHibernate, I'd definitely advocate just mocking out the NHibernate API for unit tests -- trust the library to do the right thing. If you want to ensure that the data actually goes to the DB, do an integration test.
For JDBC-based projects, my Acolyte framework can be used: http://acolyte.eu.org . It allows you to mock up the data access you want to test, benefiting from the JDBC abstraction, without having to manage a specific test DB.
I would also mock the database and check that the queries are what you expected. There is the risk that the test checks the wrong SQL, but this would be detected in the integration tests.
I usually create a repository, use that to save my entity and retrieve a fresh one. Then I assert that the retrieved is equal to the saved.
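A minimal sketch of that save-then-retrieve round trip, with an in-memory repository standing in for a real one (UserRepository and User are invented names; a real test would point the same repository at a test database):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Round-trip test pattern: save an entity, fetch a fresh copy, assert equality.
public class RoundTrip {

    static class User {
        final int id;
        final String name;
        User(int id, String name) { this.id = id; this.name = name; }

        @Override public boolean equals(Object o) {
            return o instanceof User && ((User) o).id == id
                    && Objects.equals(((User) o).name, name);
        }
        @Override public int hashCode() { return Objects.hash(id, name); }
    }

    // In-memory stand-in; a real repository would issue INSERT and SELECT.
    static class UserRepository {
        private final Map<Integer, User> store = new HashMap<>();
        void save(User u) { store.put(u.id, u); }
        User findById(int id) { return store.get(id); }
    }

    public static void main(String[] args) {
        UserRepository repo = new UserRepository();
        User saved = new User(1, "alice");
        repo.save(saved);
        User fresh = repo.findById(1);
        System.out.println(saved.equals(fresh)); // true: the round trip preserved the entity
    }
}
```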
Technically, unit tests of persistence are not unit tests. They are integration tests.
With C# using mbUnit, you simply use the SqlRestoreInfo and RollBack attributes:
[TestFixture]
[SqlRestoreInfo(<connectionstring>, <name>, <backupLocation>)]
public class Tests
{
    [SetUp]
    public void Setup()
    {
    }

    [Test]
    [RollBack]
    public void TEST()
    {
        // test insert.
    }
}
The same can be done in NUnit, except the attribute names differ slightly.
As for checking whether your query is successful, you normally need to follow it with a second query to see whether the database has been changed as you expected.
