Parallel execution causing test data mixing across tests - selenium-webdriver

I'm automating Selenium tests using TestNG's data-provider-thread-count, which runs tests in parallel. However, I can see that the username and password of one test (i.e. one driver) are being passed to another test.
I analyzed my code: each thread has its own WebDriver instance and there are no objects shared across threads, so I don't see how one test's data could leak into another.
Could you please let me know what other possible reasons there could be, so I can analyze my code accordingly and find the root cause?
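For reference, here is a minimal sketch of the kind of per-thread isolation described above, assuming a TestNG parallel data provider; the class name, credentials, and URL are placeholders. Even when each thread has its own driver, credentials kept in a static or shared instance field can be overwritten by another thread mid-test, which is the most common cause of this symptom.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.*;

// Minimal sketch of per-thread isolation for a parallel data provider.
// LoginTest, the credentials and the URL are hypothetical placeholders.
public class LoginTest {

    // One WebDriver per thread; a static (shared) driver or a shared page
    // object holding credentials is the usual way data leaks between tests.
    private static final ThreadLocal<WebDriver> driver = new ThreadLocal<>();

    @BeforeMethod
    public void startDriver() {
        driver.set(new ChromeDriver());
    }

    @AfterMethod(alwaysRun = true)
    public void stopDriver() {
        driver.get().quit();
        driver.remove();
    }

    // Each row runs on its own thread, up to data-provider-thread-count threads.
    @DataProvider(name = "credentials", parallel = true)
    public Object[][] credentials() {
        return new Object[][] {
            {"user1", "pass1"},
            {"user2", "pass2"}
        };
    }

    @Test(dataProvider = "credentials")
    public void login(String username, String password) {
        // Use only the method parameters and the thread-local driver here;
        // copying the credentials into instance or static fields would let
        // another thread overwrite them mid-test.
        WebDriver wd = driver.get();
        wd.get("https://example.com/login");
        // ... fill the form with username/password and assert the outcome ...
    }
}
```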

Related

Selenium WebDriver + JMeter + StormRunner for performance testing

I wanted to try out integrating Selenium, JMeter, and StormRunner. My end goal is to do load testing with 'n' number of users on StormRunner.
What? For example: I have a Selenium script and convert it into a JMeter script (I can get this information from many sources).
Then my JMeter script should be ready.
Then I upload the JMeter script into StormRunner, pass the necessary parameters through Jenkins, and run the load test.
I really want opinions here about feasibility and whether this is the right direction or not.
The idea here is an automated load/performance test.
Selenium is a browser automation framework, while JMeter acts at the HTTP protocol level, so your "automated" requirement might not be fulfilled, especially if your tests rely on client-side checks like sorting or waiting for an element to appear.
Theoretically, given that you configure JMeter properly, it can behave like a real browser, but it still won't be executing client-side JavaScript.
If you're fine with this constraint, your approach is valid; if not, and the "automated functional test" requirement is a must, consider migrating to the TruClient protocol instead.
Why wouldn't you convert your script to a native LoadRunner/StormRunner form of virtual user?
You should look at the value of what you are trying to achieve. The end value of a performance test is in analysis. Analysis simply takes the timing records and the resource measurements produced during the test, brings them together on a common timestamp, and then allows you to analyze which resource "X" is being impinged when timing record "Y" is too long. This then points to some configuration or code which locks up resource "X".
What is your path to value in your model? You speak about converting a functional test script to a performance one. Realistically, you should already know that your code "works for one" before you get to asking, "Does it work for many?" There is a change in script definitions which typically accompanies this understanding.
Where is your collection of resources noted? Which resources? On which hosts? This is part of the "path to value" problem: you need the resource measurements to diagnose the root cause of poor performance.

How to run NUnit OneTimeSetUp multiple times for different databases

We have an integration test suite that contains numerous tests executing repository classes.
The objective is to have a [OneTimeSetUp] method in the BaseTestFixture that will create/populate each target database (Postgres/SQL Server) only once before all tests and tear it down after all tests.
Got this error:
nunit OneTimeSetUp: SetUp and TearDown methods must not have parameters
How can we run the entire test suite against Postgres, SQL Server, or both, without duplicating tests?
Thanks.
Interesting question. I can't really think of an 'out-of-the-box' solution myself.
One simple workaround would be to do two separate console runs and use the --params flag. That way, you could run a different setup for each database type, depending on the TestParameters value passed in.
A nicer alternative may be to implement a custom attribute, which would allow you to parameterize SetUpFixtures. (There's an existing discussion on adding this feature here, although it hasn't garnered much interest since 2016.) I think it would be reasonably possible to do this as a custom attribute without modifying NUnit, however.
Take a look at how SetUpFixtureAttribute is implemented. I would think you'd want to create your own IFixtureBuilder attribute which works in a similar way, except that it can be parameterised and returns two suites, with a different setup for each database. I think that would work, although it's not functionality I'm totally familiar with myself.

SpecFlow issue - Before Scenario/Feature looping through multiple times

Anyone else having this issue?
I have used [Scope(Feature = "FeatureName")] at the top of my steps so that only those steps are usable for that feature. I have then used [BeforeScenario("taggedScenario")]. Multiple tests have the same tag, so I would expect this to run once before each of the tagged tests. However, when I run a single test, the BeforeScenario hook runs multiple times before the test starts. I would expect it to run once, then run my test. If I were to run two tests with the same tag, I would then expect the BeforeScenario to run before each test.
Has anyone else come across this issue, and if so, has anyone managed to resolve it?

Selenium automation report

I am using a Selenium framework for executing my test cases.
I need an instant report of the test cases that have passed while the full suite is still executing.
For example: there are 100 test cases in the suite and five have run so far, of which 3 passed and 2 failed; I need this report while the suite is in progress. Can you please help me with this task?
You can use ExtentReports.
You can use it to log your test steps, and once it's done it will generate a report to show your results.
For what you're looking for, ExtentReports uses a "flush".
If you call this flush after each test step, it will append the step and update the report.
This is something I'm looking into myself at the moment, so I wouldn't consider this a definitive answer, just something I've stumbled across; hope it helps.
Here is how to set up ExtentReports on your project with examples - http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/
You must use it in conjunction with a test runner, e.g. TestNG or JUnit.
What you are trying to achieve is slightly different from the example. You need to call a flush after every test step so it appends to the report after the step is completed, rather than when all the tests are completed. It's not something I have done before, but it was explained to me as follows:
Just call .flush() after every test instead of once at the end of your test run. BUT you need to make sure the ExtentReports object itself is initialized only once, instead of being reinitialized at the start of every test. For example, I used TestNG: the ExtentReports instance is created once using @BeforeSuite, but .flush() is called after every test using @AfterMethod. I hope this makes sense.
The only thing that can’t be solved via code is the HTML refresh as this is outside the control of the ExtentReports library (it doesn’t know where you’ve opened the actual HTML file). But this can be taken care of by using a simple browser plugin as I said. At least for Chrome there are a lot of them, just do a Google search for ‘chrome auto refresh’.
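To make that concrete, here is a minimal sketch of the "initialize once, flush after every test" pattern with TestNG. It assumes ExtentReports 4.x; the ExtentHtmlReporter, the report path, and the class name are placeholders to adapt to your own setup.

```java
import java.lang.reflect.Method;

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentHtmlReporter;
import org.testng.ITestResult;
import org.testng.annotations.*;

// Sketch: ExtentReports created once per suite, flushed after every test.
public class ReportingBaseTest {

    private static ExtentReports extent;
    private ExtentTest test;

    @BeforeSuite
    public void initReport() {
        // Created exactly once for the whole suite.
        ExtentHtmlReporter htmlReporter = new ExtentHtmlReporter("target/extent-report.html");
        extent = new ExtentReports();
        extent.attachReporter(htmlReporter);
    }

    @BeforeMethod
    public void startTest(Method method) {
        // One report entry per test method.
        test = extent.createTest(method.getName());
    }

    @AfterMethod(alwaysRun = true)
    public void recordResult(ITestResult result) {
        if (result.getStatus() == ITestResult.SUCCESS) {
            test.pass("Test passed");
        } else if (result.getStatus() == ITestResult.FAILURE) {
            test.fail(result.getThrowable());
        } else {
            test.skip("Test skipped");
        }
        // Flushing here (rather than once in @AfterSuite) rewrites the HTML
        // after every test, so the report can be watched while the suite runs.
        extent.flush();
    }

    @Test
    public void exampleTest() {
        // ... the actual Selenium steps go here ...
    }
}
```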
Hope this helps. If you need anymore advice don't hesitate to contact me.

What is a good approach to writing automated tests that depend on data that needs to be set up before executing the test

I am currently working on writing automated tests using Selenium Webdriver. We use MTM to run our test suites. I need some ideas as to what would be a good way to write these tests.
Currently, before running these tests, we perform a basic setup that sets the username and password required to log in to the site, sets the browser that the test should use, and a few other things.
Currently the data required for each test is set up manually and is already present in the database. The test simply performs a keyword search, finds the data it needs, and then performs the assertions. What we would like to achieve is to find such data that is already present in the database and use it instead of creating it manually. That way I can run these tests across different environments (dev, QA, production).
The site I am testing is an e-commerce website. I mostly write tests for specific features that my team develops, and thus many of these tests require some specific data, e.g. setting up a store that has products with certain shipping rates, particular offers, etc. I would like to find a way to automate, or mostly remove, this manual process of setting up the data. That way I have the flexibility to run these tests across environments. Could you please direct me to some articles/suggestions that can help me achieve this?
If I am understanding your question correctly, you want to automate the test data setup.
You can achieve this in the following ways:
If possible, write a SQL script which inserts the desired data into the database, and execute it while running your tests. If you are using the TestNG framework, there is already an annotation for this, @BeforeTest. Execute the SQL script from a method with this annotation; it will run once before your tests and the data will be ready (see the sketch after this list).
Prepare the data in a spreadsheet. Create an algorithm that fills the spreadsheet dynamically, and from there either read the data directly into your test in @BeforeTest or, if required, insert the spreadsheet data into the database as well.
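As an illustration of the first option, here is a minimal sketch of seeding a database from a SQL script in @BeforeTest using plain JDBC; the connection URL, credentials, and seed-data.sql path are placeholders for whatever your environment uses.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.testng.annotations.BeforeTest;

// Sketch: run a SQL seed script once before the tests in this <test> block.
// URL, credentials, and script path are hypothetical placeholders.
public class TestDataSetup {

    @BeforeTest
    public void seedDatabase() throws Exception {
        String sql = new String(Files.readAllBytes(Paths.get("src/test/resources/seed-data.sql")));

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "test_user", "test_password");
             Statement stmt = conn.createStatement()) {
            // Naive split on ';' is enough for simple INSERT-only seed scripts.
            for (String statement : sql.split(";")) {
                if (!statement.trim().isEmpty()) {
                    stmt.execute(statement);
                }
            }
        }
    }
}
```

The same method is also a reasonable place to branch on an environment flag (dev/QA/production) so each environment gets its own connection string and seed data.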

Resources