Sharing a value between scenarios in Cucumber BDD Selenium automation - selenium-webdriver

I have two scenarios to execute.
The first scenario returns a variable: a card ID.
I want to consume this card ID in the second scenario.

You can't preserve state between scenarios or connect scenarios together. Each scenario is a separate test, and everything is reset between scenarios. This is by design.
You need to change your approach to writing your second scenario so that it has a Given step which sets up the card itself.
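For example, the second scenario could create its own card in a Given step (an illustrative sketch; the card would be set up via the API or database, not carried over from the previous scenario):

```gherkin
Scenario: Consume a card
  Given a card exists with a known card ID
  When I use that card in the flow under test
  Then I see the expected result for that card
```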

Best practices for testing a React modal with React Testing Library and Mock Service Worker?

If I were building a simple function that divided one number by another, I would:
In all cases, call the function. The first test would probably be a happy path, like 10 / 2.
Another test would divide a smaller number by a larger one, resulting in a decimal value.
Some other tests would introduce negative numbers into the mix.
Finally, I would be sure to have one test that divided by zero, to see how that was handled.
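That test plan can be sketched in plain Java (`divide` is a hypothetical function, just mirroring the list above):

```java
// Hypothetical function under test.
class Divider {
    static double divide(double a, double b) {
        // With doubles, dividing by zero yields +/-Infinity rather than throwing.
        return a / b;
    }
}
```

The four tests would then call divide(10, 2) for the happy path, divide(2, 10) for a decimal result, mix in negatives with divide(-10, 2), and check Double.isInfinite(divide(10, 0)) for the zero case.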
I have a series of modals in React that I need to test. They have some commonalities:
They receive a set of props.
They act on those props to display a bunch of elements to the user, pre-populating these elements with some combination of the data in the props.
They use the fireEvent.### functions to simulate what the user would do within the modal.
Near the end of each test that involves a POST or a PATCH, a Submit button is pressed.
After pressing the Submit button, I have two expect functions at the end of each test, like in these examples:
expect(spy).toHaveBeenCalledTimes(1);
expect(spy).toHaveBeenCalledWith({
type: 'SOME_ACTION',
payload: {
property1: 'Some value',
property2: 'Some other value',
property3: 53.2
},
});
This approach entirely makes sense to me because, much like a simple math function, the modal has data coming in and data going out. But some colleagues are insisting that we shouldn't be testing with such expect(spy) functions. Instead they're saying that we should test the end result on the page. But there is no "page" because the modal is being tested in isolation, which I also believe to be a best practice.
Another suggestion was to add a success or failure string to the Mock Service Worker (MSW) response, like "You've successfully saved the new data". The reason I don't like this approach is:
Such a string is just an artificial construct in the test, so what does it really prove?
If there are calculations involved with what is sent by the POST or PATCH, don't we want to know whether the correct data is being sent? For example, if the user was prompted to enter the number of hours they worked each day in the past week, wouldn't we want to compare what was entered into the input elements vs. what was sent? And maybe those hours are summed (for some reason) and included in another property that was included - wouldn't we want to confirm that the correct sum was sent?
I do respect the opinions of my colleagues but have yet to hear anything from them that justifies dropping the approach I've employed and adopting one of their alternatives. So I'm seeking insight from the community to better understand whether this is a practice you would follow ... or not.
This is a case-by-case decision with no single right answer, as performance and other variables may play a role.
However, based on the information you give, both you and your team are right. You want to validate that the correct data is being sent to the backend, but you also want to validate that the user receives a visual response, such as "Successfully saved data".
Yet if I had to choose one, it would be checking the data, as that gives the most "code coverage". Checking for the "success" message only confirms that the submit button was pressed, whereas checking the data ensures that most, if not all, data states were set correctly.
I prefer to do both.
But there is no "page" because the modal is being tested in isolation, which I also believe to be a best practice.
I like doing this because, for more complex components, TDD with unit tests keeps me sane. Sometimes I build out a unit test just to make sure everything is working, then delete it once the integration tests are in place (because too many unit tests can be a maintenance burden).
Ultimately I prefer integration tests over unit tests because I've experienced situations where my unit tests were passing, but once the component was nested three levels deep, it started to break. Kent C. Dodds has an excellent article on this.
My favorite takeaway from the article:
It doesn't matter if your component <A /> renders component <B /> with props c and d if component <B /> actually breaks if prop e is not supplied. So while having some unit tests to verify these pieces work in isolation isn't a bad thing, it doesn't do you any good if you don't also verify that they work together properly. And you'll find that by testing that they work together properly, you often don't need to bother testing them in isolation.

Cucumber - testing principles vs speed

After reading many articles, my understanding is that all Cucumber tests should be independent of each other and autonomous, so those are the rules I follow when automating my web app tests.
Let's say I am testing a web page that has multiple input fields.
Currently, for CRUD operations I have two types of scenarios:
Scenario: Check page display correct data
Given: I populate DB with data
When: I open the page
Then: Page data should match with data from DB
Scenario: Update page data
Given: I populate DB with data
When: I open the page
And: I update each field with some new data
When: I press save button to save data
Then: Page data should match with data from DB
So in this case I have one type of scenario that checks whether data is displayed properly, and another that updates the data and checks it as well. But because the step that populates the database takes long (1-3 seconds), I was thinking: why not combine these two types of scenarios into a single one, greatly cutting execution time:
Scenario: Update page data
Given: I populate DB with data
When: I open the page
Then: Page data should match with data from DB
And: I update each field with some new data
When: I press save button to save data
Then: Page data should match with data from DB
As you can see, first I populate the database, then I check that it is displayed properly, next I modify it, and check again. This way I have checked two CRUD operations (read and update) in a single scenario, but I believe it would be against the principles.
It's perfectly fine to combine two CRUD operations in one scenario if your tests are more focussed on integration and end-to-end behaviour rather than unit / component behaviour (which probably is the case).
Of course you should always consider the balance between putting too much in one scenario versus fragmenting a feature into a lot of scenarios. And of course the trade-off of asserting more than one thing in a scenario is that it potentially forces you to debug more when a scenario fails. So it's not about principles but rather a conscious choice that you may have to reconsider depending on the speed and stability of your application under test.
A couple of ideas I can share.
...
When: I ...
And: I ...
When: ...
...
can become
...
When: I ...
And: I ...
And: ...
Then: ...
Even better if you can abstract it into a declarative business function, which will allow you to see the forest and not get swamped by long end-to-end scenarios.
It is good to think of your BDD journeys from the end-user perspective.
Given: I populate DB with data
is something that happens to the usual user very rarely, right? Unless you are covering some specific admin/dev case. If you are using it as a precondition, take a look at the xUnit fixture setup patterns. DB validations are a recommended consideration, just not at the topmost layer of your framework.
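For instance, the shared setup can be pulled into a Background, Gherkin's built-in fixture-setup mechanism, so each scenario stays focused on user-visible behaviour (an illustrative sketch based on the scenarios above):

```gherkin
Background:
  Given I populate the DB with data

Scenario: Check page displays correct data
  When I open the page
  Then page data should match the data from the DB

Scenario: Update page data
  When I open the page
  And I update each field with some new data
  And I press the save button to save the data
  Then page data should match the data from the DB
```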
And
greatly cutting execution time
can be achieved via parallel execution of your features/scenarios, not by cutting test scenarios. Again, the trade-off is in favor of the meaningful scenarios.
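If you are on Cucumber-JVM with the JUnit 5 platform engine (an assumption; other runners have their own switches), parallel scenario execution is a configuration change, e.g. in junit-platform.properties:

```properties
# Run scenarios in parallel instead of trimming them
cucumber.execution.parallel.enabled=true
# Let the engine size the worker pool dynamically
cucumber.execution.parallel.config.strategy=dynamic
```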

Protractor: should I put assertions in my PageObject?

I have multiple scenarios in which I'd like to test pretty much the same things.
I am testing a back office and I have widgets (like an autocomplete search, for instance), and I want to make sure that a widget is not broken given that:
I just browse an article page
I saved a part of the article, which reloaded the page
1+2 then I played with some other widgets which have possible side effects
...
My first thought was to add some reusable methods to my WidgetPO (testWidgetStillWorksX ~).
After browsing on the subject, there are some pros and cons, as discussed in http://martinfowler.com/bliki/PageObject.html
So how do you handle this / where do you put your reusable tests, and what difficulties/advantages have you had with either method?
Your question is quite broad. The best way to write tests using the PageObject model is to exclude assertions from the PageObject file. In short, here's a small explanation -
Difficulties -
Assertions are always a part of the test case/script, so it's better to put them in the scripts that you write.
Assertions in a PageObject disturb the modularity and reusability of the code.
They make it difficult to write/extend general functions in the pageobject.
A third person would need to jump from the test script to your pageobject every time to check your assertions.
Advantages -
You can always add methods/functions to your PageObject that perform repetitive tasks (like waiting for an element to load, getting the text of an element, etc.) other than assertions, and return a value.
Call the functions of the PageObject from your tests and use the returned values to perform assertions in your tests.
Assertions in test scripts are easy to read and understand without a need to worry about the implementation of the pageobjects.
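A minimal sketch of that split, in plain Java with a stubbed page standing in for a real WebDriver (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the browser page; in a real suite this would be a WebDriver.
class FakePage {
    private final Map<String, String> fields = new HashMap<>();
    void type(String locator, String text) { fields.put(locator, text); }
    String read(String locator) { return fields.getOrDefault(locator, ""); }
}

// PageObject: performs operations and returns values, but holds no assertions.
class SearchWidgetPage {
    private final FakePage page;
    SearchWidgetPage(FakePage page) { this.page = page; }

    void enterQuery(String query) { page.type("search-input", query); }
    String getQueryText() { return page.read("search-input"); }
}
```

The test then asserts on the returned value, keeping every assertion in the test script rather than in the page object.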
Here's a good article on pageobjects. Hope this helps.

Random Popups While Automating the Webpage

I was attending an interview and was given the following scenarios. I'd appreciate a hint, as I could not answer the questions.
Assume that there is an application and popups keep coming up all the time. These are not timed; it's just random. You never know when they are going to appear. How do you deal with it?
Assume that the script you wrote is fine, but due to network issues the objects on the page are really slow to load, or the page itself is taking a long time. How do you deal with such a scenario?
Assume that I have 5-6 pages in the application, and all the pages have certain text fields. On page 1 and page 5 there is an object which is a text box, and I see that whatever identification method (CSS, XPath, id, etc.) you use, the values are the same, i.e. duplicates. How do you deal with this scenario?
What is the basic purpose of the "DataProvider" annotation in TestNG? In general, what is the purpose of TestNG annotations?
Thanks.
Assume that the script you wrote is fine. But due to network issues the objects in the page are really slow to load or the page itself is taking long time.
In such a situation, you should use Selenium's waits: a page load timeout, an implicit wait, or an explicit wait.
Page load timeout -- used to set a timeout for webpage loading:
driver.manage().timeouts().pageLoadTimeout(units, TimeUnit.SECONDS);
(An implicit wait, set via driver.manage().timeouts().implicitlyWait(units, TimeUnit.SECONDS), applies to element lookups instead.)
Explicit wait -- used to set a timeout for a particular WebElement:
FirefoxDriver f = new FirefoxDriver();
WebDriverWait ww = new WebDriverWait(f, units);
ww.until(ExpectedConditions.CONDITION);
For second question, Anubhav has answered it.
For the third, even if the elements are the same on page 1 and page 5, they can be differentiated. First switch to the page whose text field you want to interact with, and then interact with that text field.
For the fourth, DataProvider is an annotation in TestNG with which you can do data-driven testing, and with TestNG annotations you can manage the execution flow of your tests. For more details on DataProvider and TestNG annotations, please go here.
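Conceptually, a DataProvider just feeds the same test method one row of data per invocation. Stripped of TestNG itself (the real thing uses the @DataProvider and @Test annotations; the names and data below are made up for illustration), the mechanism looks like this:

```java
// Conceptual sketch of data-driven testing, as @DataProvider enables in TestNG.
class DataDrivenSketch {
    // The "data provider": each row is one set of arguments for the test.
    static Object[][] loginData() {
        return new Object[][] {
            { "alice", "secret1", true  },
            { "bob",   "wrong",   false },
        };
    }

    // The "test method" under data-driven execution (hypothetical rule).
    static boolean checkLogin(String user, String password) {
        return "secret1".equals(password);
    }

    // TestNG performs this loop for you, invoking the test once per row.
    static int countPassingRows() {
        int passed = 0;
        for (Object[] row : loginData()) {
            boolean expected = (Boolean) row[2];
            if (checkLogin((String) row[0], (String) row[1]) == expected) {
                passed++;
            }
        }
        return passed;
    }
}
```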
For the third, if you open the 5-6 pages in different tabs of a single browser, you will get such a duplication problem. At that time only one page is visible to the end user, so we can differentiate the elements by visibility and interact with the visible one using WebDriver:
List<WebElement> el = driver.findElements(By.xpath("xpath of that text element")); // you can use locators other than XPath too
for (int i = 0; i < el.size(); i++) {
    if (el.get(i).isDisplayed()) {
        el.get(i).sendKeys("text you want to send"); // or any other action you want to perform
        break;
    }
}

Transfer parameters between train steps

I have a custom PL/SQL function that I am calling through a page in ADF, which inserts data into a database. I want to create a train with two steps in order to separate my parameters: in the first step set one parameter of my function, in the second step the other, and then call the function. It doesn't seem to work, because only the second parameter comes through.
Any hints on how to get both?
Save your parameters to pageFlowScope as you go, and use them in the last train step
Create a pageFlow-scoped managed bean and store in it the attribute values that you want to save and use on different steps of the flow. I have used this approach in my ADF applications and everything works fine.
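A sketch of such a bean (a plain POJO; in ADF you would register it in the task flow with pageFlow scope and bind each train step's inputs to expressions like #{pageFlowScope.paramsBean.firstParam} -- the bean and property names here are illustrative):

```java
import java.io.Serializable;

// Managed bean holding the function's parameters across train steps.
class ParamsBean implements Serializable {
    private String firstParam;   // entered in train step 1
    private String secondParam;  // entered in train step 2

    public String getFirstParam() { return firstParam; }
    public void setFirstParam(String firstParam) { this.firstParam = firstParam; }

    public String getSecondParam() { return secondParam; }
    public void setSecondParam(String secondParam) { this.secondParam = secondParam; }

    // In the final train step, read both values and call the PL/SQL function.
}
```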