Selenium WebDriver + JMeter + StormRunner for performance testing

I want to try out an integration of Selenium, JMeter, and StormRunner. My end goal is to do load testing with 'n' users on StormRunner.
What? For example: I have a Selenium script and convert it into a JMeter script (I can get this information from many sources).
Then my JMeter script should be ready.
Then I upload the JMeter script to StormRunner, pass the necessary parameters through Jenkins, and run the load test.
I would really like opinions here about feasibility and whether this is the right direction or not.
The idea here is an automated load/performance test.

Selenium is a browser automation framework, while JMeter acts at the HTTP protocol level, so your "automated" requirement might not be fulfilled, especially if your tests rely on client-side checks like sorting or waiting for an element to appear.
Theoretically, given that you properly configure JMeter, it can behave like a real browser, but it still will not execute client-side JavaScript.
If you're fine with this constraint, your approach is valid; if not, and the "automated functional test" requirement is a must, consider migrating to the TruClient protocol instead.

Why wouldn't you convert your script to a native LoadRunner/StormRunner form of virtual user?
You should look at the value of what you are trying to achieve. The end value of a performance test is in the analysis. Analysis simply takes the timing records and the resource measurements produced during the test, brings them together on a common timestamp, and then allows you to analyze which resource "X" is being impinged upon when timing record "Y" is too long. This then points to some configuration or code which locks up on resource "X".
What is your path to value in your model? You speak about converting a functional test script to a performance one. Realistically, you should already know that your code "works for one" before you get to asking, "Does it work for many?" There is a change in script definition which typically accompanies this understanding.
Where is your collection of resources noted? Which resources? On which hosts? This is part of the "path to value" problem: you need the resource measurements to diagnose the root cause of poor performance.

Related

Determine Minimum Tests To Be Run For Salesforce Deploy

I have set up a GitHub action that validates code changes upon a pull request. I am using the Salesforce CLI to validate (on PR) or deploy (on main merge).
The documentation gives me several options to determine testing for this deploy. These options are NoTestRun, RunSpecifiedTests, RunLocalTests, and RunAllTestsInOrg. I am currently using RunLocalTests as so:
sfdx force:source:deploy -x output/package/package.xml --testlevel=RunLocalTests --checkonly
We work with some big orgs whose full tests take quite a while to complete. I would like to only RunSpecifiedTests for validation but am not sure how to set up my GitHub action to dynamically know which tests to pull in. I haven't seen anything in the CLI docs to determine this.
There really isn't a way to do this with 100% reliability. Any change in Apex code has the potential to impact any other Apex code. A wide variety of declarative metadata changes, including for example Validation Rules, Lookup Field Filters, Processes and Flows, Workflow Rules, and schema changes, can impact the execution of Apex code.
If you want to reduce your test and deployment runtime, some key strategies are:
Ensure your tests can run in parallel, which is typically orders of magnitude faster.
Remove any tests that are not providing meaningful validation of your application.
Modularize your application into packages, which can be meaningfully tested in isolation. Then, use integration tests (whether written in Apex or in some other tooling, such as Robot Framework) to validate the interaction between new package versions.
Only this last option gives you a real ability to establish a boundary around specific code behavior and test it in isolation, although you'll always still need integration tests as well.
At best, you can establish some sort of naming convention that maps between Apex classes and related test classes, but per the point above, using such a strategy to limit test runs has a very real possibility of causing you to miss bugs (i.e., false positives).
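If you do adopt such a naming convention, the validation step could pass the derived class list explicitly with RunSpecifiedTests; something along these lines (the test class names are placeholders):
sfdx force:source:deploy -x output/package/package.xml --testlevel=RunSpecifiedTests --runtests "AccountServiceTest,OrderServiceTest" --checkonly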

What is a good approach to writing automated tests that depend on data that needs to be set up before executing the test?

I am currently working on writing automated tests using Selenium Webdriver. We use MTM to run our test suites. I need some ideas as to what would be a good way to write these tests.
Currently, before running these tests, we perform a basic setup that sets the username and password required to log in to the site, sets the browser that the test should use, and a few other things.
Currently the data required for each test is set up manually and is already present in the database. The test simply performs a keyword search, finds the necessary data it needs, and then performs the assertions. What we would like to achieve is to find such data that is already present in the database and use it instead of creating it manually. That way I can run these tests across different environments (dev, qa, production).
The site I am testing is an e-commerce website. I mostly write tests for specific features that my team develops, and thus many of these tests require some specific data, e.g. setting up a store that has products with certain shipping rates, with particular offers, etc. I would like to find a way to automate or almost remove this manual process of setting up the data. That way I have the flexibility to run these tests across environments. Could you please direct me to some articles/suggestions that can help me achieve this?
If I am understanding your question correctly, you want to automate the test data setup.
You can achieve this in the following ways:
If possible, write a SQL script which inserts the desired data into the DB. You can then execute it while running your tests. If you are using the TestNG framework, there is already an annotation available for this, @BeforeTest. Execute that SQL script from a method annotated with it; it will run once before your tests and the data will be ready (see the sketch after these two options).
Prepare data in a spreadsheet. Create an algorithm to fill the data into the spreadsheet dynamically, and from there either read it directly and feed it to your test using @BeforeTest or, if required, insert the spreadsheet data into the DB as well.
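A minimal sketch of the first option, assuming a JDBC-accessible test database and a seed script at src/test/resources/seed-data.sql (the connection details and file path are hypothetical):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.testng.annotations.BeforeTest;

public class TestDataSetup {

    // Hypothetical connection details for the environment under test
    private static final String DB_URL = "jdbc:mysql://localhost:3306/testdb";
    private static final String DB_USER = "test";
    private static final String DB_PASSWORD = "test";

    @BeforeTest
    public void seedDatabase() throws Exception {
        // Runs once before the tests; loads the seed data so the tests can find it
        String script = Files.readString(Paths.get("src/test/resources/seed-data.sql"));
        try (Connection conn = DriverManager.getConnection(DB_URL, DB_USER, DB_PASSWORD);
             Statement stmt = conn.createStatement()) {
            for (String sql : script.split(";")) {
                if (!sql.isBlank()) {
                    stmt.execute(sql); // execute each statement of the seed script
                }
            }
        }
    }
}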

Will Gatling actually perform the operation or will it only check the URLs' response time?

I have a Gatling test for an application that answers a survey, and upon answering this survey the application identifies possible answers that may pose a risk and creates what we call riskareas. These riskareas are normally created in the background as soon as the survey answering is finished. My question is: I have a Gatling test with ten users who will answer the survey and log out, and I used the recorder to record the test; now, after these ten users are finished, I do not see any riskareas being created in the application. Am I missing something? Should the survey really be answered by the Gatling user (like it is in Selenium), or is it just the URLs that the Gatling test will touch?
I am new to Gatling; please help.
Gatling should be indistinguishable from a user in a web browser (or Selenium) as far as the server is concerned, so the end result should be exactly the same as if you'd gone through the process yourself. However, writing a Gatling script is a little more work than writing a Selenium script.
For performance reasons, Gatling operates at a lower level than Selenium. Gatling works with the actual data that is sent to and received from the server (i.e., the actual GETs and POSTs sent to the server), rather than with user-level interactions (such as clicking links and filling in forms).
The recorder will generally produce a relatively "dumb" script. It records the exact data that was sent to the server, and makes no attempt to account for things that may change from run to run. For example, the web application you are testing might have hidden form fields that contain session information, or the link addresses might contain a unique identifier or a session id.
This means that your script may not be doing what you think it's doing.
To debug the script, the first thing to do is to add checks on each of the requests, to validate that you are getting the response you expect (for example, check that when you submit page 1 of the survey, you are taken to page 2 - check for something that you'd only expect to find on page 2, like a specific question).
Once you know which requests are failing, look at what data was sent with the request, and try to figure out where it came from. You will probably find that there are session ids, view state, or similar, that must be extracted from the previous page.
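For example, a check plus a correlation step might look like the following in the Gatling Java DSL (3.7+); the base URL, paths, form fields, and token name are hypothetical, and the same idea applies in the Scala DSL:

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class SurveySimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http.baseUrl("https://survey.example.com"); // hypothetical base URL

    ScenarioBuilder scn = scenario("Answer survey")
        .exec(http("Open page 1")
            .get("/survey/page/1") // hypothetical path
            // Extract a dynamic value (e.g. a CSRF token) so it can be replayed on submit
            .check(css("input[name='csrf_token']", "value").saveAs("csrfToken")))
        .exec(http("Submit page 1")
            .post("/survey/page/1")
            .formParam("csrf_token", "#{csrfToken}") // correlate the extracted value
            .formParam("q1", "yes")
            // Only present on page 2, so this fails loudly if the submit didn't really work
            .check(substring("Question 2")));

    {
        setUp(scn.injectOpen(atOnceUsers(10))).protocols(httpProtocol);
    }
}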
It will help to enable request and response logging, as per the documentation.
To simplify testing of web apps, we wrote some helper functions to allow tests to be written in a more Selenium-like way. Once you understand what your application is doing, you may find that it simplifies scripting for you too. However, understanding why your current script doesn't work the way you expect should be your first step.

Protractor, how to test user accounts?

I am wondering what the best approach is regarding end-to-end testing. If I understand it correctly, the idea of end-to-end testing is to cover user stories and test them in an automated manner, for example using Protractor for an Angular.js application.
In my current project you can create user accounts and log in. How does this work? Would you use a specially prepared database to test logging into an account? Also, what about the registration process? How should that be tested? Are there any best practices regarding this?
I would say that ideally you have a known database backup or a script that cleans up your test DB. Then you can make a part of the testing process either restoring that DB or running the script.
The script might be simpler to implement. You can pull in whatever node modules you need to execute it as a part of running the test suite rather than it being an external step.
Alternatively, I am working on a system that has a complex user creation and syncing process, where the app has to interact with other external systems that cannot easily be reset or restored. Instead, we've taken the approach of exposing a REST service that can work with the other system to, for example, find a user with a certain set of characteristics. Then, as part of the spec, we make a call to this service and get a valid user for our test case.
In my opinion there are two approaches to this problem:
Make your tests point to a real database that is a copy of your production one. This will make tests check whether your database is properly accessed and data is returned as you expect. This is a possibility, but not the correct one to me, as e2e testing should check the client front-end experience and not how the app leans on the backend.
Make use of a mock backend. A mock backend is a kind of "fake server" that you can develop on the client side to return the information your application needs to work. I think this is the correct approach, as you focus on making your app work regardless of possible server issues.
You can see an example in this tutorial:
https://blog.cloudboost.io/building-your-first-tests-for-angular5-with-protractor-a48dfc225a75
To be more concrete, in this file:
https://github.com/shootermv/protractor-tutorial/blob/master/src/app/_helpers/fake-backend.ts

How to Test Web Code?

Does anyone have some good hints for writing test code for database-backed development, where there is a heavy dependency on state?
Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time).
Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set?
I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience.
Are there good articles out there that discuss this issue of web-based development in general?
I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data representing your test data, so each test no longer depends on the database having some particular existing state. This way, each test is self-contained and will not break during further database usage.
Update: A quick google search showed a DB unit extension for PHPUnit.
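For reference, a minimal sketch of the DBUnit approach in Java (the JDBC URL and dataset path are hypothetical; the PHPUnit extension mentioned above works along the same lines):

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class KnownStateLoader {

    // Reset the test database to the contents of the given flat-XML dataset
    public static void loadDataset(String datasetPath) throws Exception {
        try (Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "")) { // hypothetical test DB
            IDatabaseConnection dbUnit = new DatabaseConnection(jdbc);
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(new FileInputStream(datasetPath));
            // CLEAN_INSERT removes existing rows from the dataset's tables, then inserts the dataset rows
            DatabaseOperation.CLEAN_INSERT.execute(dbUnit, dataSet);
        }
    }
}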
If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc.
I guess it depends what database you're using, but Red Gate (www.red-gate.com) makes a tool called SQL Data Generator. It can be configured to fill your database with sensible-looking test data. You can also tell it to always use the same seed in its random number generator, so your 'random' data is the same every time.
You can then write your unit tests to make use of this reliable, repeatable data.
As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!
We use an in-memory database (HSQLDB: http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing DB, with the added bonus that they run lightning fast.
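A rough sketch of what that setup can look like, assuming Hibernate with an in-memory HSQLDB database and schema auto-creation (the entity classes and property values are placeholders):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class TestSessionFactory {

    // Build a SessionFactory backed by an in-memory HSQLDB database.
    // The schema is created fresh for each test run and dropped afterwards.
    public static SessionFactory create() {
        return new Configuration()
                .setProperty("hibernate.connection.driver_class", "org.hsqldb.jdbc.JDBCDriver")
                .setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:testdb")
                .setProperty("hibernate.connection.username", "sa")
                .setProperty("hibernate.connection.password", "")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect")
                .setProperty("hibernate.hbm2ddl.auto", "create-drop")
                // .addAnnotatedClass(YourEntity.class) // register your application's entities here
                .buildSessionFactory();
    }
}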
I have the exact same problem with my work and I find that the best idea is to have a PHP script to re-create the database and then a separate script where I throw crazy data at it to see if it breaks it.
I have not ever used any Unit testing or suchlike so cannot say if it works or not sorry.
If you can setup the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with.
Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database.
It's definitely worth either setting up a test version of the database or making your test scripts populate the database with known data as part of the tests.
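For instance, once the database has been seeded to a known state, a Selenium WebDriver check for one of the seeded records could look roughly like this (the URL, seeded record name, and locator are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class KnownRecordCheck {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // The seed data inserted a product called "Test Widget"; verify the UI shows it
            driver.get("https://shop.example.com/search?q=Test+Widget");
            boolean found = !driver.findElements(
                    By.xpath("//td[contains(text(), 'Test Widget')]")).isEmpty();
            if (!found) {
                throw new AssertionError("Seeded record 'Test Widget' not found in search results");
            }
        } finally {
            driver.quit();
        }
    }
}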
You could try http://selenium.openqa.org/; it is more for GUI testing than a data-layer testing application, but it does record your actions, which can then be played back to automate tests across different platforms.
Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):
I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
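A rough outline of that structure with JUnit 4; the DBTestCommon helper methods are hypothetical placeholders for the shared setup/teardown code described above:

import org.junit.After;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class UserDaoTest {

    @BeforeClass
    public static void putDbInKnownState() throws Exception {
        // Insert the baseline test data once, before any test in this class runs
        DBTestCommon.loadKnownState();
    }

    @After
    public void cleanUpTestData() throws Exception {
        // Remove anything an individual test added on top of the known state
        DBTestCommon.deleteDataAddedSinceKnownState();
    }

    @AfterClass
    public static void restoreOriginalState() throws Exception {
        // Remove the baseline test data, leaving the dev database as it was
        DBTestCommon.removeKnownState();
    }

    @Test
    public void findsUserByEmail() throws Exception {
        // ... exercise the DAO against the known state ...
    }
}

// Hypothetical shared helper class; the method bodies are placeholders
class DBTestCommon {
    static void loadKnownState() { /* insert baseline test data */ }
    static void deleteDataAddedSinceKnownState() { /* remove per-test additions */ }
    static void removeKnownState() { /* remove baseline test data */ }
}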
I would propose using three databases: one production database, one development database (filled with some meaningful data for each developer), and one testing database (with empty tables and maybe a few rows that are always needed).
A way to test database code is (see the sketch after these steps):
Insert a few rows (using SQL) to initialize state.
Run the function that you want to test.
Compare expected with actual results; here you could use your normal unit testing framework.
Clean up the rows that were changed (so the next run won't see the previous run).
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table.
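A minimal sketch of those four steps with JUnit and plain JDBC; the connection string, table, and query are hypothetical, and it assumes the dedicated testing database already contains an (empty) orders table:

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;

public class OrderQueryTest {

    // Hypothetical connection string for the dedicated testing database
    private static final String TEST_DB_URL = "jdbc:hsqldb:hsql://localhost/testdb";

    @Test
    public void countsOpenOrdersForCustomer() throws Exception {
        try (Connection conn = DriverManager.getConnection(TEST_DB_URL, "sa", "");
             Statement stmt = conn.createStatement()) {
            // 1. Insert a few rows to initialize state
            stmt.execute("INSERT INTO orders (id, customer_id, status) VALUES (1, 42, 'OPEN')");
            stmt.execute("INSERT INTO orders (id, customer_id, status) VALUES (2, 42, 'CLOSED')");
            try {
                // 2. Run the query/function under test
                ResultSet rs = stmt.executeQuery(
                        "SELECT COUNT(*) FROM orders WHERE customer_id = 42 AND status = 'OPEN'");
                rs.next();
                // 3. Compare expected with actual results
                assertEquals(1, rs.getInt(1));
            } finally {
                // 4. Clean up the rows that were changed so the next run starts clean
                stmt.execute("DELETE FROM orders");
            }
        }
    }
}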
In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use a CRUD API that is used in the product, to create data as similar to production as possible...

Resources