Determine Minimum Tests To Be Run For Salesforce Deploy

I have set up a GitHub action that validates code changes upon a pull request. I am using the Salesforce CLI to validate (on PR) or deploy (on main merge).
The documentation gives me several options for determining which tests run during this deploy: NoTestRun, RunSpecifiedTests, RunLocalTests, and RunAllTestsInOrg. I am currently using RunLocalTests, like so:
sfdx force:source:deploy -x output/package/package.xml --testlevel=RunLocalTests --checkonly
We work with some big orgs whose full test runs take quite a while to complete. I would like to use RunSpecifiedTests for validation, but I am not sure how to set up my GitHub action to dynamically determine which tests to pull in. I haven't seen anything in the CLI docs that addresses this.

There really isn't a way to do this with 100% reliability. Any change in Apex code has the potential to impact any other Apex code. A wide variety of declarative metadata changes, including for example Validation Rules, Lookup Field Filters, Processes and Flows, Workflow Rules, and schema changes, can impact the execution of Apex code.
If you want to reduce your test and deployment runtime, some key strategies are:
Ensure your tests can run in parallel, which is typically orders of magnitude faster.
Remove any tests that are not providing meaningful validation of your application.
Modularize your application into packages, which can be meaningfully tested in isolation. Then, use integration tests (whether written in Apex or in some other tooling, such as Robot Framework) to validate the interaction between new package versions.
It's only this last strategy that can give you a real ability to establish a boundary around specific code behavior and test it in isolation, although you'll always still need integration tests as well.
At best, you can establish a naming convention that maps Apex classes to their related test classes, but per the point above, using such a strategy to limit test runs carries a very real risk of missing bugs (i.e., false positives: validations that pass despite a defect).
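If you do adopt such a convention and accept that risk, the mechanics are straightforward: have your GitHub action map each changed class to its test class (a Foo-to-FooTest convention is assumed here, nothing the CLI knows about) and pass the result to the deploy command, which takes a comma-separated list of test classes alongside RunSpecifiedTests, along these lines:
sfdx force:source:deploy -x output/package/package.xml --testlevel=RunSpecifiedTests --runtests=FooTest,BarTest --checkonly
Check sfdx force:source:deploy --help for the exact flag name in your CLI version.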

Related

Running UI based selenium smoke tests against an ever-changing UI

We are currently running smoke tests using Selenium WebDriver & JUnit against a B2C product. Since we are using Selenium, the scripts are totally dependent on the UI. Given that the product comes out of a tech startup, the UI & workflows keep changing/evolving at an extremely high frequency.
The consequence: the smoke tests, which are supposed to validate the sanity of the application, keep failing. The team spends more time fixing the scripts than validating the build.
I am pretty sure most of the automation folks out there have faced similar issues, especially with rapid dev cycles. I am looking forward to seeing approaches taken by others in the industry who have faced similar problems.
Note: The frontend is developed in PHP
WebDriver works roughly like this: there is a start point, WebDriver interacts with it (by simulating a button press, for example) and then finds the next item to interact with. The next item might be on the next page or the same page, and it might be found in various ways: by id, as "the 3rd div with class='foo'", etc.
The tests are things like: does the page load with 200 OK, does the string "login" appear in a particular place, and so on.
The problem with a changing UI is that all the elements "move about". The ids change, and the 3rd div with class "foo" disappears. This means that the WebDriver interactions fail, and any tests looking for particular elements fail too.
One solution is to develop and test against a set of ids. These ids refer to fixed UI elements, and all element lookups in WebDriver should use them. The development team writing the PHP puts the ids in the correct places.
The set of ids can also serve as the basis for a sort of specification and can be used to explain the UI flow in different ways to different stakeholders.
I do not know of any specific product that manages ids across both tests and development code, but maintaining a "lexicon" like this to describe the UI items should not be a major task (a sketch follows below).
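A minimal sketch of that idea in Java with Selenium WebDriver; the ids and URL are made up for illustration, and the Ids interface is the single "lexicon" that both developers and testers maintain:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginSmokeTest {
    // the shared lexicon: the only place ids are spelled out
    interface Ids {
        String USERNAME = "login-username";
        String PASSWORD = "login-password";
        String SUBMIT = "login-submit";
    }

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.com/login"); // assumed URL
            driver.findElement(By.id(Ids.USERNAME)).sendKeys("smoke@example.com");
            driver.findElement(By.id(Ids.PASSWORD)).sendKeys("secret");
            driver.findElement(By.id(Ids.SUBMIT)).click();
        } finally {
            driver.quit();
        }
    }
}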
The more volatile the System under Test is, the more important it is to have a framework on top of Selenium that reduces the maintenance effort for each change.
For the most common changes in a System under Test there are several known patterns that can help you to reduce the maintenance efforts:
By using UIMaps to model the UI of the application, it is extremely easy to handle changed IDs, CSS classes, or similar changes.
PageObjects reduce the effort for larger UI changes (e.g. when an input field is changed from a TextBox to a Dropdown field).
Use Keyword-Driven Testing to model test cases without any knowledge of the underlying technological representation, i.e. a keyword encapsulates an action from the user's point of view; an example of a keyword could be loginWithValidUser() (see the sketch after this list).
Don't just use the UI for smoke testing if the UI / application / workflows change drastically and often. Most of the time it is also helpful to test certain functionality by calling web services directly, without any web UI.
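A sketch of a PageObject exposing such a keyword, assuming Java with Selenium WebDriver and invented locators; only this class needs to change when the login UI does:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // keyword: one user-level action that hides all locator details
    public void loginWithValidUser() {
        driver.findElement(By.id("username")).sendKeys("valid.user@example.com"); // assumed id
        driver.findElement(By.id("password")).sendKeys("correct-password");       // assumed id
        driver.findElement(By.id("submit")).click();                              // assumed id
    }
}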

Protractor, how to test user accounts?

I am wondering what the best approach is regarding end-to-end testing. If I understand it correctly, the idea of end-to-end testing is to cover user stories and test them in an automated manner, for example using Protractor for an Angular.js application.
In my current project you can create user accounts and log in. How does this work? Would you use a specially prepared database to test logging into an account? And what about the registration process, how should that be tested? Are there any best practices regarding this?
I would say that ideally you have a known database backup or a script that cleans up your test DB. Then you can make a part of the testing process either restoring that DB or running the script.
The script might be simpler to implement. You can pull in whatever node modules you need to execute it as a part of running the test suite rather than it being an external step.
Alternatively: I am working on a system that has a complex user creation and syncing process, so there are external systems the app has to interact with which cannot easily be reset or restored. Instead, we've taken the approach of exposing a REST service that can work with the other system to, for example, find a user with a certain set of characteristics. Then, as part of the spec, we call this service and get a valid user for our test case.
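Our version is wired into Protractor specs, but the shape of the idea is framework-agnostic; here is a minimal sketch in Java, where the endpoint, query parameter, and response format are all assumptions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TestUserClient {
    // ask the helper service for a user that matches the test's needs
    public static String findUser(String characteristics) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test-helpers.internal/users?match=" + characteristics)) // assumed endpoint
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // JSON describing a valid user for the test case
    }
}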
In my opinion there are two approaches to this problem:
Make your tests point to a real database that is a copy of your production one. This will check that your database is properly accessed and that data is returned as you expect. It is a possibility, but not the correct one in my opinion: e2e testing should check the client-side frontend experience, not how the app leans on the backend.
Make use of a mock backend. A mock backend is a kind of "fake server" that you can develop on the client side to return the information your application needs in order to work. I think this is the correct approach, as you focus on making your app work regardless of possible server-side issues.
You can see an example in this tutorial:
https://blog.cloudboost.io/building-your-first-tests-for-angular5-with-protractor-a48dfc225a75
To be more concrete, in this file:
https://github.com/shootermv/protractor-tutorial/blob/master/src/app/_helpers/fake-backend.ts

Unit tests in a database driven CodeIgniter web-application

CodeIgniter comes with a Unit Testing class built in, and I would very much like to use it. However, almost all functions I would want to test interact with the database by adding records, deleting records, etc. How would I, for example, write tests for the 'create user' function without actually creating users every time I run the test?
Upon some further research, it seems I need to use mock objects for external services like the database. I haven't been able to find much in the way of docs on how to do that besides this one forum thread:
http://codeigniter.com/forums/viewthread/106737
Is there any actual documentation?
If your database driver supports transactions, use them: do whatever needs to be tested, then roll back (on success or failure), as sketched below.
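CodeIgniter's own transaction API is PHP ($this->db->trans_begin() and $this->db->trans_rollback()), but the pattern is the same everywhere; a minimal sketch in Java/JUnit terms, with an assumed in-memory test database:

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CreateUserTest {
    private Connection conn;

    @Before
    public void openTransaction() throws Exception {
        conn = DriverManager.getConnection("jdbc:h2:mem:appdb;DB_CLOSE_DELAY=-1"); // assumed test DB
        conn.createStatement().executeUpdate("CREATE TABLE IF NOT EXISTS users (name VARCHAR(50))");
        conn.setAutoCommit(false); // everything below runs inside one transaction
    }

    @Test
    public void createUserInsertsARow() throws Exception {
        // stand-in for the real 'create user' function under test
        conn.createStatement().executeUpdate("INSERT INTO users VALUES ('alice')");
        try (ResultSet rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM users")) {
            rs.next();
            assertEquals(1, rs.getLong(1));
        }
    }

    @After
    public void rollBack() throws Exception {
        conn.rollback(); // discard the insert, whether the test passed or failed
        conn.close();
    }
}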
I've found that it's hard to run unit tests with controller actions. If you find a good way of doing that, let us know!

How to safely unit-test write operations in Symfony 2?

I want to create tests for all my CRUD's. But how do I set a separate database for them? Is that the best way to go?
This is another question, but it is related: should I run the tests on the production server too? Sometimes things go wrong in different environments, so I guess I should. But then I need the mentioned separate database, right?
Any advice?
Running any kind of tests on a production server is generally a bad idea (unless it's production hardware that hasn't been commissioned yet).
A Unit Test does not hit the database (or any other external system). So, in order to create a unit test you need to remove the dependency on the database.
What you are calling a 'unit test' is probably an integration test. Any test that utilises an external system (such as a database, file system etc.) is an integration test.
Two common solutions to your problem are:
1. At the start of your test, restore a database backup containing known data to a separate test database, then perform your tests against it.
2. Using a 'fixed' known test database, at the start of each test start a transaction, perform the test, and then roll back the transaction to leave the database in the same known state.
(No. 1 is often preferable, as the database in No. 2 can become 'polluted'.)
I agree with Mitch. I would add that you should decide whether you want to do an integration test or a unit test (or both, but not in the same test). If you do want a unit test, realize:
Your code has a "dependency" on an external database.
When unit testing, you'll have to find a way to "fake" the database. You want to test a "unit", which means a single thing, not two or more things (i.e. your CRUD code AND your connection to a database AND the database itself).
Typically you'll need to refactor your code, using something like dependency injection, so that when unit testing you can fake out the things your code depends on (sketched below).
Unit testing isn't just testing your code. That's actually the easy part. The harder part is making your code testable.
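A bare-bones sketch of that refactoring in Java, with all names invented for illustration: the code under test depends on an interface, and the unit test injects an in-memory fake instead of a real database:

import java.util.HashSet;
import java.util.Set;

interface UserRepository {              // the seam where the database is injected
    boolean exists(String username);
    void save(String username);
}

class UserService {
    private final UserRepository repo;

    UserService(UserRepository repo) {  // dependency injection via the constructor
        this.repo = repo;
    }

    boolean createUser(String username) {
        if (repo.exists(username)) {
            return false;               // the behavior the unit test exercises
        }
        repo.save(username);
        return true;
    }
}

// a fake the unit test can inject; no database is touched
class InMemoryUserRepository implements UserRepository {
    private final Set<String> users = new HashSet<>();
    public boolean exists(String username) { return users.contains(username); }
    public void save(String username) { users.add(username); }
}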
I recommend going to http://artofunittesting.com/ and watching the free videos on the right side under the heading "Unit Testing Videos". Forget the fact that he's working in .NET as it's the principles that are important.
Then watch the GoogleTechTalks by Misko Hevery where he explains why you want to do dependency injection.
Design Tech Talk Series Presents: OO Design for Testability
The Clean Code Talks -- Unit Testing
(He has more too. There is a series of six GoogleTechTalks.)
I had a similar problem today and I think I've found a good solution.
Make a copy of your database (creating a new empty database works as well).
Edit your config_test.yml to change the database name.
A sample of my test configuration (it might differ depending on whether you have multiple databases, etc.):
doctrine:
    dbal:
        dbname: test_db
Update your database to reflect the entities in your application by calling php app/console doctrine:schema:update --force --env=test (required if you have just created a new database, and again every time you change your application model).
Your application should now use the test database during unit tests. NB! Be sure to make a backup of your database before messing around with the live database.
However, as clearly mentioned before, these are not unit tests anymore, but integration tests.

How to Test Web Code?

Does anyone have some good hints for writing test code for database-backed development, where there is a heavy dependency on state?
Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time).
Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set?
I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience.
Are there good articles out there that discuss this issue of web-based development in general?
I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data representing your test data, so that each test no longer depends on the database having some pre-existing state. This way, each test is self-contained and will not break with further database use.
Update: A quick google search showed a DB unit extension for PHPUnit.
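For the Java original, a minimal DBUnit sketch looks roughly like this; the JDBC URL, driver, and users.xml dataset are assumptions, and the PHPUnit extension linked above follows the same seed-then-test shape:

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.junit.Before;
import org.junit.Test;

public class UserDaoDbTest {
    private IDatabaseTester tester;

    @Before
    public void seedKnownData() throws Exception {
        tester = new JdbcDatabaseTester("org.h2.Driver", "jdbc:h2:mem:testdb", "sa", ""); // assumed test DB
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/users.xml")); // assumed dataset file
        tester.setDataSet(dataSet); // CLEAN_INSERT: wipe the tables, then load the known rows
        tester.onSetup();
    }

    @Test
    public void findsSeededUser() throws Exception {
        // run the code under test and assert against the rows defined in users.xml
    }
}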
If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc.
I guess it depends on what database you're using, but Red Gate (www.red-gate.com) makes a tool called SQL Data Generator. It can be configured to fill your database with sensible-looking test data. You can also tell it to always use the same seed for its random number generator, so your 'random' data is the same every time.
You can then write your unit tests to make use of this reliable, repeatable data.
As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!
We use an in-memory database (HSQLDB: http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing db, with the added bonus that they run lightning fast.
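A sketch of the wiring, assuming standard Hibernate configuration properties; the entity mappings are omitted:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class TestSessionFactory {
    public static SessionFactory create() {
        return new Configuration()
                .setProperty("hibernate.connection.driver_class", "org.hsqldb.jdbcDriver")
                .setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:testdb") // in-memory, gone when the JVM exits
                .setProperty("hibernate.connection.username", "sa")
                .setProperty("hibernate.connection.password", "")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect")
                .setProperty("hibernate.hbm2ddl.auto", "create-drop") // build the schema from mappings, drop it afterwards
                // .addAnnotatedClass(User.class) // register your entities here
                .buildSessionFactory();
    }
}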
I have the exact same problem in my work, and I find the best idea is to have a PHP script that re-creates the database, and then a separate script where I throw crazy data at it to see if it breaks anything.
I have never used unit testing or the like, so I can't say whether it works, sorry.
If you can setup the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with.
Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database.
It's definitely worth setting up either a test version of the database, or making your test scripts populate the database with known data as part of the tests.
You could try http://selenium.openqa.org/. It is more for GUI testing than data-layer testing, but it does record your actions, which can then be played back to automate tests across different platforms.
Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):
I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
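The skeleton of that strategy looks something like this (JUnit 4; the fixture details are whatever your DAO tests need):

import java.util.ArrayList;
import java.util.List;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public abstract class DBTestCommon {

    // ids of rows an individual test adds, so they can be removed afterwards
    protected static final List<Integer> trackedIds = new ArrayList<>();

    @BeforeClass
    public static void putDatabaseInKnownState() throws Exception {
        // insert all the shared test data the DAO tests expect
    }

    @After
    public void cleanUpPerTestData() throws Exception {
        // delete the rows recorded in trackedIds during the test
        trackedIds.clear();
    }

    @AfterClass
    public static void removeKnownState() throws Exception {
        // remove the shared test data, leaving the dev database as before
    }
}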
I would propose using three databases: one production database, one development database (filled with some meaningful data for each developer), and one testing database (with empty tables and maybe a few rows that are always needed).
A way to test database code is:
Insert a few rows (using SQL) to initialize state
Run the function that you want to test
Compare expected with actual results. Here you could use your normal unit testing framework
Clean up the rows that were changed (so the next run won't see the previous run)
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table, as in the sketch below.
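Put together, the four steps above might look like this in a JUnit test against the (empty-tables) testing database; the table, columns, and query are invented for illustration:

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.Test;

public class CustomerQueryTest {

    // stand-in for the production function under test
    static String findNameByEmail(Connection c, String email) throws Exception {
        try (PreparedStatement ps = c.prepareStatement("SELECT name FROM customer WHERE email = ?")) {
            ps.setString(1, email);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    @Test
    public void returnsCustomerByEmail() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:testdb"); // assumed testing DB
             Statement s = c.createStatement()) {
            s.executeUpdate("CREATE TABLE customer (email VARCHAR(100), name VARCHAR(100))");
            // 1. insert a few rows to initialize state
            s.executeUpdate("INSERT INTO customer VALUES ('jane@example.com', 'Jane')");
            // 2. run the function under test, 3. compare expected with actual
            assertEquals("Jane", findNameByEmail(c, "jane@example.com"));
            // 4. clean up the changed rows
            s.executeUpdate("DELETE FROM customer");
        }
    }
}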
In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use the same CRUD API that is used in the product, to create data as similar to production data as possible...
