I have multiple scenarios in which I'd like to test pretty much the same things.
I am testing a backoffice and I have widgets (like autocomplete search for instance). And I want to make sure that the widget is not broken given that:
I just browse an article page
I saved a part of the article, which reloaded the page
1 + 2, then I played with some other widgets which have possible side effects
...
My first thought was to add some reusable methods to my WidgetPO (testWidgetStillWorksX ~).
After reading up on the subject, there are pros and cons to both approaches, as discussed in http://martinfowler.com/bliki/PageObject.html
So how do you handle reusable tests, where do you put them, and what difficulties/advantages have you had with either method?
Your question is quite broad, but in general the best way to write tests using the PageObject model is to keep assertions out of the page object itself. Here's a short explanation -
Difficulties -
Assertions are always part of the test case/script, so it's better to put them in the scripts you write.
Assertions in a page object disturb the modularity and reusability of the code.
They make it difficult to write or extend general-purpose functions in the page object.
A third person reading a test script would have to jump into your page object every single time to check your assertions.
Advantages -
You can always add methods to your page object for repetitive tasks (like waiting for an element to load, getting the text of an element, etc...) that perform some operation, other than assertions, and return a value.
Call the page object's functions from your tests and use the returned values to perform assertions in the tests themselves.
Assertions in test scripts are easy to read and understand without needing to worry about the implementation of the page objects.
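As a minimal sketch (plain JavaScript; SearchWidgetPO and the fake DOM are invented for illustration, not from any real framework), the split looks like this:

```javascript
// Sketch: the page object only performs actions and returns values;
// the test owns the assertions. No expect() calls live in the page object.
class SearchWidgetPO {
  constructor(dom) {
    this.dom = dom;
  }
  search(term) {
    // performs the operation (typing, waiting...) -- simulated here
    this.dom.input = term;
    this.dom.results = this.dom.index.filter((x) => x.includes(term));
  }
  getResults() {
    // returns a value for the test to assert on
    return this.dom.results;
  }
}

// In the test file: call the page object, assert on the returned values.
const fakeDom = { index: ["apple", "apricot", "banana"], results: [] };
const widget = new SearchWidgetPO(fakeDom);
widget.search("ap");
console.log(widget.getResults()); // ["apple", "apricot"]
```

If the widget's markup changes, only SearchWidgetPO needs editing; the assertions in the specs stay untouched.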
Here's a good article on page objects. Hope this helps.
If I were building a simple function that divided one number by another, I would:
In all cases, call the function. The first test would probably be a happy path, like 10 / 2.
Another test would divide a smaller number by a larger one, resulting in a decimal value.
Some other tests would introduce negative numbers into the mix.
Finally, I would be sure to have one test that divided by zero, to see how that was handled.
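Sketched as plain JavaScript assertions (the divide function is hypothetical, and throwing on division by zero is just one possible design choice):

```javascript
// Hypothetical function under test.
function divide(a, b) {
  if (b === 0) throw new Error("division by zero");
  return a / b;
}

// Happy path
console.assert(divide(10, 2) === 5);
// Smaller number divided by a larger one: decimal result
console.assert(divide(2, 10) === 0.2);
// Negative numbers in the mix
console.assert(divide(-10, 2) === -5);
console.assert(divide(10, -2) === -5);
// Division by zero: verify how it is handled (here: it throws)
let threw = false;
try { divide(1, 0); } catch (e) { threw = true; }
console.assert(threw === true);
```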
I have a series of modals in React that I need to test. They have some commonalities:
They receive a set of props.
They act on those props to display a bunch of elements to the user, pre-populating these elements with some combination of the data in the props.
They use the fireEvent.### functions to simulate what the user would do within the modal.
Near the end of each test that involves a POST or a PATCH, a Submit button is pressed.
After pressing the Submit button, I have two expect functions at the end of each test, like in these examples:
expect(spy).toHaveBeenCalledTimes(1);
expect(spy).toHaveBeenCalledWith({
    type: 'SOME_ACTION',
    payload: {
        property1: 'Some value',
        property2: 'Some other value',
        property3: 53.2
    },
});
This approach entirely makes sense to me because, much like a simple math function, the modal has data coming in and data going out. But some colleagues are insisting that we shouldn't be testing with such expect(spy) functions. Instead they're saying that we should test the end result on the page. But there is no "page" because the modal is being tested in isolation, which I also believe to be a best practice.
Another suggestion was to add a success or failure string to a Mock Service Worker (MSW) response, like "You've successfully saved the new data". The reason I don't like this approach is because:
Such a string is just an artificial construct in the test, so what does it really prove?
If there are calculations involved in what is sent by the POST or PATCH, don't we want to know whether the correct data is being sent? For example, if the user was prompted to enter the number of hours they worked each day in the past week, wouldn't we want to compare what was entered into the input elements vs. what was sent? And maybe those hours are summed (for some reason) and included in another property - wouldn't we want to confirm that the correct sum was sent?
I do respect the opinions of my colleagues but have yet to hear anything from them that justifies dropping the approach I've employed and adopting one of their alternates. So I'm seeking insight from the community to better understand if this is a practice you would follow ... or not.
This is a case-by-case scenario with no single right answer, as performance and other variables may play a role in these decisions.
However, based on the information you've given, both you and your team are right. You want to validate that the correct data is being sent to the backend, yet you also want to validate that the user receives a visual response, such as "Successfully saved data".
That said, if I had to choose one it would be checking the data, as that has the most "code coverage". Checking for the "success" message only confirms that the submit button was pressed, whereas checking the data assures that most, if not all, data states were correctly set.
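What those expect(spy) assertions actually verify can be shown with a hand-rolled spy (plain JavaScript; the action shape and the hours-summing logic are invented for illustration):

```javascript
// Minimal hand-rolled spy: records every call so the test can assert
// on exactly what the component dispatched.
function makeSpy() {
  const calls = [];
  const spy = (...args) => { calls.push(args); };
  spy.calls = calls;
  return spy;
}

// Invented submit handler: sums the entered hours and dispatches an action,
// standing in for what the modal does when Submit is pressed.
function submitHours(dispatch, hoursPerDay) {
  const total = hoursPerDay.reduce((a, b) => a + b, 0);
  dispatch({ type: "SAVE_HOURS", payload: { hoursPerDay, total } });
}

const spy = makeSpy();
submitHours(spy, [8, 7.5, 8, 8, 6]);
console.log(spy.calls.length);              // 1
console.log(spy.calls[0][0].payload.total); // 37.5
```

Asserting on the recorded call is what catches a wrong sum or a mistyped property name, which a "Successfully saved" banner never would.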
I prefer to do both.
But there is no "page" because the modal is being tested in isolation, which I also believe to be a best practice.
I like doing this because for more complex components, TDD with unit tests keeps me sane. Sometimes I build out the unit test just to make sure everything is working, then delete it once the integration tests are up (because too many unit tests can be a maintenance burden).
Ultimately I prefer integration tests over unit tests because I've experienced situations wherein my unit tests were passing, but once the component was nested 3 levels deep, it started to break. Kent Dodds has an excellent article on it.
My favorite takeaway from the article:
It doesn't matter if your component <A /> renders component <B /> with props c and d if component <B /> actually breaks if prop e is not supplied. So while having some unit tests to verify these pieces work in isolation isn't a bad thing, it doesn't do you any good if you don't also verify that they work together properly. And you'll find that by testing that they work together properly, you often don't need to bother testing them in isolation.
I am new to Salesforce apex coding. My first class that I am developing has 10 methods and is some 800 lines.
I haven't added much exception handling, so the size will swell further.
I am wondering what the best practice for Apex code is... should I create 10 classes with 1 method each instead of 1 class with 10 methods?
Any help on this would be greatly appreciated.
Thanks
Argee
What do you use for coding? Try to move away from the Developer Console. VS Code has some decent plugins like Prettier or Apex PMD that should help you with formatting and with flagging methods that are too complex. ~80 lines/method is so-so. I'd worry more about passing long lists of parameters and about deeply nested code in functions than about their raw length.
There are general guidelines (from other languages; there's nothing special about Apex!) that ideally a function should fit on one screen so a programmer can see it whole without scrolling. Read this one, maybe it'll resonate with you: https://dzone.com/articles/rule-30-%E2%80%93-when-method-class-or
I wouldn't split it into separate files just for the sake of it, unless you can clearly define some "separation of concerns". Say, 1 trigger per object, and 1 trigger handler class (ideally derived from a base class). Chunkier bits go not in the handler but maybe in some "service"-style class that has public static methods and can operate whether called from a trigger, Visualforce, or a Lightning Web Component; maybe some one-off data fix would need them, and maybe in the future you'd need to expose part of it as a REST service. And a separate file for unit tests. (As blasphemous as it sounds, try not to write too many comments. As you're learning you'll need comments to remind yourself what built-in methods do, but naming your functions right can help a lot. And a well-written unit test is better at demonstrating the idea behind the code, sample usage, and expected errors than comments, which are often overlooked.)
Exception handling is an art. Sometimes it's good to just let the exception be thrown. If you have a method that creates an Account, Contact, and Opportunity, and say the Opportunity fails a validation rule - what should happen? Only you will know what's right. An exception means the whole thing gets rolled back (no "widow" Accounts), which sucks, but it's probably a "more stable" state for your application. If you naively try-catch it without Database.rollback(), how will you stop the user from creating duplicates with a second click? So maybe you don't need too much error handling ;)
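The all-or-nothing behaviour can be sketched in plain JavaScript (the fake db and record names are invented for illustration; in Apex you'd reach for Database.setSavepoint() and Database.rollback()):

```javascript
// Sketch of all-or-nothing semantics: if the Opportunity insert fails
// a validation rule, undo the Account and Contact too, so no "widow"
// records remain behind.
function createAll(db) {
  const created = [];
  try {
    for (const type of ["Account", "Contact", "Opportunity"]) {
      created.push(db.insert(type)); // may throw, e.g. on a validation rule
    }
    return { success: true, created };
  } catch (e) {
    created.forEach((id) => db.remove(id)); // manual "rollback"
    return { success: false, error: e.message };
  }
}

// Fake db whose Opportunity insert fails a validation rule.
const db = {
  rows: [],
  insert(type) {
    if (type === "Opportunity") throw new Error("validation rule failed");
    this.rows.push(type);
    return type;
  },
  remove(id) { this.rows = this.rows.filter((r) => r !== id); },
};

console.log(createAll(db)); // { success: false, error: 'validation rule failed' }
console.log(db.rows);       // [] -- nothing left behind
```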
Having a pretty complex Angular application with many pages (states) and conditional sections that create a lot of test scenarios, I need to perform e2e tests. I'm tired of nested selectors like 'div.SomeComponent > ul:nth-child(2) > ...' and so on, even using BEM naming (especially when the app is evolving and it's easy to break tests with a small change to the HTML structure).
The question is: would you opt for creating some dummy (empty) classes or data-* attributes just to simplify Protractor (or Groovy) selectors, at the expense of losing semantics? What's the alternative?
To avoid changing your element definitions every time developers change the CSS, and to avoid long CSS strings to select your elements, you can try referring to them by other means (id, className, model, etc.). See https://github.com/angular/protractor/blob/master/docs/locators.md for examples.
My personal favorite is to use element(by.css('[ng-click="executeSomeAction()"]')), as this will most likely not change with application updates. It works for other directives as well.
As for testing applications with a high volume of pages and elements, it's nice to define your elements in classes and then call them in the test spec as needed. This reduces the code in your specs and makes them easier to read. You may also want to create a separate file for the actions/functions your tests perform.
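A minimal sketch of that idea (plain JavaScript; the class, the data-test attribute convention, and the selector strings are all illustrative):

```javascript
// Sketch: centralise selectors in one page-object class so a markup
// change means editing one file, not every spec.
class ArticlePage {
  static selectors = {
    saveButton: '[data-test="article-save"]',       // dedicated test hook
    searchInput: '[ng-model="article.query"]',      // directive-based, survives CSS churn
  };
  // In a real Protractor spec you would wrap these, e.g.:
  //   get saveButton() { return element(by.css(ArticlePage.selectors.saveButton)); }
}

console.log(ArticlePage.selectors.saveButton); // '[data-test="article-save"]'
```

The data-* attributes cost a little semantic purity but are explicitly owned by the tests, so developers know not to remove them during refactors.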
Hope this helps answer your questions.
I'm new to testing, and I understand it's not a good practice to write unit tests that test third-party packages. But I'm curious if the following test would constitute testing the AngularJS framework itself:
describe("TestController", function () {
    var $scope;
    beforeEach(module("myApp")); // placeholder app module name
    beforeEach(inject(function ($rootScope, $controller) {
        $scope = $rootScope.$new();
        $controller("TestController", { $scope: $scope });
    }));
    it("should set up the $scope correctly", function () {
        expect($scope.foo).toBe("bar");
    });
});
Is it a good idea to test that your $scope got initialized correctly, or since that's AngularJS's job and not yours, is that something you should avoid writing a test for? Sorry if this has an obvious answer, but I want to make sure I'm teaching correct principles when I teach this to my students.
In your example, you are testing the behaviour of TestController, which is a class that you wrote, i.e. you are testing your own code.
You are not testing if Angular can set the state properly if you tell it to (which would indeed be a bit redundant, as it is already covered by Angular tests), you are asserting that your code does the things your application requires it to do (that this involves calling Angular functions is secondary).
So that's a good test to write.
Depending on your application, it may be possible to check the same behaviour in a more "high-level" fashion than asserting what exact value a given state variable has. For some applications that could be considered an implementation detail and not the most appropriate way to validate correct behaviour. You should not be testing internal state, but externally visible behavior. In this case, though, since you are testing a controller, and all a controller does is update the state, it's probably appropriate.
If you find that all you are doing in the controller is unconditionally set state, without any logic involved, then you may not really have a need to test the code at that level of granularity (maybe test bigger units that in combination do something "interesting"). The typical example here is testing setter/getter methods: Yes, there is a chance that you get these one-liners wrong, but they make really boring tests, so you might want to skip those (unless they can be automatically generated).
Now, if this test fails, it could be for three (not mutually exclusive) reasons:
1) Your code is broken (either some state setup is missing or you are not doing it right). Detecting this is the main purpose of unit testing.
2) Angular is broken (you set the state properly, but somehow Angular lost it). That is unlikely, but if it does happen, you now have a test case to attach to your bug report to Angular. Note that you did not set out to write a test case for Angular, but you got one "by accident".
3) your code as well as Angular are correct, but your test code is wrong. This happens frequently when you update the code that is being tested and test code also needs to be adjusted because its assumptions have been too narrow, or the expected behaviour has changed and the test is now simply outdated.
I am a bit confused
from wiki:
"This means that a true mock... performing tests on the data passed into the method calls as arguments."
I have never used unit testing or a mocking framework. I thought unit tests were for automated testing, so what are mock tests for?
What I want is an object replacing my database, which I might use later, but I still don't know which database or ORM tool I'll use.
If I write my program with mocks, could I easily replace them with POCOs later to make, for example, Entity Framework work?
Edit: I do not want to use unit testing, but using mocks as a full replacement for entities + a database would be nice.
Yes, I think you are a bit confused.
Mock frameworks are for creating "Mock" objects which basically fake part of the functionality of your real objects so you can pass them to methods during tests, without having to go to the trouble of creating the real object for testing.
Let's run through a quick example.
Say you have a 'Save()' method that takes a 'Doc' object and returns a 'boolean' success flag:
public bool Save(Doc docToSave) { ... }
Now if you want to write a unit test for this method, you are going to have to first create a document object, and populate it with appropriate data before you can test the 'Save()' method. This requires more work than you really want to do.
Instead, it is possible to use a Mocking framework to create a mock 'Doc' object for you.
Syntax varies between frameworks, but in pseudo-code you would write something like this:
CreateMock of type Doc
SetReturnValue for method Doc.data = "some test data"
The mocking framework will create a dummy mock object of type Doc that correctly returns "some test data" when its '.data' property is accessed.
You can then use this dummy object to test your save method:
public void MyTest()
{
    ...
    bool isSuccess = someClass.Save(dummyDoc);
    ...
}
The mocking framework ensures that when your 'Save()' method accesses the properties on the dummyDoc object, the correct data is returned, and the save can happen naturally.
This is a slightly contrived example, and in such a simple case it would probably be just as easy to create a real Doc object. But often, in a complex bit of software, it might be much harder to create the object because it has dependencies on other things, or requires other things to be created first. Mocking removes some of that extra overhead and allows you to test just the method you are trying to test, without worrying about the intricacies of the Doc class as well.
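The pseudo-code above, realised as a tiny hand-rolled mock in plain JavaScript (the Doc shape and save logic are invented for illustration; a real mocking framework generates the stand-in object for you):

```javascript
// What the mocking framework generates for you, done by hand:
// a stand-in Doc whose .data property returns canned test data.
function createMockDoc() {
  return { data: "some test data" };
}

// The method under test: "saves" the doc, succeeding when it has data.
function save(doc) {
  return typeof doc.data === "string" && doc.data.length > 0;
}

const dummyDoc = createMockDoc();
const isSuccess = save(dummyDoc);
console.log(isSuccess); // true -- without ever constructing a real Doc
```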
Mock tests are simply unit tests using mocked objects as opposed to real ones. Mocked objects are not really used as part of actual production code.
If you want something that will take the place of your database classes so you can change your mind later, you need to write interfaces or abstract classes to provide the methods you require to match your save/load semantics, then you can fill out several full implementations depending on what storage types you choose.
I think what you're looking for is the Repository Pattern. That link is for NHibernate, but the general pattern should work for Entity Framework as well. Searching for that, I found Implementing Repository Pattern With Entity Framework.
This abstracts the details of the actual O/RM behind an interface (or set of interfaces).
(I'm no expert on repositories, so please post better explanations/links if anyone has them.)
You could then use a mocking (isolation) framework or hand-code fakes/stubs during initial development prior to deciding on an O/RM.
Unit testing is where you'll realize the full benefits. You can then test classes that depend on repository interfaces by supplying mock or stub repositories. You won't need to set up an actual database for these tests, and they will execute very quickly. Tests pay for themselves over and over, and the quality of your code will increase.
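A sketch of the repository idea in plain JavaScript (names are invented; in C# the repository would typically be an interface with an Entity Framework implementation in production and a fake in tests):

```javascript
// Repository sketch: calling code depends on the repository's methods,
// not on any particular O/RM. An in-memory fake stands in during tests.
class InMemoryDocRepository {
  constructor() {
    this.store = new Map();
    this.nextId = 1;
  }
  save(doc) {
    const id = this.nextId++;
    this.store.set(id, doc);
    return id;
  }
  findById(id) {
    return this.store.get(id) ?? null;
  }
}

// Code under test only sees the repository's surface.
function archiveDoc(repo, doc) {
  return repo.save({ ...doc, archived: true });
}

const repo = new InMemoryDocRepository();
const id = archiveDoc(repo, { title: "notes" });
console.log(repo.findById(id)); // { title: 'notes', archived: true }
```

Swapping in a real database later means writing another class with the same save/findById surface; the tests against the fake keep running unchanged and fast.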