Should elements ever be made available outside of a page object? - selenium-webdriver

This is a question I cannot find a definitive source on, and I am hoping to get some answers based on users' previous experience, ideally with explanations as to why a certain approach DID NOT work out.
I am using webdriver for automation via Protractor and am having a debate on whether or not page elements should ever be made available outside of the page object itself. After researching, it appears that there are a few different approaches people take, and I cannot fully grasp the long-term implications of each.
I've seen the following different implementations of the page object model:
Locators are declared in the page object and exported
This is my least favorite approach, as it means elements are actually being identified in the test. This seems like a bad standard to set, as it could encourage automators to use new locators, not from the page object, directly in their tests.
Also, any locators that require dynamic information cannot be fully set when the page object is initialized and require further assembly in the test.
pageobject.js
export default class HomePage {
  constructor() {
    this.passwordField = '#password';
    this.usernameField = '#user';
  }
}
test.js
const homePage = new HomePage();
$(homePage.usernameField).sendKeys('admin');
$(homePage.passwordField).sendKeys('password');
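To make the dynamic-information drawback concrete, here is a hypothetical sketch (UserListPage and someUserId are made-up names, not from the question): because the page object can only store locator fragments, the test itself ends up assembling the final locator.
// pageobject.js (hypothetical)
export default class UserListPage {
  constructor() {
    this.userRowPrefix = '#user-row-'; // incomplete without runtime data
  }
}

// test.js — the test is forced to build the locator itself
const userListPage = new UserListPage();
$(userListPage.userRowPrefix + someUserId).click();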
Elements declared in page object and exported, locators not
pageobject.js
export default class HomePage {
  constructor() {
    this.passwordField = $('#password');
    this.usernameField = $('#user');
  }
}
test.js
const homePage = new HomePage();
homePage.usernameField.sendKeys('admin');
homePage.passwordField.sendKeys('password');
Elements declared in Page Object and only used directly within the page object, only methods exported
This is the approach I have used in the past, and we ended up with many, many functions. For instance, we had setUsername(), getCurrentUsername(), getUsernameAttribute(), verifyUsernameExists(), and the same for the password element and many other elements. Our page object became huge, so I don't feel like this is the best approach any longer.
One advantage, however, is that our tests look very clean and are very readable.
pageobject.js
export default class HomePage {
  constructor() {
    this.passwordField = $('#password');
    this.usernameField = $('#user');
  }
  setUsername(name) {
    this.usernameField.sendKeys(name);
  }
  setPassword(password) {
    this.passwordField.sendKeys(password);
  }
}
test.js
const homePage = new HomePage();
homePage.setUsername('admin');
homePage.setPassword('123');
I'm very interested to get some feedback on this, so hopefully you can take the time to read through.

I prefer and believe that the last approach is the best.
Keeping aside the fact that we are talking about automation, any good/great software has the following traits.
It is composed of individual modules/pieces/components
Each individual module/piece/component is cohesive in terms of the data/information it owns (selectors and webdriver API calls, in the case of automation) and exposes API/methods to interact with that data.
The last approach provides just that with the added benefit of test cleanliness that you pointed out.
However, most of the time, for whatever reason, we tend to ignore modularity and make the existing POs bloated. I have seen this and been part of it. So, in a way, POs getting bloated is not because of the approach but because of how conscious automators/testers/developers are about keeping POs modular, composed, and simple. This is true whether it is about POs or the application's functional code.
Bloated POs:
Specific to the problem of bloated POs, check if you can separate the common elements out of the POs. For example, the Header, Footer, Left Nav, Right Nav, etc. are common across pages. They can be separated out, and the POs can be composed of those individual pieces/sections.
Even within the main content, separate common content (if it's shared across two or more pages, if not all) into its own component API, if that is logical and it can be reused across pages.
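As a rough sketch of that composition (the section names here are illustrative, not from the question):
// headerSection.js
export class HeaderSection {
  openAccountMenu() {
    $('#account-menu').click();
  }
}

// pageobject.js — the page object is composed of shared sections
import { HeaderSection } from './headerSection';

export default class HomePage {
  constructor() {
    this.header = new HeaderSection();
  }
  // HomePage-specific methods go here
}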
It is normal for automation specs to perform extensive regression checks, for example that an element is a password field, that the length of a text box is such and such, etc. In such cases it doesn't make sense to add a method for every single spec use case (or expect). The good thing is that here, too, commonality takes center stage. Basically, provide API that is used across specs, not API that is literally used in only one spec.
Take, for example, checking that a password field is masked. It is unlikely that you would want to test that in multiple spec files. Here, either we can add a method for it in the LoginPO, like isPasswordMasked(), or we can make the password field accessible from the LoginPO and let the spec do the actual check for the password-type field. With the latter, we still let the LoginPO control the password field information that matters to other API (login(), logout(), etc.), i.e. only the PO knows how and where to get that password element, with the added advantage of pushing spec testing into the spec file. Both options are sketched below.
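A minimal sketch of the two options (method and selector names are illustrative):
export default class LoginPage {
  constructor() {
    this.passwordField = $('#password');
  }
  // Option 1: the PO answers the question itself
  isPasswordMasked() {
    return this.passwordField.getAttribute('type')
      .then(type => type === 'password');
  }
  // Option 2: expose the element and let the spec do the check
  getPasswordField() {
    return this.passwordField;
  }
}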
POs that expect/assert
At no point is it a good idea to make any testing (or expects) part of the PO API. Reason(s):
The PO and its API are reusable across suites and should be easy for anyone to look at and understand. Their primary responsibility is to provide a common API.
They should be as lean as possible (to prevent bloating up).
More importantly, browser automation is inherently slow. If we add test logic to the POs and their API methods, we will only make them slower.
FWIW, I have not come across any web page that mandates a bloated API.
Exposing Elements in POs:
It depends on the use case, I believe. Maybe an element that is used in only one spec is a fair case for exposing. That said, in general, the idea is that the specs should be readable for the tester/developer AND for those who look at them later. Whether that's done with a meaningful element variable name or a method name is largely a preference at best. On the other hand, if an element is meant to have a somewhat involved interaction (for example, a hover-open menu link), it's definitely a candidate for exposing through API alone.
Hope that adds some clarification!

The last way is the correct way to implement page objects. The idea behind the page object is that it hides the internals of the page and provides a clean API for a script to call to perform actions on the page. Locators and elements should not be exposed. Anything you need to do to the page should be exposed by a public method.
One way you can avoid a getter and setter for each field on the page is to consolidate methods. Think about actions that the user would take on the page. Rather than having .setUsername(), .setPassword(), and .clickLoginButton() methods, you should just have a login() method that takes the username and password as parameters and does all the work to log in for you.
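For example, a minimal sketch of such a consolidated method (the selectors here are illustrative):
// pageobject.js
export default class HomePage {
  login(username, password) {
    $('#user').sendKeys(username);
    $('#password').sendKeys(password);
    $('#login-button').click();
  }
}

// test.js
const homePage = new HomePage();
homePage.login('admin', '123');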
References
Martin Fowler is generally considered the inventor of the "page object" concept but others coined the name "page object". See his description of the page object.
Selenium's documentation on page objects.

Related

Why can't React use content to automatically generate keys? - reactjs

It's obvious that keys are essential for the diffing algorithm in React. But I was wondering: why can't React just automatically generate keys based on the content we iterate over?
I also assume that items can share some similarity, or can be identical in terms of content, but isn't it possible to generate keys once a user opens a page and somehow attach them to the items, so they stay stable?
Or maybe there were already attempts to solve this problem; if so, I would be grateful if you could share them with me.
Update
Thank you guys for your answers, I've learnt a lot!
Also a thing I had in mind: what do we developers do when there is no stable id (e.g. a user added an item which is not yet saved into the DB)? In those cases we just generate an id and attach it to the object, or to the element in an array, but we do not generate ids on the fly, so the id remains stable over time.
What if React just generated ids for all arrays which are involved in the rendering process, in other words, arrays which are directly used in the render function?
It could be done only once, during the commit phase, or whatever. Also I believe the id could be read-only, or something, so the user can't erase it.
P.P.S.
While I was writing the P.S. question above, I realized that autogenerating ids for arrays wouldn't work, since I had missed two things. React can only perform side effects during the commit phase, not the render phase. But that's not the main problem.
The main problem is when we use filtering or sorting on the back-end side. Since we receive a new array, a filtered one, we would need to regenerate ids for its elements, even though these are basically the same HTML elements whose content we could change to match the filtering order. That's the same issue Slava Knyazev mentioned.
React can't generate keys, because the entire point of keys is for you to help React track elements during its tree-diffing stage.
For example, let's say you have the following code, where you naively use content instead of identifiers for your keys:
const people = usePeople(); // [{ id: "1", name: "James" }, { id: "2", name: "William" }]
return <ul>{people.map(p => <li key={p.name}>{p.name}</li>)}</ul>;
The above code will function and behave as you would expect. But what happens if the name of a person changes? To understand it, let's look at the tree it generates:
ul
  li(James) James
  li(William) William
If James becomes Josh between renders, the new tree will look like this:
ul
  li(Josh) Josh
  li(William) William
React will compare the two results and conclude the following:
li(James) is to be removed
li(Josh) is to be added
However, if we set our key prop to p.id, then the old and new tree will look as follows, respectively:
ul
  li(1) James
  li(2) William

ul
  li(1) Josh
  li(2) William
And when React compares the two, it will identify that James has become Josh, and needs only the text adjusted.
In the first scenario, the <li> component is completely destroyed, and a completely new component takes its place. Both of these actions run a complete React lifecycle for the component. In the second, it remains untouched, and only the text inside changes.
While in this contrived scenario the performance penalty in the first case is minimal, it may be very significant with complex components.
I believe that unless your data is 100% certain to sort one way and never change, key={index} isn't a good key (which is what I assume you want your auto-generated keys to be). You'd ideally want something that is unique to each item, regardless of the order.
It's explained in more detail in the new beta React docs: https://beta.reactjs.org/learn/rendering-lists#where-to-get-your-key
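To sketch the failure mode (hypothetical data, reusing the people array from the earlier answer): sorting the list reassigns every index, so React matches old and new children incorrectly.
// key={index}: before sorting, 0 -> James and 1 -> William;
// after sorting, 0 -> William and 1 -> James. React sees two
// changed items instead of a reorder, so any state tied to a
// row (focus, input text, etc.) stays with the wrong row.
return <ul>{people.map((p, index) => <li key={index}>{p.name}</li>)}</ul>;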
I think what you are implying is that React could potentially choose to use something like a stable hash (say, sha1 on a serialised string or something) on the object to generate a unique key. I think this actually would work in many cases, and it even gave me pause for thought for a while! Your question is actually a really good one, and a deep one.
However, it wouldn't work in every case. I think it would work only on a static object which has no methods or anything attached. On a JS object, not all properties are enumerable. Hashing could only ever consider the enumerable properties of the object, but the dev may have non-enumerable yet still unique methods attached to these objects. In fact, even enumerable methods can't really be serialised reliably, and they could be what makes the object unique. Not to mention the complexities of reliably hashing something with prototypal inheritance involved.
I suspect there's also a performance aspect to this. Hashing is cheap, but not that cheap. Most cases can be keyed by just referencing a unique ID in the object, which is vastly cheaper. When enumerating a very large number of objects, these things matter, so it's better to defer to the developer. After all, if you really do want to hash the content, it's just one function call in userland, and this saves great confusion on the developer's side when it doesn't work. The principle of least astonishment comes to mind.
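And indeed, content-derived keys are already possible in userland, for example (a naive sketch; JSON.stringify only sees enumerable, serialisable properties, which is exactly the limitation described above):
// Naive userland content key: fine for plain data, wrong for
// objects whose identity lives in methods or non-enumerable props.
const contentKey = obj => JSON.stringify(obj);

return <ul>{people.map(p => <li key={contentKey(p)}>{p.name}</li>)}</ul>;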
There's also the aspect of how this would limit how expressive JSX can be, since JSX basically allows free-form JS. You would probably have to supply some low-level <React.Map> component primitives in order for React to supply this default key handling, which implies you would be a bit more restrained in what you can and can't do (complex functional chains).

Firebase: referencing docs with a string vs. referencing docs with a ref. Is there a difference?

This is more of a general question. I've been bouncing around multiple posts on StackOverflow and reading what I've found so far in the docs. I haven't found anything super concrete yet that answers the post's question, which is: what are the benefits of linking documents to one another through the ref type vs. the string type?
As of now, I'm converting all my string "refs" to properly typed refs. However, since I'm still relatively new to the platform, I'm scratching my head wondering if this is even necessary. I assume I'd be just as effective at finding related docs with the string as with a reference.
Also, for the sake of future readers as of me posting this, you can set a ref like so:
db.collection(...).add({
  ...
  reference: firebaseFirestore.doc(
    `lesson_translations/${translationID}`
  ),
  // reference is now typed as a 'ref'
})
I had found other posts on StackOverflow accessing it with .doc(...).ref, which doesn't seem to be a thing anymore.
It's mostly a matter of personal preference and perceived convenience. There is not really anything a reference type can do that a string and your own code could not also do.
Having a reference type mostly saves you the trouble of building a new reference object in your code, if that's what you would have done with a string document ID anyway. You can also use it in security rules for the same purpose when it comes time to use get() to fetch another document referenced by a field.
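For instance, the two representations are interchangeable with one extra line (a sketch using the web SDK; db and docData stand in for your Firestore instance and a fetched document's data, and the snippet is assumed to run inside an async function):
// Stored as a string path: build the reference yourself
const path = `lesson_translations/${translationID}`;
const refFromString = db.doc(path);

// Stored as a reference type: the field already is a
// DocumentReference, so you can fetch it directly
const snapshot = await docData.reference.get();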
Again, it's personal preference. Do whatever is most convenient for you.
When you have a document reference, you can fetch the document directly, while if you store a string path you first have to turn it into a reference before you can fetch the data. It is just a matter of a few milliseconds and a few lines of code, but imagine if you had a list of document references to store: if you stored them as strings, you would then have to convert and fetch all of them in a loop.
I know of one tangible benefit, but it's only relevant if you are using a library for data binding that supports loading nested documents to n depth. I personally am using vuefire, but I would assume something similar exists for the react ecosystem.
It allows you to e.g. bind a collection or document to a variable, and automatically sync all the data of referenced documents.
Example
You sync a collection('users') with a depth of two, and each user document contains an array of favourites:
{
  name: 'John',
  favourites: [ref1, ref2, ref3]
}
The library would immediately download the references and replace the original refs with their data:
{
  name: 'John',
  favourites: [{ title: 'foo' }, { title: 'bar' }, { title: 'baz' }]
}

React Simple Global Entity Cache instead of Flux/React/etc

I am writing a little "fun" Scala/Scala.js project.
On my server I have Entities which are referenced by uuid-s (inside Ref-s).
For the sake of "fun", I don't want to use flux/redux architecture but still use React on the client (with ScalaJS-React).
What I am trying to do instead is to have a simple cache, for example:
when a React UserDisplayComponent wants to display the Entity User with uuid=0003
then the render() method calls into the Cache (which is passed in as a prop)
let's assume that this is the first time the UserDisplayComponent asks for this particular User (with uuid=0003) and the Cache does not have it yet
then the Cache makes an AjaxCall to fetch the User from the server
when the AjaxCall returns, the Cache triggers a re-render
BUT! now when the component asks the Cache for the User, it gets the User Entity immediately and does not trigger an AjaxCall
The way I would like to implement this is the following:
I start a render()
"stuff" inside render() asks the Cache for all sorts of Entities
Cache returns either Loading or the Entity itself.
at the end of render() the Cache sends all the AjaxRequest-s to the server and waits for all of them to return
once all AjaxRequests have returned (let's assume that they do, for the sake of simplicity), the Cache triggers a "re-render()", and now all entities that were requested before are provided by the Cache right away
of course it can happen that the newly arrived Entity-s trigger the render() to fetch more Entity-s, for example if I load an Entity of the case class UserList(ul: List[Ref[User]]) type. But let's not worry about this now.
QUESTIONS:
1) Am I doing something really wrong if I am doing the state handling this way?
2) Is there an already existing solution for this?
I looked around but everything was FLUX/REDUX etc., along these lines, which I want to AVOID for the sake of:
"fun"
curiosity
exploration
playing around
I think this simple cache will be simpler for my use case, because I want to take the "Ref"-based "domain model" over to the client in a simple way: as if the client were on the server and the network were infinitely fast with zero latency (this is what the cache would simulate).
Consider what issues you need to address to build a rich dynamic web UI, and what libraries / layers typically handle those issues for you.
1. DOM Events (clicks etc.) need to trigger changes in State
This is relatively easy. DOM nodes expose callback-based listener API that is straightforward to adapt to any architecture.
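For example, the raw building block is just a plain DOM listener (updateState here is a placeholder for however your architecture applies a state change):
// Plain DOM: a callback-based listener feeding a state change
document.querySelector('#add-button').addEventListener('click', () => {
  // placeholder: however your architecture represents and applies state
  updateState(state => ({ count: state.count + 1 }));
});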
2. Changes in State need to trigger updates to DOM nodes
This is trickier because it needs to be done efficiently and in a maintainable manner. You don't want to re-render your whole component from scratch whenever its state changes, and you don't want to write tons of jquery-style spaghetti code to manually update the DOM as that would be too error prone even if efficient at runtime.
This problem is mainly why libraries like React exist; they abstract this away behind a virtual DOM. But you can also abstract it away without a virtual DOM, as my own Laminar library does.
Forgoing a library solution to this problem is only workable for simpler apps.
3. Components should be able to read / write Global State
This is the part that flux / redux solve. Specifically, these are issues #1 and #2 all over again, except as applied to global state as opposed to component state.
4. Caching
Caching is hard because cache needs to be invalidated at some point, on top of everything else above.
Flux / redux do not help with this at all. One of the libraries that does help is Relay, which works much like your proposed solution, except way more elaborate, and on top of React and GraphQL. Reading its documentation will help you with your problem. You can definitely implement a small subset of relay's functionality in plain Scala.js if you don't need the whole React / GraphQL baggage, but you need to know the prior art.
5. Serialization and type safety
This is the only issue on this list that relates to Scala.js as opposed to Javascript and SPAs in general.
Scala objects need to be serialized to travel over the network. Into JSON, protobufs, or whatever else, but you need a system for this that will not involve error-prone manual work. There are many Scala.js libraries that address this issue such as upickle, Autowire, endpoints, sloth, etc. Key words: "Scala JSON library", or "Scala type-safe RPC", depending on what kind of solution you want.
I hope these principles suffice as an answer. When you understand these issues, it should be obvious whether your solution will work for a given use case or not. As it is, you didn't describe how your solution addresses issues 2, 4, and 5. You can use some of the libraries I mentioned or implement your own solutions with similar ideas / algorithms.
On a minor technical note, consider implementing an async, Future-based API for your cache layer, so that it returns Future[Entity] instead of Loading | Entity.
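To illustrate that last suggestion in JavaScript terms (the question is Scala.js, but the shape is the same; Promise stands in for Future, and fetchEntity is a made-up AJAX call):
// Minimal async cache sketch: at most one request per uuid, ever
class EntityCache {
  constructor(fetchEntity) {
    this.fetchEntity = fetchEntity; // uuid => Promise of an entity
    this.entries = new Map();       // uuid => that same Promise, cached
  }
  get(uuid) {
    if (!this.entries.has(uuid)) {
      this.entries.set(uuid, this.fetchEntity(uuid));
    }
    return this.entries.get(uuid); // always a Promise, never Loading | Entity
  }
}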

Deprecation of TableRegistry::get()

I'd like to ask what your thoughts are on the deprecation of the TableRegistry::get() static call in CakePHP 3.6.
In my opinion it was not a good idea.
First of all, using LocatorAwareTrait is wrong on many levels. Most importantly, using traits in such a way can break the Single Responsibility and Separation of Concerns principles. In addition, some developers don't want to use traits at all because they think traits break object-oriented design. They prefer delegation.
I prefer to use delegation as well, in combination with a flyweight/singleton approach. I know that the delegation is encapsulated by the LocatorAwareTrait, but the problem is that it exposes the getTableLocator()/setTableLocator() methods, which can be used incorrectly.
In other words, if I have the following facade:
class Fruits {
  use \Cake\ORM\Locator\LocatorAwareTrait;

  public function getApples() { ... }
  public function getOranges() { ... }
  ...
}

$fruits = new Fruits();
I don't want to be able to call $fruits->getTableLocator()->get('table') outside of the scope of Fruits.
The other thing you need to consider when making such changes is the adaptation of the framework. Doing TableRegistry::getTableLocator()->get('table') every time I need to access the model is not the best thing if I have multiple modules in my application that move beyond a simple layered architecture.
Having a flyweight/singleton class like TableRegistry, with a static get() to access the desired model, just makes development more straightforward and life easier.
Ideally, I would just like to call TR::get('table'), although that breaks Cake's coding standards. (I've created that wrapper for myself anyway, to make my app bulletproof against any similar changes.)
What are your thoughts?

React Redux - Methods on store objects

One pattern I've seen recommended is to use selectors where possible to hide the shape of the store. That way, if you need to update the shape of the store, you should be able to get away with only updating your selectors and not other parts of the application.
However the same problem arises with the use of models within the state.
As one of many examples, let's assume I'm building a file system in Redux. I have a list of items, each of which can be either a directory or a file.
My store might have a fileList property which contains an array of file ids as well as a files object which maps fileId to a file object.
Let's say I have a list of files, and I want to render a different Item component (i.e. DirectoryItem or FileItem) depending on whether each entry is a file or a directory.
One way to achieve this is to do something like:
{files.map(file =>
  file.type === 'directory' ?
    <DirectoryItem key={file.id} {...file} /> :
    <FileItem key={file.id} {...file} />
)}
(or I could create a higher-order FileListItem component, for example, that does the check and renders either the DirectoryItem or FileItem)
However, this might not be ideal, because now my component needs to know the structure of the file object. I might want to add a different type of object (i.e. a shortcut file or a shared file) and might decide that a type property isn't how I want to represent my data anymore. As such, I'd need to go and update all my components, etc.
If I were doing this in Backbone, for example, I would've probably chosen to define an isDirectory() function on my model, however that doesn't seem to be the Redux way of doing things.
One possible solution I can think of is creating a FileUtils helper class which exports an isDirectory method and takes a file object as a parameter.
Another option would be creating an isDirectory selector which takes a file id as a prop, doing something like:
(files, props) => state.files[props.fileId].type == 'directory'
If I were to create the selector, I suppose I would need to create a higher-order component to call the selector from.
Just wondering if either approach is recommended in Redux? Am I missing another approach that could help solve this issue?
The functional way of doing things simply prescribes tearing the method off of the object and calling it a function.
The recommended way to call it is, instead of having a this, to simply pass a regular parameter. This is not a requirement; you can just use call or apply. That may seem really strange in JS, but this may change soon with the proposed :: bind operator.
Now, you can give this function anything you like to help it get its data.
In your example
(files, props) => state.files[props.fileId].type == 'directory'
You pass it state (naming mistake there: the parameter is named files) and props, and then use this info to come up with an answer. But you could instead choose to pass it a directory entry object. No need to go fetch it from state.
Note that this makes it very close to a method.
isDirectory = entry => entry.type === 'dir';
Now, because it's not getting state, it isn't selecting anything from state and is therefore not a selector.
However, it's plenty functional in nature. There really is no need or use to make life more complicated than that. Adding a higher order component or trying to shoehorn our problems into a more Redux-y way of doing things is needlessly complicating matters.
Selectors are recommended for selecting state so state usage is not tied to state shape. It's an abstraction layer, separating your mapStateToProps from your reducers.
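A minimal sketch of that layering, using the file-system shape from the question (getFiles is a made-up name):
// selectors.js — the only module that knows the state shape
export const getFiles = state =>
  state.fileList.map(id => state.files[id]);

// component — mapStateToProps never touches state.files directly
const mapStateToProps = state => ({ files: getFiles(state) });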
Selectors are now considered part of the Redux Way, but that wasn't always true. And so, at your discretion, being informed of why something is done the way it is, you can then choose to not use it.
And, at your discretion, you can choose to substitute the current trend with your own version. It is highly recommended to do this, of course after consideration of alternatives.
Often the best solution is the one you come up with yourself. Being the most informed about your problem domain, you are uniquely qualified to formulate a matching solution.
Those who have developed great ideas that all of us feed off of and get inspiration from will probably move on from their viewpoint when something better comes along.
There isn't (and probably shouldn't be) a sacred paradigm. Everything is eligible for reconsideration. Occam's razor dictates that the simplest answer is most likely the right one.
And Redux is very much about simplicity. So to do things the Redux Way is mostly about doing things the straightforward way.