Does Geb cache Page Objects - selenium-webdriver

I'm trying to run a test where I have a list of ids to search in a web application and check if a property is set for each one. I modeled the search result as a Page, but every time I load a new search result the first result seems to be cached and I eventually get a stale element reference exception. How does one instantiate a new SearchResult page object for each searched id?

There is no concept of caching for pages in Geb. Page content elements can be cached, but they are not by default. You might still be getting a StaleElementReferenceException even if nothing is cached, for example when interacting with DOM elements that are removed while you are doing so, and I suspect that this is what's happening in your case.
If your page is dynamic, i.e. the DOM is modified asynchronously after an action is performed on the page, then you should wait for the page to stabilise before interacting with its content. There is no need to use WebDriver APIs directly to achieve this.
I would be able to give you better guidance if you shared the code of your page class and the stack trace you're getting.

Related

How to invalidate cache and serve the latest content to the very first request when using ISR with Next JS?

From reading this page in the Next JS docs, this is my current understanding of ISR. When content is updated in an external CMS (or another API), the following takes place behind the scenes in Next JS:
On the first request, Next JS will serve the cached page, which is now outdated relative to the CMS content; however, this request triggers a rebuild of that page (provided the revalidation time has passed).
On subsequent requests, Next JS will serve the newly generated page, which is now up to date.
It will serve the new cached page until the revalidation time has passed, at which point Next JS repeats step 1.
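The cycle described above is driven by the revalidate field returned from getStaticProps. A minimal sketch, where fetchPostFromCMS and the 60-second window are assumptions:

```javascript
// Minimal ISR setup: the page is statically generated, and Next JS may
// re-generate it in the background at most once every 60 seconds.
// `fetchPostFromCMS` is a hypothetical CMS call.
async function getStaticProps() {
  const post = await fetchPostFromCMS();
  return {
    props: { post },  // passed to the page component
    revalidate: 60,   // seconds before a rebuild may be triggered
  };
}
```

In a real page this function would be exported from the page module so Next JS can call it.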
A potential issue I see is that whoever visits the page first after the content on the CMS has changed won't see the new content. The second person to visit the same page will be the first to see it.
I was wondering if there is any way to make sure that the first visitor also sees the latest content. For example, I can think of two possible workarounds, but I'm wondering if there are better solutions.
When content is updated for a specific page on the CMS side, manually visit the new or updated page to trigger a rebuild (or automate this, so that when the CMS saves a change, some serverless function visits the page to trigger the rebuild). This way, when an actual website visitor loads the page, they will see the latest content. I found a similar discussion here, but is there a built-in way to do it without serverless?: #11698 Comment
Somehow have Next JS inject new props into the page when it finishes rebuilding. This way the first visitor will see the old content for a very short time, and then the content (page props) is swapped.
(Maybe a feature to add in the future:) have a helper function from Next to invalidate the cached page and force pre-rendering again, on a per-page basis.
Are there recommended solutions to this? Are either of the two workarounds feasible? Thank you in advance for reading my very long question.
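For what it's worth, Next JS later added on-demand revalidation (res.revalidate, available since Next.js 12.2), which is essentially the first workaround without the fake visit: a CMS webhook calls an API route that rebuilds the page immediately. A sketch, where the secret check, the route, and the /posts/<slug> path are assumptions:

```javascript
// Hypothetical CMS webhook handler (e.g. pages/api/revalidate.js) using
// Next JS on-demand revalidation. The shared secret and the page path
// are made up for illustration.
async function handler(req, res) {
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  try {
    // Rebuild the page now, instead of waiting for the first visitor.
    await res.revalidate(`/posts/${req.query.slug}`);
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).send('Error revalidating');
  }
}
```

In a real project this function would be the default export of the API route file.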

How to notify crawler that ajax powered page is completely loaded and ready to take snapshot

There are Angular/REST-powered web pages, but no navigation module is used (no hash-based (#!) navigation).
Despite the deprecation of Google's AJAX crawling scheme (webmasters-ajax-crawling), it seems the crawler only sees JS-generated content that does not rely on AJAX (REST) call responses, and does not see page content that depends on those responses.
It feels like Google does not give the page enough time to render, since it has no way to tell whether all the expected JS logic has finished completely.
Q: is there a way to tell Google (and an abstract browser in general) that the page has completely rendered and there are no pending AJAX calls?
Maybe someone can suggest how to prevent Angular from rendering the page until all AJAX calls have completed (perhaps something like a customized ng-cloak)?
Answering my own question.
It was asked because it seemed that Google failed to index text on pages rendered by Angular after AJAX calls were performed.
Now I see that the Google crawler actually indexes everything, so there is no need to notify the crawler that the page has rendered; it can recognize this by itself.
But I think Google indexes pages in two phases: 1. it quickly indexes the HTML of the page with no JS rendering involved (just after the main document is fetched); 2. it performs the heavy operation of rendering the page with JS and indexes all the rendered content. The second step may happen a couple of days after the first, which is why you may see no indexed content for a while.

Load view with populated data in AngularJS

The problem:
AngularJS forces you to split the view from the model, and that's good. But we're experiencing the following downside because of it: when a user loads our page, the browser makes not one but two requests:
First it loads the view template (the first request)
Then Angular services load data from the server (the second request)
As a result, the page loads a bit more slowly.
The question:
Is it possible to load the view already populated with data on the first page load, and only fetch data from the server afterwards, when something on the page must change?
I tried to find something regarding it, but didn't find anything.
You will have many more requests than that, since the JavaScript and CSS libraries have to be loaded as well.
You can store your initial data in a service or on the $rootScope and only update it when you need to. What exactly is your problem here?
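One way to apply the "store your initial data" idea without a second request is to have the server render the payload straight into the page, then read it on startup. A sketch; the global name __INITIAL_DATA__ is an assumption:

```javascript
// Read a payload the server embedded in the page, e.g. via
// <script>window.__INITIAL_DATA__ = {...};</script> in the server template.
function readInitialData(win) {
  return win.__INITIAL_DATA__ || null;
}

// In Angular you could then seed a service with it, e.g.:
// app.value('InitialData', readInitialData(window));
```

The view renders immediately from the embedded data, and subsequent server calls only happen when something on the page changes.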
The modular approach would be to break all the different components of the page that consume data into separate data requests, so that the interface can load in progressively as different data requests complete.
Additionally, you can load initial data, but then make some predictions on what the user will do next and lazy load additional data in the background.
Finally, you could store the previously loaded data in local storage (there are lots of modules out there you can utilize) so that it's pulled instantly on the user's next visit. You would also want some sort of timestamp comparison between the data in storage and on the server to see whether it has been updated.
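The local-storage approach with a timestamp check could look like this; the cache key, the 5-minute freshness window, and fetchFromServer are assumptions:

```javascript
const CACHE_KEY = 'dashboardData';
const MAX_AGE_MS = 5 * 60 * 1000; // treat cached data as fresh for 5 minutes

// Serve from storage while the snapshot is fresh enough; otherwise hit the
// server and refresh the snapshot. `storage` is anything with the
// localStorage getItem/setItem interface.
function loadData(fetchFromServer, storage, now = Date.now()) {
  const raw = storage.getItem(CACHE_KEY);
  if (raw) {
    const cached = JSON.parse(raw);
    if (now - cached.savedAt < MAX_AGE_MS) {
      return Promise.resolve(cached.data); // instant, no request
    }
  }
  return fetchFromServer().then((data) => {
    storage.setItem(CACHE_KEY, JSON.stringify({ savedAt: now, data }));
    return data;
  });
}
```

Passing the storage object in makes the function testable and lets you swap localStorage for sessionStorage or an in-memory stub.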

How to update the lists on a page in angular.js after adding an item to the database (couchbase) from another page?

We have a problem updating the data lists on the dashboard page after creating a new list on the create page. It is already saved in the database, but the view is not updating. It updates once I click the refresh button in the browser, but this is a one-page web app. How can I update the lists on my dashboard page after adding data from the previous page, without refreshing? I used couchbase for the database.
The problem here is that you are loading the content from your persistent storage, after which it sits in Angular as-is; to pick up any updates you have to re-fetch it from your persistent storage. Unfortunately, it is not as simple as putting a $watch on your back-end.
You have some options here: if you are making your change from within the Angular part of the site, then you can just call a function when you create a new page that re-fires your db-access code and refreshes the model.
If you are making changes from outside of angular, then you will need to either use polling to refresh your angular model periodically, or go for a fancier web-socket option, as explained here.
After reading this question and your other question, my guess would be that you are getting the old view from couchbase.
By default couchbase sets the stale parameter to update_after, which means you only get the updated view on the second access; see the couchbase documentation for more information.
To fix this, it should be sufficient to set stale to false.
If you access couchbase via a REST call, adding ?stale=false to the call should do the trick; otherwise, add the parameter according to your SDK's documentation.
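For the REST case, a small helper that builds the view URL with stale=false; the bucket, design-document, and view names are made up:

```javascript
// Build a couchbase view REST URL with stale=false, so the index is
// brought up to date before results are returned. 8092 is the default
// couchbase view port.
function viewUrl(host, bucket, ddoc, view, params = {}) {
  const qs = new URLSearchParams({ stale: 'false', ...params });
  return `http://${host}:8092/${bucket}/_design/${ddoc}/_view/${view}?${qs}`;
}

// e.g. fetch(viewUrl('localhost', 'lists', 'dashboard', 'by_user'))
```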

Backbone.js - persisting data after back button

I'm learning backbone.js to add some interactivity to an existing web app. On initial page load I'm bootstrapping some initial data into the page, using the reset() method suggested in the docs. This works great. I can also create new model instances, and the view handles them just as I would expect; they show up alongside the initial data, and everything is fine. (The new data also hits the database just fine.)
However, if I click on a link to a different page (not using backbone routes or anything, just a normal link) and then hit my browser's back button, the new models I created previously are gone; only the old initial data shows up. I've done some debugging and found that the reset() method runs each time the page is loaded, so presumably that's what's nuking the additional data I'd added. (However, if I actually refresh the page, the new data will be displayed again, since now it's getting bootstrapped in too.)
I know I could use fetch() to get the newly-added data (along with the older data), but I'm trying to avoid that, both because (a) that's an extra request every time the page is loaded, and (b) because the docs say that's not ideal.
So, what should I do so that using the back button doesn't make stuff (temporarily) vanish?
The page loads models, views, collections and routers, and sets up the collection through reset() (bootstrapping). The user clicks a link to navigate to another page, then clicks the back button. Something interesting happens now:
the router matches the URL before the page is loaded (when clicking the back button). During this match you must check whether the collection already holds fresh data, and if not, call collection.fetch().
This way you always get the latest data and hit the server only when necessary (either your collection is empty or its data is no longer fresh).
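A sketch of that check; the staleness window and the shape of the collection object are assumptions:

```javascript
// Re-fetch the collection on back-navigation only when the bootstrapped
// snapshot might be stale; otherwise keep the models already on the page.
function syncIfStale(collection, lastSyncedAt, maxAgeMs, now = Date.now()) {
  if (collection.length === 0 || now - lastSyncedAt > maxAgeMs) {
    collection.fetch({ reset: true }); // single request to the server
    return true;
  }
  return false;
}
```

Calling this from the route handler keeps the bootstrapped data on normal loads while repairing the missing models after a back-button navigation.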
