Next JS messing up responses for concurrent requests - reactjs

I have a NextJs app that uses SSR for all pages (so no static pages to worry about caching). Inside App.getInitialProps I read the user-agent to make an educated guess about whether the request comes from a mobile device or a desktop, so I can render the correct layout on the server. But I came across an issue where sometimes the mobile layout got rendered on the desktop and vice-versa.
After extensive debugging, I came to this conclusion: if two user agents make a request to the same URL, there is a chance Next will confuse the requests and serve the wrong responses. For example, if a Chrome user and a Safari user simultaneously request the same URL, there is a chance the Chrome user will get served the response meant for the Safari user. In that example it's a non-issue, but if Next mixes up a mobile and a desktop request, the server and client get out of sync and the hydration phase fails.
Has anyone come across an issue like that? I guess I am either doing something wrong inside my _app.js or in my next.config.js, but I can't find anything about an issue like this.
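A common cause of this exact symptom (though not necessarily the asker's bug) is keeping per-request state in module scope, which is shared by all concurrent requests on the server. A minimal sketch of the race, with hypothetical function names:

```javascript
// ANTI-PATTERN: one module-level binding is shared by every request.
let sharedIsMobile = false;

async function renderShared(userAgent) {
  sharedIsMobile = /Mobi|Android/i.test(userAgent);
  await new Promise((resolve) => setTimeout(resolve, 10)); // e.g. a data fetch
  return sharedIsMobile; // may have been overwritten by a concurrent request
}

// Safer: derive the flag per request and pass it down through props instead.
async function renderPerRequest(userAgent) {
  const isMobile = /Mobi|Android/i.test(userAgent);
  await new Promise((resolve) => setTimeout(resolve, 10));
  return isMobile;
}
```

With two overlapping renders, renderShared can hand the mobile request the desktop answer (exactly the mixed-up-layout symptom above), while renderPerRequest never does.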

Related

calling react page from postman is possible or not?

We created a react js application.
Problem: Not able to hit the React URL from Postman to run a component's function.
local URL: http://localhost:3000/rules/routine
Note: Above URL can be reached without login.
When we call the URL from a browser it works, however when we hit it from Postman it always returns the public/index.html page and not the expected response.
So it is not calling the proper URL http://localhost:3000/rules/routine.
Please find attached screenshots on below links
browser hit: https://prnt.sc/73gDWh4PiHgu
Postman hit: https://prnt.sc/fhVL78yaiATP
It's technically possible, and it seems to be working, but I suspect your expectations of what Postman will do, or is capable of, are a little skewed.
Keep in mind that all React apps are effectively single-page apps, even when they use a routing/navigation package like react-router. react-router is only manipulating the URL in the browser's address bar; the app is still served from a single location on the server, i.e. public/index.html.
The servers hosting the app are configured to route all page requests, i.e. "http://localhost:3000/rules/routine" to the root index file, i.e. "http://localhost:3000/index.html" so the React app is loaded and can handle the "/rules/routine" internally.
So from what I see here, the response from Postman is absolutely correct and what I'd expect to see. It made a request to the development server for a nested path "/rules/routine" and the server responded with the root index.html file. When the browser makes this request to the development server and gets an HTML file back, it loads and renders it, and in this case, that's your React app.
Postman isn't a browser so it's not going to load the HTML response and do anything with it.

Vercel/Next.js (seemingly) randomly returns 404s with {"notFound": true}

Intro
Apologies for not being able to provide a reproducible example. Our team is not able to reproduce the bug reliably. We have been investigating the bug for almost a week now, but can't seem to make any headway. We just rolled out our next.js based headless Shopify store (i.e. we use next.js for the frontend and Shopify for everything from the checkout onwards).
This bug is the strangest thing I have seen with next.js so far and any pointers towards solving the problem are more than appreciated.
Note:
You can navigate to www.everdrop.ch/it and open the console to see some broken links. However, since this is production, we obviously try to fix them as soon as possible.
Problem:
Almost every time we deploy a new version, we see some seemingly random 404s in the console when Next is trying to prefetch links.
The 404s are always of the form https://domain/_next/data/<DEPLOYMENT>/<PATH>/slug.json, where PATH is sometimes e.g. category-pages and sometimes empty.
Observation 1
When clicking one of the broken links in the console (the .json file), I get a 404.
Navigating to the broken pages on client side will also give a 404
However, requesting the same URL with curl -I -L gives a 200
Observation 2
When checking the Output data in Vercel, everything works like a charm.
Note that the URL is different though. It is the same deployment but at a different URL.
Observation 3
The affected Links are seemingly random. However, some seem to be more likely to be affected than others.
Observation 4
Navigating to the page and then refreshing, or directly accessing the page, does produce the properly rendered page. Surprisingly enough, this also results (for most pages, that is) in the disappearance of the initial error.
Observation 5
Rerunning the deployment on Vercel oftentimes fixes the problem, and many of the broken links will then work. Sometimes this breaks other links, though.
Background & Stack
We use Storyblok and Shopify as data providers to query during build time. Shopify for product data and Storyblok for page and content data. All affected pages so far have been pages where we pull data from Storyblok during build time (which are all pages other than search and product pages).
We use next-i18next for multi-language localisation. We use ENV variables to control where data is coming from to build our different stores.
Many people have reached out to me on LinkedIn asking how we ultimately solved the problem at hand.
I think, generally speaking, the problem occurs when a page build fails at build time (e.g. when you are running into API limits). This is especially problematic in combination with
fallback: true (https://nextjs.org/docs/api-reference/data-fetching/get-static-paths#fallback-true)
because, as I understand it, pages that were built but failed will not get updated later on.
Our Solution
For us, we were able to solve it with:
preventing errors at build-time (we implemented a cache, but your errors might be different)
setting revalidate param, so that even if pages fail, they will get rebuilt
fallback: blocking (https://nextjs.org/docs/api-reference/data-fetching/get-static-paths#fallback-blocking)
notFound: true (https://nextjs.org/docs/api-reference/data-fetching/get-static-props#notfound)
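Putting those pieces together, the page-level data functions might look like the sketch below. fetchPageFromCMS is a hypothetical stand-in for the Storyblok/Shopify fetch, not a real API, and the revalidate interval is an arbitrary example value:

```javascript
// Hypothetical stand-in for the CMS fetch; returns null for a missing page.
async function fetchPageFromCMS(slug) {
  const pages = { home: { title: 'Home' } };
  return pages[slug] || null;
}

// Shape of a getStaticPaths using fallback: 'blocking' — unknown paths are
// rendered on first request instead of serving a fallback shell.
async function getStaticPaths() {
  return { paths: [], fallback: 'blocking' };
}

async function getStaticProps({ params }) {
  const page = await fetchPageFromCMS(params.slug);
  if (!page) {
    // Returning notFound together with revalidate lets Next retry the build
    // later instead of caching a failed page indefinitely.
    return { notFound: true, revalidate: 60 };
  }
  return { props: { page }, revalidate: 60 };
}
```

The key point is that every outcome, including the failure path, carries a revalidate value, so a page that failed once can still recover on a later request.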

Load index.html for every possible route of my React SPA that is hosted on digitalocean spaces

I use DigitalOcean Spaces and its CDN to host a React SPA. When hitting the URL [host]/index.html with a browser it works fine. However, hitting [host]/index.html/customers/one or any other subpath returns a 404. Currently, any reload on any subpath returns that 404. Lastly, I use terraform to update the SPA artifacts on DO Spaces, and I have tried to add website_redirect="/index.html" to all the bucket objects (js, html and css) but with no success (more info if necessary here). And to be completely honest, I am not sure I understand that option in the terraform digitalocean provider; I might be using it completely wrong.
Now, I have seen that question in multiple places but never with a clear answer.
Here is one on digitalocean community (https://www.digitalocean.com/community/questions/is-it-possible-to-send-a-301-redirect-for-bucket-objects) where no answer is provided but the issue seems to be similar.
There is a similar question on SO without an approved answer Redirect wrong URL/path DigitalOcean Spaces
This is a DO idea that is somewhat related https://ideas.digitalocean.com/ideas/DO-I-318
Is there a way to achieve the mentioned goal of loading index.html for every route with DO space + CDN and let the app parse the rest of the path to display the right component subtree of the react app?

SEO aspect about client side routing with angularjs or vuejs

With client-side routing, the server doesn't build the entire page to serve to the client; instead, data is downloaded by the web app "on demand".
So, in this scenario, if you see html code you could see something like this below:
<body>
<div class="blah">{{content}}</div>
</body>
I know that a prerender strategy can be used, and I think the Google crawler is probably smart enough to see the content anyway, but the question is:
is this approach good for SEO?
With a prerender strategy, the server needs to generate the page with its content. Could that be a penalty in the page-speed factor?
Thank you in advance to everyone.
As you've mentioned, Google is pretty smart and, from recent experience, is able to fetch some of your site's static content even when using client-side rendering. However, when it comes to client-side routing it's not quite there yet, so if you need SEO, server-side rendering frameworks like nuxt.js should be your go-to.
but data is downloaded by the web app "on demand"
The same thing applies when you do asynchronous fetches (download on demand, as you've described it). Imagine the data inside your {{ content }} was coming from an external API: as far as I'm concerned, no crawler at this time is able to deal with this, so your content area would just be empty. So generally speaking, when SEO is a requirement, so is server-side rendering.
With a prerender strategy, the server needs to generate the page with its content.
Could that be a penalty in the page-speed factor?
Yes and no. Load times will certainly go up a little, but when using client-side rendering, the client needs to render the page after loading it, so this time just gets shifted to your server. This applies again to asynchronous data fetching. The delivery of the site will take longer, but the data it has to fetch will already be there, so the client won't have to do it (SSR frameworks allow you to fetch data and render it before sending the site to the client). If you add everything up, there shouldn't be a huge difference in time from sending the request to actually seeing the rendered page in your browser.
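The difference between what a crawler receives under the two strategies can be sketched in a few lines (a hypothetical, framework-agnostic model, not any framework's actual code):

```javascript
// Client-side rendering: the server ships an empty shell; the data is
// fetched later, in the browser, so a crawler only sees the placeholder.
function renderCSRShell() {
  return '<body><div class="blah">{{content}}</div></body>';
}

// Server-side rendering: the server resolves the data first and interpolates
// it before responding, so the crawler receives the finished markup.
async function renderSSR(fetchContent) {
  const content = await fetchContent(); // e.g. the external API call
  return `<body><div class="blah">${content}</div></body>`;
}
```

The extra time renderSSR spends awaiting the fetch is exactly the load-time shift described above: the work moves from the client to the server, but the total is roughly the same.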

FB.getLoginStatus never fires the callback function in Facebook's JavaScript SDK

The simple thing of calling FB.init (right before </body>) and then FB.getLoginStatus(callback) doesn't fire the callback function.
After some debugging, I think the SDK is stuck in the "loading" (i.e. FB.Auth._loadState == 'loading') phase and never gets to "loaded", so all callbacks are queued until the SDK has loaded.
If I force-fire the "loaded" event during debugging - with FB.Event.fire('FB.loginStatus', 'loaded') in case you're interested - then the callbacks are invoked correctly.
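The queue-until-loaded behaviour described above can be modelled in plain JavaScript (a simplified sketch, not the SDK's actual code): callbacks are held until a "loaded" event fires, so if that event never fires, no callback ever runs.

```javascript
// Simplified model of an SDK that queues callbacks while still 'loading'.
function createSdkModel() {
  const queued = [];
  let loadState = 'loading'; // analogous to FB.Auth._loadState
  return {
    getLoginStatus(callback) {
      if (loadState === 'loaded') callback({ status: 'unknown' });
      else queued.push(callback); // stuck here for as long as loading never completes
    },
    fireLoaded() {
      loadState = 'loaded';
      while (queued.length) queued.shift()({ status: 'unknown' });
    },
  };
}
```

Force-firing the event (fireLoaded here) flushes the queue, which matches the behaviour observed in the debugger.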
Extra details that might be relevant:
My app is a facebook iframe app (loaded via apps.facebook.com/myapp)
I'm using IE9. The same behavior happens in Chrome
The app is hosted in http://localhost
What's going on? Why does the SDK never get to "loaded"?
Thanks
UPDATE: Just tried it on Chrome and it worked (not sure why it didn't work before). Still doesn't work in IE
I had this same problem in Firefox 3.5 on Windows, but only on the very first log in to the page (probably because it was a slower machine and there were some weird timing issues going on).
I fixed it by forcing FB to refresh the login status cookie every time it checks:
FB.getLoginStatus(callback, true); //second argument forces a refresh from Facebook's server.
Without "force=true", sometimes it wouldn't fire the callback.
I had the exact same problem, and I solved it by disabling "Secure Browsing" in the Facebook Security settings. Keeping Secure Browsing on forces the pages to be served over https, but I had no "Secure Canvas URL" set up, and this gave me a lot of errors in the console as well.
Hope this may help someone :)
In my experience, getLoginStatus() never calls the callback in Firefox when third-party cookies are disabled.
The original poster mentioned his application is hosted on http://localhost. I've never had luck with that, and believe it will cause problems.
Just today, I've had problems where getLoginStatus is not calling the callback on any browser, unless the user is actually connected to the app! I'm hoping this is a bug on facebook's end that they will solve.
Yet another possibility for FB.getLoginStatus not firing its callback is when using a "test" user account that has not been authorized to view that application. It's pretty bad that Facebook doesn't give you any error messages.
I have also seen failed callbacks on bad appIds and redirectUrls.
I also ran into this issue specifically in Chrome. I tried calling it on page load and after a user-initiated action with no success.
It turned out that it was not a cross-domain issue. The getLoginStatus() call was being blocked by the Un-Passwordise extension in Chrome. As soon as I disabled the extension, it worked perfectly, even on page load.
More info about this issue here: Chrome-only cross-domain scripting errs in Facebook iFrame App upon FB.Login(..)
I understand that this question is a little old now, but I ran across it searching for solutions.
Double-check what you have set in your Facebook app configuration under the section "Website with Facebook Login". The Site URL domain must match the domain your page with the FB.getLoginStatus (and other related auth Javascript) is served from.
After hours of struggling, I realized that I could not reuse an existing app configuration I had on a new server and had to create a new app to handle the website login for this new server.
The other answers are probably equally valid in your specific case, but since there may be others like me who have struggled for a while on this, hopefully this gives you one other place to check. Making a new app with the correct Site URL was the answer in my particular case.