Gatsby: disappearing URL parameters from email link

I'm developing a site in Gatsby. Users receive an email with a link containing a single-use token, like this:
https://www.example.com/approval?token=hIPdI7oSw6KV6k8ttsXG3XAHmIqExyB01YkChxiLR9leksJ67iRme6yyxfBztz3Z
This should take them to the approval page and supply the token as a parameter.
It works fine in the development build, but in the production build the parameter is missing from the URL and the user is simply directed to https://www.example.com/approval.
Does anyone know why Gatsby might be rewriting the URL without the parameter in the production build, and is there some way to prevent that from happening?
EDIT: This site is hosted on CloudFront, and we've enabled forwarding of query params. Possibly there is a redirect happening at another level?

It turned out to be Gatsby/CloudFront madness: https://github.com/gatsbyjs/gatsby/issues/20139
The Gatsby dev server was unhelpfully injecting a / in front of the ?. The production build on CloudFront wasn't doing that, which triggered a redirect. CloudFront requires query strings in the format /? - for future reference!
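One possible guard, sketched here as an assumption rather than the fix from that issue thread: if the origin is the one issuing the trailing-slash redirect, a CloudFront Function on the viewer request can append the slash itself. In CloudFront Functions the query string lives in event.request.querystring, separate from request.uri, so rewriting the URI leaves ?token=... intact.
function handler(event) {
    // Viewer-request CloudFront Function (sketch): append a trailing slash
    // to extensionless paths so the origin never answers with a redirect
    // that drops the query string.
    var request = event.request;
    if (request.uri.slice(-1) !== '/' && request.uri.indexOf('.') === -1) {
        request.uri += '/';
    }
    return request;
}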

Related

Site deploys fine on Netlify, but resources are failing to load

I have an issue: my website seems to build and deploy fine, but when I visit it, it does not work. When I inspect element I receive an error stating 'Failed to load resource: the server responded with a status of 404 ()'. I understand that the page isn't loading because it can't be found, but I don't understand why it can't be found, since the website works fine locally.
The website is built on React.
Github: https://github.com/aff7n/weather-app
Website: https://profound-muffin-a078da.netlify.app/
I know all this may sound stupid (bear with me, I'm a beginner with React), but how do I fix this?
You're using process.env in the browser to get the complete URL. This is your code:
const apiUrl = `${process.env.REACT_APP_API_URL}/weather/?lat=${lat}&lon=${long}&units=metric&APPID=${process.env.REACT_APP_API_KEY}`;
process is a Node.js API, and it is undefined in browsers. This is why the URL is converted to:
https://profound-muffin-a078da.netlify.app/undefined/weather/?lat=19.1981219&lon=72.9600269&units=metric&APPID=undefined
The solution would be to:
Hardcode the actual values of the variables. I don't see why you're using environment variables here; there's nothing sensitive in the URL, and anyone can see it anyway.
OR refer to https://create-react-app.dev/docs/adding-custom-environment-variables/. Quoting:
You must create custom environment variables beginning with REACT_APP_. Any other variables except NODE_ENV will be ignored to avoid accidentally exposing a private key on the machine that could have the same name. Changing any environment variables will require you to restart the development server if it is running.
Basically, prefix the variables with REACT_APP_. The end result would be the same as option 1, though.
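Note that with Create React App the variables must also exist at build time: locally that usually means a .env file in the project root, and on Netlify they must additionally be added to the site's environment variable settings, since Netlify runs the build. A sketch (the OpenWeatherMap base URL is a guess from the query parameters in your code; the key is a placeholder):
# .env — read by Create React App at build time
REACT_APP_API_URL=https://api.openweathermap.org/data/2.5
REACT_APP_API_KEY=your-openweathermap-key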
Final option: Use Netlify Functions (or Netlify Edge Functions) to make a call to the API and return the response to the client application. Here, you can use process.env (or Deno.env if you use Netlify Edge Functions) and keep the URL a secret.
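A minimal sketch of that last option (the file path follows Netlify's default netlify/functions/ convention; the env var names are assumptions you would set in Netlify's site settings):
// netlify/functions/weather.js — proxy the API call so the key stays server-side
exports.handler = async (event) => {
  const { lat, lon } = event.queryStringParameters;
  const url = `${process.env.API_URL}/weather/?lat=${lat}&lon=${lon}&units=metric&APPID=${process.env.API_KEY}`;
  const response = await fetch(url); // global fetch is available on Node 18+ runtimes
  const data = await response.json();
  return { statusCode: 200, body: JSON.stringify(data) };
};
The client would then call /.netlify/functions/weather?lat=...&lon=... instead of hitting the weather API directly.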

Dealing with the environment URL in the "build" version of React

I'm trying to deploy a React-Django app to production using a DigitalOcean droplet. I have a file where I check for the current environment (development or production) and, based on the current environment, assign the appropriate URL to use to connect to the Django backend, like so:
export const server = environment ? "http://localhost:8000" : "domain-name.com";
My app works perfectly in both development and production modes on my local system (I temporarily kept http://localhost:8000 in place of domain-name.com). But I observed something rather strange: when I accessed the site (still on my local computer) at 127.0.0.1:8000 in the browser, the page was blank, with a console error "No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' ....".
When I changed it back to "http://localhost:8000", everything worked again. My worry is: isn't 127.0.0.1:8000 the same as http://localhost:8000? From this I conclude that whatever you have in the domain-name.com place when you build your React frontend is exactly what will be used.
Like I said, I'm trying to deploy to a DigitalOcean droplet, and I plan to install an SSL certificate so the site can be served over HTTPS. Given the scenario painted above, what is the right way to write the URL in production? Should it be the server IP address, "domain-name.com", "http://domain-name.com", or "https://domain-name.com"?
I should mention that I previously attempted to deploy to the said platform using the IP address in the domain-name.com place. After following all the steps, I got a 502 (Bad Gateway) error. However, I'm not saying the IP address was responsible for the error in that case.
I would appreciate any help, especially from someone who has previously deployed a React-Django app to this platform. Thanks.
From this I conclude that whatever you have in the domain-name.com place when you build your React frontend is exactly what will be used.
Not exactly true: the domain from which the React app is served is what will be used. If you build it locally, upload it to the server, and configure domain.com to serve it, then domain.com is what will be used for CORS. The best idea is to allow all CORS origins until your project is deployment-ready; once done, whitelist domain.com.
The solution actually lies in listing the host(s) allowed to connect to the backend in the settings.py file, like so: CORS_ALLOWED_ORIGINS = ["http://localhost:8000", "https://domain-name.com", ...] (note that each entry must include the scheme). That way, you aren't tied to using the URL provided in the React environment variable. Though I have not yet deployed to the server, my first worry on the local machine is taken care of.

Request denied based on content categorization: "Uncategorized URLs"

I have a ReactJS SPA hosted on GitHub Pages with a custom domain.
I indexed it in Google Search and also have description <meta> tags in my code.
All works fine on a normal machine.
However, when I try to open the website behind a corporate firewall, I get this error:
Request denied based on content categorization: "Uncategorized URLs"
I might never be able to open the site behind this firewall, because I cannot update the firewall policies, but the reason given by the blocking proxy is something I want to fix.
Question
My site is shown as "Uncategorized URLs".
As this is an SPA, no content gets loaded until the JS runs, so the proxy might not be able to analyze the content. Is there a way to resolve this "Uncategorized URLs" issue by setting some <meta> tags, or any quick fix other than SSR?

How to escape # character in HAProxy config?

I'm trying to modularize my front-end, which is in AngularJS; alongside it we are using HAProxy as a load balancer and K8s.
Each ACL in the HAProxy configuration is attached to a different service in K8s, and since we are using Angular with hash-bang routing enabled, we use the hash in the HAProxy configuration file as a way to identify the different modules.
Below is my HAProxy configuration, which is failing because I can't escape the # in the file, even after following the HAProxy documentation.
acl login-frontend path_beg /\#/login
use_backend login-frontend if login-frontend
acl elc-frontend path_beg /\#/elc
use_backend elc-frontend if elc-frontend
I have tried escaping it as /%23/login and /'#'/admin but without success.
Any idea would be greatly appreciated.
The fragment (everything following the # character) is defined in RFC 3986:
As with any URI, use of a fragment identifier component does not
imply that a retrieval action will take place. A URI with a fragment
identifier may be used to refer to the secondary resource without any
implication that the primary resource is accessible or will ever be
accessed.
It is used on the client side, so a client (a browser, curl, ...) does not send it with the request. As a reference: Is the URL fragment identifier sent to the server?
So there is no point in routing/ACLing on it. The reason HAProxy provides an escape sequence for # is that you may want to include it in a body, a custom header, and so on; but again, you will not get that part from the request line (the first line, containing the URI).
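You can see this client-side-only behavior in the browser console (a quick illustration):
// On https://example.com/#/login
console.log(window.location.pathname); // "/"        — all the server ever saw
console.log(window.location.hash);     // "#/login"  — never sent in the HTTP request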
What is really happening here is that the user requests / from HAProxy, and Angular, in the user's browser, then parses the #/login and #/elc part to decide what to do next.
I ran into a similar problem with my Ember app. For SEO purposes I split out my "marketing" pages and my "app" pages.
I then mounted my Ember application at /app and had HAProxy route requests to the backend that served my Ember app. A request for anything else (e.g. /contact-us) was routed to the backend that handled the marketing pages.
/app/* -> server1 (Ember pages)
/ -> server2 (static marketing pages)
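In HAProxy terms that split might look like this (the backend names are made up for the sketch):
acl app-path path_beg /app
use_backend ember-app if app-path
default_backend marketing-pages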
Since I had some URLs floating around on the web that still pointed to things like /#/login when they really should now be /app/#/login, I edited the index.html page served by my marketing backend and added JavaScript that parsed the URL: if it detected /#/login, it forced a redirect to /app/#/login instead (a sketch follows below).
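A sketch of that redirect shim (the /app mount point is from my setup; adjust to yours):
// In the marketing site's index.html: forward legacy hash URLs to the app mount.
if (window.location.pathname === '/' && window.location.hash.indexOf('#/') === 0) {
  window.location.replace('/app/' + window.location.hash);
}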
I hope that helps you figure out how to accomplish the same for your Angular app.

CDN serving private images / videos

I would like to know how CDNs serve private data such as images and videos. I came across this Stack Overflow answer, but it seems to be specific to Amazon CloudFront.
As a popular example, let's say the problem in question is serving content inside Facebook: there is access-controlled content at the individual-user level and at the user-group level, and there is also some publicly accessible data.
All logic of what can be served to whom resides on the server!
The first request through the CDN will go to the application server and get validated for access rights. But there is a catch; keep this in mind:
Assume that first request is successful; after that, anyone will be able to access the image with that CDN URL. I tested this with a restricted, user-uploaded Facebook image, and it was accessible via the CDN URL by others too, even after I logged out. So the image will remain accessible until the CDN cache expires.
I believe this should work: all requests first come to the main application server; after determining whether access is allowed, it either redirects to the CDN URL or returns an access-denied error.
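Many CDNs formalize exactly this with short-lived signed URLs: the application server validates access, then mints a URL the CDN edge can verify without calling back. A generic HMAC sketch (the CDN host, secret, and parameter names are hypothetical; real CDNs such as CloudFront have their own signing schemes):
const crypto = require('crypto');

// Hypothetical signer: the CDN edge recomputes the HMAC and rejects requests
// whose signature is missing, wrong, or past the expiry timestamp.
function signUrl(path, secret, ttlSeconds) {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const signature = crypto
    .createHmac('sha256', secret)
    .update(path + ':' + expires)
    .digest('hex');
  return 'https://cdn.example.com' + path + '?expires=' + expires + '&sig=' + signature;
}

// After the application server confirms the user may see the image:
console.log(signUrl('/images/user42/photo.jpg', 'shared-cdn-secret', 300));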
Each CDN works differently, so unless you specify which CDN you are looking at, it's hard to say more.
