Dealing with the environment url in the "build" version of react - reactjs

I'm trying to deploy a React-Django app to production on a DigitalOcean droplet. I have a file where I check the current environment (development or production) and, based on that, assign the appropriate URL for connecting to the Django backend, like so:
export const server = environment ? "http://localhost:8000" : "domain-name.com";
My app works perfectly in both development and production modes on my local system (I temporarily kept http://localhost:8000 in place of domain-name.com). But I noticed something rather strange: when I accessed the site (still on my local computer) at 127.0.0.1:8000 IN THE BROWSER, the page was blank, with a console error: "No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' ....".
When I changed it back to http://localhost:8000, everything worked again. My worry is: isn't 127.0.0.1:8000 the same as http://localhost:8000? From this I conclude that whatever you put in the domain-name.com place when you build your React frontend is exactly what will be used.
As I said, I'm trying to deploy to a DigitalOcean droplet, and I plan to install an SSL certificate so the site can be served over HTTPS. Given the scenario described above, what is the right way to write the URL in production? Should it be the server IP address, "domain-name.com", "http://domain-name.com", or "https://domain-name.com"?
I should mention that I had previously attempted to deploy to the same platform using the IP address in the domain-name.com place. After following all the steps, I got a 502 (Bad Gateway) error. However, I'm not saying that using the IP address was responsible for the error in that case.
I would appreciate any help, especially from someone who has previously deployed a React-Django app to this platform. Thanks.

From this I conclude that whatever you have in the domain-name.com place when you build your react frontend is exactly what will be used.
Not exactly true: the domain from which the React app is served is what will be used. If you build it locally, upload it to the server, and configure domain.com to serve it, then domain.com is the origin that matters for CORS. The best idea is to allow all CORS origins until your project is deployment-ready; once done, whitelist domain.com.

The solution actually lies in listing the host(s) allowed to connect to the backend in the settings.py file, like so: CORS_ALLOWED_ORIGINS = ["https://domain-name.com", "http://domain-name.com", ...] etc. That way, you aren't tied to the single URL provided in the React environment variable. Though I have not deployed to the server yet, my first worry on the local machine is taken care of.
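To tie this back to the question of which form of the URL to bake into the build, a minimal sketch (assuming Create React App, which sets NODE_ENV to "production" during npm run build) is to branch on NODE_ENV and use the full https:// origin the site will actually be served from, while the backend whitelists that same origin as above:
// A sketch, not the asker's exact code: NODE_ENV is assumed to come from the build tool.
const isDevelopment = process.env.NODE_ENV === "development";

// Use the full scheme + domain in production; the browser compares the page's origin
// against whatever the backend returns in Access-Control-Allow-Origin.
export const server = isDevelopment
  ? "http://localhost:8000"
  : "https://domain-name.com";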

Related

Site deploys fine on Netlify, but resources are failing to load

I have an issue: my website seems to build and deploy fine, but when I visit it, it does not work. When I inspect element, I receive an error stating 'Failed to load resource: the server responded with a status of 404 ()'. I understand that the page isn't loading because a resource can't be found, but I don't understand why it can't be found, since the website works fine locally.
The website is built on React.
Github: https://github.com/aff7n/weather-app
Website: https://profound-muffin-a078da.netlify.app/
I know all this may sound stupid (bear with me, since I'm a beginner with React), but how do I fix this?
You're using process.env in the browser to build the complete URL. This is your code:
const apiUrl = `${process.env.REACT_APP_API_URL}/weather/?lat=${lat}&lon=${long}&units=metric&APPID=${process.env.REACT_APP_API_KEY}`;
process is a Node.js API, and it returns undefined in browsers. This is why the URL is converted to:
https://profound-muffin-a078da.netlify.app/undefined/weather/?lat=19.1981219&lon=72.9600269&units=metric&APPID=undefined
The solution would be to:
Hardcode the actual values of the variables. I don't see why you're using environment variables here; there's nothing sensitive in the URL, and anyone can see it anyway.
OR Refer to: https://create-react-app.dev/docs/adding-custom-environment-variables/. Quoting:
You must create custom environment variables beginning with REACT_APP_. Any other variables except NODE_ENV will be ignored to avoid accidentally exposing a private key on the machine that could have the same name. Changing any environment variables will require you to restart the development server if it is running.
Basically, prefix the variables with REACT_APP_. The end result would be the same as option 1, though.
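For the second option, a minimal sketch of what the renamed variables might look like in the project's .env file (the endpoint is the OpenWeatherMap URL the code appears to be calling; the key value is a placeholder):
REACT_APP_API_URL=https://api.openweathermap.org/data/2.5
REACT_APP_API_KEY=your-openweathermap-key
After renaming, restart the development server (or rebuild and redeploy) so the values get inlined, and set the same variables in Netlify's build environment so the deployed build picks them up too.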
Final option: Use Netlify Functions (or Netlify Edge Functions) to make a call to the API and return the response to the client application. Here, you can use process.env (or Deno.env if you use Netlify Edge Functions) and keep the URL a secret.
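For that last option, here is a rough sketch of what such a function could look like. The file path, the OPENWEATHER_API_KEY variable name, and a Node 18+ runtime (where fetch is global) are all assumptions, not something from the original repo:
// netlify/functions/weather.js — hypothetical serverless proxy for the weather API
exports.handler = async (event) => {
  const { lat, lon } = event.queryStringParameters;
  // The key lives only in Netlify's environment settings, never in the client bundle.
  const url = `https://api.openweathermap.org/data/2.5/weather/?lat=${lat}&lon=${lon}&units=metric&APPID=${process.env.OPENWEATHER_API_KEY}`;
  const response = await fetch(url);
  const data = await response.json();
  return { statusCode: 200, body: JSON.stringify(data) };
};
The client would then call /.netlify/functions/weather?lat=...&lon=... instead of the third-party URL.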

React JS - Best practice to hide http call credentials

What are the best practices for hiding the credentials used in web service calls, or preventing the user from seeing them? The app is developed in ReactJS and deployed to Heroku.
I have this code:
I want to prevent the user from being able to see the credentials and some security details.
I started using the node module dotenv recently and really like how easy it is to use. All you need to do is install it and create a .env file with your environment variables like this:
.env
SECRET_KEY=123456
ANOTHER_KEY=78901
Then, require it as early as possible in your application:
require('dotenv').config().
I do this inside my server.js file (or whatever you name it).
That's it! Anything stored in the file can now be accessed by doing process.env.{name}
For example:
let secret = process.env.SECRET_KEY;
console.log(secret); // 123456
This is not really possible to do on the client side, because all the HTTP calls are easily visible in the Network tab of Chrome DevTools (or any web browser).
I would suggest you work on security so that it doesn't matter whether a user can see your HTTP endpoints or not.
You can also consider making your HTTP requests from a server that acts as a bridge between your client and the API.
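A minimal sketch of that bridge idea, assuming an Express server on Node 18+ (where fetch is global) and hypothetical THIRD_PARTY_URL / THIRD_PARTY_KEY variables kept on the server:
const express = require("express");
const app = express();

app.get("/api/data", async (req, res) => {
  // The credentials never leave the server; the browser only ever sees /api/data.
  const response = await fetch(`${process.env.THIRD_PARTY_URL}?key=${process.env.THIRD_PARTY_KEY}`);
  const data = await response.json();
  res.json(data);
});

app.listen(3001);
The React app then calls /api/data, and the secrets stay in the server's environment (Heroku config vars, for example).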

Quickblox is not working with subdomain url

I have multiple URLs for the same domain, but they contain subdomains, like the following:
admin.projectname.com
doctor.projectname.com
etc..
The QuickBlox call is not working with these URLs and gives the following error:
NavigatorUserMediaError {
name: "PermissionDeniedError",
message: "Only secure origins are allowed.",
constraintName: ""
} app.js:577 4
I have looked into QuickBlox and found suggestions that it only works with localhost and HTTPS (SSL), but I want to make it work with this type of URL.
It already works with localhost, but I want to run it on a virtual domain pointing to localhost.
Please help me out with this, and let me know if you have any questions.
You can't get access to getUserMedia unless you're connected to a secure origin. The browser recognizes as secure origins those served over HTTPS and, for development, localhost.
If you need to develop with full domain names, you can either generate an SSL certificate (a free self-signed one) for your environment or launch the browser with an obscure flag like --unsafely-treat-insecure-origin-as-secure="admin.projectname.com".
See https://www.chromium.org/Home/chromium-security/deprecating-powerful-features-on-insecure-origins section Testing a Powerful Feature.
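As an illustration, a minimal sketch of the media-permission request that QuickBlox makes under the hood, using the modern promise-based navigator.mediaDevices API (older Chrome surfaced the same failure as the NavigatorUserMediaError shown above):
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then((stream) => {
      // On https:// or localhost the browser shows the permission prompt and returns a stream.
      console.log("got media stream", stream.id);
    })
    .catch((err) => {
      // Rejected without a prompt when access is not allowed.
      console.error(err.name, err.message);
    });
} else {
  // On insecure origins (plain http on a non-localhost domain) browsers may not expose the API at all.
  console.error("getUserMedia is not available on this origin");
}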

AppEngine dev_appserver - urllib2.urlopen issue with localhost url

UPDATE
App Engine SDK 1.9.24 was released on July 20, 2015, so if you're still experiencing this, you should be able to fix this simply by updating. See +jpatokal's answer below for an explanation of the exact problem and solution.
Original Question
I have an application I'm working on and am running into trouble when developing locally.
We have some shared code that checks an auth server for our apps using urllib2.urlopen. When I develop locally, I get rejected with a 404 when my app makes the request from App Engine, but the request succeeds just fine from a terminal.
I have App Engine running on localhost:8000 and the auth server on localhost:8001.
import urllib2
url = "http://localhost:8001/api/CheckAuthentication/?__client_id=dev&token=c7jl2y3smhzzqabhxnzrlyq5r5sdyjr8&username=amadison&__signature=6IXnj08bAnKoIBvJQUuBG8O1kBuBCWS8655s3DpBQIE="
try:
    r = urllib2.urlopen(url)
    print(r.geturl())
    print(r.read())
except urllib2.HTTPError as e:
    print("got error: {} - {}".format(e.code, e.reason))
which results in got error: 404 - Not Found from within AppEngine
It appears that App Engine is adding the scheme, host, and port to the PATH portion of the URL I'm trying to hit, as this is what I see on the auth server:
[02/Jul/2015 16:54:16] "GET http://localhost:8001/api/CheckAuthentication/?__client_id=dev&token=c7jl2y3smhzzqabhxnzrlyq5r5sdyjr8&username=amadison&__signature=6IXnj08bAnKoIBvJQUuBG8O1kBuBCWS8655s3DpBQIE= HTTP/1.1" 404 10146
and from the request header we can see the whole scheme and host and port are being passed along as part of the path (header pieces below):
'HTTP_HOST': 'localhost:8001',
'PATH_INFO': u'http://localhost:8001/api/CheckAuthentication/',
'SERVER_PORT': '8001',
'SERVER_PROTOCOL': 'HTTP/1.1',
Is there any way to keep the App Engine dev server from hijacking this request to localhost on a different port? Or am I misunderstanding what is happening? Everything works fine in production, where our domains are different.
Thanks in advance for any assistance helping to point me in the right direction.
This is an annoying problem introduced by the urlfetch_stub implementation. I'm not sure what gcloud sdk version introduced it.
I've fixed this by patching the gcloud SDK until Google does, which means this answer will hopefully be irrelevant shortly.
Find and open urlfetch_stub.py, which can often be found at ~/google-cloud-sdk/platform/google_appengine/google/appengine/api/urlfetch_stub.py
Around line 380 (depends on version), find:
full_path = urlparse.urlunsplit((protocol, host, path, query, ''))
and replace it with:
full_path = urlparse.urlunsplit(('', '', path, query, ''))
more info
You were correct in assuming the issue was a broken PATH_INFO header. The full_path here is being passed after the connection is made.
disclaimer
I may very easily have broken proxy requests with this patch. Because I expect Google to fix it, I'm not going to go too crazy about it.
To be very clear, this bug is ONLY related to LOCAL app development - you won't see it in production.
App Engine SDK 1.9.24 was released on July 20, 2015, so if you're still experiencing this, you should be able to fix this simply by updating.
Here's a brief explanation of what happened. Until 1.9.21, the SDK was formatting URL fetch requests with relative paths, like this:
GET /test/ HTTP/1.1
Host: 127.0.0.1:5000
In 1.9.22, to better support proxies, this changed to absolute paths:
GET http://127.0.0.1:5000/test/ HTTP/1.1
Host: 127.0.0.1:5000
Both formats are perfectly legal per the HTTP/1.1 spec, see RFC 2616, section 5.1.2. However, while that spec dates to 1999, there are apparently quite a few HTTP request handlers that do not parse the absolute form correctly, instead just naively concatenating the path and the host together.
So in the interest of compatibility, the previous behavior has been restored. (Unless you're using a proxy, in which case the RFC requires absolute paths.)

Custom domain http redirects to https when I don't want it to, why is it doing this?

I am trying to get a custom domain to work with Google App Engine 1.9.7 without SSL
I have done all the prerequisites:
Domain is verified with the proper TXT records.
Domain is configured in the GAE Cloud Console with the proper subdomain www.
Application is deployed to the appspot.com domain and works.
But when I try to go to http://www.customdomain.com, it immediately redirects to https://www.customdomain.com and I get the following error:
net::ERR_SSL_PROTOCOL_ERROR
I know that for SSL I need to set up a certificate.
I don't have any of my webapp modules configured to be secure.
I don't want SSL right now, I don't need it right now.
I found this little nugget after reading the instructions again and again:
It's okay for multiple domains and subdomains to point to the same application. You can design your app to treat them all the same or handle each one in a different way.
This is exactly what I want to do, but I can't find any information on how to actually do it.
How do I get it to stop redirecting from http to https?
I ran into the same problem. You say "I don't have any of my webapp modules configured to be secure." If that's the case, sorry, can't help you.
Otherwise, the most likely cause of your problem would be a "secure: always" flag for the respective handler in the handlers section of your app.yaml, like so:
handlers:
- url: /*
  secure: always
Remove the line with the "secure: always". Details in the official Google docs here (table item "secure").
How does one run into this problem? I ran into it because I copied the app.yaml from one of my other apps that didn't need to run on a custom domain, yet always needed SSL.
For a Django/Python GAE app, by the way, the same problem is caused like this:
handlers:
- url: /.*
  script: google.appengine.ext.django.main.app
  secure: always
Same answer here: remove or change the "secure" line. I just tested the Python version as described: it always works on the appspot.com domain, but on a custom domain only without the secure flag.
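For reference, a sketch of the same Django handler with the secure line simply removed (so the custom domain serves plain HTTP instead of forcing HTTPS):
handlers:
- url: /.*
  script: google.appengine.ext.django.main.app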
Just pointing out the above, as other people might run into this problem and come to this thread for help.
What I had to do was shut down all instances, remove all versions, and then do a fresh deployment from scratch; after that I stopped having this problem.
