Forcing JS fetch to use non-https - reactjs

I have an in-development ReactJS application that I run locally from my computer (on localhost). I have also set up local certs so that the application runs on HTTPS (https://localhost).
I also have a backend application located at an HTTP endpoint hosted in the cloud. Of course, the backend application will eventually be moved to an HTTPS endpoint, but that process hasn't been started yet.
I am trying to hit that HTTP endpoint from my local HTTPS ReactJS application by using fetch. However, something is upgrading the connection from HTTP to HTTPS automatically.
The only relevant information I have found on this is this post. I have tried the accepted answer (setting the referrerPolicy to unsafe-url) but that did not work for me.
Any other suggestions?
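For reference, the attempted fix from the linked answer looked roughly like this (the backend URL is a placeholder, not the real endpoint):

```javascript
// Placeholder for the cloud-hosted HTTP backend endpoint.
const url = 'http://backend.example.com/api/data';

// Option suggested by the accepted answer of the linked post,
// which did not prevent the HTTP -> HTTPS upgrade here:
const options = { referrerPolicy: 'unsafe-url' };

// In the app this would be called as:
// fetch(url, options).then(res => res.json())
```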

Related

Quarkus REST API CORS configuration not working for consumer ReactJS app

I have a ReactJS UI that is served by a static NGINX web server and a Quarkus REST API server. Both are dockerized services, and the ReactJS app is supposed to use the Quarkus REST API to consume data/make requests. In the depiction below we can see this simple setup for my localhost dev environment (both services are exposed and mapped to different localhost ports):
In the deployed production environment, these services will likely correspond to different hosts/URLs. The problem is that even in the localhost setup I, as expected, run into CORS errors when I try to make calls to the REST API service from the ReactJS app running in the client's browser, e.g. during login:
I have to admit, I don't fully understand CORS in terms of where exactly one has to make changes/configs to allow requests - but I was told I need to set them on the server I make requests to (which in this case is the Quarkus REST API). So I added this setting in the Quarkus app's application.properties to just generally allow all requests:
quarkus.http.cors=true
(as shown in https://quarkus.io/guides/http-reference#cors-filter)
In reality I should probably make this more restrictive; however, I still receive the same CORS error in my browser when running the React web app. I understand that I could potentially also configure a proxy in the NGINX server to tunnel requests to the other service container, but I would like to solve this through CORS configuration. Where do I have to make which configurations for this to work? Did I make a mistake with the Quarkus config?
It seems that setting only quarkus.http.cors=true is not enough to allow all requests, as per the Quarkus documentation. In my case I had to add more configuration, i.e.:
quarkus.http.cors=true
# This allows all origin hosts; restrict it to specific origins if possible
quarkus.http.cors.origins=*
quarkus.http.cors.headers=accept, authorization, content-type, x-requested-with
quarkus.http.cors.methods=GET, POST, PUT, DELETE, OPTIONS
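With those settings in place, a cross-origin call from the React app should go through. A minimal sketch of the browser-side login request (the host, port, path, and payload are all assumptions about your setup, not taken from the question):

```javascript
// Hypothetical Quarkus endpoint; adjust host/port to your docker mapping.
const API_BASE = 'http://localhost:8080';

// A JSON POST like this triggers a CORS preflight (an OPTIONS request)
// because of the Content-Type header, which is why OPTIONS must appear
// in quarkus.http.cors.methods above.
const loginRequest = {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ username: 'alice', password: 'secret' }),
};

// In the app: fetch(`${API_BASE}/login`, loginRequest)
```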

How to deploy a React website via Vercel or Heroku?

I am developing a React project for my studies and would like to publish it.
I tried a few approaches, but the site is blank; none of the data from the NEWS-API I am using appears.
There are no visible errors.
It is a frontend-only application: just React talking to the API.
If it helps, here's the repository link.
https://github.com/carlos-souza-dev/apinews
I visited your deployment on Vercel from your GitHub repo and noticed this issue.
You're requesting data from the API over http, which is insecure, while your page hosted by Vercel uses https.
Modern browsers do not allow a page served over https to request http data.
It might be fixed just by changing your URLs to use https; if the API didn't have https support, you might have to use other workarounds (although it's better to use an API with https support).
I noticed this by opening the console after visiting your page and seeing the "mixed content blocked" errors.
The reason for the blank page after loading is that the Promise that gets the data from the API is rejected but never handled, so the code that would render the data never runs.
[EDIT 1]
I read through some of the code in your repository and noticed a link pointing to localhost. It looks like you tried to set up a Node.js server to proxy data over https.
The API you're using does seem to have HTTPS support
Conclusion:
Try changing the links to the API in your React code to https instead of http and see if it works. If it does, there's no need for a backend server of your own.
If the API doesn't have HTTPS support, however, do one of the following:
Migrate to a different API with HTTPS support
Try serving your static React app through the backend and pointing your React app to /path/to/api/route without an absolute URL, and use the proxy setting in package.json as described here for development
Point to the full path of your backend server on the internet (i.e. remove the localhost links)
Also note that you cannot deploy a traditional backend server to Vercel, but it does support serverless functions
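A hedged sketch of the first suggestion: a tiny helper that rewrites http:// API URLs to https://, plus a .catch so a failed request no longer leaves the page silently blank (renderArticles is a hypothetical render function, and the News API path is illustrative):

```javascript
// Upgrade an http:// URL to https:// so the request is not blocked as
// mixed content when the page itself is served over https.
function toHttps(url) {
  return url.replace(/^http:\/\//, 'https://');
}

// Usage in the React code:
// fetch(toHttps('http://newsapi.org/v2/top-headlines?country=us'))
//   .then(res => res.json())
//   .then(data => renderArticles(data.articles))
//   .catch(err => console.error('News API request failed:', err));
```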

Running an Ionic app as a PWA on Firebase hosting

Background
We have a large Ionic v1 app that runs on Android (by obtaining it from the Google Play Store) and also on our development machines via "ionic serve".
The app uses a Google App Engine (GAE) website as a backend server.
The server maintains sessions for each user by means of a cookie. We don't store much data in the session, but we need to securely identify the user making each request. When the app runs successfully, the GAE server code creates a cookie that contains the session ID and sends it to the Ionic client code when responding to each HTTP request.
Note that the Ionic code does not access the cookie in any way. It is only necessary that the same cookie be sent back to the GAE server with each subsequent request so that the GAE code recognizes the user.
The goal
We would like to serve the Ionic code by means of Firebase Hosting. We can in fact do so in either of two modes:
Keeping the Ionic code on our dev machine, running "firebase serve", and going to "localhost:5000" on the browser
Deploying the Ionic code to the Firebase host and going to "xxxx.firebaseapp.com" on the browser
Everything works! Uh, except for one little thing, which we've been trying to solve for weeks...
The problem
The cookie used by the GAE code to manage session continuity, and sent in responses to HTTP requests generated by the GAE code, does not come back in the next request from the Ionic app running on Firebase. So the GAE app always responds as though the user is not yet logged in.
In fact, further testing shows that the session cookie sent in responses to HTTP requests does not even get set in the browser (so of course it's not sent back to the GAE code with the next HTTP request). The GAE code on the backend server always responds as if this is the first HTTP request of a session.
What we've solved already
The problem is not the fact that Ionic does not support cookies. We know this is not the problem because the app runs fine as an Android app and also via "ionic serve". In both cases, the GAE backend is able to maintain sessions using a cookie to store the session ID from one request to the next.
The problem does not get solved by using "memcache" instead of cookies for GAE session support, because even if you use memcache, you still need the cookie for the session ID. If you wish, you can go with the default and let GAE session support use cookies; in that case, it will use the same cookie for both the session ID and any other session data.
The problem does not get solved by using "__session" as the name of the cookie. Firebase does in fact support using such a cookie name, but apparently only in the context of running Firebase Hosting with Cloud Functions. Cloud Functions are for running backend code, not client code that the user interacts with. We could see no way to make an Ionic app run as a Cloud Function. And without Cloud Functions, the "__session" cookie set by the GAE backend apparently gets stripped by the browser client running the app, along with all other cookies.
Adding "Access-Control-Allow-Origin/-Credentials/-Methods/-Headers" headers to the GAE-generated response, and setting crossDomain: true, xhrFields: { withCredentials: true } on the client side, does not improve the situation. Still, no cookie from the GAE code gets set on the browser.
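For completeness, here is the client-side half of that attempt translated to fetch options (the jQuery-style settings above map roughly to these; the GAE URL is a placeholder):

```javascript
// Cross-site request that asks the browser to send and store cookies
// for the backend origin.
const options = {
  mode: 'cors',            // roughly equivalent to crossDomain: true
  credentials: 'include',  // roughly equivalent to xhrFields: { withCredentials: true }
};

// In the app: fetch('https://your-app.appspot.com/api/session', options)
```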
Any help will be appreciated.

net::ERR_CONNECTION_TIMED_OUT from angular to nodejs in k8s

I am trying to make requests from a containerized AngularJS frontend to a Node.js backend.
Both are deployed on AWS using Kubernetes (kOps), and I created a Service to access each of them.
The frontend Service is of type LoadBalancer and the backend Service is of type ClusterIP. I can access the frontend from the browser using the load balancer URL that "kubectl get services" gives me. But when the frontend tries to make a request to the backend, I get the following error:
net::ERR_CONNECTION_TIMED_OUT or net::ERR_NAME_NOT_RESOLVED.
I checked using telnet etc., and the app is running and can be accessed. Direct access to the hostname works, but the request from the AngularJS app does not.
Your post was light on specifics, but if I understand correctly:
1. ELB -> Service -> Pod("http-server-serving-Angular")
2. ClusterIP -> Service -> Pod("nodejs")
Is that correct? because if so:
and for backend, its ClusterIP
Cluster IP addresses are, as their name suggests, only reachable from within the cluster. You will want the backend Service to be of type LoadBalancer as well, so that traffic originating outside the cluster can reach the nodejs app.
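A minimal sketch of that change as a Service manifest (all names and ports here are assumptions, not taken from your cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodejs-backend        # hypothetical name
spec:
  type: LoadBalancer          # was: ClusterIP
  selector:
    app: nodejs-backend       # must match the backend pod's labels
  ports:
    - port: 80
      targetPort: 3000        # assumed container port of the nodejs app
```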
I'm cheating you with that answer just a tiny bit, because you can absolutely provision an Ingress controller and then leave the other Services as ClusterIP, but I would bet that's not the typical setup.
I think I found the problem. Here is the likely cause: I am using Express.js to serve the frontend. We wrote an Angular service that connects to the backend host, but that connection is made from the client's browser, not from inside the cluster. I tried adding a public IP to the backend and it worked as expected. So a possible fix is to serve the Express/Angular app from the Node.js web server and proxy the backend calls through it.
This is not a Kubernetes question. I apologize for adding a misleading tag.
Thanks for all the replies, guys!
So this problem was a wrong rule in the NGINX ingress controller. My Ingress had a typo that was causing the frontend to not resolve the backend URL. This issue is resolved.

Crossdomain Issue while accessing Linkedin enabled Silverlight Application from hosted environment

I am developing a LinkedIn-enabled Silverlight application. LinkedIn API calls are made through OAuth2 over the REST platform. I am able to fetch the authorization code and access token and make API calls while running the application as "localhost" or in local IIS. However, when I try to access the deployed application from outside the environment via its public IP, I am unable to make LinkedIn API calls.
While debugging with Fiddler, the trace shows "crossdomain.xml and clientaccesspolicy.xml" not found at http://www.linkedin.com/crossdomain.xml. I am unsure why this problem occurs only when accessing the hosted application from outside and not when running under localhost. I have tried placing both of these XML files in the root folder of IIS and also inside our application's hosted folder.
Looking forward to your suggestions/help in resolving this.
Thanks,
Subbu
