net::ERR_CONNECTION_TIMED_OUT from angular to nodejs in k8s - angularjs

I am trying to make requests from a containerized AngularJS frontend to a Node.js backend.
Both are deployed on AWS using Kubernetes (kOps), and I created a Service to access each of them.
The frontend Service is of type LoadBalancer and the backend Service is of type ClusterIP. I can access the frontend from a browser using the load balancer URL that "kubectl get services" gives me. But when the frontend tries to make a request to the backend, I get the following error:
net::ERR_CONNECTION_TIMED_OUT or net::ERR_NAME_NOT_RESOLVED.
I checked using telnet etc., and the app is running and can be accessed. Direct access to the hostname works, but it doesn't work from AngularJS/Node.js.

Your post was light on specifics, but if I understand correctly:
1. ELB -> Service -> Pod("http-server-serving-Angular")
2. ClusterIP -> Service -> Pod("nodejs")
Is that correct? Because if so, regarding "and for backend, its ClusterIP":
Cluster IP addresses are, as their name suggests, only reachable from within the cluster. You will want the backend Service to be of type LoadBalancer as well, so traffic originating outside the cluster can reach the Node.js app.
I'm cheating you with that answer just a tiny bit, because you can absolutely provision an Ingress controller and leave the other Services as ClusterIP, but I would bet that's not the typical setup.
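For illustration, switching the backend Service to LoadBalancer is a one-line change in the manifest. The names, labels, and ports below are hypothetical, not taken from the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodejs-backend        # hypothetical name
spec:
  type: LoadBalancer          # was ClusterIP; on AWS this provisions an ELB
  selector:
    app: nodejs               # must match the backend Pod's labels
  ports:
    - port: 80                # port exposed by the load balancer
      targetPort: 3000        # hypothetical container port of the Node.js app
```

After applying this, "kubectl get services" shows an external hostname for the backend as well.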

I think I found the problem. Here is the likely cause: I am using Express.js for the frontend, which is hosted in Node.js. We wrote a service that makes the connection to the backend host. That request is not made by the HTTP server; it is made from the client's browser, which cannot resolve the cluster-internal hostname. I tried adding a public IP to the backend and it worked as expected. So a possible fix is to serve the Express/Angular app from the same Node.js web server as the API.
This is not a Kubernetes question. I apologize for adding a misleading tag.
Thanks for all the replies, guys!

So this problem was a wrong rule in the NGINX ingress controller. My Ingress had a typo that kept the frontend from resolving the URL. This issue is resolved.

Related

Quarkus REST API CORS configuration not working for consumer ReactJS app

I have a ReactJS UI that is served by a static NGINX web server, and a Quarkus REST API server. Both are dockerized services, and the ReactJS app is supposed to use the Quarkus REST API to consume data and make requests. In the depiction below we can see this simple setup for my localhost dev environment (both services are exposed and mapped to different localhost ports):
In the deployed production environment, these services will likely correspond to different hosts/URLs. The problem is that even in the localhost setup I (expectedly) run into CORS errors when I try to make calls to the REST API service from the ReactJS app running in the client's browser, e.g. during login:
I have to admit I don't fully understand CORS in terms of where exactly one has to make changes/configurations to allow requests, but I was told I need to set them on the server I make requests to (which in this case is the Quarkus REST API). So I added this setting to the Quarkus app's application.properties to just generally allow all requests:
quarkus.http.cors=true
(as shown in https://quarkus.io/guides/http-reference#cors-filter)
In reality I should probably make this more precise; however, I still receive the same CORS error in my browser when running the React web app. I understand that I could also configure a proxy in the NGINX server to tunnel requests to the other service container, but I would like to solve this through CORS configuration. Where do I have to make which configuration for this to work? Did I make a mistake in the Quarkus config?
It seems that setting only quarkus.http.cors=true is not enough to allow all requests, as per the Quarkus documentation. In my case I had to add more configuration, i.e.:
quarkus.http.cors=true
# This allows all origin hosts, should be specified if possible
quarkus.http.cors.origins=*
quarkus.http.cors.headers=accept, authorization, content-type, x-requested-with
quarkus.http.cors.methods=GET, POST, PUT, DELETE, OPTIONS

intermediate hops in nginx reverse proxy

I am new to NGINX and doing full-stack development for the first time. Could you please help me understand the logic below?
Let's say we are building a chat app. We have 3 servers (EC2 instances):
server_nginx - EC2 instance running NGINX as a reverse proxy
server_react - EC2 instance running a React project served by NGINX as a web server
server_spring - EC2 instance running a Spring Boot project served by NGINX as a web server
My reverse proxy runs over SSL/HTTPS. Initially everything was happening on the same machine, so I made my Spring Boot service use SSL as well, because I could not initiate an HTTP connection from an HTTPS page. Then I started separating things out into the instances mentioned above (3 EC2 instances). I was expecting the connection to my backend to fail. Reason:
user connects to the reverse proxy through an https domain (let's say mydomain.com).
The request comes to server_nginx (mydomain.com).
From here it is proxied to server_react (which runs on plain HTTP). This is the server where my React code is hosted.
This React code tries to initiate a WebSocket connection to server_spring, where I have enabled CORS for mydomain.com. So I was expecting the connection to fail here, since this is a different IP now. But surprisingly all the APIs respond as if I were hitting them from mydomain.com.
So can anyone please help me understand why the behaviour is like this?
Thanks.
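For reference, the topology described above roughly corresponds to an NGINX config like this on server_nginx. The server names, upstream addresses, and location paths are illustrative sketches, not the poster's actual configuration:

```nginx
server {
    listen 443 ssl;
    server_name mydomain.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    # Static React app, proxied over plain HTTP inside the VPC
    location / {
        proxy_pass http://server_react;    # private IP or DNS name of the React EC2 instance
    }

    # WebSocket/API traffic to the Spring Boot backend
    location /api/ {
        proxy_pass http://server_spring;   # private IP or DNS name of the Spring EC2 instance
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # allow WebSocket upgrade
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;              # backend sees mydomain.com
    }
}
```

One possible explanation for the observed behaviour: if the API and WebSocket requests are routed through this same proxy, the browser only ever talks to mydomain.com, so from the browser's point of view everything is same-origin and no CORS failure occurs.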

Forcing JS fetch to use non-https

I have an in-development ReactJS application that I run locally from my computer (on localhost). I have also set up local certs so that the application runs on HTTPS (https://localhost).
I also have a backend application located at an HTTP endpoint hosted in the cloud. The backend application will eventually be located at an HTTPS endpoint, but that process hasn't been started yet.
I am trying to hit that HTTP endpoint from my local HTTPS ReactJS application by using fetch. However, something is upgrading the connection from HTTP to HTTPS automatically.
The only relevant information I have found is this post. I tried the accepted answer (setting referrerPolicy to unsafe-url), but that did not work for me.
Any other suggestions?

How to deploy a React website via Vercel or heroku?

I am developing a React project for my studies and would like to publish it.
I tried a few approaches, but the site is blank; none of the data from the NEWS-API I am using shows up.
There seems to be no error.
It is a frontend-only application: just React plus the API.
If it helps, here's the repository link.
https://github.com/carlos-souza-dev/apinews
I visited your deployment on Vercel from your GitHub repo and noticed this issue.
You're requesting data from the API over HTTP, which is insecure, while your page hosted by Vercel uses HTTPS.
Modern browsers do not allow a page served over HTTPS to request HTTP data.
It might be fixed simply by changing your URLs to use HTTPS; if the API didn't have HTTPS support, you would have to use other workarounds (although it's better to use an API with HTTPS support).
I noticed this by opening the console after visiting your page and seeing the "Mixed Content" request-blocked errors.
The reason for the blank page after loading is that the Promise that fetches the data from the API gets rejected but is never handled, causing the JavaScript execution to stop.
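To illustrate that last point: a .catch on the fetch chain lets the app render a fallback instead of silently stopping. The loadNews function and the fallback value here are illustrative, not taken from the repository:

```javascript
// A rejected fetch Promise that is never handled stops the data flow
// silently; a .catch lets the app fall back gracefully instead.
function loadNews(fetchImpl) {
  return fetchImpl()
    .then((res) => res.json())
    .then((data) => data.articles)
    .catch(() => []); // fall back to an empty list instead of crashing
}

// Example with a failing fetch, simulating the blocked mixed-content request:
const failingFetch = () => Promise.reject(new TypeError('Failed to fetch'));
loadNews(failingFetch).then((articles) => console.log(articles.length)); // 0
```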
[EDIT 1]
I read through some of the code in your repository and noticed a link pointing to localhost. It looks like you tried to set up a Node.js server to proxy data through HTTPS.
The API you're using does seem to have HTTPS support
Conclusion:
Try changing the API links from http to https in your React code and see if it works. If it does, there's no need for a backend server of your own.
If the API doesn't have HTTPS support, however, do one of the following:
Migrate to a different API with HTTPS support
Serve your static React app through the backend, point your React app to /path/to/api/route without an absolute URL, and use the proxy setting in package.json as described here for development
Point to the full path of your backend server on the internet (i.e. remove the localhost links)
Also note that you cannot deploy a backend to Vercel, but it does support serverless functions.
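For Create React App, the proxy setting mentioned above is a single field in package.json; the dev server then forwards any request it cannot serve itself to that target. The backend port here is illustrative:

```json
{
  "name": "apinews",
  "proxy": "http://localhost:5000"
}
```

With this in place, a fetch("/path/to/api/route") made from the dev server is forwarded to the proxy target during development, so the React code never hardcodes a host. Note that this only applies to the development server, not to the deployed build.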

AWS EC2 security to only allow HTTP requests originating as a result of browser accessing static s3 content

As part of a distributed deployment on AWS, we have moved all static web assets, including the AngularJS files and dependencies, to an AWS S3 bucket (static website hosting). The AngularJS controllers have the complete API URL pointing to a Node.js server running on an EC2 instance. I am trying to figure out a good way to prevent the Node.js server from processing any HTTP requests other than the ones originating from the AngularJS controllers.
Option 1) For obvious reasons, I cannot use an S3 IP address as the allowed incoming IP in the security group of the EC2 instance hosting the Node.js server.
Option 2) I could use a VPC endpoint, but that is more of a solution for allowing an EC2 instance in a private subnet to access an S3 bucket.
Option 3) I could have another EC2 instance host a reverse proxy that the S3-hosted AngularJS app connects to. This reverse proxy would forward requests to the EC2 instance running Node.js.
Option 4) Use an AWS NAT gateway, though I don't think it's much different from option 3.
I'd appreciate people chiming in with their thoughts, keeping security in mind.
With AngularJS, it's all JavaScript: your code runs in the user's web browser, not in the S3 bucket.
You can implement this by validating the Origin HTTP header, but that can easily be spoofed.
The best possible solution is to provide a service that generates some kind of session token, attach it to every request in a header field when the AngularJS app calls your Node.js server, and validate it for every request.
