I'm asking for some help; maybe I'm misunderstanding some concepts, but I don't know how to solve this requirement:
I have a create-react-app application deployed using Netlify.
Also my backend is deployed on AWS ECS.
I'm using AWS route 53 for routing frontend and backend to myapp.mydomain.com and api.mydomain.com respectively.
A client has a specific network config, so only *.mydomain.com requests are allowed for their organization.
The problem resides in the frontend, because it uses many libraries.
Checking the network tab in the browser, I noticed the following, for example:
I'm using a giphy library, so it makes requests to api.giphy.com.
I'm using some Google stuff like Analytics and Fonts, so I assume it will make requests to some Google domains.
And so on...
As I understand it, these kinds of fetches will be blocked by the client's network "firewall".
Adding more rules to said firewall is not an option (that was my first proposal to the client, but they only allow *.mydomain.com and nothing more).
So my plan B was to implement a proxy ... but I don't have any idea how to implement such a solution.
Is it possible to "catch" third-party fetches and redirect them to my backend, e.g. api.mydomain.com/forward, so my backend would make the real fetch and return the response to the frontend?
The desired result, using the same example, would be that all fetches made to api.giphy.com get redirected to api.mydomain.com/forward/giphy, and the same for all other third-party fetches.
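To make it concrete, here is a rough sketch of what I imagine the forwarding endpoint could look like (just an assumption on my side: an Express backend on Node 18+ with global fetch; the /forward/giphy route and the GIPHY_API_KEY variable are placeholders, nothing that exists yet):

```typescript
// Hypothetical Express endpoint on api.mydomain.com that forwards Giphy
// requests, so the browser only ever talks to *.mydomain.com.
// Assumes Node 18+ (global fetch); GIPHY_API_KEY is a placeholder env var.
import express from "express";

const app = express();

// Express 4 wildcard syntax: everything after /forward/giphy/ is captured
// in req.params[0], e.g.
// /forward/giphy/v1/gifs/search -> https://api.giphy.com/v1/gifs/search
app.get("/forward/giphy/*", async (req, res) => {
  const giphyPath = req.params[0];
  const url = new URL(`https://api.giphy.com/${giphyPath}`);

  // Copy the original query string and inject the API key server-side.
  for (const [key, value] of Object.entries(req.query)) {
    url.searchParams.set(key, String(value));
  }
  url.searchParams.set("api_key", process.env.GIPHY_API_KEY ?? "");

  try {
    const upstream = await fetch(url);
    const body = await upstream.text();
    res
      .status(upstream.status)
      .type(upstream.headers.get("content-type") ?? "application/json")
      .send(body);
  } catch {
    res.status(502).json({ error: "Upstream request failed" });
  }
});

app.listen(3000);
```

The frontend would then point its Giphy calls at api.mydomain.com/forward/giphy/... instead of api.giphy.com. Is something like this the right direction?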
I've Googled a lot and now I'm very confused; any help is welcome! Thanks, devs!
Related
Can anyone help with how to configure nginx so it only accepts requests from the server IP where ReactJS is hosted?
I've tried many options to no avail. I always see that ReactJS uses the client IP where the user is currently browsing (I guess because of its client-based nature). Unfortunately, I need to block all other requests to protect my Django REST API from external requests. My Django app sits behind this nginx reverse proxy, by the way. How do you guys do this?
I think you seriously misunderstood how your app works. Unless you are doing something really, really weird, it generally works like this:
The user's browser (the client) receives the ReactJS code from the server hosting it
The ReactJS code is executed in the user's browser
All requests for your REST API will originate from the user's browser executing the ReactJS code, i.e. coming from the client machine, not from the server hosting the ReactJS code
The server hosting the ReactJS code merely returns the ReactJS code to the client, and doesn't even interact with the server hosting the Django REST API
Thus, what you are attempting to do is misguided and in fact will just restrict your users.
As a side note, you may use several complicated techniques to be somewhat sure that the API requests do come from a browser executing your ReactJS code (i.e. a legitimate use of your API) rather than other tools, but that's far from a guarantee and in most cases serves no practical purpose anyway.
I have a React based SPA that is hosted via S3 on one subdomain, react.mydomain.com ... It communicates with a PHP REST API that is hosted on a VPS on another subdomain, api.mydomain.com . The api.mydomain.com is behind CloudFlare. The webapp is behind CloudFront since it is on AWS.
I am having issues with bot requests directly to the API flooding my VPS, and I would like to use CloudFlare's JS challenge functionality to mitigate this.
However, what seems to be happening is that users are able to load the React webapp (which is not behind CloudFlare). Then the request that would prompt the JS challenge fails instantly with a 503 response, because it is an AJAX request and is incompatible with the JavaScript challenge.
I thought I might be able to handle this by catching the error and redirecting. However, if I manually force my own browser to navigate to the api.mydomain.com URL, I will see the CloudFlare challenge and pass it. If I then navigate back to my react.mydomain.com SPA, the OPTIONS requests still fail because they cannot attach the cookie that tells CloudFlare the challenge has been passed.
I don't understand how to adjust my infrastructure so that I can take advantage of the JS challenge. At the moment I am restricted to rate limiting, but I have found that I am still letting through what seems like ~75% or more of the unwanted bot traffic by the time the limits get severe enough that users start complaining.
If you have backend access, you may be able to use Node's process.kill(process.pid) upon detection of a bot as a temporary solution.
I suggest not hosting the SPA in S3; use S3 only for file uploads or attachments.
Host it on EC2 and block all access through a security group policy,
only allowing Cloudflare IPs,
which are listed here: https://www.cloudflare.com/ips/
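If you go that route, a rough sketch of such a security group using the AWS CDK could look like the following (the stack layout and the two example CIDR ranges are just illustrations; always keep the list in sync with the Cloudflare page above):

```typescript
// Hypothetical AWS CDK sketch: a security group that only allows HTTPS
// from Cloudflare's published IPv4 ranges (subset shown; see
// https://www.cloudflare.com/ips/ for the full, current list).
import * as ec2 from "aws-cdk-lib/aws-ec2";
import { Stack } from "aws-cdk-lib";
import { Construct } from "constructs";

export class ApiOriginStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const vpc = new ec2.Vpc(this, "Vpc");

    const sg = new ec2.SecurityGroup(this, "CloudflareOnly", {
      vpc,
      allowAllOutbound: true,
      description: "Allow inbound HTTPS only from Cloudflare edge IPs",
    });

    // Example Cloudflare ranges; update from cloudflare.com/ips as needed.
    const cloudflareCidrs = ["173.245.48.0/20", "103.21.244.0/22"];
    for (const cidr of cloudflareCidrs) {
      sg.addIngressRule(ec2.Peer.ipv4(cidr), ec2.Port.tcp(443), `Cloudflare ${cidr}`);
    }
  }
}
```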
You can also use AWS Lambda (serverless) for hosting instead of S3:
https://aws.amazon.com/lambda/?c=ser&sec=srv
Maybe this is a really basic question, but how do you architect your system such that your single-page application is hosted on premise with some hostname, say mydogs.com, but your application services code is hosted in the cloud (as well as the database)? For example, let's say you spin up an Amazon EC2 Container Service cluster using Docker and it is running a NodeJS server. The hostnames will all be like ec2_some_id.amazon.com. What system sits in front of the Amazon EC2 instance that my AngularJS app connects to? What architecture facilitates this type of app, especially with AWS-based services?
One of the important aspects of setting up the web application and the backend is to serve them from a single domain, avoiding cross-origin requests (CORS). To do this, you can use AWS CloudFront as a proxy, where routing happens based on URL paths.
For example, you can point the root domain to index.html while routing /api/* requests to the backend endpoint running in EC2.
It's also important for your Angular application to use full URL paths. One challenge with these is that, for routes such as /home, /about, etc., the browser will request a page from the backend for that particular path. Since it's a single-page application, you won't have server-side pages for /home, /about, etc. This is where you can set up error pages in CloudFront so that all not-found routes are also forwarded to index.html (which serves the AngularJS app).
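As an illustration only, a distribution along those lines could be defined with the AWS CDK roughly like this (the S3 bucket, the api.mydomain.example origin, and the disabled caching are assumptions, not part of the original setup; the same path-based routing works with an EC2/ALB origin):

```typescript
// Rough CDK sketch of the CloudFront-as-proxy setup described above:
// root serves the SPA, /api/* is forwarded to the backend, and 403/404
// responses fall back to index.html for client-side routes.
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as s3 from "aws-cdk-lib/aws-s3";
import { Stack, Duration } from "aws-cdk-lib";
import { Construct } from "constructs";

export class SpaStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const siteBucket = new s3.Bucket(this, "SiteBucket");

    new cloudfront.Distribution(this, "SiteDistribution", {
      defaultRootObject: "index.html",
      defaultBehavior: {
        origin: new origins.S3Origin(siteBucket),
      },
      additionalBehaviors: {
        // Anything under /api/* goes to the backend instead of S3.
        "/api/*": {
          origin: new origins.HttpOrigin("api.mydomain.example"),
          allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
          cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
        },
      },
      errorResponses: [
        // Unknown paths (e.g. /home, /about) fall back to the SPA entry point.
        { httpStatus: 403, responseHttpStatus: 200, responsePagePath: "/index.html", ttl: Duration.seconds(0) },
        { httpStatus: 404, responseHttpStatus: 200, responsePagePath: "/index.html", ttl: Duration.seconds(0) },
      ],
    });
  }
}
```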
The only thing you need to care about is the CORS on whatever server you use to host your backend in AWS.
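If you do serve the API from a different origin, a minimal sketch of enabling CORS on a Node/Express backend might look like this (the `cors` middleware package and the mydogs.com origin are assumptions based on the question, not part of the original answer):

```typescript
// Minimal sketch: allow the browser to call this API only from the SPA domain.
import express from "express";
import cors from "cors";

const app = express();

// Only the domain serving the SPA may make cross-origin requests.
app.use(cors({ origin: "https://mydogs.com" }));

app.get("/api/health", (_req, res) => {
  res.json({ ok: true });
});

app.listen(8080);
```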
More documentation on CORS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
Hope it helps.
A good approach is to have two separate instances: one instance to serve your API (Application Programming Interface) and another to serve your SPA (Single Page Application).
For the API server you may want a more robust service, because it's the one that will suffer the most, receiving tons of requests from all client instances, so it needs more performance, bandwidth, etc. In addition, you probably want your API server to be scalable when needed (depending on the load on it); maybe not, but it's something to keep in mind if your application is supposed to grow fast. So you may invest a little more in this one.
The SPA server, on the other hand, only serves static resources (if you're not using server-side rendering), so it can be cheaper (if not free). Furthermore, all it does is serve the application resources once; the application actually runs on the client, and most files end up being cached by the browser. So you don't need to invest much in this one.
Anyhow, your question about which service fits this type of application best can't really be answered, because it doesn't say much about your requirements; you'll find the right one once you know how your application will be consumed by clients: how many requests, how many downloads, and how much storage your app needs.
Amazon EC2 instance types
I have a web application with Angular as the frontend and Django REST as the backend.
My web application makes requests like
/api/options/user?filter={}
Now, is it possible that if those requests are made from the application they go through, but if someone types that into the browser directly and edits some filters, they don't work?
Although the data is not sensitive and they can still see it via the console, I just don't want them to play with it, or at least I want to make it hard.
You can't rely on the URL to distinguish between the two cases. You could have your application provide information in the headers of the request, which a browser would not know, but someone writing their own application could mimic your technique.
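As a rough sketch of that idea (the X-App-Client header name and its value are arbitrary placeholders; the Django view would simply reject requests missing it, and anyone inspecting the network tab can still copy it):

```typescript
// Minimal sketch of attaching an application-only header to API calls.
// This only raises the bar slightly; it is not real security.
async function apiGet(path: string): Promise<unknown> {
  const response = await fetch(path, {
    headers: {
      "X-App-Client": "options-ui", // checked server-side in the Django view
      "X-Requested-With": "XMLHttpRequest",
    },
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json();
}

// Usage, matching the request from the question:
// apiGet("/api/options/user?filter={}");
```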
I've picked up Angular and am now developing two separate applications: the frontend (the Angular app) and the backend (the Laravel app).
As of now my backend app is just an API endpoint that handles requests, database interaction, logic, validation, etc.
However, what stops someone from requesting /api/users/1 and getting that data?
Right now there is nothing in place that prevents this from occurring.
What's the best way to prevent this from occurring and verify the request is sent through the application and not through something like http://hurl.it from some random user?
You should first evaluate what routes need to be protected, and who should have access. Sometimes it might be fine to leave them open to the public.
Once you've figured that out, you have a few options. I personally lean towards the OAuth 2.0 protocol; some people find it to be overkill. There is also WSSE, but I personally feel that today there are far better resources explaining the use of OAuth, so it would probably be easier to follow.
You can Google around for OAuth server libraries for Laravel. One such library is: https://github.com/lucadegasperi/oauth2-server-laravel
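On the client side (assuming a modern Angular app using HttpClient; AngularJS 1.x would use an $http interceptor instead), attaching the token issued by the OAuth server could look roughly like this, where the localStorage key is just a placeholder:

```typescript
// Sketch of an Angular interceptor that adds the OAuth access token
// to every outgoing API request. Token storage and refresh are out of
// scope here; the "access_token" key is a hypothetical example.
import { Injectable } from "@angular/core";
import {
  HttpEvent,
  HttpHandler,
  HttpInterceptor,
  HttpRequest,
} from "@angular/common/http";
import { Observable } from "rxjs";

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    const token = localStorage.getItem("access_token"); // placeholder storage
    if (!token) {
      return next.handle(req);
    }
    // Clone the request and attach the bearer token for the Laravel API.
    const authorized = req.clone({
      setHeaders: { Authorization: `Bearer ${token}` },
    });
    return next.handle(authorized);
  }
}
```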
You will also probably want to enable CORS if your Angular app is on a different domain from your API, e.g. api.example.com holds the API and example.com is where your app lives.
For CORS, Laravel also has some packages, one such being: https://github.com/barryvdh/laravel-cors