I have a React-based SPA hosted via S3 on one subdomain, react.mydomain.com ... It communicates with a PHP REST API hosted on a VPS on another subdomain, api.mydomain.com . The API at api.mydomain.com is behind Cloudflare; the web app is behind CloudFront, since it is on AWS.
I am having issues with bot requests flooding my VPS directly through the API, and I would like to use Cloudflare's JS challenge functionality to mitigate this.
However, what seems to be happening is that users load the React web app (which is not behind Cloudflare), and then the request that triggers the JS challenge instantly fails with a 503 response, because it is an AJAX request and the JavaScript challenge cannot be completed by an AJAX request.
I thought I might be able to handle this by catching the error and redirecting. If I manually point my own browser at the api.mydomain.com URL, I see the Cloudflare challenge and can pass it. But if I then navigate back to my react.mydomain.com SPA, the OPTIONS requests still fail, because the cross-origin request cannot attach the cookie that tells Cloudflare the challenge has been passed.
I don't understand how to adjust my infrastructure so that I can take advantage of the JS challenge. At the moment I am restricted to rate limiting, but I have found that I am still letting through what seems like ~75% or more of the unwanted bot traffic by the time the limits are severe enough that users start complaining.
If you have backend access and your API runs on Node.js, one temporary stopgap may be to call process.kill(process.pid) when a bot is detected.
I suggest not hosting the SPA in S3; use S3 only for file uploads or attachments.
Host it on EC2 and block all access through a Security Group policy: only allow Cloudflare's IPs, which are listed here: https://www.cloudflare.com/ips/
You can also use AWS Lambda (serverless) for hosting instead of S3: https://aws.amazon.com/lambda/?c=ser&sec=srv
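To make the "only allow Cloudflare IPs" step concrete, here is a minimal Node sketch, not a definitive setup: it assumes the aws-sdk v2 and node-fetch packages, and the security group ID is a placeholder you would replace. It pulls Cloudflare's published IPv4 ranges and adds them as HTTPS ingress rules:

```js
// Sketch: sync Cloudflare's published IPv4 ranges into an EC2 Security Group.
// Assumes aws-sdk v2 and node-fetch are installed and credentials configured.
const AWS = require('aws-sdk');
const fetch = require('node-fetch');

const ec2 = new AWS.EC2({ region: 'us-east-1' }); // use your VPC's region
const SECURITY_GROUP_ID = 'sg-0123456789abcdef0'; // placeholder: your group ID

async function allowCloudflareOnly() {
  // Cloudflare publishes its IPv4 ranges as a plain-text list, one per line.
  const res = await fetch('https://www.cloudflare.com/ips-v4');
  const cidrs = (await res.text()).trim().split('\n');

  // Add one HTTPS ingress rule per Cloudflare range.
  await ec2.authorizeSecurityGroupIngress({
    GroupId: SECURITY_GROUP_ID,
    IpPermissions: [{
      IpProtocol: 'tcp',
      FromPort: 443,
      ToPort: 443,
      IpRanges: cidrs.map((cidr) => ({ CidrIp: cidr, Description: 'Cloudflare' })),
    }],
  }).promise();
}

allowCloudflareOnly().catch(console.error);
```

Cloudflare also publishes IPv6 ranges at https://www.cloudflare.com/ips-v6; since a security group denies anything not explicitly allowed, adding only these rules leaves every other source blocked.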
I am currently building a web + mobile application.
My front end is developed using React and Axios (for API requests). It is served directly by Vercel on mydomain.com.
My mobile app is developed using Flutter.
My back end is developed using Django and Django REST Framework. It is served with Apache2 on api.mydomain.com and only serves API endpoints.
So the front end, mobile app, and back end are separate.
I would like only my front end (mydomain.com) and Flutter app to be able to make API requests to my Django REST backend.
I would like to prevent any other domain, and any client such as Postman, Insomnia, curl, or any script, from making API requests to my backend.
I have already set up CORS in Django REST Framework. However, I can still make requests to the backend using curl or any other client.
Do you have any idea of what I could do to achieve this?
Thanks a lot in advance for your answers.
CORS is enforced only by web browsers to prevent leaking information to unrelated pages that might request it. You need some kind of access control, either by authenticating the caller or limiting access to the endpoint.
Checking the Host header with get_host() may offer sufficient protection, depending on your server setup.
get_host() returns the value of the Host header on the request, which is data provided by the client and can therefore be set to anything. The Host header is an integral part of HTTP/1.1, allowing multiple domains to be hosted at a single address, so you may be able to depend on your server rejecting requests that don't arrive with a matching header, but it's difficult to be certain.
It would likely be more reliable to check the client's network address and reject requests from all clients except those that are specifically allowed.
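As a rough sketch of that network-address check, written as Express-style middleware for brevity (your backend is Django, where the same idea would live in a custom middleware reading request.META['REMOTE_ADDR']; the addresses below are placeholders):

```js
// Sketch: reject requests whose client network address isn't explicitly
// allowed. The addresses are placeholders for your real allowlist.
const ALLOWED_ADDRESSES = new Set(['127.0.0.1', '203.0.113.7']);

function ipAllowlist(req, res, next) {
  // req.ip is the socket's remote address (or the forwarded address when
  // Express's 'trust proxy' setting is enabled behind a trusted proxy).
  if (ALLOWED_ADDRESSES.has(req.ip)) {
    return next();
  }
  res.status(403).send('Forbidden');
}

module.exports = ipAllowlist;
```

It would be registered before the API routes, e.g. app.use(ipAllowlist).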
Check this question too.
I'm asking for some help; maybe I'm misunderstanding some concepts, and I don't know how to solve this requirement:
I have a create-react-app application deployed using Netlify.
Also, my backend is deployed on AWS ECS.
I'm using AWS Route 53 to route the frontend and backend to myapp.mydomain.com and api.mydomain.com respectively.
A client has a specific network configuration, so only *.mydomain.com requests are allowed from their organization.
The problem resides in the frontend, because it uses many third-party libraries. Checking the network tab in the browser, I noticed the following:
I'm using a Giphy library, so it makes requests to api.giphy.com.
I'm using some Google services like Analytics and Fonts, so I assume they make requests to some Google domain.
And so on...
As I understand it, these kinds of fetches will be blocked by the client's network "firewall".
Adding more rules to said firewall is not an option (that was my first proposal to the client, but they only allow *.mydomain.com and nothing more).
So my plan B was to implement a proxy ... but I don't have any idea how to implement such a solution.
Is it possible to "catch" third-party fetches and redirect them to my backend, e.g. api.mydomain.com/forward, so that my backend makes the real fetch and returns the response to the frontend?
The desired result, with the same example, would be that all fetches made to api.giphy.com get redirected to api.mydomain.com/forward/giphy, and the same for every other third-party fetch.
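A minimal sketch of that forwarding idea, assuming a Node/Express backend with node-fetch (which may not match the actual ECS service; the /forward/giphy route name follows the question):

```js
// Sketch: forward /forward/giphy/* to api.giphy.com so the browser only
// ever talks to api.mydomain.com. Assumes Express 4 wildcard routes.
const express = require('express');
const fetch = require('node-fetch');

const app = express();

app.get('/forward/giphy/*', async (req, res) => {
  // Rebuild the Giphy URL from everything after /forward/giphy,
  // keeping the original query string (e.g. the API key and search terms).
  const path = req.params[0];
  const query = new URL(req.url, 'http://placeholder').search;
  const upstream = `https://api.giphy.com/${path}${query}`;

  try {
    const upstreamRes = await fetch(upstream);
    res.status(upstreamRes.status);
    res.type(upstreamRes.headers.get('content-type') || 'application/json');
    res.send(await upstreamRes.text());
  } catch (err) {
    res.status(502).send('Upstream request failed');
  }
});

app.listen(3000);
```

The frontend would then call api.mydomain.com/forward/giphy/... instead of api.giphy.com/..., which only works if the Giphy client library allows overriding its base URL; otherwise those requests would have to be made by hand.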
I've Googled a lot and now I'm very confused; any help is welcome! Thanks, devs!
I am developing a React project for my studies and would like to publish it.
I tried a few approaches, but the site is blank; none of the data from the NEWS-API I am using appears.
There seems to be no error.
It is a frontend-only application: just React with the API.
If it helps, here's the repository link.
https://github.com/carlos-souza-dev/apinews
I visited your Vercel deployment from your GitHub repo and noticed this issue.
You're requesting data from the API over HTTP, which is insecure, while your page hosted by Vercel is served over HTTPS.
Modern browsers do not allow a page served over HTTPS to request HTTP data.
It might be fixed just by changing your URLs to use HTTPS; if the API didn't have HTTPS support, you might have to use other workarounds (although it's better to use an API with HTTPS support).
I noticed this by opening the console after visiting your page and seeing the "mixed content blocked" errors.
The reason for the blank page after loading is that the Promise that fetches the data from the API gets rejected but is never handled, so that code path stops and the app never renders any data.
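For illustration, a minimal sketch of fetching over HTTPS and handling the rejection (the URL, key, and setter callbacks are placeholders, not the repository's actual code):

```js
// Sketch: request the News API over HTTPS and handle failures explicitly
// instead of leaving the rejection unhandled. URL and key are placeholders.
const NEWS_URL = 'https://newsapi.org/v2/top-headlines?country=br&apiKey=YOUR_KEY';

function loadNews(setArticles, setError) {
  fetch(NEWS_URL) // https, not http
    .then((res) => {
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    })
    .then((data) => setArticles(data.articles))
    .catch((err) => setError(err.message)); // surface the failure in the UI
}
```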
[EDIT 1]
I read through some of the code in your repository and noticed a link pointing to localhost. It looks like you tried to set up a Node.js server to proxy data over HTTPS.
The API you're using does seem to have HTTPS support.
Conclusion:
Try changing the API links in your React code from http to https and see if it works. If it does, there's no need for a backend server of your own.
If the API doesn't have HTTPS support, however, do one of the following:
Migrate to a different API with HTTPS support
Serve your static React app through the backend, point the app to /path/to/api/route without an absolute URL, and use the proxy setting in package.json as described here for development (see the snippet after this list)
Point to a full path to your backend server on the internet (i.e. remove the localhost links)
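For development, the create-react-app proxy mentioned above is a single field in package.json (the port is a placeholder for wherever your backend listens):

```json
{
  "proxy": "http://localhost:5000"
}
```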
Also note that you cannot deploy a traditional backend to Vercel, but it does support serverless functions.
I have an API made with Node.js (Node.js v10 + Express v4.16 + node-fetch v2.3), and in this API I have one endpoint that needs to consume content from a third-party API/service via an HTTP POST request.
The problem is: this third-party API only accepts requests coming from Brazil.
In the past, my API was hosted on DigitalOcean, but because of this rule I migrated to GCP (since DigitalOcean doesn't have hosts in Brazil) and created my App Engine application in the southamerica-east1 region (Sao Paulo, Brazil, according to this document).
And yeah... it works on my machine ¯\_(ツ)_/¯
What's happening: sometimes the requests run fine, but after some version updates (I'm using CI/CD to make the deployment) the requests start failing.
The question: is there a way to force my application to make its outgoing requests only from the hosted region?
P.S. I'm purposely not using the flexible environment, to prevent auto-scaling (and cost increases). (I don't know if I'm right about this; I'm new to GCP.)
The IPs of Google Cloud Platform share the same geolocation (US), so I would say it's expected for the requests to fail. You can have a look at this and this question for more info and potential workarounds.
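One way to check what the third party actually sees is to log the app's egress IP from inside App Engine; a small diagnostic sketch using node-fetch and the public api.ipify.org echo service:

```js
// Diagnostic sketch: print this process's public egress IP so you can
// check how the third-party API geolocates it. Assumes node-fetch.
const fetch = require('node-fetch');

async function logEgressIp() {
  const res = await fetch('https://api.ipify.org');
  console.log('Egress IP:', await res.text());
}

logEgressIp().catch(console.error);
```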
I've picked up Angular and am now developing two separate applications: the frontend Angular app and the backend Laravel app.
As of now, my backend app is just an API endpoint that handles requests, database interaction, logic, validation, etc.
However, what stops someone from requesting /api/users/1 and getting that data?
Right now there is nothing in place that prevents this from occurring.
What's the best way to prevent this from occurring and to verify that a request was sent through the application, and not through something like http://hurl.it by some random user?
You should first evaluate what routes need to be protected, and who should have access. Sometimes it might be fine to leave them open to the public.
Once you've figured that out, you have a few options. I personally lean towards the OAuth 2.0 protocol; some people find it to be overkill. There is also WSSE, but I feel there are far better resources today explaining the use of OAuth, and it would probably be easier to follow.
You can Google around for OAuth server libraries for Laravel. One such library is: https://github.com/lucadegasperi/oauth2-server-laravel
You will also probably want to enable CORS if your Angular app is on a different domain from your API, e.g. api.example.com holds the API and example.com is where your app lives.
For CORS, Laravel also has some packages, one such being: https://github.com/barryvdh/laravel-cors
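As a framework-agnostic sketch of where such a token check sits (shown in Express-style JavaScript for brevity, not Laravel; isValidToken is a hypothetical stand-in for whatever validation the OAuth package provides):

```js
// Sketch: gate API routes behind a bearer token. In a real setup the OAuth
// server library issues and validates tokens; this only shows the shape.
function isValidToken(token) {
  return token === 'demo-token'; // placeholder stand-in for real validation
}

function requireAccessToken(req, res, next) {
  const header = req.get('Authorization') || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;

  if (token && isValidToken(token)) {
    return next();
  }
  res.status(401).json({ error: 'unauthenticated' });
}

module.exports = requireAccessToken;
```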