I have a bunch of AWS Lambda functions behind an AWS API Gateway with a custom authorizer Lambda function, and they are hit by a React front end using Axios for requests. For some reason, everything works fine in Chrome and Postman, but Safari times out on a select few endpoints. I tested whether the API Gateway was timing out by setting the API Gateway timeout to 5000 ms and the front-end timeout to 10000 ms. The requests time out at 10000 ms, leading me to believe that the front end isn't even reaching the API Gateway, but I'm still left trying to figure out what exactly is going on.
I know this isn't a lot to go on but if this sounds familiar to anyone, I'd love to hear your experience so I can get this thing sorted.
Thanks!
EDIT: We have done some more testing and it seems that Safari is preventing requests from succeeding when an iframe is present on the page. If it helps, the src of the iframe has the same domain as the parent page but a different subdomain. When an iframe is present, no requests to our API seem to be completing.
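For anyone comparing notes, this is roughly the client-side setup being described. With Axios, a request that dies at the client timeout has no response object and only carries an error code, whereas a timeout returned by the gateway itself comes back as an actual HTTP status. A minimal sketch, with the base URL and endpoint as placeholders:

```typescript
import axios from "axios";

// Client-side timeout: if this fires, the browser gave up waiting, which is a
// different failure from a 504 actually returned by API Gateway.
const api = axios.create({
  baseURL: "https://example.execute-api.us-east-1.amazonaws.com/prod", // placeholder URL
  timeout: 10000, // 10 s, matching the front-end timeout described above
});

async function callEndpoint() {
  try {
    const { data } = await api.get("/items"); // hypothetical endpoint
    console.log(data);
  } catch (err) {
    if (axios.isAxiosError(err)) {
      // err.response is set when the gateway actually answered (e.g. its own 504);
      // a client-side timeout has no response, only an error code and message.
      console.log("code:", err.code, "status:", err.response?.status, "message:", err.message);
    }
  }
}

callEndpoint();
```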
I'm asking for some help; maybe I'm misunderstanding some concepts, and I don't know how to solve this requirement:
I have a create-react-app application deployed using Netlify.
Also my backend is deployed on AWS ECS.
I'm using AWS Route 53 to route the frontend and backend to myapp.mydomain.com and api.mydomain.com respectively.
A client has a specific network configuration, so only requests to *.mydomain.com are allowed from their organization.
The problem is on the frontend, because it uses many third-party libraries.
Checking the network tab in the browser, I noticed the following:
I'm using a giphy library, so it makes requests to api.giphy.com.
I'm using some Google stuff like Analytics and Fonts, so I assume it makes requests to some Google domains.
And so on...
As I understand it, these kinds of fetches will be blocked by the client's network "firewall".
Adding more rules to said firewall is not an option (that was my first proposal to the client, but they only allow *.mydomain.com and nothing more).
So my plan B was to implement a proxy ... but I don't have any idea how to implement such a solution.
Is it possible to "catch" third-party fetches and redirect them to my backend, e.g. api.mydomain.com/forward, so that my backend makes the real fetch and returns the response to the frontend?
The desired result would be, to use the example again, that all fetches made to api.giphy.com are redirected to api.mydomain.com/forward/giphy, and the same for all other third-party fetches.
I Googled a lot and now I'm very confused; any help is welcome!! Thanks, devs!
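For what it's worth, here is a minimal sketch of the kind of forwarding route being described, assuming a Node/Express backend (the question doesn't say what the backend is written in). The /forward/:service path, the UPSTREAMS map, and the port are all made up for illustration:

```typescript
import express from "express";
import axios from "axios";

const app = express();
app.use(express.json());

// Allow-list of upstream services the proxy is willing to forward to.
// (Names and targets are illustrative.)
const UPSTREAMS: Record<string, string> = {
  giphy: "https://api.giphy.com",
};

// Express 4 wildcard route, e.g. GET /forward/giphy/v1/gifs/search?q=cats&api_key=...
app.all("/forward/:service/*", async (req, res) => {
  const base = UPSTREAMS[req.params.service];
  if (!base) {
    res.status(404).json({ error: "unknown upstream" });
    return;
  }

  const rest = req.params["0"]; // the wildcard remainder of the path
  try {
    const upstream = await axios.request({
      method: req.method,
      url: `${base}/${rest}`,
      params: req.query,
      data: req.body,
      validateStatus: () => true, // pass the upstream status code through unchanged
    });
    res.status(upstream.status).send(upstream.data);
  } catch {
    res.status(502).json({ error: "upstream request failed" });
  }
});

app.listen(3001);
```

One caveat: things like Google Fonts and Analytics are usually easier to self-host or bundle than to proxy, since the resources they return often point at further third-party URLs.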
I'm a backend developer, and I've developed a WebApi service with Asp.Net Core.
I've also developed an API gateway using the Ocelot library.
On the front side, the front-end developers use React and Axios as the HTTP client.
When the request is made to the API method directly, it works fine.
But when the request goes through the API gateway, the response time is much longer (more than 1-2 minutes).
This delay occurs in Chromium-based browsers, e.g. Google Chrome, Microsoft Edge, and Opera.
Everything is OK in Mozilla Firefox.
There is also no problem from Postman or JMeter.
What could be the reason for such behavior?
And where should I look for a solution, on the back end or the front end?
With API Gateway, the request goes from the client to API Gateway, which means leaving the application and going out to the internet, then back to your application to reach your other instance, then back to API Gateway, which means leaving your application again, and then back to your first instance.
So this additional latency is expected. The only way to lower it is to add API caching, which is only going to be useful if the content you are requesting is static and not updating constantly. You will still see the longer latency when an item is evicted from the cache and needs to be fetched from the system, but it will lower it for most calls.
So I guess the latency is normal, which is unfortunate.
As for why it responds quickly in Mozilla Firefox, that is probably due to differences between the browsers' implementations.
The above is my point of view; I hope it helps. If my understanding is wrong, please correct me. Thanks.
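If it helps to pin down which side is adding the delay, one low-effort check is to time each request in the browser with Axios interceptors and compare that against the gateway's and the API's own logs; whichever hop accounts for the missing minutes is where to dig. A rough sketch using the standard Axios interceptor API; the base URL and the meta field are placeholders:

```typescript
import axios from "axios";

const api = axios.create({ baseURL: "https://api.example.com" }); // placeholder

// Stamp each outgoing request with a start time.
api.interceptors.request.use((config) => {
  (config as any).meta = { start: performance.now() };
  return config;
});

// Log the elapsed time when the response (or error) comes back.
api.interceptors.response.use(
  (response) => {
    const start = (response.config as any).meta?.start ?? 0;
    console.log(`${response.config.url} took ${Math.round(performance.now() - start)} ms`);
    return response;
  },
  (error) => {
    const start = (error.config as any)?.meta?.start ?? 0;
    console.log(`${error.config?.url} failed after ${Math.round(performance.now() - start)} ms`);
    return Promise.reject(error);
  }
);
```

Comparing these numbers with Ocelot's and the downstream API's logs shows whether the time is spent before the request reaches the gateway (browser/network), inside the gateway, or in the API itself.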
I can make a request to an API very easily in Postman. But when I try it with React on localhost:3000, it throws an error:
Please tell me why?! Thanks
See https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/
If you're familiar with CORS, this will be easier to explain.
The requests sent by Postman and the ones sent by your React app are different kinds of requests.
When you send requests with Postman directly, they are treated as if sent by "a person", like entering your API URLs in the browser's address bar.
It's different when sending requests from React, because React is a front-end JavaScript framework "executed" by your browser.
Imagine you happen to visit a malicious website, and the site sends React (or other JavaScript) code intended to manipulate your browser (an easy example: if there were no limits, it could use your browser as a web crawler against other sites).
Every time you open a site, your browser executes a lot of code from that site.
So you need to understand why we have the CORS policy, and how CORS works, in order to develop your APIs.
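Concretely, the fix usually lives on the server, which has to state explicitly that your browser origin is allowed. What that looks like depends entirely on your backend; purely as an example, assuming a Node/Express API (which the question doesn't specify), the cors middleware would do it:

```typescript
import express from "express";
import cors from "cors";

const app = express();

// Tell browsers that pages served from the React dev server may call this API.
// (The origin matches the localhost:3000 mentioned in the question; adjust for production.)
app.use(
  cors({
    origin: "http://localhost:3000",
    methods: ["GET", "POST", "PUT", "DELETE"],
    allowedHeaders: ["Content-Type", "Authorization"],
  })
);

// Hypothetical endpoint to demonstrate the headers being applied.
app.get("/api/items", (_req, res) => {
  res.json([{ id: 1, name: "example" }]);
});

app.listen(4000);
```

Postman never sees these headers because it is not a browser and does not enforce the same-origin policy; it is the browser that refuses to hand your React code the response.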
We just started getting lots of 502 errors out of the blue, without deploying anything new. Somehow 99% of all requests to the endpoints don't get through to App Engine (as seen in the App Engine log). The service status of Google App Engine and Endpoints appears to be green.
We tried deploying a new Endpoints API description and a new App Engine version using it, and also stopping the respective versions.
We can also no longer open the API Explorer.
Web requests via the gapi JS library return "Error 502 (Server Error)!!1" when trying to initialize and load the "_ah/api/static/proxy.html" page.
What could be the problem here? Is there a way to "restart" endpoints?
OK, it just magically started working again after around 50 min of downtime. I guess it would still be interesting to know if there is anything we could do in cases like this.
I have a React-based SPA hosted via S3 on one subdomain, react.mydomain.com. It communicates with a PHP REST API hosted on a VPS on another subdomain, api.mydomain.com. The api.mydomain.com is behind Cloudflare; the web app is behind CloudFront since it is on AWS.
I am having issues with bot requests directly to the API flooding my VPS, and I would like to use Cloudflare's JS challenge functionality to mitigate this.
However, what seems to be happening is that users load the React web app (which is not behind Cloudflare), and then the request that would prompt the JS challenge fails instantly with a 503 response, because it is an AJAX request and AJAX is incompatible with the JavaScript challenge.
I thought I might be able to handle this by catching the error and redirecting. If I manually navigate my own browser to the api.mydomain.com URL, I see the Cloudflare challenge and can pass it. However, if I then navigate back to my react.mydomain.com SPA, the OPTIONS requests still fail because they cannot attach the cookie that tells Cloudflare the challenge has been passed.
I don't understand how to adjust my infrastructure so that I can take advantage of the JS challenge. At the moment I am restricted to rate limiting, but I have found that I am still letting through roughly 75% or more of the unwanted bot traffic by the time the limits get severe enough that users start complaining.
If you have backend access, you may be able to use Node's process.kill(process.pid) upon detecting a bot, as a temporary solution.
I suggest not hosting the SPA in S3; use S3 only for file uploads or attachments.
Host it on EC2 and block all access through a security group policy that only allows Cloudflare's IPs, which are listed here: https://www.cloudflare.com/ips/
You can also use AWS Lambda (serverless) for hosting instead of S3: https://aws.amazon.com/lambda/?c=ser&sec=srv
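If changing the hosting isn't an option, the same "only let Cloudflare reach the origin" idea can also be enforced inside the API or the web server in front of it. Purely as an illustration in TypeScript/Express (not the poster's PHP stack), with two of the published IPv4 ranges hardcoded and IPv6 ignored for brevity:

```typescript
import express from "express";

// Two of Cloudflare's published IPv4 ranges, for illustration only --
// use the full, current list from https://www.cloudflare.com/ips/ and
// remember it also includes IPv6 ranges, which this sketch ignores.
const CLOUDFLARE_RANGES = ["173.245.48.0/20", "103.21.244.0/22"];

// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipv4ToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + parseInt(octet, 10), 0) >>> 0;
}

// True if the address falls inside the CIDR block.
function inCidr(ip: string, cidr: string): boolean {
  const [range, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipv4ToInt(ip) & mask) === (ipv4ToInt(range) & mask);
}

const app = express();

// Reject any connection that did not come through Cloudflare.
// Note: this checks the connecting peer's address, not X-Forwarded-For.
app.use((req, res, next) => {
  const peer = (req.socket.remoteAddress ?? "").replace(/^::ffff:/, "");
  if (CLOUDFLARE_RANGES.some((cidr) => inCidr(peer, cidr))) {
    next();
    return;
  }
  res.status(403).send("Direct access to the origin is not allowed");
});

app.get("/api/health", (_req, res) => {
  res.json({ ok: true });
});

app.listen(8080);
```

A security group (or firewall rule on the VPS) achieves the same thing one layer lower and is usually preferable; the application-level check is mainly useful when you cannot touch the network configuration.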