I'm a backend developer, and I've developed a WebApi service with ASP.NET Core.
I've also developed an ApiGateway using the Ocelot library.
On the front end, the front-end developers use React with Axios as the HTTP client.
When a request is sent to the API method directly, it works fine.
But when the same request is called through the ApiGateway, the response time is much longer (more than 1-2 minutes).
The delay occurs in Chromium-based browsers, e.g. Google Chrome, Microsoft Edge and Opera.
Everything is fine in Mozilla Firefox.
There is also no problem with Postman or JMeter.
What could be the reason for such behavior?
And where should I look for a solution, on the back end or the front end?
With an API Gateway, a request has to go from the client to the API Gateway, which means leaving the application and going out to the internet, then back into your application to reach your other instance, then back to the API Gateway, which means leaving your application again, and then back to your first instance.
So this additional latency is expected. The only way to lower the latency is to add API caching, which is only going to be useful if the content you are requesting is static and not updating constantly. You will still see the longer latency when an item is evicted from the cache and needs to be fetched from the backing system, but it will lower the latency of most calls.
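If it helps to picture it, here is a minimal sketch of that kind of cache: a hypothetical in-memory TTL cache written in TypeScript, not the gateway's actual mechanism (Ocelot, for instance, ships its own route-level caching that is configured rather than hand-written):

```typescript
// Illustrative only: a generic in-memory TTL cache in front of an upstream call.
// fetchFromUpstream and CACHE_TTL_MS are hypothetical names, not part of any gateway API.
type CacheEntry = { body: string; expiresAt: number };

const CACHE_TTL_MS = 30_000;                 // how long a cached response stays valid
const cache = new Map<string, CacheEntry>();

async function fetchFromUpstream(url: string): Promise<string> {
  const res = await fetch(url);              // the slow hop through the gateway/backend
  return res.text();
}

async function getWithCache(url: string): Promise<string> {
  const hit = cache.get(url);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.body;                         // fast path: served from memory
  }
  const body = await fetchFromUpstream(url); // slow path: expired or never cached
  cache.set(url, { body, expiresAt: Date.now() + CACHE_TTL_MS });
  return body;
}
```

The trade-off is exactly the one described above: data can be stale for up to the TTL, in exchange for skipping the slow hop on most calls.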
So I guess the latency is normal, which is unfortunate.
As for why it responds well in Mozilla Firefox, that is probably due to differences between the browsers' implementations.
The above is my point of view; I hope it helps you. If my understanding is wrong, please correct me, thanks.
Related
I'm asking for some help; maybe I'm misunderstanding some concepts, and I don't know how to solve this requirement:
I have a create-react-app application deployed using Netlify.
Also my backend is deployed on AWS ECS.
I'm using AWS Route 53 to route the frontend and the backend to myapp.mydomain.com and api.mydomain.com respectively.
A client has a specific network config, so only requests to *.mydomain.com are allowed from their organization.
The problem resides on the frontend, because it uses many third-party libraries. For example, checking the network tab in the browser I noticed the following:
I'm using a giphy library, so it makes requests to api.giphy.com.
I'm using some Google stuff like Analytics and Fonts, so I assume it will make requests to some Google domains.
And so on...
As I understand it, these kinds of fetches will be blocked by the client's network "firewall".
Adding more rules to said firewall is not an option (that was my first proposal to the client, but they only allow *.mydomain.com and nothing more).
So my plan B was to implement a proxy... but I don't have any idea how to implement such a solution.
Is it possible to "catch" third-party fetches and redirect them to my backend, e.g. api.mydomain.com/forward, so that my backend makes the real fetch and returns the response to the frontend?
The desired result would be, to continue the example, that all fetches made to api.giphy.com are redirected to api.mydomain.com/forward/giphy, and the same for all other third-party fetches; something like the sketch below.
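To make the idea concrete, this is roughly the forwarding endpoint I imagine on api.mydomain.com (a hypothetical sketch using Express and Axios; the service names and paths are placeholders, not something I already have working):

```typescript
import express from "express";
import axios from "axios";

const app = express();
app.use(express.json());

// Hypothetical allow-list: only these third parties may be proxied.
const targets: Record<string, string> = {
  giphy: "https://api.giphy.com",
};

// e.g. GET api.mydomain.com/forward/giphy/v1/gifs/search?q=cats
// is forwarded to       https://api.giphy.com/v1/gifs/search?q=cats
app.all("/forward/:service/*", async (req, res) => {
  const base = targets[req.params.service];
  if (!base) {
    return res.status(404).send("Unknown service");
  }
  // Everything after /forward/<service> becomes the path on the real API.
  const rest = req.originalUrl.replace(`/forward/${req.params.service}`, "");
  try {
    const upstream = await axios.request({
      method: req.method,
      url: base + rest,
      data: req.body,
      validateStatus: () => true, // pass upstream status codes through unchanged
    });
    res.status(upstream.status).send(upstream.data);
  } catch (err) {
    console.error(err);
    res.status(502).send("Upstream request failed");
  }
});

app.listen(8080);
```

Whether this actually works also depends on each third-party library letting me override its base URL so it points at /forward/... instead of the real host, which I haven't verified.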
I Googled a lot and now I'm very confused; any help is welcome! Thanks, devs!
I have a bunch of AWS Lambda functions behind an AWS API Gateway with a custom authorizer Lambda function, and they are being hit by a React front end that uses Axios to make requests. For some reason, everything works fine in Chrome and Postman, but Safari times out on a select few endpoints. I tested whether the API Gateway was timing out by setting the API Gateway timeout to 5000 ms and the front-end timeout to 10000 ms. The requests time out at 10000 ms, leading me to believe that the front end isn't even reaching the API Gateway, but I'm still left trying to figure out what exactly is going on.
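For reference, the front-end timeout is just the standard Axios timeout option; in a sketch like the following (the base URL and path are placeholders), it's this client-side timeout that fires:

```typescript
import axios from "axios";

// Placeholder base URL; 10000 ms matches the front-end timeout described above.
const api = axios.create({
  baseURL: "https://example.execute-api.us-east-1.amazonaws.com/prod",
  timeout: 10000, // give up if no response arrives within 10 s
});

// One of the endpoints that times out in Safari (path is hypothetical).
api.get("/some-endpoint").catch((err) => {
  // Axios typically reports a client-side timeout as ECONNABORTED.
  console.error(err.code, err.message);
});
```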
I know this isn't a lot to go on but if this sounds familiar to anyone, I'd love to hear your experience so I can get this thing sorted.
Thanks!
EDIT: We have done some more testing and it seems that Safari is preventing requests from succeeding when an iframe is present on the page. If it helps, the src of the iframe has the same domain as the parent page but a different subdomain. When an iframe is present, no requests to our API seem to be completing.
I have had a Python-based Google App Engine app working great using Cloud Endpoints 1.0 for several years without incident. I have had nothing but trouble migrating to Cloud Endpoints 2.0.
Currently I'm in the following state after already clearing many previous hurdles described in other similar questions:
I have one version of my service called gce1 which uses Endpoints 1.0 and is set as the default service receiving 100% of my traffic. I can point API clients and the APIs Explorer to both gce1-dot-myservice.appspot.com and the default myservice.appspot.com and everything works fine. I can verify in the logs that anything that goes through here is using GCE 1.0.
I have a second version of my service called gce2 which is not receiving any traffic by default, but if I point an API client or the APIs Explorer to gce2-dot-myservice.appspot.com it works just fine, and I can verify in the logs that anything that goes through here is using GCE 2.0.
Great, right? So it would seem that all I need to do is migrate all my traffic to gce2 and I'm done.
But... when I do that, everything breaks! The default myservice.appspot.com serves up 405 Method Not Allowed responses to POST requests from my existing clients, and if I look at the APIs Explorer, it suddenly shows a bunch of obsolete methods that I think are from years ago and are no longer in my current API. I can't tell where those are coming from (they are nowhere in my code, and haven't been for years), and I can't get the default service to serve the GCE 2.0 API no matter what I do.
The biggest problem is that I have thousands of users in the wild that all point to the default API URL, so it isn't so easy to just have them start pointing to gce2-dot-myservice, and besides, it doesn't make sense that I can't make the new default the new default. I've been working on this migration to GCE 2.0 for months, the deadline for getting off GCE 1.0 is getting closer by the day, and Google Support has not responded since late last year on this topic.
I should also mention I have tried:
Pushing a new service with the GCE 2.0 code directly to default
Pushing a new service with no API at all (to maybe clear a cache or something)
Pushing services with all different sorts of version names
None of these have worked, although I haven't left any of them in place for very long, since I'm working on a live service with real users.
This issue is now resolved, so for most people it should no longer occur. However, in my specific case, I had a legacy API that was getting in the way and had to be deleted, which did require specific attention from a Google engineer.
If you have similar issues, visit issuetracker.google.com/issues/76031966 and comment there.
Thanks to #saiyr for help tracking this down.
I have an App Engine service deployed in GAE (written in Node) to accept a series of click-stream events from my website. The data is pushed as a CORS AJAX call. Since the POST request can be seen in the browser through the developer tools, I think somebody could use the App Engine URL to post similar data from the browser console (in Firefox, for example, we can resend the request, and I guess Chrome has this feature too).
A few options I see here are:
1. Use the firewall settings to allow only my domain to send data to the GAE service (this can still fail me, since the requests can be made repeatedly from the browser console).
2. Use a WAF (Web Application Firewall) as a proxy to create some custom rules.
What should be my approach to secure my GAE service?
I don't think the #1 approach would actually work for you: if the POST requests are visible in the browser's developer tools, it means they're made by the client, not by your website, so you can't use firewall rules to secure them.
Since the requests fundamentally originate on your own website/app (where I imagine the clicks in the mentioned sequences happen) before reaching the service, I'd place the sanity check inside that website/app (where you already have a lot more context available for making decisions) rather than in the service itself (where, at best, you'd have to jump through hoops to discover/restore the original context required to make intelligent decisions). Of course, this assumes the website/app is not a static/dumb site and has some level of intelligence.
For example, for the case you described of replaying POST requests from the browser developer tools, the website/app could attach an 'executed' flag to the respective request, set it on the first invocation (when the external service is also triggered), and thus simply reject any subsequent clone/copy of the request.
With the above in place, I'd then stop sending the service request from the client. Instead, I'd have your website/app create and make the service request, after the sanity checks mentioned above have passed. This is almost trivial to secure with simple firewall rules: valid requests can only come from your website/app, not from the clients. I suspect this is closer to what you had in mind when you listed the #1 approach. A rough sketch of the idea follows.
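This sketch assumes an Express-style backend for the website and uses purely hypothetical names and URLs; the in-memory token store is for illustration only (a real deployment would use a shared store):

```typescript
import express from "express";
import crypto from "crypto";
import axios from "axios";

const app = express();
app.use(express.json());

// One-time tokens handed to the page; each may trigger the service exactly once.
const pendingTokens = new Set<string>();

// The page asks for a token when it renders (or per click sequence).
app.get("/click-token", (_req, res) => {
  const token = crypto.randomBytes(16).toString("hex");
  pendingTokens.add(token);
  res.json({ token });
});

// The page posts click events here instead of calling the GAE service directly.
app.post("/click-events", async (req, res) => {
  const { token, events } = req.body;
  if (!pendingTokens.delete(token)) {
    // Replayed or forged request: the 'executed' check described above.
    return res.status(409).send("Request already executed or token unknown");
  }
  // Forward server-side; the GAE firewall can then allow only this backend.
  // The URL below is a placeholder for your actual click-stream service.
  await axios.post("https://clickstream-dot-myproject.appspot.com/events", events);
  res.sendStatus(204);
});

app.listen(8080);
```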
I have created a small project in Google App Engine and like it, apart from one thing: it seems virtually impossible to delete or change an endpoint once it has been made. The API often stays the same regardless of code changes in the Java endpoint classes. I tried to delete the complete API, but that is not possible either. Is there any way to do this?
RE: Deletion
It's not currently supported, but we're working on it.
RE: API Changes
The Google APIs Explorer web app aggressively caches, so you'll need to clear your cache or force a refresh when you update your API server side to see the changes in the client.