Outgoing HTTP Request Location on Google App Engine

I have an API built with Node.js (Node.js v10 + Express v4.16 + node-fetch v2.3), and in this API I have one endpoint that needs to consume content from a third-party API/service via an HTTP request (POST).
The problem is: this third-party API only accepts requests coming from Brazil.
In the past, my API was hosted on DigitalOcean, but because of this restriction I migrated to GCP (since DO doesn't have hosts in Brazil) and created my App Engine application in the region southamerica-east1 (São Paulo, Brazil, according to this document).
And yeah... it works on my machine ¯\_(ツ)_/¯
What's happening: sometimes the requests run OK, working fine, but after some version updates (I'm using CI/CD to do the deployment) the requests start failing.
The question: is there a way to force my application to use only its hosted region for outgoing requests?
P.S. I'm purposely not using the flexible environment, to prevent auto-scaling (and cost increases). (I don't know if that reasoning is right, since I'm new to GCP.)

The IPs of Google Cloud Platform share the same geolocation (US), so I would say it's expected for the requests to fail. You can have a look at this and this question for more info and potential workarounds.
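If you want to confirm this from inside your deployed app, one quick sanity check is to ask an IP geolocation service where your egress traffic appears to come from. A minimal sketch using node-fetch v2 (as in your stack); ipinfo.io is just one example service, not something your app needs to depend on:

const fetch = require('node-fetch'); // node-fetch v2, as in the question

// Ask a geolocation service (example: ipinfo.io) where this
// instance's outgoing IP appears to be located.
fetch('https://ipinfo.io/json')
  .then(res => res.json())
  .then(info => console.log(info.ip, info.country)) // likely "US", not "BR"
  .catch(err => console.error(err));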

Related

Cannot access backend services in React App using GKE Ingress

I have a React application that I have been trying to run on GKE for weeks now, but I cannot figure out the GKE Ingress. There are a total of 7 microservices running, including the React app.
My React App makes 4 API calls in total
"/posts/create" //creates a new post
'/posts/comments/*' //adds a comment to a post
'/posts' // gets posts+comments, returns empty object since no posts are created
'/posts/save' // saves post to cloudSQL
The application uses an event bus that handles communication between the different microservices, so I created a ClusterIP service for each app and created additional NodePort services to use on the Ingress. After the Ingress is created I can access the React app, but it says all of the backend services are unhealthy and I can't access them. I have tried calling the APIs in several ways through the React client, including the following (call // error in the Chrome console):
"http://query-np-srv:4002/posts" //Failed to load resource: net::ERR_NAME_NOT_RESOLVED
"http://10.96.11.196:4002/posts"(this is the endpoint for the service) //xhr.js:210 GET http://10.96.11.196:4002/posts net::ERR_CONNECTION_TIMED_OUT
"http://posts.com/posts // GET http://posts.com/posts 502 (Bad Gateway)
If I run any of the following commands from the client pod, I get an object returned as intended:
curl query-srv:4002/posts
curl 10.96.12.242:4002/posts
curl query-np-srv:4002/posts
The only way I have been able to get this application to actually work on GKE is by exposing the client, posts, comments, and query pods on LoadBalancers and hard-coding the LB IPs into the API calls, which cannot be a best practice. At least this way I know the project is functional, which leads me to believe this is an Ingress issue.
Here is my GitHub repo for the project.
All of the yaml files are located in the infra/k8s folder and I am using the test.yaml to deploy the ingress, not the ingress-srv.yaml. Also, I am not using skaffold to deploy so that can be ignored as it is not causing the issues. If anyone can figure this out I would be very appreciative.
If the backend services are unhealthy after you create the Ingress object, you need to review your health checks. Did you check whether GKE created health checks for each backend service?
Health checks connect to backends on a configurable, periodic basis. Each connection attempt is called a probe. Google Cloud records the success or failure of each probe. Google Cloud considers backends to be unhealthy when the unhealthy threshold has been met. Unhealthy backends are not eligible to receive new connections; however, existing connections are not immediately terminated. Instead, the connection remains open until a timeout occurs or until traffic is dropped.
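Since the GKE Ingress can derive its health check from each backend pod's readinessProbe, the usual fix is to define an httpGet probe on a path that returns HTTP 200. A minimal sketch for the container spec of one backend Deployment, assuming the query service from the question (path and port taken from the curl tests above; a dedicated health endpoint would be better in practice):

# readinessProbe in the container spec of the backend Deployment;
# the GKE Ingress can pick up its health check path from this probe.
readinessProbe:
  httpGet:
    path: /posts   # must answer with HTTP 200 ("/" is probed by default)
    port: 4002
  initialDelaySeconds: 5
  periodSeconds: 10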

CloudFlare JS challenge is breaking my SPA

I have a React-based SPA that is hosted via S3 on one subdomain, react.mydomain.com. It communicates with a PHP REST API that is hosted on a VPS on another subdomain, api.mydomain.com. The api.mydomain.com subdomain is behind CloudFlare. The web app is behind CloudFront since it is on AWS.
I am having issues with bot requests directly to the API flooding my VPS, and I would like to use CloudFlare's JS challenge functionality to mitigate this.
However, what seems to be happening is that users are able to load the React web app (which is not behind CloudFlare). Then, the request that would prompt the JS challenge fails instantly with a 503 response, because it is an AJAX request, which is incompatible with the JavaScript challenge.
I thought I may be able to handle this by catching the error and redirecting. However, if I manually force my own browser to navigate to the api.mydomain.com URL, I will see the CloudFlare challenge and pass it. However, if I then navigate back to my react.mydomain.com SPA, the OPTIONS requests will fail because it cannot attach the cookie that tells CloudFlare it has passed.
I don't understand how to adjust my infrastructure so that I can take advantage of the JS challenge. At the moment I am restricted to using rate limiting, but I have found that I am still letting through what seems like ~75% or more of the unwanted bot traffic by the time the limits are severe enough that users start complaining.
If you have backend access, you may be able to use NPM and process.kill(process.pid) upon detection of a bot as a temporary solution.
I suggest not hosting the SPA in S3; use S3 only for file uploads or attachments. Host it on EC2 instead, block all access through the security group policy, and allow only Cloudflare's IPs, which are listed here: https://www.cloudflare.com/ips/
You could also use AWS Lambda (serverless) for hosting instead of S3: https://aws.amazon.com/lambda/?c=ser&sec=srv
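To script the security-group part of this, Cloudflare also publishes its ranges as plain text, so the rules can be added in a loop. A rough AWS CLI sketch, assuming HTTPS-only traffic; the security group ID is a placeholder:

# Fetch Cloudflare's published IPv4 ranges and allow each one
# into the security group on port 443 (sg-0123456789abcdef0 is a placeholder).
for cidr in $(curl -s https://www.cloudflare.com/ips-v4); do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr "$cidr"
done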

Running an Ionic app as a PWA on Firebase hosting

Background
We have a large Ionic v1 app that runs on Android (by obtaining it from the Google Play Store) and also on our development machines via "ionic serve".
The app uses a Google App Engine (GAE) website as a backend server.
The server maintains sessions for each user by means of a cookie. We don't store much data in the session, but we need to securely identify the user making each request. When the app runs successfully, the GAE server code creates a cookie that contains the session ID and sends it to the Ionic client code when responding to each HTTP request.
Note that the Ionic code does not access the cookie in any way. It is only necessary that the same cookie be sent back to the GAE server with each subsequent request so that the GAE code recognizes the user.
The goal
We would like to serve the Ionic code via Firebase Hosting. We can in fact do so in either of two modes:
Keeping the Ionic code on our dev machine, running "firebase serve", and going to "localhost:5000" in the browser
Deploying the Ionic code to the Firebase host and going to "xxxx.firebaseapp.com" in the browser
Everything works! Uh, except for one little thing, which we've been trying to solve for weeks...
The problem
The cookie used by the GAE code to manage session continuity, sent in its responses to the app's HTTP requests, does not come back in the next request from the Ionic app running on Firebase. So the GAE app always responds as though the user is not yet logged in.
In fact, further testing shows that the session cookie sent in responses to HTTP requests does not even get set in the browser (so of course it's not sent back to the GAE code with the next HTTP request). The GAE code on the backend server always responds as if this is the first HTTP request of a session.
What we've solved already
The problem is not the fact that Ionic does not support cookies. We know this is not the problem because the app runs fine as an Android app and also via "ionic serve". In both cases, the GAE backend is able to maintain sessions using a cookie to store the session ID from one request to the next.
The problem does not get solved by using "memcache" instead of cookies for GAE session support, because even if you use memcache, you still need the cookie for the session ID. If you wish, you can go with the default and let GAE session support use cookies; in that case, it will use the same cookie for both the session ID and any other session data.
The problem does not get solved by using "__session" as the name of the cookie. Firebase does in fact support using such a cookie name, but apparently only in the context of running Firebase Hosting with Cloud Functions. Cloud Functions are for running backend code, not client code that the user interacts with. We could see no way to make an Ionic app run as a Cloud Function. And without Cloud Functions, the "__session" cookie set by the GAE backend apparently gets stripped by the browser client running the app, along with all other cookies.
Adding "Access-Control-Allow-Origin/-Credentials/-Methods/-Headers" headers to the GAE-code generated response, and setting crossDomain: true xhrFields: { withCredentials: true } on the client side, does not improve the situation. Still, no cookie from the GAE code gets set on the browser.
Any help will be appreciated.

Firebase: How to wake App Engine when a client changes the db?

I'm running a backend app on App Engine (still on the free plan), and it supports client mobile apps in a Firebase Realtime Database setup. When a client makes a change to the database, I need my backend to review that change, and potentially calculate some output.
I could have my App Engine instance stay awake and listen on Firebase ports all the time, waiting for changes anywhere in the database, but that would keep my instance awake 24/7 and won't support load balancing.
Before I switched to Firebase, my clients would manually wake up the backend by sending a REST request describing the change they wanted to perform. Now that Firebase allows the clients to make changes directly, I was hoping they wouldn't need to issue a manual request. I could continue to produce a request from the client, but that solution won't be robust: it would fail to inform the server if for some reason the request didn't come through and the user switched off the client before it was successfully sent. Firebase has its own mechanism for retaining changes, but my request would need a similar mechanism. I'm hoping there's an easier solution than that.
Is there a way to have Firebase produce a request automatically and wake up my App Engine when the db is changed?
Look at the new (beta) Firebase Cloud Functions. With that, you can have Node.js code run, pre-process, and call your App Engine on database events.
https://firebase.google.com/docs/functions/
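A minimal sketch of that pattern, assuming a hypothetical /review endpoint on the App Engine backend and a posts path in the database (the handler signature shown is the firebase-functions v1 style; the beta API differed slightly):

const functions = require('firebase-functions');
const fetch = require('node-fetch'); // must be declared in package.json

// Fires on any write under /posts/{postId} in the Realtime Database,
// then wakes the App Engine backend with a plain HTTP request.
exports.onPostChange = functions.database.ref('/posts/{postId}')
  .onWrite((change, context) => {
    return fetch('https://your-project.appspot.com/review', { // placeholder URL
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ postId: context.params.postId }),
    });
  });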
Firebase currently does not have support for webhooks. Have a look at https://github.com/holic/firebase-webhooks
From Listening to real-time events from a web browser:
Posting events back to App Engine
App Engine does not currently support bidirectional streaming HTTP connections. If a client needs to update the server, it must send an explicit HTTP request.
The alternative doesn't quite help you, as it would not fit in the free quota, but here it is anyway. From Configuring the App Engine backend to use manual scaling:
To use Firebase with App Engine standard environment, you must use manual scaling. This is because Firebase uses background threads to listen for changes and App Engine standard environment allows long-lived background threads only on manually scaled backend instances.
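For reference, that setting goes in the service's app.yaml; a minimal sketch:

# app.yaml: manual scaling keeps a fixed number of resident instances,
# which permits the long-lived Firebase listener threads.
manual_scaling:
  instances: 1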

urlfetch.fetch() from Google App Engine not showing up in Fiddler2

I'm testing a Google App Engine app on my Windows machine, running locally on localhost:8084. Fiddler2 shows all my activity when I navigate around my app, but a request to an external URL made with urlfetch.fetch() doesn't show up in Fiddler at all, even when using an http (not https) address and receiving a successful status code 200 in the response.
What do I need to do to get the urlfetch.fetch() request from Google App Engine to show up in Fiddler2?
My understanding is that Fiddler2 runs as an HTTP proxy; browser requests go through this proxy instead of directly to the internet resource. This allows Fiddler2 to capture information about the request and the response.
According to the Fiddler2 docs, "You can configure any application which accepts a HTTP Proxy to run through Fiddler so you can debug its traffic". So I think you would need to change the URLFetch API call to use a proxy, supplying the Fiddler URL and port. However, the URLFetch documentation doesn't specify exactly how to do this. You might be able to use urllib2 as specified in this question.
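For illustration, pointing urllib2 at a local proxy looks roughly like this (Python 2, matching the App Engine runtime of that era; 8888 is Fiddler's default listening port):

import urllib2

# Route all urllib2 traffic through Fiddler's local proxy.
proxy = urllib2.ProxyHandler({'http': 'http://127.0.0.1:8888'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

response = urllib2.urlopen('http://example.com/')
print response.getcode()  # expect 200; the request now appears in Fiddler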
Irussell is generally right, but I'd like to make the answer more specific.
As proxies aren't supported in the Google App Engine production environment, they aren't directly supported by the development server either. It seems that the only way to overcome this limitation is to modify the code of the App Engine development server.
You'll have to modify the urlfetch_stub.py file by adding the following lines:
# route the connection through Fiddler's proxy (default port 8888)
connection = connection_class('127.0.0.1', 8888)
and:
# a proxied request line needs the absolute URL, not just the path
full_path = protocol + "://" + host + full_path
You may find a detailed explanation in my blog post Use Fiddler to debug urlfetch requests in Google AppEngine.
