I'm running an app on a VM instance (instance-1) and would like myproject.appspot.com requests to be served by instance-1.
I read https://cloud.google.com/appengine/docs/java/modules/routing but it wasn't clear. Is there a way to say "send all traffic to my one instance"?
If I go to my (ephemeral) external IP address for that instance, I can see the server. But that won't work as an OAuth2 authorized domain (no IP addresses allowed), so I need the traffic to go through the named domain.
I'd be ok if I could use something constant like instance-1-dot-myproject.appspot.com but would prefer the base myproject.appspot.com to say "any instances? great! use that."
I think you want to use Managed VMs. They give you the flexibility of Google Compute Engine but work more like the PaaS that is Google App Engine.
You don't create the Google Compute Engine VM instances yourself; instead, Managed VMs spins them up on demand, using the Docker image you provide as the container for the code, data, etc.
Note that as of 29 Sep 2015, per the docs:
Beta
This is a Beta release of Managed VMs. This feature is not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes.
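If it helps, here is a minimal sketch of what the module's app.yaml might look like under Managed VMs if you pin it to a single manually scaled instance (the runtime and module name are just placeholders):

    # app.yaml -- hypothetical Managed VMs module pinned to one instance
    runtime: custom
    vm: true
    module: default
    manual_scaling:
      instances: 1

With the default module scaled to exactly one instance, requests to myproject.appspot.com are served by that instance, so you don't have to rely on instance-specific URLs or the ephemeral external IP.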
Related
Problem. I'm looking for an agile way to ship a docker container (stored on GCR.IO) to a managed service on GCP:
one docker container gcr.io/project/helloworld with private data (say, a Cloud SQL backend) that can't be exposed to the whole internet.
a bunch of IPs I want to expose it to: say [ "1.2.3.4" , "2.3.4.0/24" ].
My ideal platform would be Cloud Run, but also GAE works.
I want to develop in an agile way (say, deploy with 2-3 lines of code). Is it possible to run my service privately and yet super easily? We're not talking about a huge production project; we're talking about playing around and writing a POC you want to share securely over the internet with a few friends, making sure the rest of the world gets a 403.
What I've tried so far.
The only thing that works easily is a GCE VM with a docker-friendly OS (like COS) where I can set up firewall rules. This works, but it's a lame docker app on a disposable VM: the machine runs forever, and the container dies at reboot unless I stabilize it with a cron/startup script. It feels like I'm doing somebody else's job.
Everything else I've tried so far failed:
Cloud Run. Amazing, but I can't set up firewall rules on it (or Cloud Director, ...); it seems to work only with IAP, which is painful to set up.
GAE. It serves from shared public IPs that I can't detach or firewall. I managed to do the IP filtering within the app, but that seems a bit risky; I don't [want to] trust my coding skills :)
Cloud Armor. It only supports an HTTPS Load Balancer, which I don't have, nor do I have MIGs to point it to. I want simplicity.
Traffic Director. It also needs an HTTP L7 balancer. But I have one docker container on a single pod; why do I need a LB?
GKE. This actually seems to work [1], but it's not fully managed (I need to create the cluster, pods, ...).
Is this a product deficiency or am I looking at the wrong products? What's the simplest way to achieve what I want?
[1] how do I add a firewall rule to a gke service?
Please limit your question to one service. Not everyone is an expert on all Google Cloud services. You will have a better chance of a good answer for each service if they are separate questions.
In summary, if you want to use Google Cloud firewall rules (the VPC equivalent of security groups) to control IP-based access, you need a service that runs on Compute Engine, since firewall rules are part of the VPC feature set. App Engine Standard and Cloud Run do not run within your project's VPC. This leaves you with App Engine Flex, Compute Engine, and Kubernetes Engine.
I would change strategies and use Cloud Run (fully managed) with authentication required. Access is then controlled by Cloud IAM via OAuth identity tokens.
Cloud Run Authentication Overview
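A rough sketch of that flow with the gcloud CLI (the service name, region, and the friend's account are made up; the run.app URL is whatever the deploy command prints):

    # Deploy the container and refuse unauthenticated requests
    gcloud run deploy helloworld \
      --image gcr.io/project/helloworld \
      --region=us-central1 \
      --no-allow-unauthenticated

    # Let a specific person invoke the service
    gcloud run services add-iam-policy-binding helloworld \
      --region=us-central1 \
      --member="user:friend@example.com" \
      --role="roles/run.invoker"

    # Callers authenticate with an identity token instead of an IP allowlist
    curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
      https://helloworld-xyz-uc.a.run.app

Everyone without the run.invoker role gets a 403, which matches the "share with a few friends" requirement without any firewall.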
I agree with John Hanley's reply and have up-voted his answer.
Also, I understand that you are looking for a way to restrict access to your service within GCP.
By setting firewall rules, you can limit access to your service by restricting the allowed source IP ranges, so that only those addresses are accepted as the source.
Please review this Server Fault thread [1], which explains how to restrict access to a single IP only.
https://serverfault.com/questions/901364/restrict-access-to-single-ip-only
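For completeness, a hedged example of such a rule with gcloud, restricted to the two sample ranges from the question (the rule name and target tag are hypothetical, and this only applies to workloads running inside your VPC, i.e. Compute Engine or GKE):

    gcloud compute firewall-rules create allow-friends-only \
      --network=default \
      --direction=INGRESS \
      --action=ALLOW \
      --rules=tcp:443 \
      --source-ranges=1.2.3.4/32,2.3.4.0/24 \
      --target-tags=helloworld-vm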
You can do this quite easily with a serverless NEG for Cloud Run or GAE.
If you're doing this in Terraform, you can follow this article.
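If you prefer gcloud over Terraform, the rough shape of it is: create a serverless NEG for the Cloud Run service, put it behind a backend service on an external HTTPS load balancer, and attach a Cloud Armor policy with your allowed IP ranges. A hedged sketch (names and region are made up; the URL map, target proxy, and forwarding rule are omitted):

    gcloud compute network-endpoint-groups create helloworld-neg \
      --region=us-central1 \
      --network-endpoint-type=serverless \
      --cloud-run-service=helloworld

    gcloud compute backend-services create helloworld-backend \
      --global \
      --load-balancing-scheme=EXTERNAL

    gcloud compute backend-services add-backend helloworld-backend \
      --global \
      --network-endpoint-group=helloworld-neg \
      --network-endpoint-group-region=us-central1

    gcloud compute backend-services update helloworld-backend \
      --global \
      --security-policy=allow-friends-policy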
I have jobs and APIs hosted on Cloud Composer and App Engine, and they work fine. However, for one of my jobs I need to call an API that is IP-restricted.
As far as I understand, there is no way to have a fixed IP for App Engine and Cloud Composer workers, and I don't know what the best solution is.
I thought about creating a GCE instance with a fixed IP that would be switched on/off by Cloud Composer or App Engine, with the API call executed by its startup script. However, that restricts the approach to asynchronous tasks and adds an unwanted step.
I have been told that it is possible to set up a proxy, but I don't know how to do it and I did not find comprehensive docs about it.
Would you have any advice for this use case?
Thanks a lot for your help
It's probably out of scope for you, but you could whitelist the whole range of App Engine IPs by performing a lookup on _cloud-netblocks.googleusercontent.com.
In this case you are whitelisting any App Engine application, so be sure this API has another kind of authorization and good security. More info in the App Engine docs.
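For reference, a hedged sketch of the lookup itself (the SPF-style records chain to further _cloud-netblocksN records, which in turn contain the ip4:/ip6: ranges you would whitelist):

    dig TXT _cloud-netblocks.googleusercontent.com +short
    # then resolve each included record, e.g.:
    dig TXT _cloud-netblocks1.googleusercontent.com +short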
What I would do is install or implement some kind of API proxy on GCE. It's a bummer to have a VM on 24/7 for this kind of task, so you could also use an autoscaler to scale to 0 (not sure about this one).
As you have mentioned: you can set up a TCP or UDP proxy in GCE as a relay, and then send requests to the relay (which then forwards those requests to the IP-restricted host).
However, that might be somewhat brittle in some cases (and introduces a single point of failure). Therefore, another option you could consider is creating a private IP Cloud Composer environment, and then using Cloud NAT for public IP connectivity. That way, all requests from Airflow within Composer will look like they are originating from the IP address of the NAT gateway.
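A hedged sketch of the NAT side of that with gcloud (the router, NAT, and address names and the region are made up; the Composer environment is assumed to be a private IP environment in the same VPC and region):

    # Reserve a static address so the outbound IP never changes
    gcloud compute addresses create composer-nat-ip --region=europe-west1

    # Cloud Router + NAT for the VPC the Composer workers live in
    gcloud compute routers create composer-nat-router \
      --network=default \
      --region=europe-west1

    gcloud compute routers nats create composer-nat \
      --router=composer-nat-router \
      --region=europe-west1 \
      --nat-all-subnet-ip-ranges \
      --nat-external-ip-pool=composer-nat-ip

The API owner then only needs to whitelist the single reserved address.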
I've got a Google App Engine Standard app running in region1 and want to deploy the same app as a backup in region2 in case region1 goes down. I'm looking for a way to make that failover happen seamlessly for my users (both human users on browsers and third-party services that call back to my app's service).
Currently I have my custom domain name (both the naked name and the www name) mapped to the app in region #1 (done in [Google Cloud Console][App Engine][Settings][Custom Domains]).
In the event region1 is down, I would like to go into the settings area of app1 (region1), remove those mappings, then go into the settings area of app2 (region2) and add them there, so that from that point on, requests to myappdomainname.com and www.myappdomainname.com go to app2 in region2.
Question: is that plan feasible? In particular, if region1 is down, will I still be able to access app1's settings area to remove those mappings, so that I can add them to app2?
Downtime of about an hour while switching is okay for my app, as long as my users can continue to use the same URL they have been using while region1 was running.
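Concretely, the switch I have in mind would boil down to something like this (project IDs are made up; this assumes the two copies live in two separate projects, since a single App Engine app is tied to one region, and that the domain is verified in both projects):

    # Remove the custom domain mappings from the region1 project
    gcloud app domain-mappings delete www.myappdomainname.com --project=my-app-region1
    gcloud app domain-mappings delete myappdomainname.com --project=my-app-region1

    # Re-create them on the region2 project
    gcloud app domain-mappings create www.myappdomainname.com --project=my-app-region2
    gcloud app domain-mappings create myappdomainname.com --project=my-app-region2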
Google App Engine is a regional service, meaning it cannot span more than one region. However, it is replicated across all the zones of its region to reduce potential downtime.
The kind of implementation you want runs against GAE's actual purpose: one of its principal features is that you don't have to configure and manage the underlying instances yourself.
The preferred way of getting this to work on Google Cloud Platform would be Compute Engine. GCE lets you create instances in any region you want and configure a load balancer to serve the traffic and scale your instances as needed. Here is some documentation about serving applications using GCE:
Running a GO app on Compute Engine (part of a GCP quickstart)
Building Scalable and Resilient Web Applications on Google Cloud Platform
Designing Robust Systems
Also, here's a Google Groups post about this issue that goes a little bit more in detail.
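If you do go the GCE route, the multi-region failover essentially becomes regional instance groups behind one global load balancer, so no manual remapping is needed. A rough, hedged sketch with gcloud (all names, regions, and the instance template are hypothetical, and the health check and URL map details are trimmed):

    # One managed instance group per region, from the same instance template
    gcloud compute instance-groups managed create app-region1-mig \
      --region=us-central1 --template=app-template --size=2
    gcloud compute instance-groups managed create app-region2-mig \
      --region=europe-west1 --template=app-template --size=2

    # A single global backend service with both groups as backends
    gcloud compute backend-services create app-backend \
      --global --protocol=HTTP --health-checks=app-health-check
    gcloud compute backend-services add-backend app-backend \
      --global --instance-group=app-region1-mig --instance-group-region=us-central1
    gcloud compute backend-services add-backend app-backend \
      --global --instance-group=app-region2-mig --instance-group-region=europe-west1

    # URL map + proxy + one global IP that your domain's DNS points at
    gcloud compute url-maps create app-lb --default-service=app-backend
    gcloud compute target-http-proxies create app-proxy --url-map=app-lb
    gcloud compute forwarding-rules create app-fr \
      --global --target-http-proxy=app-proxy --ports=80

Traffic is then routed to the closest healthy region automatically, which gives you the seamless failover you are after.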
Instances in my Google App Engine flexible environment "compat" system communicate with each other via REST invocations. How can I port this to the new Flex Env?
The documentation says "You can no longer route traffic to specific instances, such as https://instance-dot-version-dot-service-dot-app-id.appspot.com" -- so how do I port this to non-compat Flex Env?
This is really an App Engine anti-pattern - instances come up and go down all the time, so it's generally not advisable to try communicating between them like this. That having been said, there are two approaches here that can work.
Use Google Cloud Pub/Sub. This is nice because you don't have to deal with the instance lifecycle issues: you put a job on a queue, and someone goes and picks it up (see the sketch after this list).
Use something like etcd with a TTL and IP addresses. You can have each instance report its IP back to a central etcd instance on startup with a low TTL. Then you can query etcd to get a list of active instances and their IPs. Inside the network, IP-to-IP connectivity between the instances should be fine.
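For the Pub/Sub option, a minimal sketch of the flow using the gcloud CLI (topic and subscription names are made up; inside each instance you would normally use the Pub/Sub client library rather than shelling out):

    # Shared topic and pull subscription
    gcloud pubsub topics create work-items
    gcloud pubsub subscriptions create work-items-sub --topic=work-items

    # One instance publishes a job...
    gcloud pubsub topics publish work-items --message='{"job": "resize-image-42"}'

    # ...and whichever healthy instance pulls next picks it up
    gcloud pubsub subscriptions pull work-items-sub --auto-ack --limit=1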
Best of luck!
If a particular computer is making tons of accounts or flooding my server with other requests, could parse-server automatically detect this behaviour and block the offending IP address?
Built-in rate limiting would also be a nice alternative, although it doesn't really solve the problem if the person continues to spam.
I am hosting on google app engine by the way.
I don't know about Parse itself, but on the App Engine side you have the DoS protection service, controlled via a dos.yaml file in your project, which lets you blacklist IP blocks; sounds like that may help. It's not "automatic", though: you still need to manually update this file and run appcfg.py update_dos <PROJECT_DIR> for the changes to take effect.
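A minimal dos.yaml sketch with placeholder addresses, following the documented blacklist format:

    blacklist:
    - subnet: 1.2.3.4
      description: a single abusive IP
    - subnet: 203.0.113.0/24
      description: an abusive subnet

After editing the file, deploy it with the appcfg.py update_dos command mentioned above.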
I don't believe that this is a feature out of the box - see advanced options here: https://github.com/ParsePlatform/parse-server.
You'd need to look at controlling access to Google App Engine (or another host, such as a Microsoft Azure Web App) using a firewall. You can easily do this with Azure; I'm not familiar with Google App Engine, but I imagine similar functionality is available.
However, I don't believe a firewall is necessary, just better app security. Disable anonymous users (see Parse Server Security).