Is it possible to use an internal load balancer with backends on App Engine Flexible instances?

We need this to use a non-standard TCP port. From what I can see, load balancers need instance groups to be defined, and instance groups are a Compute Engine concept. So I wonder if it's possible at all.

App Engine is a managed service; it has its own load balancer and handles requests in a very specific way, which you can check over here.
As mentioned by @guillaume blaquiere, you would be mixing two completely different concepts: IaaS and PaaS. If you need to set up a specific load balancer with certain behaviour for your app, you may be better off using managed instance groups rather than App Engine, so you have full control instead of relying on GCP's management.
I hope you find this information useful.

Related

Is it possible to use a fully managed service (Cloud Run or App Engine) with firewall in GCP?

Problem. I'm looking for an agile way to ship a docker container (stored on GCR.IO) to a managed service on GCP:
one docker container, gcr.io/project/helloworld, with private data (say, a Cloud SQL backend) - it can't face the real world.
a bunch of IPs I want to expose it to: say [ "1.2.3.4" , "2.3.4.0/24" ].
My ideal platform would be Cloud Run, but also GAE works.
I want to develop in an agile way (say, deploy with 2-3 lines of code). Is it possible to run my service privately and yet super easily? We're not talking about a huge production project; we're talking about playing around and writing a POC you want to share securely over the internet with a few friends, making sure the rest of the world gets a 403.
What I've tried so far.
The only thing that works easily is a GCE VM with a docker-friendly OS (like COS) where I can set up firewall rules. This works, but it's a lame docker app on a disposable VM. The machine runs forever and the app dies at reboot unless I stabilize it with cron/startup scripts. Looks like I'm doing somebody else's job.
Everything else I've tried so far failed:
Cloud Run. Amazing, but I can't set up firewall rules on it, or Cloud Director, .. it seems to work only with IAP, which is painful to set up.
GAE. Works with multiple IPs, but I can't detach the public IPs or firewall it. I managed to get the IP filtering within the app (a sketch of roughly what I did is below), but it seems a bit risky. I don't [want to] trust my coding skills :)
Cloud Armor. Only supports an HTTPS Load Balancer, which I don't have. Nor do I have MIGs to point to. I want simplicity.
Traffic Director. Also needs an HTTP L7 balancer. But I have one docker container on a single pod. Why do I need an LB?
GKE. This actually seems to work [1], but it's not fully managed (I need to create the cluster, pods, ..)
Is this a product deficiency or am I looking at the wrong products? What's the simplest way to achieve what I want?
[1] how do I add a firewall rule to a gke service?
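For reference, a minimal sketch of the in-app IP filtering mentioned above (Flask assumed; the allowlist, header handling, and route are illustrative only, not a vetted security control):

import ipaddress
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical allowlist matching the example ranges above.
ALLOWED = [ipaddress.ip_network(n) for n in ("1.2.3.4/32", "2.3.4.0/24")]

@app.before_request
def check_source_ip():
    # Behind GAE/Cloud Run the client IP is usually the first entry
    # of X-Forwarded-For; fall back to the socket address.
    forwarded = request.headers.get("X-Forwarded-For", request.remote_addr or "0.0.0.0")
    client_ip = ipaddress.ip_address(forwarded.split(",")[0].strip())
    if not any(client_ip in net for net in ALLOWED):
        abort(403)  # the rest of the world gets a 403

@app.route("/")
def hello():
    return "hello, allowed friend"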
Please limit your question to one service. Not everyone is an expert on all Google Cloud services. You will have a better chance of a good answer for each service if they are separate questions.
In summary: if you want to use Google Cloud firewall rules to control IP-based access, you need to use a service that runs on Compute Engine, as firewall rules are part of the VPC feature set. App Engine Standard and Cloud Run do not run within your project's VPC. This leaves you with App Engine Flex, Compute Engine, and Kubernetes.
I would change strategies and use Cloud Run with authentication required. Access is controlled by Google Cloud IAM via OAuth ID tokens.
Cloud Run Authentication Overview
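For completeness, a minimal sketch of a client calling such an authenticated Cloud Run service with an ID token minted from the ambient credentials; the service URL is a placeholder, and this assumes the caller's identity has the run.invoker role:

import google.auth.transport.requests
import google.oauth2.id_token
import requests

SERVICE_URL = "https://helloworld-abc123-uc.a.run.app"  # placeholder URL

# Mint an ID token for the service URL (the audience) from the ambient
# credentials (e.g. a service account or gcloud application-default login).
auth_req = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_req, SERVICE_URL)

resp = requests.get(SERVICE_URL, headers={"Authorization": "Bearer " + token})
print(resp.status_code, resp.text)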
I agree with John Hanley's reply and have up-voted his answer.
Also, I understand that you are looking for a way to restrict access to your service in GCP.
By setting firewall rules, you can limit access to your service: set the source IP range as the allowed source, so that only those addresses are accepted.
Please review this Server Fault thread [1] on how to "Restrict access to single IP only".
https://serverfault.com/questions/901364/restrict-access-to-single-ip-only
You can do this quite easily with a Serverless NEG for Cloud Run or GAE.
If you're doing this in Terraform, you can follow this article.

How to make HTTP call reaching all instances behind App Engine load balancer?

Is there a way to make an HTTP call to all instances running behind the Google App Engine load balancer?
A similar question was asked here for AWS.
If you're using the standard environment with manual scaling (this is not possible with auto/basic scaling), you could use Targeted routing to reach a particular instance:
https://[INSTANCE_ID]-dot-[VERSION_ID]-dot-[SERVICE_ID]-dot-[MY_PROJECT_ID].appspot.com
http://[INSTANCE_ID].[VERSION_ID].[SERVICE_ID].[MY_CUSTOM_DOMAIN]
Note: Targeting an instance is not supported in services that are configured for auto scaling or basic scaling. The instance ID must be an integer in the range from 0 up to the total number of instances running. Regardless of your scaling type or instance class, it is not possible to send a request to a specific instance without targeting a service or version within that instance.
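For example, a hedged sketch that fans one call out to every instance of a manually scaled standard-environment service; project, service, version, path, and instance count are placeholders (the count must match manual_scaling.instances in app.yaml):

import requests

PROJECT = "my-project-id"   # placeholder
SERVICE = "default"         # placeholder
VERSION = "v1"              # placeholder
NUM_INSTANCES = 3           # must match manual_scaling.instances

for instance_id in range(NUM_INSTANCES):  # instance IDs run from 0 upwards
    url = "https://{}-dot-{}-dot-{}-dot-{}.appspot.com/task".format(
        instance_id, VERSION, SERVICE, PROJECT)  # /task is a hypothetical handler
    resp = requests.get(url)
    print(instance_id, resp.status_code)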
If you're using the flexible environment it's not possible to reach a specific instance. From Targeted routing:
Note: In the flexible environment, targeting an instance is not supported. It is not possible to send requests directly to a specific instance.

Service Fabric (On-premise) Routing to Multi-tenancy Containerized Application

I'm trying to get a proof of concept going for a multi-tenancy containerized ASP.NET MVC application in Service Fabric. The idea is that each customer would get 1+ instances of the application spread across the cluster. One thing I'm having trouble getting mapped out is routing.
Each app would be partitioned similar to this SO answer. The plan so far is to have an external load balancer route each request to the SF Reverse Proxy service.
So for instance:
tenant1.myapp.com would get routed to the reverse proxy at <SF cluster node>:19081/myapp/tenant1 (19081 is the default port for SF Reverse Proxy), tenant2.myapp.com -> <SF Cluster Node>:19081/myapp/tenant2, etc and then the proxy would route it to the correct node:port where an instance of the application is listening.
Since each application has to be mapped to a different port, the plan is for SF to dynamically assign a port on creation of each app. This doesn't seem entirely scalable, since we could theoretically hit the port limit (~65k).
My questions then are, is this a valid/suggested approach? Are there better approaches? Are there things I'm missing/overlooking? I'm new to SF so any help/insight would be appreciated!
I don't think the ephemeral port limit will be an issue for you; it is likely that you will consume all server resources (CPU + memory) even before you consume half of these ports.
What you need is possible, but it will require you to create a script or an application responsible for creating and managing the configuration of the deployed service instances.
I would not use the built-in reverse proxy; it is very limited, and for what you want it would just add extra configuration with no benefit.
At the moment I see Traefik as the most suitable solution. Traefik enables you to route specific domains to specific services, which is exactly what you want.
Because you will use multiple domains, you will need dynamic configuration that is not provided out of the box; this is why I suggested creating a separate application to deploy these instances. The very high-level steps would be:
You define your service with the Traefik default rules, as shown here
From your application manager, you deploy a new named instance of this service for the new tenant
After the instance is deployed, you configure it to listen on a specific domain, setting the rule traefik.frontend.rule=Host:tenant1.myapp.com with the correct tenant name
You might have to add some extra configuration, but this should put you on the right path; a rough sketch of the deployment step is below.
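A rough sketch of that step, using the Service Fabric client REST API to create one named application instance per tenant; the cluster endpoint, type name, version, and the parameter your manifest would map to the Traefik host rule are all hypothetical:

import requests

CLUSTER = "http://localhost:19080"  # SF management endpoint (assumed unsecured here)

def deploy_tenant(tenant):
    body = {
        "Name": "fabric:/myapp_" + tenant,   # one named instance per tenant
        "TypeName": "MyAppType",             # hypothetical application type
        "TypeVersion": "1.0.0",
        "ParameterList": [
            # Hypothetical parameter your manifest maps to the label
            # traefik.frontend.rule=Host:<tenant>.myapp.com
            {"Key": "TraefikFrontendRule", "Value": "Host:" + tenant + ".myapp.com"},
        ],
    }
    resp = requests.post(CLUSTER + "/Applications/$/Create",
                         params={"api-version": "6.0"}, json=body)
    resp.raise_for_status()

deploy_tenant("tenant1")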
Regarding the cluster architecture, you could do it in many ways. To start, I would recommend keeping it simple: one FrontEnd node type containing the Traefik services and another BackEnd node type for your services. From there you can decide how to plan the cluster properly; there are already many SO answers on how to define the cluster.
Please see more info on the following links:
https://blog.techfabric.io/using-traefik-reverse-proxy-for-securing-microservices-on-azure-service-fabric/
https://docs.traefik.io/configuration/backends/servicefabric/
Assuming you don't need an instance on every node, you can have up to (nodecount * 65K) services, which would make it scalable again.
Have a look at Azure API Management and Traefik, which have some SF integration options. These work a lot nicer than the limited built-in reverse proxy. For example, they offer routing rules.

Can Google App Engine Memcache Standard be accessed from an external server

I am trying to figure out how to access Google App Engine Memcache service from outside Google App Engine. Any help on how this can be done would be greatly appreciated.
Thanks in advance!
I don't think this is currently possible. I don't know if there is any technical argument for this or if the decision has been made simply for billing purposes, but it seems like memcache is intended to be an integral part of App Engine. The only relevant discussion I could find is this feature request. It calls for the possibility of accessing the memcache data of one App Engine project from another App Engine project. It seems to me that Google didn't consider such functionality to be beneficial. You could try filing your own feature request to make memcache a standalone service. In case you do not succeed (and I am afraid you won't), here is a simple workaround.
A simple workaround:
Create a simple App Engine project which would serve as a facade over the memcache service. This dummy App Engine project would simply translate your HTTP requests into memcache API calls and return the obtained data in the body of an HTTP response. For example, to retrieve a memcache record you could send a GET request such as:
https://<your-project-id>.appspot.com/get?key=<some-particular-key>
This call would get "translated" into:
memcache.get(<some-particular-key>);
And the obtained data appended to the HTTP response.
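A minimal sketch of such a facade handler, assuming the GAE standard Python 2.7 runtime with webapp2 (the route is illustrative, and string values are assumed):

import webapp2
from google.appengine.api import memcache

class GetHandler(webapp2.RequestHandler):
    def get(self):
        key = self.request.get('key')
        value = memcache.get(key)  # the actual memcache API call
        if value is None:
            self.error(404)
            return
        self.response.write(value)  # assumes a string value

app = webapp2.WSGIApplication([('/get', GetHandler)])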
Since accessing memcache is free, you would only have to pay for instance time. I don't know what throughput you are expecting, but I can imagine scenarios where you could even fit into the free daily quota (currently 28 instance hours/day). All in all, the intermediate App Engine project should not come with a significant cost in either performance or price.
Before using this workaround:
The above snippet of code is intended for illustration purposes only. There still remain some issues to be dealt with before using this approach in production. For example, as pointed out by Suken, anyone would be able to access your memcache if they knew what requests to send. Here are four additional things I would personally do:
Address the security issues by sending some authentication token with each request. An obvious necessity is to make the calls over HTTPS to prevent man-in-the-middle attackers from obtaining this token. Note that App Engine's appspot.com subdomains are accessible via HTTPS by default.
Prefer batch API calls such as getAll() over their single-record alternatives such as get(). Retrieving multiple records in one batch call is much faster than making multiple separate API calls.
Use POST requests (instead of GET) to access the facade application, so you won't have to worry about your batch requests being too large. I only used a GET request in the example above because it was easier to write. (A sketch combining these points follows this list.)
Check whether such usage of App Engine violates the Terms of Service. Personally, I don't believe it does, and I don't see why Google should mind; after all, you will be paying for instance hours.
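A sketch of the hardened variant: POST, a batch lookup via get_multi() (the Python equivalent of the getAll() mentioned above), and a shared-secret token check; the token scheme and route are illustrative only:

import json
import webapp2
from google.appengine.api import memcache

EXPECTED_TOKEN = 'replace-with-a-real-secret'  # hypothetical shared secret

class BatchGetHandler(webapp2.RequestHandler):
    def post(self):
        payload = json.loads(self.request.body)
        if payload.get('token') != EXPECTED_TOKEN:
            self.error(403)
            return
        # get_multi() fetches many records in one RPC, which is much
        # faster than one memcache.get() call per key.
        records = memcache.get_multi(payload.get('keys', []))
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.dumps(records))  # assumes JSON-serializable values

app = webapp2.WSGIApplication([('/batch-get', BatchGetHandler)])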
EDIT: After giving this some more thought, I believe that the suggested workaround is actually what Google presumes you to do. Given that Google's objective is to earn money, it would be unreasonable to provide a free service unless it was part of a paid one. Of course, other billing schemes could be created, for example allowing direct access only for developers who are willing to pay for dedicated memcache. The question is whether your use case is broad enough to convince Google to take some action.
No, AFAIK the Memcache service is not available outside GAE. To be even more specific, it is only available inside the GAE standard environment; it is unavailable in the GAE flexible environment.
But some of the alternative solutions suggested for GAE flexible users might be usable for you as well. From Memcache:
The Memcache service is currently not available for the App Engine flexible environment. An alpha version of the memcache service will be available shortly. If you would like to be notified when the service is available, fill out this early access form.
If you need access to a memcache service immediately, you can use the third party memcache service from Redis Labs. To access this service, see Caching Application Data Using Redis Labs Memcache.
You can also use Redis Labs Redis Cloud, a third party fully-managed service. To access this service, see Caching Application Data Using Redis Labs Redis.
As stated by other users, Memcache is not offered as a service outside GAE (Google App Engine). I would like to point out that implementing a GAE facade over the memcache service has security ramifications. Please note that the facade GAE memcache app will be exposed on the public internet like any other GAE service. I am assuming that you want to use memcache for internal use only. Another aspect to think about is writing into memcache: if you intend to write to memcache from outside GAE, then definitely avoid the facade implementation. If compromised, anyone will be able to use your facade implementation as their own cache without paying for it ;)
My suggestion is to spin up a stack using the GCP Cloud Launcher. There are various stack templates available for both Redis and Memcached. Further, you can configure the template to use preemptible burstable instances to reduce the cost of your cache.
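Once you run your own cache this way, any external client can reach it over the network; a minimal sketch with the redis-py package (host, port, and password are placeholders for your stack's values):

import redis

# Placeholders: use the address and credentials of your deployed stack.
r = redis.Redis(host='10.0.0.5', port=6379, password='secret')
r.set('greeting', 'hello')
print(r.get('greeting'))  # b'hello'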

Scaling AppEngine applications

I've been thinking lately about the pros and cons of using AppEngine.
My concern would be that when we create an application for GAE, the front-end code (the UI stuff) is served from the same application instance in the GAE cloud as the Datastore code.
The questions, as my application grows, would be:
For GAE:
Do I need to create multiple instances of my application?
If so, do I need to manually update all instances?
For Appscale:
Do I also need to create multiple instances of my application?
If so, do I need to manually update all instances?
GAE starts new frontend instances automatically; you can't even create or update frontend instances yourself. You just need to configure min/max latency and min/max idle instances in Application Settings. See the docs for performance settings.
Btw, there are also backend instances, which can be resident and started manually from the Admin Console. But that's useful only when you need something very specific.
You seem to have missed the whole point of App Engine, which is that Google takes care of scaling your app for you automatically. You also seem to be confusing 'instance' with 'version': you have control over which version of your app is serving, but Google dynamically creates and kills instances of that version depending on load. That's the main benefit of using App Engine in the first place.
