GAE flex Connection String to first generation CloudSQL - google-app-engine

I have a GAE flex application that uses SQLAlchemy/MySQL. I also have a Google Cloud SQL MySQL instance, First Generation. Is it possible to connect a GAE flex environment to a First Generation Cloud SQL instance without connecting as an external app (and thus needing to whitelist the world)? The Google documentation says to use /cloudsql/<INSTANCE_CONNECTION_NAME> as the connection string. I've tried many different flavors, but I'm still unsuccessful.
Examples:
mysql+pymysql://user:password@/cloudsql/<INSTANCE_CONNECTION_NAME>
mysql+pymysql:///cloudsql/<INSTANCE_CONNECTION_NAME>
Is there a different driver that's needed?
Thanks

You are correct: it is not possible to connect GAE flex to a Cloud SQL First Generation instance.
The alternatives are:
a. Use an App Engine Standard app instead.
b. Migrate the First Generation instance to a Second Generation one (see the connection-string sketch below).
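For reference, once the instance is Second Generation, the unix-socket path from the documentation does work from GAE flex. A minimal sketch with SQLAlchemy and PyMySQL, where USER, PASSWORD, DATABASE and <INSTANCE_CONNECTION_NAME> are placeholders:

# Sketch only: connect over the /cloudsql unix socket from GAE flex.
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://USER:PASSWORD@/DATABASE"
    "?unix_socket=/cloudsql/<INSTANCE_CONNECTION_NAME>"
)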

After searching the various Google documents, IRC channels and Slack forums, I've come to the conclusion that Cloud SQL Gen 1 isn't supported on GAE flex :(

Related

Is it possible to use a fully managed service (Cloud Run or App Engine) with firewall in GCP?

Problem. I'm looking for an agile way to ship a Docker container (stored on GCR.IO) to a managed service on GCP:
one Docker container, gcr.io/project/helloworld, with private data (say, a Cloud SQL backend) - it can't face the real world.
a bunch of IPs I want to expose it to: say [ "1.2.3.4" , "2.3.4.0/24" ].
My ideal platform would be Cloud Run, but also GAE works.
I want to develop in an agile way (say, deploy with 2-3 lines of code). Is it possible to run my service privately and yet super easily? We're not talking about a huge production project; we're talking about playing around and writing a POC you want to share securely over the internet with a few friends, making sure the rest of the world gets a 403.
What I've tried so far.
The only thing that works easily is a GCE VM with a Docker-friendly OS (like COS) where I can set up firewall rules. This works, but it's a lame Docker app on a disposable VM: the machine runs forever and dies at reboot unless I stabilize it with cron/startup scripts. It feels like I'm doing somebody else's job.
Everything else I've tried so far failed:
Cloud Run. Amazing, but I can't set up firewall rules on it, or Cloud Director, etc.; it seems to work only with IAP, which is painful to set up.
GAE. It serves from multiple IPs and I can't detach the public IPs or firewall it. I managed to get the IP filtering working within the app, but that seems a bit risky. I don't [want to] trust my coding skills :)
Cloud Armor. It only supports an HTTPS Load Balancer, which I don't have. Nor do I have MIGs to point it to. I want simplicity.
Traffic Director. It also needs an HTTP L7 balancer. But I have one Docker container, on a single pod. Why do I need an LB?
GKE. Actually this seems to work [1], but it's not fully managed (I need to create the cluster, pods, ...).
Is this a product deficiency or am I looking at the wrong products? What's the simplest way to achieve what I want?
[1] how do I add a firewall rule to a gke service?
Please limit your question to one service. Not everyone is an expert on all Google Cloud services. You will have a better chance of a good answer for each service if they are separate questions.
In summary, if you want to use Google Cloud firewall rules (the VPC equivalent of security groups) to control IP-based access, you need to use a service that runs on Compute Engine, as firewall rules are part of the VPC feature set. App Engine Standard and Cloud Run do not run within your project's VPC. This leaves you with App Engine Flex, Compute Engine, and Kubernetes.
I would change strategies and use Google Cloud Run with authentication required. Access is controlled by Google Cloud IAM via OAuth identity tokens.
Cloud Run Authentication Overview
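To call a service locked down this way, the caller presents an identity token minted for the service URL. A minimal sketch in Python, assuming the google-auth and requests libraries and a caller that has the roles/run.invoker role; the service URL shown is a placeholder:

# Mint an ID token for the Cloud Run service and call it with that token.
import requests
from google.auth.transport.requests import Request
from google.oauth2 import id_token

SERVICE_URL = "https://my-service-abc123-uc.a.run.app"  # hypothetical URL

token = id_token.fetch_id_token(Request(), SERVICE_URL)
resp = requests.get(SERVICE_URL, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)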
I agree with John Hanley's reply and I have up-voted his answer.
Also, I understand that you are looking for a way to restrict access to your service within GCP.
By setting firewall rules, you can limit access to your service by restricting the allowed source IP range, so that only those addresses are accepted as the source IP.
Please review another thread on Server Fault [1], which explains how to “Restrict access to single IP only”.
https://serverfault.com/questions/901364/restrict-access-to-single-ip-only
You can do this quite easily with a serverless NEG for Cloud Run or GAE.
If you're doing this in Terraform, you can follow this article.

How can one Instance in Google App Engine Flex Environment talk to another?

Instances in my Google App Engine Flexible Environment "compat" system communicate with each other via REST invocations. How can I port this to the new Flex Env?
The documentation says "You can no longer route traffic to specific instances, such as https://instance-dot-version-dot-service-dot-app-id.appspot.com" -- so how do I port this to non-compat Flex Env?
This is really an App Engine anti-pattern - instances come up and go down all the time, so it's generally not advisable to try communicating between them like this. That having been said, there are two approaches here that can work.
Use Google Cloud Pub/Sub. This is nice because you don't have to deal with the instance-lifecycle issues. You put a job on a queue, and someone goes and picks it up (see the sketch after this list).
Use something like etcd with a TTL and IP addresses. You can have each instance report its IP back to a central etcd instance on startup with a low TTL. Then you can query etcd to get a list of active instances and their IPs. Inside the network, using IP<->IP connectivity between the instances should be fine.
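A minimal sketch of the Pub/Sub approach, assuming the google-cloud-pubsub client library; PROJECT_ID and the topic/subscription names are placeholders:

from google.cloud import pubsub_v1

# Any instance can publish a job to the shared topic...
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("PROJECT_ID", "work-items")
publisher.publish(topic_path, b'{"task": "resize", "blob": "images/42.png"}').result()

# ...and whichever instance pulls next picks it up, regardless of instance churn.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("PROJECT_ID", "work-items-sub")
response = subscriber.pull(request={"subscription": subscription_path, "max_messages": 1})
for msg in response.received_messages:
    print(msg.message.data)
    subscriber.acknowledge(request={"subscription": subscription_path, "ack_ids": [msg.ack_id]})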
Best of luck!

Can I have two instances on same App Engine project - Java servlet and Endpoints side by side?

We have Java servlets up and running on GAE, using blobstore, datastore and other cloud services.
Currently, we're starting a migration process to Cloud Endpoints and we've hit an issue: if we use a different GAE project, we would not be able to query our current datastore entities (to the best of my knowledge, Google doesn't want you to do this - see
this question
and the GAE terms of service - section 3.3d), so we need to use the same project for both.
I looked up whether it's possible to have one GAE instance running Java servlets and one instance running Endpoints, but I found no conclusive answer anywhere.
If we try to implement and something goes wrong, we're looking at a potentially major issue for our users, so we need to be sure beforehand.
Has anyone tried something similar, and can assure us that this works?
You have 2 options to run the old and the new code inside the same app (thus with no issues sharing access to the datastore) but as separate engine instances, so they can be developed/deployed/managed independently:
as different versions of the same app/module(s):
the old version remains the default, while the new one can be accessed at a different URL during development (possibly via URL routing - see the sketch after this answer)
you can use traffic splitting to do live A/B testing on the new code and for gradual final migration until you make the new version the default
as different modules of the same app:
both can run (fully functional) side by side indefinitely, but you need to be more careful during development
traffic is routed to the modules in several possible ways
final migration is done by publishing the new URLs, eventually re-directing the old URLs and finally bringing down the old module code
The two approaches can even be combined, if needed, as in the final solution described by the OP in this somewhat similar question (for the Python environment, but Java equivalents exist): Google App Engine upgrading part by part
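As an illustration of the different-URL routing mentioned above, a minimal sketch of hitting the two deployments side by side, assuming a placeholder app ID and a new version named v2 (the project in the question is Java, but the URL scheme is the same from any client):

import requests

# The old (default) version keeps the plain URL; the new version is reachable
# via the version-dot-app URL scheme during development.
OLD = "https://my-app-id.appspot.com/api/items"
NEW = "https://v2-dot-my-app-id.appspot.com/api/items"

print(requests.get(OLD).status_code)
print(requests.get(NEW).status_code)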

How do I set up Google Compute Engine as a HTTPS server?

I'm running an app on a VM instance (instance-1) and would like myproject.appspot.com requests to be served by instance-1.
I read https://cloud.google.com/appengine/docs/java/modules/routing but it wasn't clear. Is there a way to say "send all traffic to my one instance"?
If I go to my (ephemeral) external IP address for that instance, I can see the server. But that won't work for an OAuth2 domain (no IP addresses allowed), so I need it to go through the named domain.
I'd be ok if I could use something constant like instance-1-dot-myproject.appspot.com but would prefer the base myproject.appspot.com to say "any instances? great! use that."
I think you want to use Managed VMs. They give you the flexibility of Google Compute Engine but work more like the PaaS that is Google App Engine.
You don't create the Google Compute Engine VM instances yourself; instead, Managed VMs will spin them up on demand, using the Docker image you provide as the container for the code, data, etc.
Note that as of 29 Sep 2015, per the docs:
Beta
This is a Beta release of Managed VMs. This feature is not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes.

EC2, OpenStack, Google App Engine (GAE) and REST

I was handed an assignment but I don't know where to start.
The aim is to have two pieces of code running. One will run in an OpenStack private cloud and perform the task of indexing two sets of text, with the other running in EC2 with the task of matching the two indexed texts.
I want to access them via Google App Engine.
Ideally, I would like to click a button or perform an action on Google App Engine, which then sends a request to OpenStack to run the code and retrieves the output as a txt file.
That output text file will then be forwarded on to EC2, where the matching will occur, and the results sent back to Google App Engine.
My question is, how can I send the files between the systems using REST requests?
FrankN --
EC2, GAE and OpenStack are disparate cloud computing platforms. Integrating them might mean, say, using one platform while saving backups to another.
CloudU.Rackspace.com is a vendor-neutral education site about cloud computing (note: It'll take six or so hours to finish it all). This might help.
Disclaimer: I work for Rackspace.
Firstly, I'm not really sure what your requirements are, what your code does, or why you are trying to mix cloud providers in that way.
That said, I would suggest taking the upload from GAE and pushing it to AWS S3, where you can then retrieve it and use it as you please from EC2.
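A minimal sketch of that S3 hand-off, assuming the boto3 library with AWS credentials configured; the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")

# GAE side: push the indexed text file to S3.
s3.put_object(Bucket="my-handoff-bucket", Key="indexed/output.txt", Body=b"...indexed text...")

# EC2 side: fetch it for the matching step.
obj = s3.get_object(Bucket="my-handoff-bucket", Key="indexed/output.txt")
data = obj["Body"].read()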
I'm not sure what functionality you are trying to get out of OpenStack that is not present in AWS; however, I would suggest building whatever you are building in EC2 first, then duplicating it on OpenStack services to avoid future vendor lock-in.