How to connect via SSH to Google Cloud Run or App Engine?

I know Cloud Run and App Engine are different services.
I need to connect via SSH to an App Engine or Cloud Run instance to execute some process manually.
The reason to use one of these services is that they charge only when I use them, not 24x7.
Is there some way to do that?
Thanks

Short answer: you can't.
In fact, these services are designed to answer HTTP requests, and you pay for the service only while an HTTP request is being processed. If you log into an instance over SSH, which HTTP request would you pay for? If you run a process on the instance, which HTTP request would you pay for?
Of course, none. But the cost isn't the main reason. Cloud Run and App Engine can create and destroy instances as they wish, according to traffic or other factors. It's pointless to log into an instance and run a process: a few seconds or minutes later the instance may be deleted and a new one created, and you will lose everything you did.
If you use these services, you must accept that the servers are managed by Google and that you can only deploy a service and use it over HTTP. It's not a traditional VM instance; it's "serverless".
That said, if you want to explore the runtime configuration, you can use an HTTP reverse shell. But, in the end, it's not very useful...
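For illustration only, such an "HTTP shell" can be as small as the following Python sketch (the PORT handling follows Cloud Run's convention; this is a debugging toy, never something to deploy for real):

import os
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class ShellHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a command from the request body and run it inside the container.
        length = int(self.headers.get("Content-Length", 0))
        cmd = self.rfile.read(length).decode()
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        self.send_response(200)
        self.end_headers()
        self.wfile.write((result.stdout + result.stderr).encode())

# Cloud Run injects the port to listen on via the PORT environment variable.
HTTPServer(("", int(os.environ.get("PORT", "8080"))), ShellHandler).serve_forever()

Each request may land on a different instance, and anything you change on the filesystem disappears when the instance is recycled, which is exactly why this is "not very useful".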

Context
I code using Codeanywhere, because I work from multiple places with desktop computers and don't want to carry a laptop.
Currently I use VPSs as environments; since my projects are long-lived, I don't need to rebuild or change the environment for years.
The need:
A few times per month I run shell commands, such as testing Node.js scripts, before moving them to serverless (Cloud Run).
The old approach:
run these scripts on a working environment, connecting via SSH.
The modern developer way:
use Codeanywhere containers for code storage and testing, plus a GitLab CI/CD pipeline to deploy automatically to Google Cloud Run instances.

Related

Executing a python script in Google Compute Engine from App Engine

I have a Python script stored in a Compute Engine instance. I also have a web application deployed on the Google App Engine.
What I would like to achieve is to let users enter some parameters on the web application interface and have it execute the script in the Compute Engine instance with the entered parameters.
My question is: how can I access the Compute Engine instance from App Engine and execute the script with the parameters that users passed in?
I think there are several factors here to take into account:
Security implications: having a website backend access a different host to run a command with client-defined parameters upon request can easily introduce lots of potential exploits that aren't worth dealing with.
Sanitizing the parameter or parameters passed to the Python script would be a must, which you might be able to do with shlex.quote().
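For example (the script path and parameter value are hypothetical):

import shlex

# Hypothetical client-supplied parameter, possibly malicious
user_param = "foo.txt; rm -rf /"
safe_param = shlex.quote(user_param)  # -> "'foo.txt; rm -rf /'", a single harmless argument
command = "python3 /opt/scripts/process.py " + safe_param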
Running the script in the VM instance via SSH from App Engine:
With the Google Cloud Client Library for Python you may be able to connect to a given GCE instance and run a command by setting up OS Login and granting roles/compute.osLogin for this instance to the Service Account that runs your App Engine app as described in this guide (with this example).
Otherwise, you may try creating a system account for this purpose inside the instance, allowing its login in the instance's /etc/ssh/sshd_config, and adding a new RSA key to this user's ~/.ssh/authorized_keys file, then connecting with a generic SSH client library like Paramiko to its external IP address, assuming it has one.
In both cases this is going to introduce very high latencies for the requests, as SSH sessions usually take 2+ seconds to establish.
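A minimal sketch of the second approach with Paramiko might look like this (the IP address, user name, key path, and script path are all assumptions):

import paramiko

client = paramiko.SSHClient()
# For a real setup, pin the instance's host key instead of auto-adding it.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("203.0.113.10", username="scriptrunner",
               key_filename="/secrets/scriptrunner_rsa")
# Run the script with a parameter already sanitized via shlex.quote().
stdin, stdout, stderr = client.exec_command("python3 /opt/scripts/process.py 'foo.txt'")
print(stdout.read().decode())
client.close()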
As an acceptable-latency (and potentially more secure) alternative, you might run a simple HTTPS service in the VM (you can probably check for the expected self-signed "snakeoil" certificate in your Python code if needed) and set up a webhook with a long hash-like URL path (and optionally a non-default port). That webhook could be handled by, for instance, a simple PHP script that runs the end script with exec() after passing the parameter variable from its $_POST[] superglobal through escapeshellarg() to (re-)sanitize it.
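The PHP handler is just one option; a comparable sketch in Python with Flask (the URL path, port, and script path are placeholders) could be:

import shlex
import subprocess
from flask import Flask, request

app = Flask(__name__)

# The long random path acts as a shared secret in the webhook URL.
@app.route("/hook/3f9c2a7d8e514b6f9d0c1e2a4b5d6f70", methods=["POST"])
def run_script():
    param = shlex.quote(request.form.get("param", ""))  # (re-)sanitize
    result = subprocess.run("python3 /opt/scripts/process.py " + param,
                            shell=True, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    # "adhoc" generates a self-signed ("snakeoil") certificate; requires pyOpenSSL.
    app.run(host="0.0.0.0", port=8443, ssl_context="adhoc")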

Is it possible to use a fully managed service (Cloud Run or App Engine) with a firewall in GCP?

Problem. I'm looking for an agile way to shoot a Docker container (stored on GCR.IO) to a managed service on GCP:
one Docker container, gcr.io/project/helloworld, with private data (say, a Cloud SQL backend) - it can't face the real world.
a bunch of IPs I want to expose it to: say [ "1.2.3.4" , "2.3.4.0/24" ].
My ideal platform would be Cloud Run, but GAE works too.
I want to develop in an agile way (say, deploy with 2-3 lines of code). Is it possible to run my service secretly and yet super easily? We're not talking about a huge production project; we're talking about playing around and writing a POC you want to share securely over the internet with a few friends, making sure the rest of the world gets a 403.
What I've tried so far.
The only thing that works easily is a GCE VM with a Docker-friendly OS (like COS) where I can set up firewall rules. This works, but it's a lame Docker app on a disposable VM. The machine runs forever and dies at reboot unless I stabilize it with cron/startup scripts. Looks like I'm doing somebody else's job.
Everything else I've tried so far failed:
Cloud Run. Amazing, but I can't set up firewall rules on it, or Cloud Director, ... It seems to work only with IAP, which is painful to set up.
GAE. Works with multiple IPs, but I can't detach the public IPs or firewall it. I managed to get the IP filtering within the app (see the sketch after this question), but it seems a bit risky. I don't [want to] trust my coding skills :)
Cloud Armor. Only supports an HTTPS Load Balancer, which I don't have. Nor do I have MIGs to point it to. I want simplicity.
Traffic Director also needs an HTTP L7 balancer. But I have one Docker container, on a single pod. Why do I need a LB?
GKE. Actually this seems to work [1], but it's not fully managed (I need to create the cluster, pods, ...).
Is this a product deficiency or am I looking at the wrong products? What's the simplest way to achieve what I want?
[1] how do I add a firewall rule to a gke service?
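For what it's worth, the in-app filtering mentioned above might look like this minimal Flask sketch; the allowlist matches the example IPs, and blindly trusting X-Forwarded-For is precisely the risky part:

import ipaddress
from flask import Flask, abort, request

app = Flask(__name__)

# The IP ranges from the question
ALLOWED = [ipaddress.ip_network("1.2.3.4/32"), ipaddress.ip_network("2.3.4.0/24")]

@app.before_request
def check_source_ip():
    # On Cloud Run/GAE the client IP is the first entry of X-Forwarded-For.
    forwarded = request.headers.get("X-Forwarded-For", "")
    client = forwarded.split(",")[0].strip() or (request.remote_addr or "")
    try:
        ip = ipaddress.ip_address(client)
    except ValueError:
        abort(403)
    if not any(ip in net for net in ALLOWED):
        abort(403)

@app.route("/")
def hello():
    return "only allow-listed IPs see this"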
Please limit your question to one service. Not everyone is an expert on all Google Cloud services. You will have a better chance of a good answer for each service if they are separate questions.
In summary, if you want to use Google Cloud firewall rules to control IP-based access, you need to use a service that runs on Compute Engine, as firewall rules are part of the VPC feature set. App Engine Standard and Cloud Run do not run within your project's VPC. This leaves you with App Engine Flex, Compute Engine, and Kubernetes.
I would change strategies and use managed Cloud Run with authentication required. Access is controlled by Google Cloud IAM via OAuth tokens.
Cloud Run Authentication Overview
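For example, a caller that has been granted roles/run.invoker can mint an ID token and call the service. A minimal sketch with the google-auth and requests libraries (the service URL is a placeholder, and ambient credentials such as a service account are assumed):

import requests
import google.auth.transport.requests
import google.oauth2.id_token

service_url = "https://my-service-xyz-uc.a.run.app"  # placeholder URL

# Mint an ID token for the service from the ambient credentials.
auth_request = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_request, service_url)

response = requests.get(service_url, headers={"Authorization": "Bearer " + token})
print(response.status_code, response.text)

Anyone without a valid token (the rest of the world) gets a 403 from Cloud Run itself, before your code runs.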
I agree with John Hanley's reply and have up-voted his answer.
Also, I understand that you are looking for how to restrict access to your service through GCP.
By setting firewall rules, you can limit access to your service by restricting the allowed source IP range, so that only those addresses are accepted as the source IP.
Please review another thread on Server Fault [1], which explains how to "Restrict access to single IP only".
https://serverfault.com/questions/901364/restrict-access-to-single-ip-only
You can do this quite easily with a serverless NEG for Cloud Run or GAE.
If you're doing this in Terraform, you can follow this article.

Dev server test for App Engine when communicating with another server

I'm using Google App Engine Standard to write a small application, let's call it AppX.
AppX is supposed to receive a POST message from another website, let's say B, and then do some processing and show the result on its main page.
The question is:
I don't know how to use the dev_app server to debug, because if I use the dev_app server, it runs locally without HTTPS, and then I don't know how to send a POST message from website B.
Google Cloud Shell is a very limited shell too; it only has a limited set of ports enabled, for outgoing connections only.
Although there might be a way to configure it, I think the easiest way to test calls from website B to a dev_app server would be configuring a GCE virtual machine with a fixed IP. There you can configure the firewall freely, and also not worry about any non-interactive session finishing abruptly.

Debugging GAE microservices locally but without using localhost

I would like to debug my Google App Engine (GAE) app locally but without using localhost. Since my application is made up of microservices, the urls in a production environment would be along the lines of:
https://my-service.myapp.appspot.com/
But code in one service can call another service, and that means the URLs are hardcoded. I could of course use a mechanism in code to determine whether the app is running locally or on GAE and use different URLs, although I don't see how a local URL would handle the service name, since the only way to run an app locally is to use localhost. Hence:
http://localhost:8080/some-service
Notice that "some-service" maps to a servlet, whereas "my-service" is a name assigned to a service when the app is uploaded. These are really two different things.
The only possible solution I was able to find was to use a reverse proxy which would map one url to a different one. Still, it isn't clear whether the GAE development SDK even supports this.
Personally, I chose to detect the local development vs GAE environment and build my inter-service URLs accordingly. I feel it was a worthwhile effort; I've been (re)using it a lot. No reverse proxy or any other additional ops necessary; it just works.
Granted, I'm using Python, so I'm not 100% sure a complete similar Java solution exists. But maybe it can point you in the right direction.
To build the per-service URLs I used modules.get_hostname() (the implementation is presented in Resolve Discovery path on App Engine Module). I believe the Java equivalent would be getInstanceHostname() from com.google.appengine.api.modules.
This method, when executed on the local server, automatically provides the particular port the server listens to for each service.
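In Python (first-generation runtime) that looks roughly like this; the service name is a placeholder:

from google.appengine.api import modules

# Locally this returns something like "localhost:8081" (the service's own port);
# on production it returns "my-service.myapp.appspot.com".
host = modules.get_hostname(module="my-service")
url = "http://" + host + "/some-endpoint"  # switch to https on production as needed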
BTW, all my services for an app are executed by a single development server process, which listens on multiple ports (this is, I guess, how it can provide the modules.get_hostname() info); see Running multiple services using dev_appserver.py on different ports. This is the part I'm unsure about: if/how the Java local dev server can simultaneously run multiple services. Apparently this used to be supported some time ago (when services were still called modules):
Serving multiple GAE modules from one development server?
GAE modules on development server
This can be accomplished with the following steps:
Create an entry in the hosts file
Run the App Engine Dev server from a Terminal using certain options
Use IntelliJ with Remote debugging to attach to the App Engine Dev server.
To edit the hosts file on a Mac, edit the file /etc/hosts and add the domain that corresponds to your service. Example:
127.0.0.1 my-service.myapp.com
After you save this, you need to restart your computer for the changes to take effect.
Run the App Engine Dev server manually:
dev_appserver.sh --address=0.0.0.0 --jvm_flag=-Xdebug \
  --jvm_flag=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000 \
  [path_to_exploded_war_directory]
In IntelliJ, create a debug configuration. Use the Remote template to create this configuration. Set the host to the URL you set in the hosts file and set the port to 8000.
You can set a breakpoint and run the app in IntelliJ. IntelliJ will attach to the running instance of App Engine Dev server.
Because a port is used during debugging but no port is used when the app runs on GAE in production, you need to add code that identifies whether the app is running locally or on GAE. This can be done as follows:
// Requires: import com.google.appengine.api.utils.SystemProperty;
private String mServiceUrl = "my-service.my-app.appspot.com";
...
// Append the debug port only when running on the local dev server
if (SystemProperty.environment.value() != SystemProperty.Environment.Value.Production) {
    mServiceUrl += ":8000";
}
See https://cloud.google.com/appengine/docs/standard/java/tools/using-local-server
An improved solution is to avoid including the port altogether, so you don't need code to determine whether your app is running locally or on the production server. One way to do this is to use Charles (an application for monitoring and interacting with HTTP requests) and its Remote Mapping feature, which lets you map one URL to another. When enabled, you could map something like:
https://my-service.my-app.appspot.com/
to
https://localhost:8080
You would then enable the option to include the original host, so that the original host gets delivered to the local dev server. As far as your code is concerned, it only sees:
https://my-service.my-app.appspot.com/
although the IP address will be 127.0.0.1:8080 when remote mapping is enabled. Using HTTPS on localhost, however, does require that you enable SSL certificates for Charles.
For a complete overview on how to setup and debug microservices for a GAE Java app in IntelliJ, see:
https://github.com/JohannBlake/gae-microservices

EC2 , Openstack, Google App Engine (GAE) and REST

I was handed an assignment but I don't know where to start.
The aim is to have two pieces of code running. One will run in an OpenStack private cloud and perform the task of indexing two sets of text, with the other running in EC2 with the task of matching the two indexed texts.
I want to access them via google app engine.
Ideally, I would like to click a button or perform an action on Google App Engine, which then sends a request to OpenStack to run the code and retrieve the output as a text file.
That output text file will then be forwarded on to EC2, where the matching will occur, and the results sent back to Google App Engine.
My question is, how can I send the files between the systems using REST requests?
FrankN --
EC2, GAE and OpenStack are disparate cloud computing platforms. Integrating them might mean, say, using one platform while saving backups to another.
CloudU.Rackspace.com is a vendor-neutral education site about cloud computing (note: It'll take six or so hours to finish it all). This might help.
Disclaimer: I work for Rackspace.
Firstly, I'm not really sure what your requirements are, what your code does, or why you are trying to mix cloud providers in that way.
That said, I would suggest taking the upload from GAE and pushing it to AWS S3, where you can then retrieve it and use it as you please from EC2.
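A sketch of that hand-off with boto3 (the bucket and file names are placeholders; AWS credentials available on both sides are assumed):

import boto3

s3 = boto3.client("s3")  # picks up AWS credentials from the environment/config

# GAE/OpenStack side: upload the indexed output
s3.upload_file("indexed_output.txt", "my-handoff-bucket", "indexed_output.txt")

# EC2 side: fetch it for the matching step
s3.download_file("my-handoff-bucket", "indexed_output.txt", "/tmp/indexed_output.txt")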
Not sure what functionality you are trying to get out of OpenStack that is not present in AWS; however, I would suggest building whatever you are building in EC2 first, then duplicating it on OpenStack services to avoid future vendor lock-in.
