Is it possible to run sidecar containers or init containers in Google Cloud Run or App Engine? I couldn't find any documentation on this, and when I asked on the GCP support forums I was directed to Stack Overflow. If it is possible, how would you accomplish it? I came across this repo but it wasn't helpful.
I know it is possible with GKE, but I'm trying to do the same with these services.
You can't, for now: on Cloud Run you can't run a Pod, you can only run a single container. However, it's possible to run a multi-process container to help you achieve this.
You can find here and here posts from Ahmet on running several processes in the same container.
Note: Ahmet is one of the Cloud Run engineers at Google; you can rely on his articles!
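For illustration, here is a minimal sketch of an entrypoint script for such a multi-process container, assuming a hypothetical background helper (worker.sh) and a web server binary (./server) that listens on $PORT; Ahmet's posts cover the pattern in more detail.
#!/bin/sh
# entrypoint.sh - sketch of a multi-process container for Cloud Run.
# worker.sh and ./server are placeholders for your own processes.
./worker.sh &                           # start the sidecar-like helper in the background
exec ./server --port="${PORT:-8080}"    # keep the main server in the foreground on $PORT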
As John mentioned, the answer is no. Cloud Run and App Engine both provide a simple way to deploy a containerized application into the cloud and have it run.
To keep this as simple and streamlined as possible, additional features that you see in Pod specs (such as init containers or running multiple containers) are not available.
For more complex deployments, it is recommended to use GKE.
I can't find any documentation on how Cloud Scheduler works under the hood. An App Engine app is needed in the project, so I assume that the HTTP calls or Pub/Sub messages are initiated from App Engine.
Currently I can use Cloud Scheduler even without an App Engine app in the project. Apparently a Compute Engine setup that contains a permanently running VM is also sufficient. Could someone confirm my assumptions, or does anyone have sources on this?
I can't tell you how Cloud Scheduler works under the hood. I can just tell you that it works well!
I'm sure there is a VM, or a cluster of VMs, in Google's serverless environment, and your Cloud Scheduler job is set up on it. It's serverless: what's under the hood doesn't matter, it works, and that's what I want!
Now, the relation with App Engine can be confusing. In fact, there is no longer any real relation between the products, but you need the App Engine API activated on your project to use Cloud Scheduler. This oddity makes sense if you have been using Google Cloud for a while: at the beginning, only App Engine existed, and Datastore, Cloud Tasks, and Cloud Scheduler were all "modules" of App Engine. Year after year, Google refactored and extracted these modules into independent products, as you see them today. However, some ties remain, such as the API activation requirement.
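As a concrete illustration (project setup, job name, schedule, and target URL are placeholders), the API activation and a basic HTTP job look roughly like this with gcloud:
# Enable the App Engine and Cloud Scheduler APIs on the project.
gcloud services enable appengine.googleapis.com cloudscheduler.googleapis.com
# Create a simple HTTP-target job that fires every 10 minutes.
gcloud scheduler jobs create http my-job --schedule="*/10 * * * *" --uri="https://example.com/run" --http-method=GET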
There have been a lot of changes to App Engine Standard. Is it still possible to run a local instance of the app with something similar to dev_appserver.py and to use the new cloud.google.com/... APIs?
Previously, you could emulate the Datastore locally for example.
For GAE first gen apps, I believe the best option is to stick with dev_appserver.py. I'm keeping my fingers crossed that Google doesn't break that anytime soon.
For GAE second gen apps, you need to run your app directly (e.g. for Flask, python main.py) and run the Datastore emulator in a separate terminal window. Other Google APIs need to be mocked, or you can create a test GAE project and use the test project's credentials when running locally. For the test project, you may have to pay a bit, but costs should be low.
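As a rough sketch of that workflow (assuming a Flask app in main.py that uses the Cloud Datastore client library):
# Terminal 1: start the Datastore emulator.
gcloud beta emulators datastore start
# Terminal 2: point the client library at the emulator, then run the app directly.
$(gcloud beta emulators datastore env-init)
python main.py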
I am currently experimenting with the Google App Engine flexible environment, especially the feature allowing you to build custom runtimes by providing a Dockerfile.
Docker provides a really nice feature called docker-compose for defining and running multi-container Docker applications.
Now the question is, is there any way one can use the power of docker-compose within GAE? If the answer is no, what would be the best approach for deploying a multi-container application (for instance Nginx + PHP-FPM + RabbitMQ + Elasticsearch + Redis + MongoDB, ...) within GAE flexible environment using Docker?
It is not possible at this time to use docker-compose to run multiple application containers within a single App Engine instance. This does, however, seem to be by design.
Scaling application components independently
If you would like to have multiple application containers, you would need to deploy them as separate App Engine services. There would still be only a single application container per service instance, but there could be multiple instances of each service. This grants you the flexibility you seek of scaling each application component independently. In addition, if the application in one container were to hang, it could not affect the other services, since they reside in different VMs.
An added benefit of deploying each component as a separate service is that one need not use the flexible environment for every service. For some very small tasks such as API backends or serving relatively slow-changing web content, the standard environment may suffice and may be less expensive at low resource levels.
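As a rough illustration of that layout (directory names and service names are made up), each component gets its own app.yaml declaring a service name, and each is deployed and scaled separately:
# web/app.yaml contains "service: web"; api/app.yaml contains "service: api".
gcloud app deploy web/app.yaml
gcloud app deploy api/app.yaml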
Communication between components
Since one of your comments mentions getting instance IPs, I thought you might find inter-service communication useful. I'm not certain for what reason you wish to use VM instance IPs, but a typical use case might be to communicate between instances or services. To do this without instance IPs, your best bet is to issue HTTP requests from one service to another simply using the appropriate URL. If you have a service called web and one called api, the web service can issue a request to api.mycustomdomain.com, where your application is hosted, and the api service will receive a request with the X-Appengine-Inbound-Appid header set to your project ID. This can serve as a way of identifying the request as coming from your own application.
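For example (hostname is a placeholder), the call from the web service is just an ordinary HTTPS request; the receiving api service only needs to check the X-Appengine-Inbound-Appid header against your project ID, since App Engine sets that header for requests coming from your own app and strips it from outside traffic:
# Issued from code running inside the web service; the URL is a placeholder.
curl "https://api-dot-my-project.appspot.com/v1/status"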
Multi-container applications using Docker
You mention many examples of applications including Nginx, PHP-FPM, RabbitMQ, etc. With App Engine custom runtimes, you can deploy any container to handle traffic as long as it responds to requests on port 8080. Keep in mind that the primary purpose of the application is to serve responses. The instances should be designed to start up and shut down quickly so they can be horizontally scaled. They should not be used to store any application data; that should remain outside of App Engine, using tools like Cloud SQL, Cloud Datastore, BigQuery, or your own Redis instance running on Compute Engine.
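As a quick local sanity check before deploying (image and container names are placeholders), you can confirm that a custom runtime answers on port 8080:
docker build -t my-custom-runtime .                                # build the custom runtime image
docker run -d -p 8080:8080 --name runtime-test my-custom-runtime   # run it locally
curl http://localhost:8080/                                        # should return your app's response
docker rm -f runtime-test                                          # clean up the test container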
I hope this clarifies a few things and answers your questions.
You can follow the steps below to deploy a container built from a docker-compose file to Google App Engine.
Follow link
You can build your custom image using the docker-compose file:
docker-compose build
Create a tag for the local build:
docker tag [SOURCE_IMAGE] [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Push the image to Google Container Registry:
docker push [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Deploy the container:
gcloud app deploy --image-url=[HOSTNAME]/[PROJECT-ID]/[IMAGE]
Please configure authentication for the docker commands to run.
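For example, a minimal way to set that up is to let gcloud act as a Docker credential helper:
# Configure Docker to authenticate to Google's registry so the tag/push commands above work.
gcloud auth configure-docker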
I've built systems on top of Google's App Engine and leveraged Google's Datastore, but for my new project I'm considering a containerized solution (using Google's Container Engine). Does anyone with experience using both technologies together know:
if it is possible to use Container Engine with Datastore?
if it's easy to set up a local containerized dev environment with gcd?
if there are some serious headaches I should consider before going down this route?
Absolutely! You can run any code you want in Container Engine, and if you add the datastore scope to your cluster when you create it, authentication to the Datastore API will be automatic if you're using Datastore's client libraries or tools.
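For example (cluster name and zone are placeholders), the scope can be added at cluster creation time:
# Node VMs created with the datastore scope let the client libraries pick up credentials automatically.
gcloud container clusters create my-cluster --zone us-central1-a --scopes datastore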
I'm not familiar with the local gcd environment, so I can't help much here. Testing Docker containers locally before pushing them to the cloud works great, so the only question will be making sure the gcd dev environment can be exposed to your local containerized app.
The dev environment is the one issue I'm not sure of. Using Datastore from Container Engine should work fine.
What I have done is just create a service account and use the JSON key to access Datastore when I am working locally. It seems to work pretty well.
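A minimal sketch of that setup (key path, project ID, and image name are placeholders):
# Point the Datastore client libraries at a service-account key for local work (standard ADC variable).
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-project-sa.json"
export GOOGLE_CLOUD_PROJECT="my-project"
# Mount the same key into the local container so the Datastore client can use it there too.
docker run -e GOOGLE_APPLICATION_CREDENTIALS=/keys/sa.json -v "$HOME/keys/my-project-sa.json":/keys/sa.json my-app-image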
I wanted to create multiple instances of my application in either Google Cloud or EC2. I have two queries regarding this:
1. How to achieve this?
2. Can we create virtual instances by using ZooKeeper?
Google App Engine instances are started automatically as your traffic rises. You may also have always-on instances or backend instances. Just read the docs: http://code.google.com/intl/pt-BR/appengine/docs/adminconsole/instances.html
Google App Engine is not well suited for use with ZooKeeper. Since Java code runs in a limited sandbox, you may not be able to communicate with ZooKeeper at all. Also, you would have to start and stop your backends programmatically, which means a lot of extra work.
As for EC2, see this:
http://www.mail-archive.com/zookeeper-user#hadoop.apache.org/msg01083.html