I'm trying to deploy my App Engine Go application to a Managed VM, and I keep getting this error:
Pulling image: google/appengine-go
Traceback (most recent call last):
File "/home/honeybooboo/google-cloud-sdk/./lib/googlecloudsdk/gcloud/gcloud.py", line 170, in <module>
main()
File "/home/honeybooboo/google-cloud-sdk/./lib/googlecloudsdk/gcloud/gcloud.py", line 166, in main
_cli.Execute()
File "/home/honeybooboo/google-cloud-sdk/./lib/googlecloudsdk/calliope/cli.py", line 385, in Execute
post_run_hooks=self.__post_run_hooks, kwargs=kwargs)
File "/home/honeybooboo/google-cloud-sdk/./lib/googlecloudsdk/calliope/frontend.py", line 274, in _Execute
pre_run_hooks=pre_run_hooks, post_run_hooks=post_run_hooks)
File "/home/honeybooboo/google-cloud-sdk/./lib/googlecloudsdk/calliope/backend.py", line 928, in Run
result = command_instance.Run(args)
File "/home/honeybooboo/google-cloud-sdk/lib/googlecloudsdk/appengine/app_commands/setup_managed_vms.py", line 39, in Run
args.image_version)
File "/home/honeybooboo/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/images/pull.py", line 54, in PullBaseDockerImages
util.PullSpecifiedImages(docker_client, image_names, version, bucket)
File "/home/honeybooboo/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/images/util.py", line 232, in PullSpecifiedImages
'Error pulling {image}: {e}'.format(image=image_name, e=e))
googlecloudsdk.appengine.lib.images.util.DockerPullError: Error pulling google/appengine-go: 404 Client Error: Not Found ("No such id: localhost:49156/google/appengine-go")
My Docker version:
Client version: 1.3.0
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f
OS/Arch (client): linux/amd64
Server version: 1.3.0
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): c78088f
My gcloud version:
Google Cloud SDK 0.9.37
app 2014.11.18
app-engine-go-linux-x86_64 1.9.15
app-engine-java 1.9.15a
app-engine-managed-vms 2014.11.03
app-engine-python 1.9.15a
app-engine-python-extras 1.9.6
bq 2.0.18
bq-nix 2.0.18
compute 2014.11.25
core 2014.11.25
core-nix 2014.10.20
dns 2014.11.06
gae-go 2014.11.25
gae-go-nix 2014.09.10
gae-python 2014.05.06
gcutil 1.16.5
gcutil-nix 1.16.5
gsutil 4.6
gsutil-nix 4.6
preview 2014.11.18
preview-extensions-linux-x86_64 4.1
sql 2014.11.18
Sorry that you're having problems. We're aware of this issue, and it is already fixed in the next SDK release (coming out in a week). As a temporary workaround, please try running
gcloud --verbosity debug preview app setup-managed-vms
(and choose Go in the list of options)
several times (until it succeeds) to get the base image for the Go runtime.
Another option is to pull the base Go image (google/appengine-go) from the containers-prod
bucket using google/docker-registry: https://registry.hub.docker.com/u/google/docker-registry/
Pull the google/docker-registry image
docker pull google/docker-registry
Get your credentials
gcloud auth print-refresh-token
Store your refresh token and your bucket (containers-prod) in a registry-params.env file
cat registry-params.env
GCP_OAUTH2_REFRESH_TOKEN=your-refresh-token
GCS_BUCKET=containers-prod
Run registry
docker run -d --env-file=registry-params.env -p 5000:5000 google/docker-registry
Pull the image
docker pull localhost:5000/google/appengine-go
Retag the image
docker tag localhost:5000/google/appengine-go google/appengine-go
Remove old tag containing registry name
docker rmi localhost:5000/google/appengine-go
Check that your image is there:
docker images | grep google
You'll see something like
google/appengine-go latest 35ef8e2a9c5e 13 days ago 206 MB
Don't forget to stop your registry container
docker ps
docker stop <CONTAINER ID>
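Put together, the whole workaround looks roughly like this (a sketch; it assumes you are already authenticated with gcloud and that port 5000 is free):
docker pull google/docker-registry
echo "GCP_OAUTH2_REFRESH_TOKEN=$(gcloud auth print-refresh-token)" > registry-params.env
echo "GCS_BUCKET=containers-prod" >> registry-params.env
REGISTRY_ID=$(docker run -d --env-file=registry-params.env -p 5000:5000 google/docker-registry)
sleep 5    # give the registry a moment to come up
docker pull localhost:5000/google/appengine-go
docker tag localhost:5000/google/appengine-go google/appengine-go
docker rmi localhost:5000/google/appengine-go
docker stop "$REGISTRY_ID"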
Setup: running in Google Cloud Shell, Standard Environment.
I tried to access Firestore from my Flask app using from google.cloud import firestore.
I installed it with pip install --upgrade google-cloud-firestore -t lib. If I run the script manually, it works fine, but when using dev_appserver.py it fails with the error below.
$ dev_appserver.py app.yaml
INFO 2017-10-06 07:34:35,301 devappserver2.py:105] Skipping SDK update check.
INFO 2017-10-06 07:34:35,391 api_server.py:300] Starting API server at: http://0.0.0.0:34796
WARNING 2017-10-06 07:34:35,391 dispatcher.py:312] Your python27 micro version is below 2.7.12, our current production version.
INFO 2017-10-06 07:34:35,440 dispatcher.py:251] Starting module "default" running at: http://0.0.0.0:8080
INFO 2017-10-06 07:34:35,441 admin_server.py:116] Starting admin server at: http://0.0.0.0:8000
ERROR 2017-10-06 07:34:42,266 wsgi.py:263]
Traceback (most recent call last):
File "/google/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/google/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/google/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/home/user1/projects/probfe/main.py", line 10, in <module>
from google.cloud import firestore
File "/google/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/sandbox.py", line 1132, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named google.gax
However, the packages are present in my lib directory:
$ ls lib
builtins future-0.16.0.dist-info itsdangerous.pyc requests-2.18.4.dist-info
cachetools futures-3.1.1.dist-info jinja2 rsa
cachetools-2.0.1.dist-info google Jinja2-2.9.6.dist-info rsa-3.4.2.dist-info
certifi googleapis_common_protos-1.5.3.dist-info libfuturize setuptools
certifi-2017.7.27.1.dist-info googleapis_common_protos-1.5.3-py2.7-nspkg.pth libpasteurize setuptools-36.5.0.dist-info
chardet google_auth-1.1.1.dist-info _markupbase six-1.11.0.dist-info
chardet-3.0.4.dist-info google_auth-1.1.1-py2.7-nspkg.pth markupsafe six.py
click google_cloud_core-0.27.1.dist-info MarkupSafe-1.0.dist-info six.pyc
click-6.7.dist-info google_cloud_core-0.27.1-py3.6-nspkg.pth past socketserver
concurrent google_cloud_firestore-0.27.0.dist-info pkg_resources tests
copyreg google_cloud_firestore-0.27.0-py3.6-nspkg.pth ply _thread
dill google_gax-0.15.15.dist-info ply-3.8.dist-info tkinter
dill-0.2.7.1.dist-info google_gax-0.15.15-py2.7-nspkg.pth protobuf-3.4.0.dist-info urllib3
_dummy_thread grpc protobuf-3.4.0-py2.7-nspkg.pth urllib3-1.22.dist-info
easy_install.py grpcio-1.4.0.dist-info pyasn1 werkzeug
easy_install.pyc html pyasn1-0.3.7.dist-info Werkzeug-0.12.2.dist-info
enum http pyasn1_modules winreg
enum34-1.1.6.dist-info idna pyasn1_modules-0.1.4.dist-info xmlrpc
flask idna-2.6.dist-info queue
Flask-0.12.2.dist-info itsdangerous-0.24.dist-info reprlib
future itsdangerous.py requests
Since you're using dev_appserver.py, you have a standard environment application, and in the standard environment all external dependencies need to be installed inside your app, not in the local Python installation (which is what you did with your pip invocation).
From Installing a third-party library:
Create a directory to store your third-party libraries, such as lib/.
mkdir lib
Use pip (version 6 or later) with the -t <directory> flag to copy the libraries into the folder you created in the previous step. For example:
pip install -t lib/ <library_name>
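The same guide also has a step that is easy to miss: create an appengine_config.py file next to app.yaml so the vendored lib/ folder is added to the import path. A minimal sketch:
cat appengine_config.py
# appengine_config.py - add libraries installed in the "lib" folder to the import path.
from google.appengine.ext import vendor
vendor.add('lib')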
I have a Flask application deployed on an Amazon Elastic Beanstalk cluster. On my local machine, macOS, I've added an integration with the Google Cloud API, and I've updated my requirements.txt to include the line google-cloud==0.27.0. When I deploy to Elastic Beanstalk with the updated requirements file, my deployment fails during pip install with the error
Running setup.py install for grpcio
Complete output from command /opt/python/run/venv/bin/python3.4 -c "import setuptools, tokenize;__file__='/tmp/pip-build-ve1vz0tx/grpcio/setup
.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-aszzosux-rec
ord/install-record.txt --single-version-externally-managed --compile --install-headers /opt/python/run/venv/include/site/python3.4/grpcio:
Failed to import the site module
Traceback (most recent call last):
File "/opt/python/run/venv/lib64/python3.4/site.py", line 890, in <module>
main()
File "/opt/python/run/venv/lib64/python3.4/site.py", line 848, in main
virtualenv_search_paths(sys.prefix)
File "/opt/python/run/venv/lib64/python3.4/site.py", line 638, in virtualenv_search_paths
addsitedir(sitedir, known_paths)
File "/opt/python/run/venv/lib64/python3.4/site.py", line 204, in addsitedir
addpackage(sitedir, name, known_paths)
File "/opt/python/run/venv/lib64/python3.4/site.py", line 173, in addpackage
exec(line)
File "<string>", line 1, in <module>
KeyError: 'google'
I am able to install my requirements locally in a virtualenv running Python 3. However, when I create a similar virtualenv on my EC2 instance and install the requirements, I get the same error I get during deployment. One thing I have read is that the EC2 instance might not have the Google Cloud SDK installed; however, I installed it on the instance (tested both inside and outside of a virtualenv) using the following commands, as described here:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
How can I diagnose this error and prevent it from happening going forward?
My current hypotheses are:
there is still an issue with the way the Google Cloud SDK is installed or operating on the EC2 instance
there is some conflict between the requirements in my requirements.txt file once I add the google-cloud requirement
I've identified and fixed the problem. I had google==1.9.2 as a package in my requirements.txt and it wasn't playing well with google-cloud==0.27.0. I'm not sure why this occurred though.
Note: when deploying to Elastic Beanstalk, I had to rebuild the environments for the change to take effect. It seems that Elastic Beanstalk reuses the Python virtualenv across deploys, so if a server had ever run a version of my application with google==1.9.2 in its requirements, that previously installed google package would interfere with future deploys that excluded it.
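If you want to confirm the conflict on a running instance, a quick check looks roughly like this (a sketch; it assumes the EB CLI is configured and the default /opt/python/run/venv layout from the traceback above):
eb ssh
source /opt/python/run/venv/bin/activate
pip freeze | grep -i google    # both "google" and "google-cloud-*" appearing together indicates the clash
pip uninstall -y google        # or rebuild the environment so the virtualenv starts clean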
I'm trying to deploy my GAE app remotely with a URL, and this part works nicely.
Jenkins checks out the latest revision correctly but when trying to build with the command specified in the Google Cloud Help:
gcloud --project=<project-id> preview app deploy -q app.yaml
I get the following error message:
[workspace] $ /bin/sh -xe /opt/bitnami/apache-tomcat/temp/hudson7352698921882428590.sh
+ gcloud --project=XYZXYZXYZ preview app deploy -q app.yaml
/opt/bitnami/apache-tomcat/temp/hudson7352698921882428590.sh: 2:
/opt/bitnami/apache-tomcat/temp/hudson7352698921882428590.sh: gcloud: not found
Build step 'Execute shell' marked build as failure
I have changed the project-id to mine, but I can't figure out why the gcloud command is missing.
EDIT
I ran
/usr/local/bin/gcloud --project=<project-id> preview app deploy -q app.yaml
and now I get this error:
Traceback (most recent call last):
File "/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/gcloud/gcloud.py", line 199, in <module>
_cli = CreateCLI()
File "/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/gcloud/gcloud.py", line 197, in CreateCLI
return loader.Generate()
File "/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/calliope/cli.py", line 384, in Generate
cli = self.__MakeCLI(top_group)
File "/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/calliope/cli.py", line 546, in __MakeCLI
log.AddFileLogging(self.__logs_dir)
File "/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/core/log.py", line 546, in AddFileLogging
_log_manager.AddLogsDir(logs_dir=logs_dir)
File "/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/core/log.py", line 330, in AddLogsDir
log_file = self._SetupLogsDir(logs_dir)
File "/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/core/log.py", line 407, in _SetupLogsDir
os.makedirs(day_dir_path)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/.config'
Build step 'Execute shell' marked build as failure
FINAL EDIT
For anyone coming around here later, I did not exactly solve this, but at least I managed to get around the error by creating a brand-new VM with the command Google provides in the Push-to-Deploy with Jenkins guide:
$ PASSWORD=<password> # 12 or more chars, with letters and numbers
$ PROJECT_ID=<project-id>
$ BITNAMI_IMAGE=<bitnami-image> # e.g. bitnami-jenkins-1-606-0-linux-debian-7-x86-64
$ gcloud compute \
instances create bitnami-jenkins \
--project ${PROJECT_ID} \
--image-project bitnami-launchpad \
--image ${BITNAMI_IMAGE} \
--zone us-central1-a \
--machine-type n1-standard-1 \
--metadata "bitnami-base-password=${PASSWORD},bitnami-default-user=user,bitnami-key=jenkins,bitnami-name=Jenkins,bitnami-url=//bitnami.com/stack/jenkins,bitnami-description=Jenkins,startup-script-url=https://dl.google.com/dl/jenkins/p2dsetup/setup-script.sh" \
--scopes "https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/devstorage.full_control,https://www.googleapis.com/auth/projecthosting,https://www.googleapis.com/auth/appengine.admin" \
--tags "bitnami-launchpad"
At least with this new VM I could move forward, and I'm now close to getting it working, but I'm stuck on another error for which I've created another question:
Gcloud preview app can't parse my yaml
Try specifying the gcloud executable with its full path; it might not be accessible on the PATH of Jenkins' shell environment.
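To locate the executable and make it visible to the build, something like this at the top of the Execute shell step should do (the SDK install location varies per machine, so treat the path below as an example):
ls $HOME/google-cloud-sdk/bin/gcloud    # find where the SDK actually lives
export PATH=$PATH:$HOME/google-cloud-sdk/bin
gcloud --project=<project-id> preview app deploy -q app.yaml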
OSError: [Errno 13] Permission denied: '/.config' suggests that Jenkins is attempting to use the root directory as a configuration directory, possibly because of an unusual $HOME directory configuration.
Try setting $CLOUDSDK_CONFIG to point to a directory that the Jenkins user has access to:
CLOUDSDK_CONFIG=/tmp /home/margorjon/google-cloud-sdk/bin/gcloud version
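If that works, you can set it once at the top of the Jenkins job instead of prefixing every command; a sketch (the directory is only an example, any directory the jenkins user owns will do):
export CLOUDSDK_CONFIG=/var/lib/jenkins/.config/gcloud
mkdir -p "$CLOUDSDK_CONFIG"
gcloud --project=<project-id> preview app deploy -q app.yaml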
As Dan Cornilescu said, the PATH environment variable was not set correctly. Run:
ln -s /var/jenkins_home/google-cloud-sdk/bin/gcloud /usr/local/bin/gcloud
on your worker node to symlink gcloud into a directory that is already on the global PATH.
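Afterwards, confirm the Jenkins shell can see it:
which gcloud
gcloud version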
For the past few days I have been trying to trace the source of the error shown below, which I get when trying to deploy an app to Google App Engine on a Windows machine using the command:
> gcloud preview app deploy app.yaml
Beginning deployment...
Verifying that Managed VMs are enabled and ready.
Traceback (most recent call last): ...
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin.../lib\docker\docker\tls.py", line 46, in __init__
'Path to a certificate and key files must be provided'
TLSParameterError: Path to a certificate and key files must be provided through the client_config param. TLS configurations should map the Docker CLI client configurations.
Does anyone have any idea on how to solve this issue?
FYI: I have already set the environment variables DOCKER_CERT_PATH, DOCKER_TLS_VERIFY and DOCKER_HOST as set by Docker.
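For reference, on Windows with Boot2Docker these variables are usually the ones printed by boot2docker shellinit and look roughly like this (the IP address and user name are illustrative, not your actual values):
set DOCKER_HOST=tcp://192.168.59.103:2376
set DOCKER_CERT_PATH=C:\Users\<user>\.boot2docker\certs\boot2docker-vm
set DOCKER_TLS_VERIFY=1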
I've set up an App Engine project locally using Docker (on OSX), and have been running a server using the usual "gcloud preview app run app.yaml" command. From what I can tell, this keeps creating new images over and over again. After an hour or so of work I end up with something like 30 docker images, each taking 130MB.
Eventually I'm told I can no longer bind to localhost:8080. I tried killing all containers and images, but still cannot use localhost:8080 until I reboot.
Seems like I'm not using Docker/gcloud correctly. Anyone have an idea what I might be doing wrong? Is there another way I should be restarting App Engine instances other than hitting Ctrl+C and running the "run" command again?
UPDATE: After looking closer, I noticed I'm getting this message when I run an app locally and a container is created: "http: Hijack is incompatible with use of CloseNotifier". I'm not familiar enough with Docker to understand what's going on here. All searches seem to point to Go, which I am not using.
UPDATE 2: Here is the trace:
Creating container...
INFO 2015-05-05 02:23:28,293 containers.py:560] Container 1564ce4344957114312d6d1dc696ffbb4176b40ace6dcff5e4239e13ee04a8f6 created.
Exception in thread Thread-2:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/Users/judeosborn/google-cloud-sdk/platform/google_appengine/google/appengine/tools/docker/containers.py", line 643, in _ListenToLogs
for line in log_lines:
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 225, in _multiplexed_response_stream_helper
socket = self._get_raw_response_socket(response)
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 167, in _get_raw_response_socket
self._raise_for_status(response)
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 119, in _raise_for_status
raise errors.APIError(e, response, explanation=explanation)
APIError: 500 Server Error: Internal Server Error ("http: Hijack is incompatible with use of CloseNotifier")
INFO 2015-05-05 02:23:28,606 module.py:1745] New instance for module "default" serving on:
http://localhost:8080
There's an ongoing issue with Docker 1.6.x [reference] that prevents gcloud from working well with Managed VMs (which you seem to be using). The easiest workaround until it gets fixed is to downgrade Docker on your development machine to version 1.5.0, which is the latest version known to work.
For Ubuntu, you can do something like:
$ curl -sSL https://get.docker.com/ubuntu | sed 's/lxc-docker/lxc-docker-1.5.0/' | sudo sh
For other Linux distros, you might have to modify that sed pattern, though.
On the other hand, if you're using Boot2Docker under Mac OS X, follow these steps:
Fully uninstall your previous Boot2Docker/Docker setup; there is a nice guide here
Reinstall Boot2Docker/Docker following the instructions here. IMPORTANT: You MUST stop right after completing the "Install Boot2Docker" step and before "Start the Boot2Docker Application". Once you get there, open up a terminal and execute the following commands:
$ mkdir ~/.boot2docker
$ echo 'ISOURL="https://github.com/boot2docker/boot2docker/releases/download/v1.5.0/boot2docker.iso"' > ~/.boot2docker/profile
At this point, you can continue with the "Start the Boot2Docker Application" section and finish the installation. You should now have a valid Docker launchpad with which to start Managed VMs. It'd be nice to double-check that you have the right versions installed by issuing:
$ boot2docker ssh docker version | egrep "(Client|Server) version"
The output should look like:
Client version: 1.5.0
Server version: 1.5.0
Now you can try again your original command:
$ gcloud preview app run app.yaml
Try running:
$ ps uax | egrep "gcloud|appserver"
If you see anything running, kill it... you may even need to kill -9 it.
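A quick way to find and clear them (a sketch; check the matched PIDs before escalating to -9):
ps aux | egrep "gcloud|appserver" | grep -v egrep
kill <PID>       # for each PID in the second column
kill -9 <PID>    # only if a process refuses to die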