I am getting the following error while deploying to Google App Engine:
ERROR: gcloud crashed (SSLHandshakeError): [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
I am using Python 2.7. I also tried turning off the firewall settings, but that does not help. Any suggestions?
This is a common network issue, seen when there is a proxy on your network, or antivirus or similar software that intercepts the connection.
As you mentioned, the issue was solved by deactivating the antivirus software. If you still want to run the antivirus, you can configure it to allow the connection to GCP.
I tried this to skip SSL certificate validation, and it worked:
gcloud config set auth/disable_ssl_validation True
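Note that this turns off certificate checking entirely, so it is best treated as a diagnostic step rather than a permanent fix. If the root cause is a proxy or antivirus that re-signs TLS traffic, a safer option is to keep validation on and point gcloud at that product's CA certificate instead. A minimal sketch, assuming you have exported the proxy's CA bundle to a .pem file (the path below is a placeholder):
# Trust the intercepting proxy's CA instead of disabling validation
gcloud config set core/custom_ca_certs_file /path/to/proxy-ca.pem
# Re-enable certificate validation if it was turned off earlier
gcloud config unset auth/disable_ssl_validation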
I get the error "Error occurred while requesting version information: Connection refused" when I test the connection in the Jenkins configuration for the Artifactory plugin.
I have tried it with Anonymous access enabled in Artifactory, with Anonymous access disabled, and with all three options (Supported, Unsupported, Required) for Password Encryption in Artifactory. I have Default Deployer Credentials in my Jenkins Artifactory configuration, and I have tested the connection both with 'Use Different Resolver Credentials' and without. I consistently get this error.
Any help/ideas would be greatly appreciated
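One quick check, independent of the plugin, is whether the Artifactory endpoint is reachable at all from the machine where Jenkins runs. A sketch, assuming Artifactory's standard health endpoint and substituting your own base URL:
# Should return "OK" if Artifactory is reachable from this host
curl -v http://localhost:8086/artifactory/api/system/ping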
I also ran into a similar problem yesterday.
Problem:
I was running Jenkins and Artifactory in two different Docker containers on my local machine. I had exposed port 8086 for Artifactory and could access it at http://localhost:8086/artifactory in my browser. But giving the same URL for Artifactory in Jenkins produced the error reported in the question.
Solution:
For some unknown reason (possibly because Jenkins, running in its own container, resolves localhost to itself rather than to the Docker host), the Jenkins Artifactory plugin couldn't resolve http://localhost:8086/artifactory, even though the Docker mappings were correct and it was possible to connect to the Artifactory web console with the same URL.
Replacing "localhost" with the Docker container IP did the trick.
The name of the container in which Artifactory was running was docker-plgr_artifactory_1:
Admins-MacBook-Pro-2:~ prakash.tiwari$ docker exec -it docker-plgr_artifactory_1 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.2 08038bc9449b
The IP of the container was 172.18.0.2, so I replaced http://localhost:8086/artifactory with http://172.18.0.2:8081/artifactory, and Jenkins was now able to connect to Artifactory. (8081 is the port inside the Docker container on which Artifactory was running. You would have given it at the time of running the container; alternatively, you can find it by running docker ps and checking the value in the PORTS column.)
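As an aside, rather than reading /etc/hosts, docker inspect can print the container IP directly. A sketch using the container name from this example:
# Print the IP address of each network the container is attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-plgr_artifactory_1
If both containers are on the same user-defined Docker network, using the container name as the hostname (for example http://docker-plgr_artifactory_1:8081/artifactory) may also work, and avoids hard-coding an IP that can change on restart.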
Credit: https://www.arvinep.com/2016/04/jenkins-docker-container-problem.html
Note: I know this solution doesn't fully explain the cause or why it works, but I hope it at least helps some people and saves them time.
I see that you asked this question a while ago, but I just had to deal with a very similar situation. I had loaded the root and intermediate certificates into the cacerts files found under the 4 versions of Java on the build server. The problem was that Jenkins uses its own cacerts file, found in the Jenkins install folder. Once I loaded the certs there, I was able to test the connection to Artifactory and upload the build artifacts. I hope this helps.
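For reference, a minimal sketch of loading a certificate into the cacerts file of the JVM that runs Jenkins; the keystore path and alias are assumptions you would adjust to your installation, and changeit is the default keystore password:
# Import the root CA into the Jenkins JVM's trust store (path and alias are examples)
keytool -importcert -trustcacerts -alias artifactory-root -file root-ca.pem -keystore /path/to/jenkins-jre/lib/security/cacerts -storepass changeit
Restart Jenkins afterwards so the JVM picks up the updated trust store.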
I'm running the following command to deploy my Managed VMs app (on Windows 10):
gcloud preview app deploy app.yaml --project=<PROJECT> --promote
The deployment starts but hangs on the following line:
Copying certificates for secure access. You may be prompted to create an SSH keypair.
And after some time I get the error:
ERROR: (gcloud.preview.app.deploy) Unable to copy certificates.
I've already:
Made sure that there are SSH keys in ~\.ssh\google_compute_engine
Tried to run with --quiet - same results
Renamed ssh-term.exe to ssh.exe - same results
Ran the command as an administrator.
Ran the command with --verbosity debug, which prints the following line multiple times: DEBUG: File [f] does not exist locally.
Any help will be much appreciated!
Found the cause! It was the project's firewall that blocked SSH by default. Fixed that and it worked.
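For anyone hitting the same thing: the missing rule can be recreated with gcloud. A sketch, where the rule name and source range are examples you would tailor to your own project:
# Allow inbound SSH (tcp/22) on the default network
gcloud compute firewall-rules create default-allow-ssh --network default --allow tcp:22 --source-ranges 0.0.0.0/0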
Glad you fixed it; I had the same problem and will use your fix. I did happen across a workaround: using the Container Build API to perform the build.
Enter the command
gcloud config set app/use_cloud_build true
before you run
gcloud preview app deploy
Cite: https://github.com/isusanin/google-cloud-sdk/issues/533
The Google App Engine SDK for PHP (in the local environment) returns an error on any attempt to use Google Cloud Storage. The error message is:
Fatal error: Uncaught exception 'google\appengine\runtime\RPCFailedError' with message 'Remote implementation for app_identity_service.GetAccessToken failed' in /media/data/home/vladimir/setup/gae/google_appengine/php/sdk/google/appengine/runtime/RemoteApiProxy.php on line 92
It exactly repeats the problem described here:
App engine update breaks CloudStorage in dev php env
Test code from the question above shows the same result.
I tried App Engine SDK for PHP versions 1.9.19, 1.9.20, and 1.9.21, without success.
On Win10 this issue can be solved by generating an application-default credentials file:
D:\Workspace\Sourcecode>gcloud auth application-default login
Credentials saved to file: [C:\Users\Otje\AppData\Roaming\gcloud\application_default_credentials.json]
And then setting the environment variable on the command line:
D:\Workspace\Sourcecode>SET GOOGLE_APPLICATION_CREDENTIALS=C:\Users\Otje\AppData\Roaming\gcloud\application_default_credentials.json
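SET only affects the current command prompt session. To make the variable persist across new sessions, setx can be used instead; a sketch with the same path as above:
setx GOOGLE_APPLICATION_CREDENTIALS "C:\Users\Otje\AppData\Roaming\gcloud\application_default_credentials.json"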
It seems to me that GAE running locally just outputs limited error information when it can't find the right credentials to successfully connect to the remote endpoint.
It seems that it was a server-side issue on GAE's end. They fixed it; I discovered that it started to work as expected today, without any changes applied by me.
Trying to run my web2py app from the development server using GoogleAppEngineLauncher
Not sure if the online tutorials are out of date or I'm just missing something, but when I follow the link to download the Google App Engine Python SDK for OS X, I get a dmg for the GoogleAppEngineLauncher.
I download and use that, which installs the proper executables. However, after I set up my app.yaml file and run "dev_appserver.py myApp" I get this error:
fancy_urllib.InvalidCertificateException: Host appengine.google.com returned an invalid certificate (_ssl.c:503: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed):
I don't get the error if I try to launch the app from the Launcher itself; however, I can't launch the app that way because it says the directory already exists and I don't have write permissions. I even tried chmod 777 on the myApp directory.
Should I not be using the GoogleAppEngineLauncher?
Additionally, I tried using the Linux SDK and received the same "certificate" error. The error message directs me to a link mentioning that I need the "ssl" module, but that is included in Python 2.7.2, which I am using.
The link also mentions: "appcfg uses SSL when connecting to the Admin Console by default, unless the --insecure flag is passed." But I cannot find that flag in the help output.
Found this answer, which solves the problem.
Basically, delete the bundled certificate file from the SDK:
rm google_appengine/lib/cacerts/cacerts.txt
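Deleting cacerts.txt makes the SDK skip certificate validation. If you would rather keep validation on, replacing the bundled file with a current CA bundle may also work; a sketch, assuming the Mozilla-derived bundle published at curl.se is acceptable for your environment:
# Replace the SDK's stale CA bundle with an up-to-date one
curl -o google_appengine/lib/cacerts/cacerts.txt https://curl.se/ca/cacert.pem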
I'm trying to use the bulkloader to upload my data to the App Engine server. I run the following command using Python 2.5:
appcfg.py upload_data --application=myappname --kind=mykind
--filename=data_archive.csv --url=http://myappname.appspot.com/remote_api
But it's failing with this authentication error:
[INFO ] Connecting to myappname.appspot.com/remote_api
[ERROR ] Exception during authentication
URLError: <urlopen error [Errno 10061] No connection could
be made because the target machine actively refused it>
[INFO ] Authentication Failed
My idea is to do a bulk download from my development server and then use this dump to do an upload to the App Engine server. The bulk download worked fine; I used this format for it:
appcfg.py download_data --application=myappname --kind=mykind
--url=http://localhost:8888/remote_api --filename=data_archive.csv
But the bulk upload fails. A couple of things: the bulk download asked me for a user ID and password, but the bulk upload does not. Also, I don't currently have an app.yaml file, which I see mentioned a lot; do I need one to do this?
Thanks in advance for any help.
M.
EDIT
For anyone else struggling with this, the problem was indeed being behind the proxy server, but there was another error with the command above: the app ID needs the "s~" prefix added to it (the prefix used by High Replication Datastore applications).
appcfg.py upload_data --application=s~myappname --kind=mykind
--filename=data_archive.csv --url=http://myappname.appspot.com/remote_api
This isn't an authentication issue; that message is a red herring. Your machine is unable to contact the App Engine app at all. Do you have a proxy you need to transit through in order to make external connections?
You do not need --application=s~myappname when using the bulkloader; Google has mentioned this before:
Warning! Do not use the --application= flag to get the application ID when using the bulk loader. Instead, use --url=.
For more detail take a look here:
https://developers.google.com/appengine/docs/python/tools/uploadingdata
app.yaml is how it finds your server. I am not sure how you can attempt an upload without one.
In addition to having an app.yaml that points to the production server, the production server also needs to have remote_api turned on (in its app.yaml, and in the version you are trying to reach):
builtins:
- remote_api: on
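For older SDKs that predate the builtins directive, the equivalent manual handler configuration looked roughly like this; a sketch from the legacy docs, to be verified against your SDK version (it also matches the /remote_api path used in the commands above):
handlers:
- url: /remote_api
  script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
  login: admin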