I installed the Google Cloud SDK
Through the Web UI I created a new instance. I am not very knowledgeable about SSH. I followed the steps described here: https://cloud.google.com/compute/docs/instances#sshkeys
I am on Windows 7.
I checked the firewall rules as suggested here: https://cloud.google.com/compute/docs/troubleshooting#ssherrors
I checked these through the Web UI and found the rule:
"default-allow-ssh 0.0.0.0/0 tcp:22 Apply to all targets"
STEPS I FOLLOWED:
1) > gcloud auth login
(default browser opens up and I authorize the Google Cloud SDK)
Google SDK Shell outputs:
"Saved Application Credentails. You are now logged as [someuser#gmail]
Your current project is [some-project-999].
2) > gcloud compute ssh my-instance --zone us-central1-a
Google SDK Shell outputs:
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\..\bin\sdk\ssh-keygen.EXE] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
The key fingerprint is:
ssh-rsa 2048 06:73:ac:e8:f2:31:c8:df:d4:b0:a2:3b:a2:53:6c:09
Your private key has been saved in C:\Users\First Last\.ssh\google_compute_engine.
Your public key has been saved in C:\Users\First Last\.ssh\google_compute_engine.pub.
Your putty key has been saved in C:\Users\First Last\.ssh\google_compute_engine.ppk.
Updated [https://www.googleapis.com/compute/v1/projects/arctic-depth-863].
Server refused our key
FATAL ERROR: Disconnected: No supported authentication methods available (server sent: publickey)
Server refused our key
FATAL ERROR: Disconnected: No supported authentication methods available (server sent: publickey)
Server refused our key
FATAL ERROR: Disconnected: No supported authentication methods available (server sent: publickey)
FATAL ERROR: Network error: Software caused connection abort
FATAL ERROR: Network error: Connection timed out
ERROR: (gcloud.compute.ssh) Could not SSH to the instance. It is possible that your SSH key has not propagated to the instance yet. Try running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic.
In the browser's Web UI, I open the browser-based SSH terminal and navigate to the .ssh folder:
someuser_gmail_com@my-instance:~$ cd .ssh
someuser_gmail_com@my-instance:~/.ssh$ cat authorized_keys
# Added by Google
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4OxYxWvIlp...F7As google-ssh {"userName":"someuser@gmail.com","expireOn":"2015-02-21T23:29:06+0000"}
# Added by Google
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzd...KRqcUZmvWr= google-ssh {"userName":"someuser@gmail.com","expireOn":"2015-02-21T23:28:55+0000"}
On the Web UI, I navigate to Project > Compute > Compute Engine > Metadata > SSH KEYS and I see three records:
USERNAME KEY
someuser_gmail_com   ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4...", "expireOn":"2015-02-21T23:29:06+0000"}
someuser_gmail_com   ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTIt...", "expireOn":"2015-02-21T23:29:06+0000"}
First Last           ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAi...ZkpSpRt6RQ== First Last@MYPC
On my local computer, I open C:\Users\First Last\.ssh\google_compute_engine.pub and I see:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAit...mGhUKZRgFZkpSpRt6RQ== First Last@MYPC
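(For reference, and not something I have tried yet: I assume the local key's fingerprint could be compared against the one printed by the SDK with a standard ssh-keygen, e.g.:)
> ssh-keygen -lf "C:\Users\First Last\.ssh\google_compute_engine.pub"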
QUESTIONS:
Does the white space in the user's folder path (i.e. "First Last") cause problems?
When the key is created by the Google Cloud SDK, it sets the comment to First Last@MYPC. Is this the correct setting? (I have been reading and trying this and that, and I suspect it should be something like someuser@my-instance-public-IP.)
When I run the following in the Google Cloud SDK shell:
> gcloud compute instances describe my-instance --zone us-central1-a --format yaml
this is the output:
canIpForward: false
creationTimestamp: '2015-02-21T14:53:37.276-08:00'
disks:
- autoDelete: true
  boot: true
  deviceName: my-instance
  index: 0
  interface: SCSI
  kind: compute#attachedDisk
  licenses:
  - https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1204-precise
  mode: READ_WRITE
  source: https://www.googleapis.com/compute/v1/projects/some-project-999/zones/us-central1-a/disks/my-instance
  type: PERSISTENT
id: '111812933445597333'
kind: compute#instance
machineType: https://www.googleapis.com/compute/v1/projects/some-project-999/zones/us-central1-a/machineTypes/g1-small
metadata:
  fingerprint: w3steEkuQUS=
  kind: compute#metadata
name: my-instance
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: External NAT
    natIP: 112.134.99.170
    type: ONE_TO_ONE_NAT
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/some-project-999/global/networks/default
  networkIP: 10.356.252.66
scheduling:
  automaticRestart: true
  onHostMaintenance: MIGRATE
selfLink: https://www.googleapis.com/compute/v1/projects/some-project-999/zones/us-central1-a/instances/my-instance
serviceAccounts:
- email: 78111222333-compute@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
status: RUNNING
tags:
  fingerprint: DLYFgkKTlB3=
  items:
  - http-server
zone: https://www.googleapis.com/compute/v1/projects/some-project-999/zones/us-central1-a
C:\Program Files\Google\Cloud SDK>
This is a known issue when using Cloud SDK from Windows.
Please download pageant.exe at [1] and use it to load your .ppk key, or use PuTTY (downloadable from the same link) to SSH to the instance as documented at [2].
As a workaround, you can even rename ssh.exe to ssh-bak.exe and ssh-term.exe to ssh.exe in the C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\sdk\ directory.
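Spelled out, that rename workaround is (run from an Administrator command prompt, since the SDK lives under Program Files):
cd "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\sdk"
ren ssh.exe ssh-bak.exe
ren ssh-term.exe ssh.exe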
Links:
[1] - http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
[2] - https://cloud.google.com/compute/docs/console#sshkeys
Related
I have an OpenShift PHP pod running an application, and I need to authenticate a user account against an Active Directory server. The LDAP bind is failing with a certificate error:
LDAP Error (authenticateUser), Cannot bind to LDAP : error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (unable to get local issuer certificate) - ldap_bind failed
It was suggested that I need to install the AD server's certificate into the pod. The certificate has now been copied into the Git repository. I thought I might be able to use a lifecycle hook to install the certificate into /etc/openldap/certs/:
rollingParams:
  post:
    execNewPod:
      command:
        - /bin/sh
        - '-c'
        - >-
          /usr/bin/cp
          /opt/app-root/src/certificates/certificate.pem
          /etc/openldap/certs/
      containerName: application
    failurePolicy: Ignore
However, there is a permissions error. I assume the deployment is not running as root, so the directory is not writable:
/usr/bin/cp: cannot create regular file '/etc/openldap/certs/certificate.pem': Permission denied
Is it possible to copy this certificate using the DeploymentConfig or is there a better way to do this?
I managed to resolve this. You need to add the ldap.conf and the certificate to ConfigMaps.
oc create configmap <configmap-name> --from-file=<filename>=<path-to-file>
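For example, using the ConfigMap names referenced by the volumes below (the local file paths are just illustrative):
oc create configmap openldap-ad-config --from-file=ldap.conf=./ldap.conf
oc create configmap ad-certificate --from-file=certificate.pem=./certificates/certificate.pem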
Then you need to mount the ConfigMaps:
volumeMounts:
  - mountPath: /etc/openldap/ldap.conf
    name: openldap-ad-config-volume
    subPath: ldap.conf
  - mountPath: /etc/openldap/certs/certificate.pem
    name: ad-certificate-volume
    subPath: certificate.pem
volumes:
  - configMap:
      defaultMode: 292
      name: openldap-ad-config
    name: openldap-ad-config-volume
  - configMap:
      defaultMode: 292
      name: ad-certificate
    name: ad-certificate-volume
The subPath property is used so that the rest of the directory's contents are not hidden, since mounting a volume normally replaces the whole directory. The defaultMode property (292 decimal, i.e. 0444 octal, read-only) is set because otherwise the files would get the broader default permissions.
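To sanity-check the result inside a running pod (the pod name here is illustrative), the mounted files and their read-only mode can be listed with:
oc rsh <application-pod> ls -l /etc/openldap/ldap.conf /etc/openldap/certs/certificate.pem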
I am unable to deploy a Dockerized application to App Engine Flexible Environment (AEF) in a Google Cloud Platform (GCP) project with a provisioned Shared Virtual Private Cloud (XPN).
In other words, my application with the following app.yaml:
automatic_scaling:
  max_num_instances: 1
  min_num_instances: 1
env: flex
network:
  instance_tag: incorrect-target-tag
  name: projects/$GCP_PROJECT_ID/global/networks/$XPN_NETWORK_NAME
service: $AEF_APPLICATION_NAME
and a confirmed Docker image name and tag in Google Container Registry (GCR):
gcloud container images list-tags \
us.gcr.io/$GCP_PROJECT_NAME/$AEF_APPLICATION_NAME \
--flatten=tags \
--format='value(format("us.gcr.io/$GCP_PROJECT_NAME/$AEF_APPLICATION_NAME:{0}", tags))' \
--project=$GCP_PROJECT_NAME
#=>
. . .
us.gcr.io/$GCP_PROJECT_NAME/$AEF_APPLICATION_NAME:$DOCKER_IMAGE_TAG
. . .
cannot be deployed to AEF:
yes | gcloud app deploy \
--appyaml=./app.yaml \
--image-url=us.gcr.io/$GCP_PROJECT_NAME/$AEF_APPLICATION_NAME:$DOCKER_IMAGE_TAG
#=>
Services to deploy:
descriptor: [/. . ./app.yaml]
source: [/. . ./$AEF_APPLICATION_NAME]
target project: [$GCP_PROJECT_NAME]
target service: [$AEF_APPLICATION_NAME]
target version: [$AEF_APPLICATION_VERSION]
target url: [. . .]
target service account: [App Engine default service account]
Do you want to continue (Y/n)?
Beginning deployment of service [$AEF_APPLICATION_NAME]...
WARNING: Deployment of service [$AEF_APPLICATION_NAME] will ignore the skip_files field in the configuration file, because the image has already been built.
Updating service [$AEF_APPLICATION_NAME] (this may take several minutes)...
.............................................................failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/$GCP_PROJECT_NAME/regions/$AEF_APPLICATION_REGION/operations/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx error [INTERNAL]: An internal error occurred while processing task /app-engine-flex/insert_flex_deployment/flex_create_resources>1970-01-01T00:00:00.001Z000001.jc.2: <eye3 title='FAILED_PRECONDITION'/> generic::FAILED_PRECONDITION: Validation error: The App Engine flexible Environment Service Agent is unable to find a suitable Flex Firewall Rule in network '$XPN_NETWORK_NAME' in project '$GCP_PROJECT_ID'. Have the Shared VPC Admin create a Flex Firewall Rule as described in https://cloud.google.com/appengine/docs/flexible/python/using-shared-vpc
with the following Virtual Private Cloud (VPC) firewall rule supporting AEF communication through the XPN:
gcloud compute firewall-rules list \
--filter="allowed[].ports=(8443) AND allowed[].ports=(10402)" \
--project=$GCP_PROJECT_NAME
#=>
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
aef-instance $XPN_NETWORK_NAME INGRESS 1000 tcp:8443,tcp:10402 False
To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.
gcloud compute firewall-rules describe \
aef-instance \
--format=yaml \
--project=$GCP_PROJECT_NAME
#=>
allowed:
- IPProtocol: tcp
  ports:
  - '8443'
  - '10402'
creationTimestamp: '1970-01-01T00:00:00.000-01:00'
description: allows traffic between aef and xpn
direction: INGRESS
disabled: false
id: 'xxxxxxxxxxxxxxxxxxx'
kind: compute#firewall
logConfig:
  enable: false
name: aef-instance
network: https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/global/networks/$XPN_NETWORK_NAME
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/global/firewalls/aef-instance
sourceRanges:
- 35.191.0.0/16
- 130.211.0.0/22
targetTags:
- incorrect-target-tag
Note: this rule is required for using any AEF application with the XPN, as described in https://cloud.google.com/appengine/docs/flexible/python/using-shared-vpc.
Following that guide to linking AEF and the XPN, the target tag for the VPC firewall rule aef-instance MUST be aef-instance. Update the rule with the correct target tag:
gcloud compute firewall-rules update \
aef-instance \
--project=$GCP_PROJECT_NAME \
--target-tags=aef-instance
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/global/firewalls/aef-instance].
and you will be able to redeploy to AEF without that validation error.
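To confirm the change before redeploying, something like this should now show the new tag (output illustrative):
gcloud compute firewall-rules describe \
  aef-instance \
  --format='value(targetTags)' \
  --project=$GCP_PROJECT_NAME
#=>
aef-instance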
Note: changing the target tag in app.yaml isn't necessary: the AEF application will be able to communicate over a provisioned XPN as long as there is a firewall rule that meets these criteria exactly, regardless of the tags specified in app.yaml.
I have Jenkins set up on GCE, and from there I am trying to access a Kubernetes cluster on GKE. I get Unauthorized when I try to test the connection in the plugin.
I have enabled GKE API access, created a service account on GKE, and created a role and role binding.
I installed the Kubernetes plugin on Jenkins and configured it by providing the Kubernetes URL, certificate, and token. I still get the following exception:
Expected: the connection to the Kubernetes cluster succeeds.
Actual (with HTTPS disabled): Error testing connection https://35.193.108.106: java.security.cert.CertificateException: Could not parse certificate: java.io.IOException: Empty input
AND
with "Disable HTTPS certificate check" enabled:
Error testing connection https://35.193.108.106: Failure executing: GET at: https://35.193.108.106/api/v1/namespaces/default/pods. Message: Unauthorized. Received status: Status(apiVersion=v1, code=401, details=null, kind=Status, message=Unauthorized, metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Unauthorized, status=Failure, additionalProperties={}).
Check the GCP network (firewall) rules and check the connection with kubectl from the Jenkins VM. I use "Secret text" type credentials to store the token, and I run the Jenkins VM in the same GCP network to avoid such issues.
Service account creation in the jenkins namespace with "admin" permissions:
kubectl create namespace jenkins && \
kubectl create serviceaccount jenkins --namespace=jenkins && \
kubectl describe secret $(kubectl describe serviceaccount jenkins --namespace=jenkins | grep Token | awk '{print $2}') --namespace=jenkins && \
kubectl create rolebinding jenkins-admin-binding --clusterrole=admin --serviceaccount=jenkins:jenkins --namespace=jenkins
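To rule out network or token problems from the Jenkins VM itself, a quick check along these lines can help (cluster name and zone are placeholders; the token is the one stored in the "Secret text" credential):
gcloud container clusters get-credentials <cluster-name> --zone <zone>
kubectl get pods --namespace=jenkins
# same request the plugin makes; a 403 instead of a 401 means the token at least authenticates
curl -k -H "Authorization: Bearer $TOKEN" https://35.193.108.106/api/v1/namespaces/default/pods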
I have the following setup:
1. Application (Java microservice) deployed on App Engine.
2. Custom domain mapped to hit this service: myfavmicroservice.project-amazing.dev.corporation.com
3. This endpoint is secured to require authentication by enabling IAP.
4. Configured ESP to intercept, authenticate, and fulfill requests to all backend microservices (like the one above) with a common gateway endpoint.
5. Microservice is deployed using app.yaml.
6. ESP endpoint is configured using api.yaml (OpenAPI API surface document).
This is the tutorial I am following:
https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine-standard
app.yaml to deploy the microservice:
runtime: java11
entrypoint: java -jar tar/worker.jar
instance_class: F2
service: myfavmicroservice
handlers:
- url: /.*
  script: this field is required, but ignored
The ESP api.yaml describing the microservice API surface is like this:
swagger: "2.0"
info:
  title: "My fav micro Service"
  description: "Serve my favorite microservice content"
  version: "1.0.0"
# This field will be replaced by the deploy_api.sh script.
host: microservice-system-gateway-5c4s43dedq-ue.a.run.app
schemes:
  - https
produces:
  - application/json
paths:
  /myfavmicroservice:
    get:
      summary: Greet the user
      operationId: hello
      description: "Get helloworld mainpage"
      x-google-backend:
        address: https://myfavmicroservice.project-amazing.dev.corporation.com
        jwt_audience: .....
      responses:
        '200':
          description: "Success."
          schema:
            type: string
        '400':
          description: "The IATA code is invalid or missing."
          schema:
            type: string
But the problem is that whenever I make a request to the endpoint like this:
GET
https://microservice-system-gateway-5c4s43dedq-ue.a.run.app/myfavmicroservice
I always get a gateway 500 error. Upon inspecting the ESP logs, I am primarily finding:
1. SSL Handshake Error with Error no 40
2. upstream server temporarily disabled while SSL handshaking to upstream
3. request: "GET /metadatasvc-hello HTTP/1.1", upstream: "https://[3461:f4f0:5678:a13::63]:443/myfavmicroservice
So ESP is intercepting my request correctly and, as evidenced by #3, it seems to be forwarding the request in the correct format as well. But I am getting an SSL error.
Why am I getting this error?
OK, I figured out the issue. For the benefit of the Stack Overflow community, I am posting the solution here.
I found that the SSL handshake fails if, in the OpenAPI configuration that you deploy to ESP, you use a custom domain that you mapped to App Engine, like this:
x-google-backend:
  address: https://my-microservice.my-custom-domain.company.com
However, if you use the default URL that App Engine assigns to the microservice, like this, everything is fine:
x-google-backend:
  address: https://my-microservice.appspot.com
So I am still trying to figure out how to use custom domain mappings in the ESP OpenAPI configuration. For now, though, if I do that, SSL proxying does not work inside ESP.
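If you need to look up a service's default App Engine URL (to use as the x-google-backend address), something like this should print it; I'm assuming a reasonably recent gcloud here:
gcloud app browse --service=my-microservice --no-launch-browser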
I'm trying to connect to a Google Cloud SQL instance from a custom runtime environment in App Engine.
When I follow the docs to connect using a Unix domain socket, it works. The problem is when I try to connect using a TCP connection. It shows:
Warning: mysqli_connect(): (HY000/2002): Connection refused in
/var/www/html/index.php on line 3
Connect error: Connection refused
This is my app.yaml file:
runtime: custom
env: flex
beta_settings:
  cloud_sql_instances: testing-mvalcam:europe-west1:testdb=tcp:3306
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
The Dockerfile:
FROM php:7.0-apache
ENV PORT 8080
CMD sed -i "s/80/$PORT/g" /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf && docker-php-entrypoint apache2-foreground
RUN docker-php-ext-install mysqli
RUN a2enmod rewrite
COPY ./src /var/www/html
EXPOSE $PORT
And index.php:
<?php
$link = mysqli_connect('127.0.0.1', 'root', 'root', 'test');
if (!$link) {
    die('Connect error: ' . mysqli_connect_error());
}
echo 'successfully connected';
mysqli_close($link);
?>
What am I doing wrong?
The IP address 172.17.0.1 belongs to the Docker bridge network of the container where the web server is running, so for a TCP connection you would point mysqli_connect at 172.17.0.1 rather than 127.0.0.1; you can get more context on that in this documentation.
The documentation page you're using might not cover the case where you deploy with your own Dockerfile. In the following documentation you can read more about App Engine flexible runtimes.
As shown in the documentation you're following (remember to click on the TCP CONNECTION tab on that page), the section of app.yaml related to Cloud SQL instances needs to specify the TCP port that the database server uses.
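As a quick sanity check of the cloud_sql_instances value in app.yaml, the instance connection name can be confirmed with something like this (instance name taken from the question):
gcloud sql instances describe testdb --format='value(connectionName)'
which should print testing-mvalcam:europe-west1:testdb.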