OpenShift pod certificate installation - active-directory

I have an OpenShift PHP pod running an application and I need to authenticate a user account against an Active Directory server. The LDAP bind is failing with a certificate error:
LDAP Error (authenticateUser), Cannot bind to LDAP : error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (unable to get local issuer certificate) - ldap_bind failed
It was suggested that I need to install the AD server's certificate into the pod. The certificate has now been copied into the Git repository. I thought I might be able to use a lifecycle hook to install the certificate into /etc/openldap/certs/:
rollingParams:
  post:
    execNewPod:
      command:
        - /bin/sh
        - '-c'
        - >-
          /usr/bin/cp
          /opt/app-root/src/certificates/certificate.pem
          /etc/openldap/certs/
      containerName: application
    failurePolicy: Ignore
However, this fails with a permissions error - I assume the deployment is not running as root, so the directory is not writable.
/usr/bin/cp: cannot create regular file '/etc/openldap/certs/certificate.pem': Permission denied
Is it possible to copy this certificate using the DeploymentConfig or is there a better way to do this?

I managed to resolve this. You need to add the ldap.conf and the certificate to ConfigMaps.
oc create configmap <configmap-name> --from-file=<filename>=<path-to-file>
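For example, to create the two ConfigMaps referenced in the volume definitions below (the source paths here are illustrative assumptions):
oc create configmap openldap-ad-config --from-file=ldap.conf=./openldap/ldap.conf
oc create configmap ad-certificate --from-file=certificate.pem=./certificates/certificate.pem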
Then you need to mount the ConfigMaps:
volumeMounts:
  - mountPath: /etc/openldap/ldap.conf
    name: openldap-ad-config-volume
    subPath: ldap.conf
  - mountPath: /etc/openldap/certs/certificate.pem
    name: ad-certificate-volume
    subPath: certificate.pem
volumes:
  - configMap:
      defaultMode: 292
      name: openldap-ad-config
    name: openldap-ad-config-volume
  - configMap:
      defaultMode: 292
      name: ad-certificate
    name: ad-certificate-volume
The subPath property is used so that only the single file is mounted; without it, the mount would replace the entire contents of the target directory. The defaultMode property (292 decimal, i.e. 0444 octal) is set so the files are mounted read-only rather than with the more permissive default mode.
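For completeness, the mounted ldap.conf then just needs to point at the mounted certificate; a minimal sketch (the actual ldap.conf contents are not shown in the original, so this is an assumption):
# ldap.conf - hypothetical minimal contents
TLS_CACERT /etc/openldap/certs/certificate.pem
TLS_REQCERT demand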

Related

What are the correct /etc/exports settings for Kubernetes NFS Storage?

I have a simple NFS server (followed instructions here) connected to a Kubernetes (v1.24.2) cluster as a storage class. When a new PVC is created, it creates a PV as expected with a new directory on the NFS server.
The NFS provider was deployed as instructed here.
My issue is that containers don't seem to be able to perform all the functions they expect to when interacting with the NFS server. For example:
A PVC and PV are created with the following yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
This creates a directory on the NFS server as expected.
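For reference, the bound claim and the dynamically provisioned PV can be checked with standard kubectl commands:
kubectl get pvc mssql-data
kubectl get pv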
Then this deployment is created to use the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              value: "Password123"
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
The server comes up and responds to requests but does so with the error:
[S0002][823] com.microsoft.sqlserver.jdbc.SQLServerException: The operating system returned error 1117(The request could not be performed because of an I/O device error.) to SQL Server during a read at offset 0x0000000009a000 in file '/var/opt/mssql/data/master.mdf'. Additional messages in the SQL Server error log and operating system error log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
My /etc/exports file has the following contents:
/srv *(rw,no_subtree_check,no_root_squash)
When the SQL container starts, the container itself doesn't restart, but the SQL service inside it appears to get into some sort of restart loop until a connection is attempted; then it throws the error and appears to stop.
Is there something I'm missing in the /etc/exports file? I tried variations with sync, async, and insecure but can't seem to get past the SQL error.
I gather from the error that this has something to do with the container's ability to read/write from/to the disk. Am I in the right ballpark?
The config that ended up working was:
/srv *(rw,no_root_squash,insecure,sync,no_subtree_check)
This was after a reinstall of the cluster. No significant changes were made elsewhere, so it still seems like there may have been more to the issue than this one setting.
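After editing /etc/exports, the exports also need to be re-applied on the NFS server; assuming a standard nfs-kernel-server setup, something like:
sudo exportfs -ra   # re-export everything listed in /etc/exports
sudo exportfs -v    # verify the active exports and their options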

Unable to Deploy Application to App Engine Flexible Environment with a Shared VPC

I am unable to deploy a Dockerized application to App Engine Flexible Environment (AEF) in a Google Cloud Platform (GCP) project with a provisioned Shared Virtual Private Cloud (XPN).
In other words, my application with the following app.yaml:
automatic_scaling:
  max_num_instances: 1
  min_num_instances: 1
env: flex
network:
  instance_tag: incorrect-target-tag
  name: projects/$GCP_PROJECT_ID/global/networks/$XPN_NETWORK_NAME
service: $AEF_APPLICATION_NAME
and a confirmed Docker image name and tag in Google Container Registry (GCR):
gcloud container images list-tags \
us.gcr.io/$GCP_PROJECT_NAME/$AEF_APPLICATION_NAME \
--flatten=tags \
--format='value(format("us.gcr.io/$GCP_PROJECT_NAME/$AEF_APPLICATION_NAME:{0}", tags))' \
--project=$GCP_PROJECT_NAME
#=>
. . .
us.gcr.io/$GCP_PROJECT_NAME/$AEF_APPLICATION_NAME:$DOCKER_IMAGE_TAG
. . .
cannot be deployed to AEF:
yes | gcloud app deploy \
--appyaml=./app.yaml \
--image-url=us.gcr.io/$GCP_PROJECT_NAME/$AEF_APPLICATION_NAME:$DOCKER_IMAGE_TAG
#=>
Services to deploy:
descriptor: [/. . ./app.yaml]
source: [/. . ./$AEF_APPLICATION_NAME]
target project: [$GCP_PROJECT_NAME]
target service: [$AEF_APPLICATION_NAME]
target version: [$AEF_APPLICATION_VERSION]
target url: [. . .]
target service account: [App Engine default service account]
Do you want to continue (Y/n)?
Beginning deployment of service [$AEF_APPLICATION_NAME]...
WARNING: Deployment of service [$AEF_APPLICATION_NAME] will ignore the skip_files field in the configuration file, because the image has already been built.
Updating service [$AEF_APPLICATION_NAME] (this may take several minutes)...
.............................................................failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/$GCP_PROJECT_NAME/regions/$AEF_APPLICATION_REGION/operations/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx error [INTERNAL]: An internal error occurred while processing task /app-engine-flex/insert_flex_deployment/flex_create_resources>1970-01-01T00:00:00.001Z000001.jc.2: <eye3 title='FAILED_PRECONDITION'/> generic::FAILED_PRECONDITION: Validation error: The App Engine flexible Environment Service Agent is unable to find a suitable Flex Firewall Rule in network '$XPN_NETWORK_NAME' in project '$GCP_PROJECT_ID'. Have the Shared VPC Admin create a Flex Firewall Rule as described in https://cloud.google.com/appengine/docs/flexible/python/using-shared-vpc
even though the following Virtual Private Cloud (VPC) firewall rule is in place to support AEF communication through the XPN:
gcloud compute firewall-rules list \
--filter="allowed[].ports=(8443) AND allowed[].ports=(10402)" \
--project=$GCP_PROJECT_NAME
#=>
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
aef-instance $XPN_NETWORK_NAME INGRESS 1000 tcp:8443,tcp:10402 False
To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.
gcloud compute firewall-rules describe \
aef-instance \
--format=yaml \
--project=$GCP_PROJECT_NAME
#=>
allowed:
- IPProtocol: tcp
  ports:
  - '8443'
  - '10402'
creationTimestamp: '1970-01-01T00:00:00.000-01:00'
description: allows traffic between aef and xpn
direction: INGRESS
disabled: false
id: 'xxxxxxxxxxxxxxxxxxx'
kind: compute#firewall
logConfig:
  enable: false
name: aef-instance
network: https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/global/networks/$XPN_NETWORK_NAME
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/global/firewalls/aef-instance
sourceRanges:
- 35.191.0.0/16
- 130.211.0.0/22
targetTags:
- incorrect-target-tag
Note: this rule is required for using any AEF application with the XPN, described here.
Following the guide to linking AEF and the XPN here, the target tag for VPC Firewall rule aef-instance MUST be aef-instance. Update VPC Firewall rule aef-instance with the correct target tag:
gcloud compute firewall-rules update \
aef-instance \
--project=$GCP_PROJECT_NAME \
--target-tags=aef-instance
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/global/firewalls/aef-instance].
and you will be able to redeploy to AEF without that validation error.
Note: changing the target tag in the app.yaml isn't necessary: the AEF application will be able to communicate over a provisioned XPN as long as there is a firewall rule that matches these criteria exactly, regardless of the tags specified in the app.yaml.
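To confirm the tag change took effect before redeploying, the rule's target tags can be read back (the output shown is what I would expect, not a captured run):
gcloud compute firewall-rules describe \
aef-instance \
--format='value(targetTags)' \
--project=$GCP_PROJECT_NAME
#=>
aef-instance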

How can I allow connections by specifying docker-compose host names in postgres's pg_hba.conf file?

I'm trying to allow a connection from one Docker container to a postgres container by specifying the host name of the client container in the server's pg_hba.conf file. Postgres's documentation indicates that a host name can be specified, rather than an IP address. Since I'm using Docker Compose to start the two containers, they should be accessible to each other by container name using Docker Compose's DNS. I don't want to open up all IP addresses for security reasons, and when I eventually add access for additional containers, it will be much easier to just specify the container name in the pg_hba.conf file rather than assign static IP addresses to each of them. However, when I attempt to do this, it fails with a message such as this:
psql: FATAL: no pg_hba.conf entry for host "192.168.208.3", user "postgres", database "postgres", SSL off
Here's a minimum reproducible example of what I'm trying to do:
I use the following Docker Compose file:
version: '3'
services:
  postgresdb:
    image: postgres:9.4
    container_name: postgres-server
    ports:
      - "5432:5432"
    volumes:
      - "postgres-data:/var/lib/postgresql/data"
  postgres-client:
    image: postgres:9.4
    container_name: postgres-client
    depends_on:
      - postgresdb
volumes:
  postgres-data:
After running docker-compose up, I exec into the server container and modify the pg_hba.conf file in /var/lib/postgresql/data to look like this:
host all postgres postgres-client trust
I then restart the postgres server (docker-compose down then docker-compose up) and it loads the modified pg_hba.conf from the mounted volume.
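(As an aside, a full down/up isn't strictly required for pg_hba.conf changes; a reload is enough. For example, assuming the image's default local trust authentication inside the server container:
docker exec -it postgres-server psql -U postgres -c "SELECT pg_reload_conf();"
)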
I exec into the client container and attempt to connect to the postgres server:
docker exec -it postgres-client /bin/bash
psql -U postgres -h postgres-server postgres
This is where I get an error such as the following:
psql: FATAL: no pg_hba.conf entry for host "192.168.208.3", user "postgres", database "postgres", SSL off
I can't seem to find anything online that shows how to get this working. I've found examples where they just open up all or a range of IP addresses, but none where they get the use of a host name working. Here are some related questions and information:
https://www.postgresql.org/docs/9.4/auth-pg-hba-conf.htm
Allow docker container to connect to a local/host postgres database
https://dba.stackexchange.com/questions/212020/using-host-names-in-pg-hba-conf
Any ideas on how to get this working the way I would expect it to work using Docker Compose?
You need to add the fully qualified host name of the client container to pg_hba.conf.
host all postgres postgres-client.<network_name> trust
e.g:
host all postgres postgres-client.postgreshostresolution_default trust
If no network has been defined, network_name is <project_name>_default.
By default, project_name is the name of the folder in which the docker-compose.yml resides.
To get the network names you may also call
docker inspect postgres-client | grep Networks -A1
or
docker network ls
to list all Docker networks currently defined on your Docker host.
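If you would rather not depend on the folder-derived network name, one option (my own suggestion, assuming Compose file format 3.5 or later) is to give the network an explicit name and reference that in pg_hba.conf:
version: '3.5'
services:
  postgresdb:
    # ... as above ...
    networks: [pgnet]
  postgres-client:
    # ... as above ...
    networks: [pgnet]
networks:
  pgnet:
    name: pgnet
The pg_hba.conf entry then becomes:
host all postgres postgres-client.pgnet trust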

Lumen App Deployed on GAE is unable to connect to DB in cloud-sql

I developed an API in Lumen and deployed it on GAE, following this tutorial. The deployment itself worked fine, but the problem came when I tried to access the DB on the Cloud SQL instance. I am unable to access the DB in the way described in the tutorial. I then tried different solutions and none of them worked. Finally, just for test purposes, I added the IP address of the instance where the GAE app is deployed to the authorized networks list of the Cloud SQL instance, and it's working now. But this is a temporary solution, as the instances are deployed on VMs and their IP addresses can change at any time, which would break the connection between GAE and Cloud SQL. Any solution or suggestion would be appreciated.
I have tried changing my app.yaml in multiple ways; the current version is given below:
runtime: php
env: flex
runtime_config:
  document_root: public
# Ensure we skip ".env", which is only for local development
skip_files:
  - .env
env_variables:
  # Put production environment variables here.
  #APP_ENV: production
  APP_DEBUG: true
  #QUEUE_DRIVER: sync
  APP_LOG: errorlog
  APP_KEY: 32 char key
  STORAGE_DIR: /tmp
  CACHE_DRIVER: database
  SESSION_DRIVER: database
  #APP_TIMEZONE: UTC
  ## Set these environment variables according to your CloudSQL configuration.
  DB_HOST: 127.0.0.1
  DB_DATABASE: dbname
  DB_USERNAME: root
  DB_PASSWORD: pass
  DB_SOCKET: "/cloudsql/Instance Connection Name"
  MYSQL_DSN: "mysql:unix_socket=/Instance Connection Name;dbname=dbname"
  MYSQL_USER: root
  MYSQL_PASSWORD: pass
beta_settings:
  # for Cloud SQL, set this value to the Cloud SQL connection name,
  # e.g. "project:region:cloudsql-instance"
  cloud_sql_instances: "Instance Connection Name"

Unable to SSH to Google Cloud

I installed the Google Cloud SDK
Through the Web UI I created a new instance. I am not very knowledgeable about SSH. I followed the steps described here: https://cloud.google.com/compute/docs/instances#sshkeys
I am on Windows 7.
I checked firewall rules as suggested here: https://cloud.google.com/compute/docs/troubleshooting#ssherrors
I checked these through the Web UI and found the rule:
"default-allow-ssh 0.0.0.0/0 tcp:22 Apply to all targets"
STEPS I FOLLOWED:
1) > gcloud auth login
(default browser opens up and I authorize the Google Cloud SDK)
Google SDK Shell outputs:
"Saved Application Credentails. You are now logged as [someuser#gmail]
Your current project is [some-project-999].
2) > gcloud compute ssh my-instance --zone us-central1-a
Google SDK Shell outputs:
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\..\bin\sdk\ssh-keygen.EXE] will be executed to generate
a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
The key fingerprint is:
ssh-rsa 2048 06:73:ac:e8:f2:31:c8:df:d4:b0:a2:3b:a2:53:6c:09
Your private key has been saved in C:\Users\First Last\.ssh\google_compute_engine.
Your public key has been saved in C:\Users\First Last\.ssh\google_compute_engine.pub.
Your putty key has been saved in C:\Users\First Last\.ssh\google_compute_engine.ppk.
Updated [https://www.googleapis.com/compute/v1/projects/arctic-depth-863].
Server refused our key
FATAL ERROR: Disconnected: No supported authentication methods available (server sent: publickey)
Server refused our key
FATAL ERROR: Disconnected: No supported authentication methods available (server sent: publickey)
Server refused our key
FATAL ERROR: Disconnected: No supported authentication methods available (server sent: publickey)
FATAL ERROR: Network error: Software caused connection abort
FATAL ERROR: Network error: Connection timed out
ERROR: (gcloud.compute.ssh) Could not SSH to the instance. It is possible that your SSH key has not propagated to the instance yet. Try running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic.
In the Web UI, I open the browser-based SSH and navigate to the .ssh folder:
someuser_gmail_com@my-instance:~$ cd .ssh
someuser_gmail_com@my-instance:~$ cat authorized_keys
# Added by Google
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4OxYxWvIlp...F7As google-ssh {"userName":"someuser@gmail.com","expireOn":"2015-02-21T23:29:06+0000"}
# Added by Google
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzd...KRqcUZmvWr= google-ssh {"userName":"someuser@gmail.com","expireOn":"2015-02-21T23:28:55+0000"}
On the Web UI, I navigate to Project > Compute > Compute Engine > Metadata > SSH KEYS and I see three records:
USERNAME            KEY
someuser_gmail_com  ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4...", "expireOn":"2015-02-21T23:29:06+0000"}
someuser_gmail_com  ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTIt...", "expireOn":"2015-02-21T23:29:06+0000"}
First Last          ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAi...ZkpSpRt6RQ== First Last@MYPC
On my local computer, I open Users/First Last/.ssh/google_compute_engine.pub and I see:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAit...mGhUKZRgFZkpSpRt6RQ== First Last@MYPC
QUESTIONS:
Does the white space in the user's folder path cause problems (i.e. "First Last")?
When the key is created by the Google Cloud SDK, it sets the comment to First Last@MYPC. Is this the correct setting? (I have been reading and trying this and that, and I suspect it should be something like someuser@my-instance-public-IP.)
When I run the following from the Google Cloud SDK shell, the output is:
> gcloud compute instances describe my-instance --zone us-central1-a --format yaml
canIpForward: false
creationTimestamp: '2015-02-21T14:53:37.276-08:00'
disks:
- autoDelete: true
  boot: true
  deviceName: my-instance
  index: 0
  interface: SCSI
  kind: compute#attachedDisk
  licenses:
  - https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1204-precise
  mode: READ_WRITE
  source: https://www.googleapis.com/compute/v1/projects/some-project-999/zones/us-central1-a/disks/my-instance
  type: PERSISTENT
id: '111812933445597333'
kind: compute#instance
machineType: https://www.googleapis.com/compute/v1/projects/some-project-999/zones/us-central1-a/machineTypes/g1-small
metadata:
  fingerprint: w3steEkuQUS=
  kind: compute#metadata
name: my-instance
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: External NAT
    natIP: 112.134.99.170
    type: ONE_TO_ONE_NAT
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/some-project-999/global/networks/default
  networkIP: 10.356.252.66
scheduling:
  automaticRestart: true
  onHostMaintenance: MIGRATE
selfLink: https://www.googleapis.com/compute/v1/projects/some-project-999/zones/us-central1-a/instances/my-instance
serviceAccounts:
- email: 78111222333-compute@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
status: RUNNING
tags:
  fingerprint: DLYFgkKTlB3=
  items:
  - http-server
zone: https://www.googleapis.com/compute/v1/projects/some-project-999/zones/us-central1-a
C:\Program Files\Google\Cloud SDK>
This is a known issue when using Cloud SDK from Windows.
Please download pageant.exe at [1] and use it to load your .ppk key, or use PuTTY (downloadable from the same link) to SSH to the instance as documented at [2].
As a workaround, you can even rename ssh.exe to ssh-bak.exe and ssh-term.exe to ssh.exe in C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\sdk\.
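In concrete terms, that rename workaround amounts to the following (run from a Command Prompt with sufficient privileges; path as given above):
cd "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\sdk"
ren ssh.exe ssh-bak.exe
ren ssh-term.exe ssh.exe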
Links:
[1] - http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
[2] - https://cloud.google.com/compute/docs/console#sshkeys
