Problem connecting to PostgreSQL Docker image - database

I have a problem connecting to PostgreSQL.
I brought the containers up through the docker-compose.yml below:
version: '2'
volumes:
  mongodb_repl_data11:
    external: true
  mongodb_repl_data12:
    external: true
  mongodb_repl_data13:
    external: true
  datapostgres:
  datapgadmin:
  activemqdata:
services:
  mongo0:
    hostname: mongo0
    container_name: mongo0
    image: mongo:4.0
    expose:
      - 30000
    ports:
      - 30000:30000
    volumes:
      - 'mongodb_repl_data11:/data/db:z'
    restart: always
    command: "--bind_ip_all --replSet rs0 --port 30000"
  mongo1:
    hostname: mongo1
    container_name: mongo1
    image: mongo:4.0
    expose:
      - 30001
    ports:
      - 30001:30001
    volumes:
      - 'mongodb_repl_data12:/data/db:z'
    restart: always
    command: "--bind_ip_all --replSet rs0 --port 30001"
  mongo2:
    hostname: mongo2
    container_name: mongo2
    image: mongo:4.0
    expose:
      - 30002
    ports:
      - 30002:30002
    volumes:
      - 'mongodb_repl_data13:/data/db:z'
    restart: always
    command: "--bind_ip_all --replSet rs0 --port 30002"
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      PGDATA: /data/postgres
    volumes:
      - datapostgres:/data/postgres
    ports:
      - "5432:5432"
  pgadmin:
    links:
      - postgres:postgres
    image: fenglc/pgadmin4
    volumes:
      - datapgadmin:/root/.pgadmin
    ports:
      - "5050:5050"
  activemq:
    image: rmohr/activemq:5.15.6
    ports:
      - "8161:8161"
      - "61616:61616"
    volumes:
      - activemqdata:/opt/activemq/data
    environment:
      - ACTIVEMQ_CONFIG_SCHEDULERENABLED=true
but when I try to connect through DBeaver 21.0.5 (username: postgres, password: postgres) it says: FATAL: password authentication failed for user "postgres"
I have already tried entering the Postgres Docker instance and changing the password with ALTER USER postgres PASSWORD 'newPassword';
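For reference, this is roughly how that attempt looked (a sketch assuming the container is the postgres service defined above; adjust names to your setup). Note that running plain psql inside the container without -U postgres defaults to the operating-system user, which is what produces the role "root" does not exist line that shows up later in the logs.
# open psql inside the running postgres container as the postgres role
docker-compose exec postgres psql -U postgres
# then, at the psql prompt:
#   ALTER USER postgres PASSWORD 'newPassword';
#   \q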
---------------- UPDATE -----------
I found out from the Docker logs that the server shuts down immediately after boot.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
2021-11-23 10:54:35.410 UTC [1] LOG: starting PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2021-11-23 10:54:35.410 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-11-23 10:54:35.410 UTC [1] LOG: listening on IPv6 address "::", port 5432
2021-11-23 10:54:35.438 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-11-23 10:54:35.464 UTC [61] LOG: database system was shut down at 2021-11-23 10:54:35 UTC
2021-11-23 10:54:35.483 UTC [1] LOG: database system is ready to accept connections
2021-11-23 11:18:15.203 UTC [99] FATAL: role "root" does not exist
2021-11-23 11:32:38.850 UTC [1] LOG: received smart shutdown request
2021-11-23 11:32:38.887 UTC [1] LOG: background worker "logical replication launcher" (PID 67) exited with exit code 1
2021-11-23 11:32:38.888 UTC [62] LOG: shutting down
2021-11-23 11:32:38.943 UTC [1] LOG: database system is shut down
2021-11-23 11:36:05.235 UTC [1] LOG: starting PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2021-11-23 11:36:05.236 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-11-23 11:36:05.236 UTC [1] LOG: listening on IPv6 address "::", port 5432
2021-11-23 11:36:05.253 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-11-23 11:36:05.272 UTC [26] LOG: database system was shut down at 2021-11-23 11:32:38 UTC
2021-11-23 11:36:05.287 UTC [1] LOG: database system is ready to accept connections
2021-11-23 14:38:48.475 UTC [75] FATAL: terminating connection due to administrator command
2021-11-23 14:38:48.721 UTC [1] LOG: received smart shutdown request
2021-11-23 14:38:48.782 UTC [1] LOG: background worker "logical replication launcher" (PID 32) exited with exit code 1
2021-11-23 14:38:48.782 UTC [27] LOG: shutting down
2021-11-24 13:30:19.017 UTC [1] LOG: starting PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2021-11-24 13:30:19.018 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-11-24 13:30:19.018 UTC [1] LOG: listening on IPv6 address "::", port 5432
2021-11-24 13:30:19.044 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-11-24 13:30:19.066 UTC [27] LOG: database system was shut down at 2021-11-23 14:38:48 UTC
2021-11-24 13:30:19.098 UTC [1] LOG: database system is ready to accept connections
2021-11-24 13:32:42.060 UTC [1] LOG: received fast shutdown request
2021-11-24 13:32:42.078 UTC [1] LOG: aborting any active transactions
2021-11-24 13:32:42.081 UTC [1] LOG: background worker "logical replication launcher" (PID 33) exited with exit code 1
2021-11-24 13:32:42.082 UTC [28] LOG: shutting down
2021-11-24 13:32:42.157 UTC [1] LOG: database system is shut down
2021-11-24 13:32:44.238 UTC [1] LOG: starting PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2021-11-24 13:32:44.244 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-11-24 13:32:44.244 UTC [1] LOG: listening on IPv6 address "::", port 5432
2021-11-24 13:32:44.278 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-11-24 13:32:44.309 UTC [27] LOG: database system was shut down at 2021-11-24 13:32:42 UTC
2021-11-24 13:32:44.324 UTC [1] LOG: database system is ready to accept connections
2021-11-24 13:47:09.297 UTC [1] LOG: received fast shutdown request
2021-11-24 13:47:09.307 UTC [1] LOG: aborting any active transactions
2021-11-24 13:47:09.310 UTC [1] LOG: background worker "logical replication launcher" (PID 33) exited with exit code 1
2021-11-24 13:47:09.311 UTC [28] LOG: shutting down
2021-11-24 13:47:09.363 UTC [1] LOG: database system is shut down
2021-11-24 13:47:11.392 UTC [1] LOG: starting PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2021-11-24 13:47:11.392 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-11-24 13:47:11.392 UTC [1] LOG: listening on IPv6 address "::", port 5432
2021-11-24 13:47:11.435 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-11-24 13:47:11.467 UTC [26] LOG: database system was shut down at 2021-11-24 13:47:09 UTC
2021-11-24 13:47:11.482 UTC [1] LOG: database system is ready to accept connections
2021-11-24 13:52:10.719 UTC [1] LOG: starting PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2021-11-24 13:52:10.719 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-11-24 13:52:10.719 UTC [1] LOG: listening on IPv6 address "::", port 5432
2021-11-24 13:52:10.808 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-11-24 13:52:10.847 UTC [27] LOG: database system was interrupted; last known up at 2021-11-24 13:47:11 UTC
2021-11-24 13:52:12.059 UTC [27] LOG: database system was not properly shut down; automatic recovery in progress
2021-11-24 13:52:12.079 UTC [27] LOG: redo starts at 0/16FD130
2021-11-24 13:52:12.079 UTC [27] LOG: invalid record length at 0/16FD168: wanted 24, got 0
2021-11-24 13:52:12.079 UTC [27] LOG: redo done at 0/16FD130 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2021-11-24 13:52:12.182 UTC [1] LOG: database system is ready to accept connections
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /data/postgres ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /data/postgres -l logfile start
waiting for server to start....2021-11-23 10:54:34.889 UTC [49] LOG: starting PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2021-11-23 10:54:34.907 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-11-23 10:54:34.943 UTC [50] LOG: database system was shut down at 2021-11-23 10:54:33 UTC
2021-11-23 10:54:34.961 UTC [49] LOG: database system is ready to accept connections
done
server started
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2021-11-23 10:54:35.124 UTC [49] LOG: received fast shutdown request
waiting for server to shut down....2021-11-23 10:54:35.139 UTC [49] LOG: aborting any active transactions
2021-11-23 10:54:35.146 UTC [49] LOG: background worker "logical replication launcher" (PID 56) exited with exit code 1
2021-11-23 10:54:35.157 UTC [51] LOG: shutting down
2021-11-23 10:54:35.244 UTC [49] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
PostgreSQL Database directory appears to contain a database; Skipping initialization
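One thing the tail of this log shows (my observation, not part of the original post) is "Skipping initialization": the official postgres image only applies POSTGRES_USER/POSTGRES_PASSWORD the first time the data directory is initialized, so credentials set after the datapostgres volume was first created never take effect. A minimal sketch of forcing a re-initialization, assuming losing the data in that volume is acceptable (the exact volume name carries the compose project prefix):
# stop the stack, then remove the named volume that backs PGDATA
docker-compose down
docker volume ls | grep datapostgres        # find the prefixed volume name
docker volume rm <project>_datapostgres     # placeholder name; use the one listed above
# start again: initdb runs anew and applies POSTGRES_PASSWORD
docker-compose up -d postgres
docker-compose logs -f postgres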

Related

mongod.service active : failed (failed to start mongodb server)

I just started learning MongoDB. Everything was going well until I tried stopping the server using sudo systemctl stop mongod and restarting it using sudo systemctl start mongod.
Now it's showing:
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-01-16 12:39:04 IST; 3s ago
Docs: https://docs.mongodb.org/manual
Process: 357998 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)
Main PID: 357998 (code=exited, status=1/FAILURE)
Jan 16 12:39:04 prabal-computer systemd[1]: Started MongoDB Database Server.
Jan 16 12:39:04 prabal-computer mongod[357998]: {"t":{"$date":"2021-01-16T07:09:04.669Z"},"s":"F", "c":"CONTROL", "id":20574, "ctx":"main">
Jan 16 12:39:04 prabal-computer systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 12:39:04 prabal-computer systemd[1]: mongod.service: Failed with result 'exit-code'.
Whenever I try to start it and check the status using sudo systemctl status mongod, this error appears.
Note: I have already tried reinstalling MongoDB.
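Not part of the original post, but the systemd status output above truncates the fatal "CONTROL" message, so the usual next step is to read the full error. A hedged diagnostic sketch assuming the default Ubuntu package paths:
# full journal entries for the failed start
sudo journalctl -u mongod --no-pager -n 50
# mongod's own log file, where the truncated fatal message appears in full
sudo tail -n 50 /var/log/mongodb/mongod.log
# two common post-crash culprits: data-directory ownership and a stale socket file
ls -ld /var/lib/mongodb /tmp/mongodb-27017.sock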

SQL Server pods with PersistentVolumeClaim

This is the scenario: a SQL Server Linux Kubernetes setup with minikube.
It runs fine with default settings; databases and tables are created without problems.
But the database files should not be stored within the container, so a PersistentVolumeClaim was added and the pod config was changed to use the claim and mount /var/opt/mssql to /sqldata on the minikube VM.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqldata
spec:
  capacity:
    storage: 1Gi
  storageClassName: sqlserver
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/sqldata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dbclaim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: sqlserver
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
spec:
  initContainers:
    - name: volume-permissions
      image: busybox
      command: ["sh", "-c", "chown -R 10001:0 /var/opt/mssql"]
      volumeMounts:
        - mountPath: "/var/opt/mssql"
          name: sqldata-storage
  volumes:
    - name: sqldata-storage
      persistentVolumeClaim:
        claimName: dbclaim
  containers:
    - image: mcr.microsoft.com/mssql/server
      name: foo
      env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sql-password
              key: sa_password
        - name: MSSQL_PID
          value: Developer
      volumeMounts:
        - mountPath: "/var/opt/mssql/data"
          name: sqldata-storage
Also tried image: microsoft/mssql-server-linux
chown -R 10001:0 /var/opt/mssql
is called in the init container to give the mssql user access to the host VM's directory.
But what happens now is that the SQL Server pod starts up, and after a minute or two it stops with a CrashLoopBackOff.
The logfile from the pod says:
2020-08-02 14:33:57.55 Server Registry startup parameters:
-d /var/opt/mssql/data/master.mdf
-l /var/opt/mssql/data/mastlog.ldf
-e /var/opt/mssql/log/errorlog
2020-08-02 14:33:57.78 Server Error 87(The parameter is incorrect.) occurred while opening file
'/var/opt/mssql/data/master.mdf' to obtain configuration information
at startup. An invalid startup option might have caused the error.
Verify your startup options, and correct or remove them if necessary
Logging into the minikube VM, it looks like SQL Server does have access, as master.mdf etc. is created in the actual mounted directory, although only owner permissions are set, and the owner is 10001:
$ ls -l /sqldata
-rw-r----- 1 10001 root 4194304 Aug 9 06:51 master.mdf
What should I check to get it running like this?
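Not part of the original question, but for a pod stuck in CrashLoopBackOff the crash output of the previous attempt is usually the most telling; a sketch using standard kubectl commands (pod name sqlserver assumed, as in the answer below):
# scheduling events, restart count and last state for the pod
kubectl describe pod sqlserver
# log of the container instance that crashed, not the freshly restarted one
kubectl logs sqlserver --previous
# confirm the claim actually bound to the intended volume
kubectl get pv,pvc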
I have managed to run this. The only thing that I changed from your spec is removing the storageClassName from the PersistentVolume and PersistentVolumeClaim. This is because I didn't have a storage class created, so not specifying the storage class will use the default one.
Here is the YAML I ran.
#pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqldata
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/sqldata"

#pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dbclaim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

#sqlserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sqlserver
spec:
  initContainers:
    - name: volume-permissions
      image: busybox
      command: ["sh", "-c", "chown -R 10001:0 /var/opt/mssql"]
      volumeMounts:
        - mountPath: "/var/opt/mssql"
          name: sqldata-storage
  volumes:
    - name: sqldata-storage
      persistentVolumeClaim:
        claimName: dbclaim
  containers:
    - image: mcr.microsoft.com/mssql/server
      name: foo
      volumeMounts:
        - mountPath: "/var/opt/mssql/data"
          name: sqldata-storage
      env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sql-password
              key: sa_password
        - name: MSSQL_PID
          value: Developer
This is how I created the secret:
kubectl create secret generic sql-password --from-literal=sa_password=Passw0rd
Here is the output of describing the pod.
vagrant@kubemaster:~$ kubectl describe pod sqlserver
Name: sqlserver
Namespace: default
Priority: 0
Node: kubenode02/192.168.56.4
Start Time: Thu, 13 Aug 2020 20:10:06 +0000
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.36.0.2
IPs:
IP: 10.36.0.2
Init Containers:
volume-permissions:
Container ID: docker://dbc81ddda15aa5af4b56085ee1923b530f1154ba147c589dcc76fb80121c2d0a
Image: busybox
Image ID: docker-pullable://busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977
Port: <none>
Host Port: <none>
Command:
sh
-c
chown -R 10001:0 /var/opt/mssql
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 13 Aug 2020 20:10:11 +0000
Finished: Thu, 13 Aug 2020 20:10:11 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/opt/mssql from sqldata-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-w9t6t (ro)
Containers:
foo:
Container ID: docker://f43e9321d85daa1b5695dc2944f42a4e12db34b97ba0f333d8a8b9afeace0f31
Image: mcr.microsoft.com/mssql/server
Image ID: docker-pullable://mcr.microsoft.com/mssql/server@sha256:1a69a5e5f7b00feae9edab6bd72e2f6fd5bbb4e74e4ca46e3cc46f1b911e1305
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 13 Aug 2020 20:10:14 +0000
Ready: True
Restart Count: 0
Environment:
ACCEPT_EULA: Y
SA_PASSWORD: <set to the key 'sa_password' in secret 'sql-password'> Optional: false
MSSQL_PID: Developer
Mounts:
/var/opt/mssql/data from sqldata-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-w9t6t (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
sqldata-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: dbclaim
ReadOnly: false
default-token-w9t6t:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-w9t6t
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/sqlserver to kubenode02
Normal Pulling 84s kubelet, kubenode02 Pulling image "busybox"
Normal Pulled 80s kubelet, kubenode02 Successfully pulled image "busybox"
Normal Created 80s kubelet, kubenode02 Created container volume-permissions
Normal Started 80s kubelet, kubenode02 Started container volume-permissions
Normal Pulling 79s kubelet, kubenode02 Pulling image "mcr.microsoft.com/mssql/server"
Normal Pulled 78s kubelet, kubenode02 Successfully pulled image "mcr.microsoft.com/mssql/server"
Normal Created 78s kubelet, kubenode02 Created container foo
Normal Started 77s kubelet, kubenode02 Started container foo
vagrant@kubemaster:~$
And here are the logs from the pod.
vagrant@kubemaster:~$ kubectl logs sqlserver
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
Your master database file is owned by mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
2020-08-13 20:10:17.89 Server Setup step is FORCE copying system data file 'C:\templatedata\model_replicatedmaster.mdf' to '/var/opt/mssql/data/model_replicatedmaster.mdf'.
2020-08-13 20:10:17.96 Server Setup step is FORCE copying system data file 'C:\templatedata\model_replicatedmaster.ldf' to '/var/opt/mssql/data/model_replicatedmaster.ldf'.
2020-08-13 20:10:17.96 Server Setup step is FORCE copying system data file 'C:\templatedata\model_msdbdata.mdf' to '/var/opt/mssql/data/model_msdbdata.mdf'.
2020-08-13 20:10:17.97 Server Setup step is FORCE copying system data file 'C:\templatedata\model_msdblog.ldf' to '/var/opt/mssql/data/model_msdblog.ldf'.
2020-08-13 20:10:18.06 Server Microsoft SQL Server 2019 (RTM-CU6) (KB4563110) - 15.0.4053.23 (X64)
Jul 25 2020 11:26:55
Copyright (C) 2019 Microsoft Corporation
Developer Edition (64-bit) on Linux (Ubuntu 18.04.4 LTS) <X64>
2020-08-13 20:10:18.07 Server UTC adjustment: 0:00
2020-08-13 20:10:18.07 Server (c) Microsoft Corporation.
2020-08-13 20:10:18.07 Server All rights reserved.
2020-08-13 20:10:18.07 Server Server process ID is 36.
2020-08-13 20:10:18.07 Server Logging SQL Server messages in file '/var/opt/mssql/log/errorlog'.
2020-08-13 20:10:18.07 Server Registry startup parameters:
-d /var/opt/mssql/data/master.mdf
-l /var/opt/mssql/data/mastlog.ldf
-e /var/opt/mssql/log/errorlog
2020-08-13 20:10:18.08 Server SQL Server detected 1 sockets with 2 cores per socket and 2 logical processors per socket, 2 total logical processors; using 2 logical processors based on SQL Server licensing. This is an informational message; no user action is required.
2020-08-13 20:10:18.09 Server SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2020-08-13 20:10:18.09 Server Detected 1594 MB of RAM. This is an informational message; no user action is required.
2020-08-13 20:10:18.09 Server Using conventional memory in the memory manager.
2020-08-13 20:10:18.09 Server Page exclusion bitmap is enabled.
2020-08-13 20:10:18.12 Server Buffer pool extension is not supported on Linux platform.
2020-08-13 20:10:18.12 Server Buffer Pool: Allocating 262144 bytes for 180348 hashPages.
2020-08-13 20:10:18.34 Server Buffer pool extension is already disabled. No action is necessary.
2020-08-13 20:10:18.90 Server Successfully initialized the TLS configuration. Allowed TLS protocol versions are ['1.0 1.1 1.2']. Allowed TLS ciphers are ['ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:!DHE-RSA-AES256-GCM-SHA384:!DHE-RSA-AES128-GCM-SHA256:!DHE-RSA-AES256-SHA:!DHE-RSA-AES128-SHA'].
2020-08-13 20:10:18.94 Server Query Store settings initialized with enabled = 1,
2020-08-13 20:10:18.96 Server The maximum number of dedicated administrator connections for this instance is '1'
2020-08-13 20:10:18.97 Server Node configuration: node 0: CPU mask: 0x0000000000000003:0 Active CPU mask: 0x0000000000000003:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2020-08-13 20:10:18.98 Server Using dynamic lock allocation. Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node. This is an informational message only. No user action is required.
2020-08-13 20:10:19.01 Server In-Memory OLTP initialized on lowend machine.
2020-08-13 20:10:19.05 Server [INFO] Created Extended Events session 'hkenginexesession'
2020-08-13 20:10:19.06 Server Database Instant File Initialization: enabled. For security and performance considerations see the topic 'Database Instant File Initialization' in SQL Server Books Online. This is an informational message only. No user action is required.
ForceFlush is enabled for this instance.
2020-08-13 20:10:19.09 Server Total Log Writer threads: 1. This is an informational message; no user action is required.
2020-08-13 20:10:19.12 Server clflushopt is selected for pmem flush operation.
2020-08-13 20:10:19.14 Server Software Usage Metrics is disabled.
2020-08-13 20:10:19.16 Server CLR version v4.0.30319 loaded.
2020-08-13 20:10:19.18 spid8s [1]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1.
2020-08-13 20:10:19.18 spid8s Starting up database 'master'.
ForceFlush feature is enabled for log durability.
2020-08-13 20:10:19.61 Server Common language runtime (CLR) functionality initialized.
2020-08-13 20:10:19.76 spid8s Service Master Key could not be decrypted using one of its encryptions. See sys.key_encryptions for details.
2020-08-13 20:10:19.77 spid8s An error occurred during Service Master Key initialization. SQLErrorCode=33095, State=8, LastOsError=0.
2020-08-13 20:10:19.91 spid8s Resource governor reconfiguration succeeded.
2020-08-13 20:10:19.91 spid8s SQL Server Audit is starting the audits. This is an informational message. No user action is required.
2020-08-13 20:10:19.92 spid8s SQL Server Audit has started the audits. This is an informational message. No user action is required.
2020-08-13 20:10:20.00 spid8s SQL Trace ID 1 was started by login "sa".
2020-08-13 20:10:20.03 spid8s Server name is 'sqlserver'. This is an informational message only. No user action is required.
2020-08-13 20:10:20.07 spid23s Always On: The availability replica manager is starting. This is an informational message only. No user action is required.
2020-08-13 20:10:20.08 spid23s Always On: The availability replica manager is waiting for the instance of SQL Server to allow client connections. This is an informational message only. No user action is required.
2020-08-13 20:10:20.08 spid8s [4]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1.
2020-08-13 20:10:20.11 spid10s [32767]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1.
2020-08-13 20:10:20.12 spid8s Starting up database 'msdb'.
2020-08-13 20:10:20.13 spid10s Starting up database 'mssqlsystemresource'.
2020-08-13 20:10:20.14 spid10s The resource database build version is 15.00.4053. This is an informational message only. No user action is required.
2020-08-13 20:10:20.19 spid22s A self-generated certificate was successfully loaded for encryption.
2020-08-13 20:10:20.21 spid22s Server is listening on [ 0.0.0.0 <ipv4> 1433].
2020-08-13 20:10:20.22 Server Server is listening on [ ::1 <ipv6> 1434].
2020-08-13 20:10:20.22 Server Server is listening on [ 127.0.0.1 <ipv4> 1434].
2020-08-13 20:10:20.23 Server Dedicated admin connection support was established for listening locally on port 1434.
2020-08-13 20:10:20.25 spid22s Server is listening on [ ::1 <ipv6> 1431].
2020-08-13 20:10:20.25 spid10s [3]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1.
2020-08-13 20:10:20.26 spid22s Server is listening on [ 127.0.0.1 <ipv4> 1431].
2020-08-13 20:10:20.26 spid10s Starting up database 'model'.
2020-08-13 20:10:20.28 spid22s SQL Server is now ready for client connections. This is an informational message; no user action is required.
2020-08-13 20:10:20.57 spid10s Clearing tempdb database.
2020-08-13 20:10:20.94 spid10s [2]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1.
2020-08-13 20:10:20.95 spid10s Starting up database 'tempdb'.
2020-08-13 20:10:21.21 spid10s The tempdb database has 1 data file(s).
2020-08-13 20:10:21.22 spid23s The Service Broker endpoint is in disabled or stopped state.
2020-08-13 20:10:21.23 spid23s The Database Mirroring endpoint is in disabled or stopped state.
2020-08-13 20:10:21.24 spid8s Recovery is complete. This is an informational message only. No user action is required.
2020-08-13 20:10:21.26 spid23s Service Broker manager has started.
vagrant@kubemaster:~$
Here is how I checked that the persistent volume works: I created a test file "testfile" inside the mounted path /var/opt/mssql/data, deleted the pod, and created it again. You can still find the test file I created in the same path.
vagrant@kubemaster:~$ kubectl exec -ti sqlserver -- /bin/bash
mssql@sqlserver:/$
mssql@sqlserver:/$ cd /var/opt/mssql/data/
mssql@sqlserver:/var/opt/mssql/data$ ls -lrt
total 72068
-rw-r----- 1 mssql root 256 Aug 13 19:28 Entropy.bin
-rw-r----- 1 mssql root 14090240 Aug 13 20:06 msdbdata.mdf
-rw-r----- 1 mssql root 4194304 Aug 13 20:10 model_replicatedmaster.mdf
-rw-r----- 1 mssql root 524288 Aug 13 20:10 model_replicatedmaster.ldf
-rw-r----- 1 mssql root 14090240 Aug 13 20:10 model_msdbdata.mdf
-rw-r----- 1 mssql root 524288 Aug 13 20:10 model_msdblog.ldf
-rw-r----- 1 mssql root 4194304 Aug 13 20:10 master.mdf
-rw-r----- 1 mssql root 524288 Aug 13 20:10 msdblog.ldf
-rw-r----- 1 mssql root 8388608 Aug 13 20:10 modellog.ldf
-rw-r----- 1 mssql root 8388608 Aug 13 20:10 model.mdf
-rw-r----- 1 mssql root 8388608 Aug 13 20:10 templog.ldf
-rw-r----- 1 mssql root 8388608 Aug 13 20:10 tempdb.mdf
-rw-r----- 1 mssql root 2097152 Aug 13 20:10 mastlog.ldf
mssql@sqlserver:/var/opt/mssql/data$
mssql@sqlserver:/var/opt/mssql/data$ touch testfile
mssql@sqlserver:/var/opt/mssql/data$ exit
exit
vagrant@kubemaster:~$ kubectl delete pod sqlserver
pod "sqlserver" deleted
vagrant@kubemaster:~$ kubectl create -f sqlserver.yaml
pod/sqlserver created
vagrant@kubemaster:~$
vagrant@kubemaster:~$ kubectl exec -ti sqlserver -- /bin/bash
mssql@sqlserver:/$
mssql@sqlserver:/$ ls -lrt /var/opt/mssql/data/
total 72068
-rw-r----- 1 mssql root 256 Aug 13 19:28 Entropy.bin
-rw-r--r-- 1 mssql root 0 Aug 13 20:17 testfile
-rw-r----- 1 mssql root 14090240 Aug 13 20:17 msdbdata.mdf
-rw-r----- 1 mssql root 4194304 Aug 13 20:18 model_replicatedmaster.mdf
-rw-r----- 1 mssql root 524288 Aug 13 20:18 model_replicatedmaster.ldf
-rw-r----- 1 mssql root 14090240 Aug 13 20:18 model_msdbdata.mdf
-rw-r----- 1 mssql root 524288 Aug 13 20:18 model_msdblog.ldf
-rw-r----- 1 mssql root 4194304 Aug 13 20:18 master.mdf
-rw-r----- 1 mssql root 524288 Aug 13 20:18 msdblog.ldf
-rw-r----- 1 mssql root 8388608 Aug 13 20:18 modellog.ldf
-rw-r----- 1 mssql root 8388608 Aug 13 20:18 model.mdf
-rw-r----- 1 mssql root 8388608 Aug 13 20:18 templog.ldf
-rw-r----- 1 mssql root 8388608 Aug 13 20:18 tempdb.mdf
-rw-r----- 1 mssql root 2097152 Aug 13 20:18 mastlog.ldf
mssql@sqlserver:/$
mssql@sqlserver:/$ exit
exit
vagrant@kubemaster:~$
The problem is in your mountPath.
Can you please try and change it to /var/opt/mssql/data?
containers:
  - image: mcr.microsoft.com/mssql/server
    name: foo
    volumeMounts:
      - mountPath: "/var/opt/mssql/data"
        name: sqldata-storage
I could not comment, but creating a PV and PVC without the storageClassName breaks the link between the two constructs: you will note that the PVC will create a dynamic PV that is then bound to the default StorageClass. This is especially true when running Docker Desktop with Kubernetes turned on as the orchestrator. I had the same issue on my local install where I wanted to run everything hosted out of Docker.
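Not in the original comment, but a quick way to see whether a claim bound to the hand-made PV or to a dynamically provisioned one (and under which storage class) is roughly:
# which PV did the claim bind to, and with which storage class?
kubectl get pvc dbclaim -o wide
kubectl get pv
# full detail, including the storageClassName recorded on the claim
kubectl describe pvc dbclaim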

Zabbix agent can't connect to Zabbix 5.0 server

I installed the Zabbix agent on an Ubuntu 18.04 server. I changed the IP address and hostname pointing to the Zabbix server in the zabbix_agentd.conf file in /etc and restarted the Zabbix agent. The service seems to be running, but the agent is not connecting to the server. How can I fix it?
/etc/zabbix# sudo systemctl status zabbix-agent
● zabbix-agent.service - Zabbix Agent
Loaded: loaded (/lib/systemd/system/zabbix-agent.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-07-29 15:26:30 NZST; 5s ago
Process: 16401 ExecStop=/bin/kill -SIGTERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 16403 ExecStart=/usr/sbin/zabbix_agentd -c $CONFFILE (code=exited, status=0/SUCCESS)
Main PID: 16406 (zabbix_agentd)
Tasks: 6 (limit: 2325)
CGroup: /system.slice/zabbix-agent.service
├─16406 /usr/sbin/zabbix_agentd -c /etc/zabbix/zabbix_agentd.conf
├─16407 /usr/sbin/zabbix_agentd: collector [idle 1 sec]
├─16408 /usr/sbin/zabbix_agentd: listener #1 [waiting for connection]
├─16409 /usr/sbin/zabbix_agentd: listener #2 [waiting for connection]
├─16410 /usr/sbin/zabbix_agentd: listener #3 [waiting for connection]
└─16411 /usr/sbin/zabbix_agentd: active checks #1 [idle 1 sec]
Jul 29 15:26:29 UbuntuServe-001 systemd[1]: Starting Zabbix Agent...
Jul 29 15:26:30 UbuntuServe-001 systemd[1]: zabbix-agent.service: Can't open PID file /run/zabbix/zabbix_agentd.pid (yet?) after start: No such file or directory
Jul 29 15:26:30 UbuntuServe-001 systemd[1]: Started Zabbix Agent.
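As an aside (not in the original post), a quick way to confirm the agent is listening and pointed at the right server, assuming the default paths and ports:
# the directives that must match what the server expects
grep -E '^(Server|ServerActive|Hostname)=' /etc/zabbix/zabbix_agentd.conf
# is the agent listening on the default passive-check port?
ss -ltnp | grep 10050
# run from the Zabbix server: a passive check against the agent (replace the placeholder IP)
zabbix_get -s <agent-ip> -k agent.ping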
Can't open PID file /run/zabbix/zabbix_agentd.pid (yet?) after start: No such file or directory
The /run/zabbix directory does not exist. Remove the PidFile= line from the config file, and Zabbix will create the PID file in a temporary directory. Alternatively, create the /run/zabbix directory and chown it to the zabbix user.
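A rough sketch of the second option (manual creation; note that /run is a tmpfs, so this has to be repeated or automated after every reboot):
# create the runtime directory the unit expects and hand it to the zabbix user
sudo mkdir -p /run/zabbix
sudo chown zabbix:zabbix /run/zabbix
sudo systemctl restart zabbix-agent
sudo systemctl status zabbix-agent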

NTP does not synchronize on Ubuntu 18.04.02

My OS is Ubuntu 18.04.02 - NTP is not installed by default, so I had to install it using apt-get. I have an application for which I must use NTP for compatibility. I disabled and removed timesyncd to avoid conflicts.
I configured ntp.conf to use:
0.north-america.pool.ntp.org
1.north-america.pool.ntp.org
2.north-america.pool.ntp.org
I can ping them without issues, and the internet connection is fine. The NTP service is running, but it stays in the soliciting state forever and does not synchronize the time.
This is just a client IoT device. I just need to synchronize the time to execute our tasks in sync with the other computers on the same network.
What am I missing? Do I have to open firewall ports? I am blocking IPv6; I added -4 to NTPD_OPTS.
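A quick way to answer the firewall question (not part of the original post), assuming ufw/iptables and the ntpdate package are available:
# is UDP 123 being filtered locally?
sudo ufw status verbose
sudo iptables -L -n | grep -w 123
# one-shot query over the same protocol, without touching the clock
sudo ntpdate -q 0.north-america.pool.ntp.org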
$ sudo service ntp status
● ntp.service - Network Time Service
Loaded: loaded (/lib/systemd/system/ntp.service; enabled; vendor preset: enab
Active: active (running) since Mon 2020-03-02 13:07:14 EST; 17h ago
Docs: man:ntpd(8)
Process: 2078 ExecStart=/usr/lib/ntp/ntp-systemd-wrapper (code=exited, status=
Main PID: 2090 (ntpd)
Tasks: 2 (limit: 1600)
CGroup: /system.slice/ntp.service
└─2090 /usr/sbin/ntpd -p /var/run/ntpd.pid -4 -g -u 106:113
Mar 03 06:36:13 FD50-AE ntpd[2090]: Soliciting pool server 209.115.181.1
Mar 03 06:36:34 FD50-AE ntpd[2090]: Soliciting pool server 45.33.2.219
Mar 03 06:36:38 FD50-AE ntpd[2090]: Soliciting pool server 64.79.100.197
Mar 03 06:36:55 FD50-AE ntpd[2090]: Soliciting pool server 91.189.91.157
Mar 03 06:37:20 FD50-AE ntpd[2090]: Soliciting pool server 149.56.47.60
Mar 03 06:37:20 FD50-AE ntpd[2090]: bind(19) AF_INET 127.0.0.1#123 flags
Mar 03 06:37:20 FD50-AE ntpd[2090]: unable to create socket on lo (212)
Mar 03 06:37:20 FD50-AE ntpd[2090]: failed to init interface for address
Mar 03 06:37:38 FD50-AE ntpd[2090]: Soliciting pool server 45.63.54.13
Mar 03 06:37:43 FD50-AE ntpd[2090]: Soliciting pool server 173.255.140.3
[1]+ Stopped sudo service ntp status
$ ntpstat
unsynchronised
polling server every 8 s
$ timedatectl
Local time: Tue 2020-03-03 06:58:34 EST
Universal time: Tue 2020-03-03 11:58:34 UTC
RTC time: Tue 2020-03-03 11:58:34
Time zone: America/New_York (EST, -0500)
System clock synchronized: no
systemd-timesyncd.service active: no
RTC in local TZ: no
$ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
0.north-america .POOL. 16 p - 64 0 0.000 0.000 0.000
1.north-america .POOL. 16 p - 64 0 0.000 0.000 0.000
2.north-america .POOL. 16 p - 64 0 0.000 0.000 0.000
ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 0.000 0.000
$
$ sudo systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:systemd-timesyncd.service(8)
My ntp.conf is below:
#interface listen IPv4
#interface ignore IPv6
interface ignore wildcard
# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
driftfile /var/lib/ntp/ntp.drift
# Leap seconds definition provided by tzdata
leapfile /usr/share/zoneinfo/leap-seconds.list
# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
# Specify one or more NTP servers.
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.
#pool 0.ubuntu.pool.ntp.org iburst
#pool 1.ubuntu.pool.ntp.org iburst
#pool 2.ubuntu.pool.ntp.org iburst
#pool 3.ubuntu.pool.ntp.org iburst
pool 0.north-america.pool.ntp.org iburst
pool 1.north-america.pool.ntp.org iburst
pool 2.north-america.pool.ntp.org iburst
# Use Ubuntu's ntp server as a fallback.
pool ntp.ubuntu.com
# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details. The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.
# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited
# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1
# Needed for adding pool entries
restrict source notrap nomodify noquery
# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust
# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255
# If you want to listen to time broadcasts on your local subnet, de-comment the
# next lines. Please do this only if you trust everybody on the network!
#disable auth
#broadcastclient
#Changes required to use PPS synchronisation as explained in documentation:
#http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#AEN3918
#server 127.127.8.1 mode 135 prefer # Meinberg GPS167 with PPS
#fudge 127.127.8.1 time1 0.0042 # relative to PPS for my hardware
#server 127.127.22.1 # ATOM(PPS)
#fudge 127.127.22.1 flag3 1 # enable PPS API
I had some problems setting up my Ubuntu 18.04 machine before. I just looked at some of my logs and I might be able to help.
$ timedatectl
Local time: Wed 2020-03-18 18:01:20 GMT
Universal time: Wed 2020-03-18 18:01:20 UTC
RTC time: Wed 2020-03-18 18:01:20
Time zone: Europe/London (GMT, +0000)
System clock synchronized: yes
This is my timedatectl output. I got it to synchronize by adding this to "/etc/systemd/timesyncd.conf":
[Time]
NTP=10.199.999.99 10.999.999.999
Just put the IP addresses of the servers you are trying to synchronize with. Then restart the timedate, ntp, and systemd timesync services (or reboot :) ) and you should be fine; I don't remember doing anything else. Hope this helps.
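A rough sketch of that approach (it relies on systemd-timesyncd rather than ntpd, which the question had removed; the server addresses are placeholders for your local time servers):
# /etc/systemd/timesyncd.conf, edited by hand:
#   [Time]
#   NTP=<server1-ip> <server2-ip>
sudo timedatectl set-ntp true            # make sure the systemd NTP client is enabled
sudo systemctl restart systemd-timesyncd
timedatectl                              # "System clock synchronized" should flip to yes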

Nagios Core 4.2.0 Service not running after reboot

I am running Nagios Core 4.2.0 to monitor hosts and services via the Check_MK plugin. This is running on a VM. Since I rebooted, I cannot get a PID for Nagios.
root@hostname:/etc/init.d# service nagios status
● nagios.service - Nagios
Loaded: loaded (/etc/systemd/system/nagios.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-08-10 09:09:34 CDT; 1min 2s ago
Process: 2369 ExecStart=/usr/local/nagios/bin/nagios /usr/local/nagios/etc/nagios.cfg (code=exited, status=1/FAILURE)
Main PID: 2369 (code=exited, status=1/FAILURE)
Aug 10 09:09:34 hostname systemd[1]: Started Nagios.
Aug 10 09:09:34 hostname systemd[1]: nagios.service: Main process exited, code=exited, status=1/FAILURE
Aug 10 09:09:34 hostname systemd[1]: nagios.service: Unit entered failed state.
Aug 10 09:09:34 hostname systemd[1]: nagios.service: Failed with result 'exit-code'.
Is there a way to get this running again? I am not sure how a simple reboot can make this go bad.
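A hedged first step (not part of the original post): when the nagios binary exits with status 1 straight away, its own configuration check and the journal usually say why, assuming the standard source-install paths used in the unit above:
# validate the configuration exactly as the unit starts it
sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
# full journal output for the failed start in this boot
sudo journalctl -u nagios -b --no-pager
# stale lock files or permissions in the var directory are a common post-reboot cause
ls -l /usr/local/nagios/var/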
