PostgreSQL - Very slow to connect to fresh database

I just installed Postgres (v12.2) and created a database, but it's taking around 20 seconds to connect (using psql). That seems like a very long time, and it's causing problems in some of my workflows.
I installed PostgreSQL using Anaconda (for corporate reasons I don't have access to either apt or Docker). Here are the steps I followed to get the database up and running:
conda install postgresql
initdb -D airflow
pg_ctl -D airflow -l logfile start
createuser --encrypted --pwprompt airflow_user
createdb --owner=airflow_user airflow_db
I also changed the following line in the pg_hba.conf file:
# IPv4 local connections:
host    all             all             0.0.0.0/0            trust
and set logging to be as verbose as possible in the postgresql.conf file (I also added a couple of fields, like the application name, to the log messages).
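For reference, the logging-related settings involved look roughly like the following (the exact values and prefix fields are an approximation, not copied from my config):

```ini
# postgresql.conf -- logging settings (approximate)
log_min_messages = debug5            # as verbose as possible
log_connections = on
log_disconnections = on
log_line_prefix = '%m %a %u '        # timestamp, application name, user
```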
Testing the connection with:
psql -h 127.0.0.1 -p 5432 -d airflow_db -U airflow_user
yields the following log messages:
2021-02-17 15:56:25.590 UTC [unknown] [unknown] LOG: connection received: host=127.0.0.1 port=45006
2021-02-17 15:56:41.454 UTC DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2021-02-17 15:56:41.454 UTC DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2021-02-17 15:56:41.455 UTC DEBUG: received inquiry for database 0
2021-02-17 15:56:41.455 UTC DEBUG: writing stats file "pg_stat_tmp/global.stat"
2021-02-17 15:56:41.455 UTC DEBUG: writing stats file "pg_stat_tmp/db_0.stat"
2021-02-17 15:56:41.466 UTC DEBUG: InitPostgres
2021-02-17 15:56:41.466 UTC DEBUG: my backend ID is 3
2021-02-17 15:56:41.466 UTC DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2021-02-17 15:56:41.467 UTC DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2021-02-17 15:56:41.467 UTC DEBUG: autovacuum: processing database "airflow_db"
2021-02-17 15:56:41.467 UTC DEBUG: received inquiry for database 16385
2021-02-17 15:56:41.467 UTC DEBUG: writing stats file "pg_stat_tmp/global.stat"
2021-02-17 15:56:41.467 UTC DEBUG: writing stats file "pg_stat_tmp/db_16385.stat"
2021-02-17 15:56:41.467 UTC DEBUG: writing stats file "pg_stat_tmp/db_0.stat"
2021-02-17 15:56:41.477 UTC DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2021-02-17 15:56:41.477 UTC DEBUG: pg_statistic: vac: 0 (threshold 134), anl: 0 (threshold 92)
2021-02-17 15:56:41.478 UTC DEBUG: pg_type: vac: 0 (threshold 131), anl: 0 (threshold 91)
2021-02-17 15:56:41.478 UTC DEBUG: pg_authid: vac: 0 (threshold 52), anl: 1 (threshold 51)
2021-02-17 15:56:41.478 UTC DEBUG: pg_attribute: vac: 0 (threshold 633), anl: 0 (threshold 341)
2021-02-17 15:56:41.478 UTC DEBUG: pg_proc: vac: 0 (threshold 642), anl: 0 (threshold 346)
2021-02-17 15:56:41.478 UTC DEBUG: pg_class: vac: 0 (threshold 129), anl: 0 (threshold 90)
2021-02-17 15:56:41.478 UTC DEBUG: pg_index: vac: 0 (threshold 82), anl: 0 (threshold 66)
2021-02-17 15:56:41.478 UTC DEBUG: pg_operator: vac: 0 (threshold 204), anl: 0 (threshold 127)
2021-02-17 15:56:41.478 UTC DEBUG: pg_opclass: vac: 0 (threshold 76), anl: 0 (threshold 63)
2021-02-17 15:56:41.478 UTC DEBUG: pg_am: vac: 0 (threshold 51), anl: 0 (threshold 51)
2021-02-17 15:56:41.478 UTC DEBUG: pg_amop: vac: 0 (threshold 193), anl: 0 (threshold 122)
2021-02-17 15:56:41.478 UTC DEBUG: pg_amproc: vac: 0 (threshold 139), anl: 0 (threshold 95)
2021-02-17 15:56:41.478 UTC DEBUG: pg_rewrite: vac: 0 (threshold 75), anl: 0 (threshold 63)
2021-02-17 15:56:41.478 UTC DEBUG: pg_cast: vac: 0 (threshold 93), anl: 0 (threshold 72)
2021-02-17 15:56:41.478 UTC DEBUG: pg_namespace: vac: 0 (threshold 51), anl: 0 (threshold 51)
2021-02-17 15:56:41.478 UTC DEBUG: pg_database: vac: 0 (threshold 50), anl: 1 (threshold 50)
2021-02-17 15:56:41.478 UTC DEBUG: pg_tablespace: vac: 0 (threshold 50), anl: 0 (threshold 50)
2021-02-17 15:56:41.478 UTC DEBUG: pg_shdepend: vac: 0 (threshold 52), anl: 1 (threshold 51)
2021-02-17 15:56:41.478 UTC DEBUG: pg_toast_2618: vac: 0 (threshold 100), anl: 0 (threshold 75)
2021-02-17 15:56:41.478 UTC DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2021-02-17 15:56:41.478 UTC DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2021-02-17 15:56:41.478 UTC DEBUG: shmem_exit(0): 7 on_shmem_exit callbacks to make
2021-02-17 15:56:41.478 UTC DEBUG: proc_exit(0): 2 callbacks to make
2021-02-17 15:56:41.478 UTC DEBUG: exit(0)
2021-02-17 15:56:41.478 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:56:41.478 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:56:41.478 UTC DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:56:41.478 UTC DEBUG: reaping dead processes
2021-02-17 15:56:41.478 UTC DEBUG: server process (PID 29567) exited with exit code 0
2021-02-17 15:56:47.646 UTC [unknown] [unknown] DEBUG: shmem_exit(0): 0 before_shmem_exit callbacks to make
2021-02-17 15:56:47.646 UTC [unknown] [unknown] DEBUG: shmem_exit(0): 0 on_shmem_exit callbacks to make
2021-02-17 15:56:47.646 UTC [unknown] [unknown] DEBUG: proc_exit(0): 1 callbacks to make
2021-02-17 15:56:47.646 UTC [unknown] [unknown] DEBUG: exit(0)
2021-02-17 15:56:47.646 UTC [unknown] [unknown] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:56:47.646 UTC [unknown] [unknown] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:56:47.646 UTC [unknown] [unknown] DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:56:47.647 UTC DEBUG: forked new backend, pid=29613 socket=9
2021-02-17 15:56:47.647 UTC DEBUG: reaping dead processes
2021-02-17 15:56:47.647 UTC DEBUG: server process (PID 29413) exited with exit code 0
2021-02-17 15:56:47.647 UTC [unknown] [unknown] LOG: connection received: host=127.0.0.1 port=45008
2021-02-17 15:56:47.647 UTC [unknown] airflow_user DEBUG: postgres child[29613]: starting with (
2021-02-17 15:56:47.647 UTC [unknown] airflow_user DEBUG: postgres
2021-02-17 15:56:47.647 UTC [unknown] airflow_user DEBUG: )
2021-02-17 15:56:47.647 UTC [unknown] airflow_user DEBUG: InitPostgres
2021-02-17 15:56:47.648 UTC [unknown] airflow_user DEBUG: my backend ID is 3
2021-02-17 15:56:47.648 UTC [unknown] airflow_user DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2021-02-17 15:56:47.648 UTC [unknown] airflow_user LOG: connection authorized: user=airflow_user database=airflow_db application_name=psql
2021-02-17 15:56:47.649 UTC psql airflow_user DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2021-02-17 15:56:50.746 UTC psql airflow_user DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2021-02-17 15:56:50.746 UTC psql airflow_user DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2021-02-17 15:56:50.746 UTC psql airflow_user DEBUG: proc_exit(0): 4 callbacks to make
2021-02-17 15:56:50.746 UTC psql airflow_user LOG: disconnection: session time: 0:00:03.098 user=airflow_user database=airflow_db host=127.0.0.1 port=45008
2021-02-17 15:56:50.746 UTC psql airflow_user DEBUG: exit(0)
2021-02-17 15:56:50.746 UTC psql airflow_user DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:56:50.746 UTC psql airflow_user DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:56:50.746 UTC psql airflow_user DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:56:50.746 UTC DEBUG: reaping dead processes
2021-02-17 15:56:50.746 UTC DEBUG: server process (PID 29613) exited with exit code 0
2021-02-17 15:57:00.668 UTC DEBUG: postmaster received signal 2
2021-02-17 15:57:00.668 UTC LOG: received fast shutdown request
2021-02-17 15:57:00.668 UTC LOG: aborting any active transactions
2021-02-17 15:57:00.668 UTC DEBUG: sending signal 15 to process 26962
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: autovacuum launcher shutting down
2021-02-17 15:57:00.668 UTC DEBUG: logical replication launcher shutting down
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(0): 5 on_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(1): 2 before_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: proc_exit(0): 2 callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: exit(0)
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(1): 6 on_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: proc_exit(0): 2 callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: proc_exit(1): 2 callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: exit(0)
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: exit(1)
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(0): 5 on_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: proc_exit(0): 2 callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: exit(0)
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.668 UTC DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:57:00.669 UTC DEBUG: reaping dead processes
2021-02-17 15:57:00.669 UTC LOG: background worker "logical replication launcher" (PID 26962) exited with exit code 1
2021-02-17 15:57:00.669 UTC DEBUG: reaping dead processes
2021-02-17 15:57:00.669 UTC LOG: shutting down
2021-02-17 15:57:00.670 UTC DEBUG: performing replication slot checkpoint
2021-02-17 15:57:00.672 UTC DEBUG: attempting to remove WAL segments older than log file 000000000000000000000000
2021-02-17 15:57:00.672 UTC DEBUG: SlruScanDirectory invoking callback on pg_subtrans/0000
2021-02-17 15:57:00.672 UTC DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2021-02-17 15:57:00.672 UTC DEBUG: shmem_exit(0): 5 on_shmem_exit callbacks to make
2021-02-17 15:57:00.672 UTC DEBUG: proc_exit(0): 2 callbacks to make
2021-02-17 15:57:00.672 UTC DEBUG: exit(0)
2021-02-17 15:57:00.672 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.672 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.672 UTC DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:57:00.672 UTC DEBUG: reaping dead processes
2021-02-17 15:57:00.672 UTC DEBUG: writing stats file "pg_stat/global.stat"
2021-02-17 15:57:00.672 UTC DEBUG: writing stats file "pg_stat/db_16385.stat"
2021-02-17 15:57:00.673 UTC DEBUG: removing temporary stats file "pg_stat_tmp/db_16385.stat"
2021-02-17 15:57:00.673 UTC DEBUG: writing stats file "pg_stat/db_12738.stat"
2021-02-17 15:57:00.673 UTC DEBUG: removing temporary stats file "pg_stat_tmp/db_12738.stat"
2021-02-17 15:57:00.673 UTC DEBUG: writing stats file "pg_stat/db_0.stat"
2021-02-17 15:57:00.673 UTC DEBUG: removing temporary stats file "pg_stat_tmp/db_0.stat"
2021-02-17 15:57:00.673 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.673 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.673 UTC DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:57:00.673 UTC DEBUG: reaping dead processes
2021-02-17 15:57:00.673 UTC DEBUG: shmem_exit(0): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.673 UTC DEBUG: shmem_exit(0): 5 on_shmem_exit callbacks to make
2021-02-17 15:57:00.673 UTC DEBUG: cleaning up dynamic shared memory control segment with ID 521889379
2021-02-17 15:57:00.676 UTC DEBUG: proc_exit(0): 2 callbacks to make
2021-02-17 15:57:00.676 UTC LOG: database system is shut down
2021-02-17 15:57:00.676 UTC DEBUG: exit(0)
2021-02-17 15:57:00.676 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.676 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.676 UTC DEBUG: proc_exit(-1): 0 callbacks to make
2021-02-17 15:57:00.677 UTC DEBUG: logger shutting down
2021-02-17 15:57:00.677 UTC DEBUG: shmem_exit(0): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.677 UTC DEBUG: shmem_exit(0): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.677 UTC DEBUG: proc_exit(0): 0 callbacks to make
2021-02-17 15:57:00.677 UTC DEBUG: exit(0)
2021-02-17 15:57:00.677 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2021-02-17 15:57:00.677 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2021-02-17 15:57:00.677 UTC DEBUG: proc_exit(-1): 0 callbacks to make
One thing I noticed is that between the "connection received" message and the very next log line, roughly 16 seconds elapse (15:56:25.590 to 15:56:41.454). I've tested this on my personal desktop (not the work machine) and the connection is almost instant, which seems to indicate that this is some sort of environment problem.
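One thing worth trying is to separate the raw TCP connect time from whatever Postgres does afterwards (authentication, backend startup); here is a small stdlib-only Python sketch of that check (the connect_time helper is mine, not from the original post):

```python
import socket
import time

def connect_time(host: str, port: int, timeout: float = 30.0) -> float:
    """Return the wall-clock seconds a plain TCP connect to (host, port) takes."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # close immediately; we only care about the connect latency
    return time.monotonic() - start

# Example: time the connect to a local Postgres.
# If this is fast but psql is slow, the time is spent after the TCP handshake,
# not in establishing the connection.
# print(connect_time("127.0.0.1", 5432))
```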
If anyone has any insight I would love to hear it! I've been blocked on this for ages now.
Thanks a lot!

Problem solved. There is something wrong with Anaconda's PostgreSQL package that causes every connection attempt to fail once and then succeed on the second try. I have no idea exactly what was going on, but we built PostgreSQL from source using good old make, and it works like a charm.
Cheers everyone

Related

Postgres + Go + Docker-compose Can't ping database: dial tcp 127.0.0.1:5432: connect: connection refused

This is not my Go script, but I have to use it for this task:
package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	"github.com/caarlos0/env"
	_ "github.com/lib/pq"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

type config struct {
	PostgresUri   string `env:"POSTGRES_URI" envDefault:"postgres://root:pass#127.0.0.1/postgres"`
	ListenAddress string `env:"LISTEN_ADDRESS" envDefault:":5432"`
}

var (
	db          *sql.DB
	errorsCount = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "gocalc_errors_count",
			Help: "Gocalc Errors Count Per Type",
		},
		[]string{"type"},
	)
	requestsCount = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "gocalc_requests_count",
			Help: "Gocalc Requests Count",
		})
)

func main() {
	var err error
	// Initializing prometheus
	prometheus.MustRegister(errorsCount)
	prometheus.MustRegister(requestsCount)
	// Getting env
	cfg := config{}
	if err = env.Parse(&cfg); err != nil {
		fmt.Printf("%+v\n", err)
	}
	// Connecting to database
	db, err = sql.Open("postgres", cfg.PostgresUri)
	if err != nil {
		log.Fatalf("Can't connect to postgresql: %v", err)
	}
	defer db.Close()
	err = db.Ping()
	if err != nil {
		log.Fatalf("Can't ping database: %v", err)
	}
	http.HandleFunc("/", handler)
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(cfg.ListenAddress, nil))
}

func handler(w http.ResponseWriter, r *http.Request) {
	requestsCount.Inc()
	keys, ok := r.URL.Query()["q"]
	if !ok || len(keys[0]) < 1 {
		errorsCount.WithLabelValues("missing").Inc()
		log.Println("Url Param 'q' is missing")
		http.Error(w, "Bad Request", 400)
		return
	}
	q := keys[0]
	log.Println("Got query: ", q)
	var result string
	sqlStatement := fmt.Sprintf("SELECT (%s)::numeric", q)
	row := db.QueryRow(sqlStatement)
	err := row.Scan(&result)
	if err != nil {
		log.Printf("Error from db: %s", err)
		errorsCount.WithLabelValues("db").Inc()
		http.Error(w, "Internal Server Error", 500)
		return
	}
	fmt.Fprintf(w, "query %s; result %s", q, result)
}
This is my docker-compose file:
version: "3"
services:
  db:
    image: postgres:10
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_USER=root
    expose:
      - 5432
  backend:
    image: dkr-14-gocalc:latest
    environment:
      - POSTGRES_URI=postgres://root:pass#db/postgres
      - LISTEN_ADDRESS=7000
    depends_on:
      - postgres
  proxy:
    image: nginx
    volumes:
      - type: bind
        source: ./nginx.conf
        target: /etc/nginx/conf.d/default.conf
    ports:
      - 8000:80
    depends_on:
      - backend
And this is the dkr-14-gocalc image:
FROM golang:1.19.1-alpine AS builder
ENV GO111MODULE=auto
WORKDIR /go/src/
RUN apk add --no-cache git
COPY main.go ./
# init initializes and writes a new go.mod in the current dir.
RUN go mod init main.go
RUN go get -d -v github.com/caarlos0/env \
&& go get -d -v github.com/prometheus/client_golang/prometheus \
&& go get -d -v github.com/prometheus/client_golang/prometheus/promhttp \
&& go get -d -v github.com/lib/pq \
&& go get -d -v database/sql \
&& go get -d -v fmt \
&& go get -d -v log \
&& go get -d -v net/http
RUN go build -o app .
FROM alpine:3.10.3
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/app ./
CMD ["./app"]
I have to build a docker compose environment with 3 services: Go + Postgres + Nginx. The main idea of the task is to learn about the environment. The database should have a password, and gocalc should connect to this database. But what have I done incorrectly?
This is my log:
backend_1 | 2022/11/07 23:27:33 Can't ping database: dial tcp 127.0.0.1:5432: connect: connection refused
postgres_1 | The files belonging to this database system will be owned by user "postgres".
postgres_1 | This user must also own the server process.
postgres_1 |
proxy_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
proxy_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
postgres_1 | The default database encoding has accordingly been set to "UTF8".
postgres_1 | The default text search configuration will be set to "english".
postgres_1 |
postgres_1 | Data page checksums are disabled.
proxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
proxy_1 | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
postgres_1 |
task14_backend_1 exited with code 1
proxy_1 | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1 | creating subdirectories ... ok
proxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
postgres_1 | selecting default max_connections ... 100
proxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
postgres_1 | selecting default shared_buffers ... 128MB
postgres_1 | selecting default timezone ... Etc/UTC
postgres_1 | selecting dynamic shared memory implementation ... posix
proxy_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
postgres_1 | creating configuration files ... ok
postgres_1 | running bootstrap script ... ok
postgres_1 | performing post-bootstrap initialization ... ok
postgres_1 | syncing data to disk ... ok
postgres_1 |
postgres_1 | Success. You can now start the database server using:
postgres_1 |
postgres_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1 |
postgres_1 |
postgres_1 | WARNING: enabling "trust" authentication for local connections
postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1 | --auth-local and --auth-host, the next time you run initdb.
proxy_1 | 2022/11/07 23:27:33 [emerg] 1#1: host not found in upstream "backend" in /etc/nginx/conf.d/default.conf:5
proxy_1 | nginx: [emerg] host not found in upstream "backend" in /etc/nginx/conf.d/default.conf:5
postgres_1 | waiting for server to start....2022-11-07 23:27:33.704 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2022-11-07 23:27:33.722 UTC [49] LOG: database system was shut down at 2022-11-07 23:27:33 UTC
postgres_1 | 2022-11-07 23:27:33.729 UTC [48] LOG: database system is ready to accept connections
postgres_1 | done
postgres_1 | server started
task14_proxy_1 exited with code 1
postgres_1 | CREATE DATABASE
postgres_1 |
postgres_1 |
postgres_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres_1 |
postgres_1 | 2022-11-07 23:27:34.130 UTC [48] LOG: received fast shutdown request
postgres_1 | waiting for server to shut down....2022-11-07 23:27:34.133 UTC [48] LOG: aborting any active transactions
postgres_1 | 2022-11-07 23:27:34.135 UTC [48] LOG: worker process: logical replication launcher (PID 55) exited with exit code 1
postgres_1 | 2022-11-07 23:27:34.135 UTC [50] LOG: shutting down
postgres_1 | 2022-11-07 23:27:34.157 UTC [48] LOG: database system is shut down
postgres_1 | done
postgres_1 | server stopped
postgres_1 |
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2022-11-07 23:27:34.254 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2022-11-07 23:27:34.254 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2022-11-07 23:27:34.259 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2022-11-07 23:27:34.283 UTC [76] LOG: database system was shut down at 2022-11-07 23:27:34 UTC
postgres_1 | 2022-11-07 23:27:34.294 UTC [1] LOG: database system is ready to accept connections
The issue could be in how you're exposing your database container in the docker-compose.yaml file; "connection refused" is a typical error in this scenario. Usually, I expose the ports in a different way (in this code I report only the relevant parts):
docker-compose.yaml
version: "3"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
    ports:
      - "5432:5432"
main.go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	// Connecting to database
	db, err := sql.Open("postgres", "host=127.0.0.1 user=postgres password=postgres dbname=postgres port=5432 sslmode=disable")
	if err != nil {
		log.Fatalf("Can't connect to postgresql: %v", err)
	}
	defer db.Close()
	err = db.Ping()
	if err != nil {
		log.Fatalf("Can't ping database: %v", err)
	}
}
With this solution, I was able to successfully ping the Postgres database directly from the code. Let me know if that helps.
Edit
There is a small difference in the way you interact with the db instance:
If you're trying to reach the db directly from your host machine (as in the main.go file above), you have to refer to it by your machine's address (e.g., 127.0.0.1).
If you're trying to reach the db from your backend container, you have to use something like postgres://db:5432, with the service name db as the hostname.
Let me know if this clarifies things a little.
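Applied to the compose file in the question, the important point is that the service name, the depends_on entry, and the hostname in POSTGRES_URI all have to agree. A sketch of only the relevant parts (service names are taken from the question; the rest is an assumption):

```yaml
version: "3"
services:
  db:
    image: postgres:10
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_USER=root
  backend:
    image: dkr-14-gocalc:latest
    environment:
      # "db" here must match the service name above
      - POSTGRES_URI=postgres://root:pass@db:5432/postgres?sslmode=disable
      - LISTEN_ADDRESS=:7000
    depends_on:
      - db   # not "postgres" -- there is no service with that name
```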

Getting the status of Ansible import_tasks to satisfy an until loop's condition

I'm attempting to get an until loop working for an import_tasks, and to break the loop when a condition within the imported tasks is met. I'm not sure if this is even possible, or if there's a better way to achieve it. Making it slightly trickier, one of the imported tasks is a PowerShell script which returns a status message that is used to satisfy the until condition.
So yeah, the goal is to run check_output.yml until the script no longer reports any RUNNING status.
main.yml:
- import_tasks: check_output.yml
  until: "'RUNNING' not in {{outputStatus}}"
  retries: 100
  delay: 120
check_output.yml:
---
- name: "Output Check"
  shell: ./get_output.ps1  # this will return `RUNNING` in std_out if it's still running
  args:
    executable: /usr/bin/pwsh
  register: output

- name: "Debug output"
  debug: outputStatus=output.stdout_lines
For the record, this works just fine if I don't use import_tasks and instead put the until loop on the "Output Check" task itself. The problem with that approach is that you have to run Ansible with -vvv to see the status message for each iteration, which produces a ton of extra, unwanted debug output. I'm trying to get the same status for each loop without adding verbosity.
Ansible version is 2.11.
Q: "(Wait) until there is no longer any RUNNING status reported by the script."
A: An option might be to use the wait_for module.
For example, create a script on the remote host. The script takes two parameters: the PID of the process to be monitored, and DELAY, the monitoring interval in seconds. The script writes the status to /tmp/$PID.status
# cat /root/bin/get_status.sh
#!/bin/sh
PID=$1
DELAY=$2
ps -p $PID > /dev/null 2>&1
STATUS=$?
while [ "$STATUS" -eq "0" ]
do
    echo "$PID is RUNNING" > /tmp/$PID.status
    sleep $DELAY
    ps -p $PID > /dev/null 2>&1
    STATUS=$?
done
echo "$PID is NONEXIST" > /tmp/$PID.status
exit 0
Start the script asynchronously:
- command: "/root/bin/get_status.sh {{ _pid }} {{ _delay }}"
  async: "{{ _timeout }}"
  poll: 0
  register: get_status
In the next step, wait for the process to stop running:
- wait_for:
    path: "/tmp/{{ _pid }}.status"
    search_regex: NONEXIST
    retries: "{{ _retries }}"
    delay: "{{ _delay }}"
After the condition has passed, or the module wait_for has reached its timeout, run async_status to make sure the script terminated:
- async_status:
    jid: "{{ get_status.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: "{{ _retries }}"
  delay: "{{ _delay }}"
Example of a complete playbook:
- hosts: test_11
  vars:
    _timeout: 60
    _retries: "{{ (_timeout/_delay|int)|int }}"
  tasks:
    - debug:
        msg: |-
          time: {{ '%H:%M:%S'|strftime }}
          _pid: {{ _pid }}
          _delay: {{ _delay }}
          _timeout: {{ _timeout }}
          _retries: {{ _retries }}
      when: debug|d(false)|bool
    - command: "/root/bin/get_status.sh {{ _pid }} {{ _delay }}"
      async: "{{ _timeout }}"
      poll: 0
      register: get_status
    - debug:
        var: get_status
      when: debug|d(false)|bool
    - wait_for:
        path: "/tmp/{{ _pid }}.status"
        search_regex: NONEXIST
        retries: "{{ _retries }}"
        delay: "{{ _delay }}"
    - debug:
        msg: "time: {{ '%H:%M:%S'|strftime }}"
      when: debug|d(false)|bool
    - async_status:
        jid: "{{ get_status.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: "{{ _retries }}"
      delay: "{{ _delay }}"
    - debug:
        msg: "time: {{ '%H:%M:%S'|strftime }}"
      when: debug|d(false)|bool
    - file:
        path: "/tmp/{{ _pid }}.status"
        state: absent
      when: _cleanup|d(true)|bool
On the remote host, start a process to be monitored. For example,
root@test_11:/ # sleep 60 &
[1] 28704
Run the playbook, fitting _timeout and _delay to your needs:
shell> ansible-playbook pb.yml -e debug=true -e _delay=3 -e _pid=28704
PLAY [test_11] *******************************************************************************
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: |-
time: 09:06:34
_pid: 28704
_delay: 3
_timeout: 60
_retries: 20
TASK [command] *******************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
get_status:
ansible_job_id: '331975762819.28719'
changed: true
failed: 0
finished: 0
results_file: /root/.ansible_async/331975762819.28719
started: 1
TASK [wait_for] ******************************************************************************
ok: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: 'time: 09:07:27'
TASK [async_status] **************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: 'time: 09:07:28'
TASK [file] **********************************************************************************
changed: [test_11]
PLAY RECAP ***********************************************************************************
test_11: ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Check if a file is 20 hours old in Ansible

I'm able to get the timestamp of a file using the Ansible stat module.
- stat:
    path: "/var/test.log"
  register: filedets

- debug:
    msg: "{{ filedets.stat.mtime }}"
The above prints mtime as 1594477594.631616, which is hard to interpret.
I'd like to know how I can write a when condition that checks whether the file is less than 20 hours old.
You can also achieve this kind of task without the burden of doing any computation yourself, via find and its age parameter.
In your case, you will need a negative value for the age:
Select files whose age is equal to or greater than the specified time.
Use a negative age to find files equal to or less than the specified time.
You can choose seconds, minutes, hours, days, or weeks by specifying the first letter of any of those words (e.g., "1w").
Source: https://docs.ansible.com/ansible/latest/modules/find_module.html#parameter-age
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - file:
        path: /var/test.log
        state: touch
    - find:
        paths: /var
        pattern: 'test.log'
        age: -20h
      register: test_log
    - debug:
        msg: "The file is exactly 20 hours old or less"
      when: test_log.files | length > 0
    - file:
        path: /var/test.log
        state: touch
        modification_time: '202007102230.00'
    - find:
        paths: /var
        pattern: 'test.log'
        age: -20h
      register: test_log
    - debug:
        msg: "The file is exactly 20 hours old or less"
      when: test_log.files | length > 0
This gives the recap:
PLAY [all] **********************************************************************************************************
TASK [file] *********************************************************************************************************
changed: [localhost]
TASK [find] *********************************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************
ok: [localhost] => {
"msg": "The file is exactly 20 hours old or less"
}
TASK [file] *********************************************************************************************************
changed: [localhost]
TASK [find] *********************************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************
skipping: [localhost]
PLAY RECAP **********************************************************************************************************
localhost : ok=5 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
- stat:
    path: "/var/test.log"
  register: filedets

- debug:
    msg: "{{ (ansible_date_time.epoch|float - filedets.stat.mtime) > (20 * 3600) }}"
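The epoch arithmetic in that last snippet is easy to sanity-check outside Ansible; a plain-Python sketch of the same comparison (the function name is mine, for illustration):

```python
import os
import time

def is_younger_than(path: str, max_age_seconds: float) -> bool:
    """True if the file at `path` was modified less than max_age_seconds ago."""
    age = time.time() - os.path.getmtime(path)
    return age < max_age_seconds

# Example: the 20-hour check from the question.
# is_younger_than("/var/test.log", 20 * 3600)
```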

Unable to edit file using lineinfile Ansible module

I'm trying to add a line export TMOUT=50000 to a file /backup/backup.sh using Ansible.
Below is my playbook:
- name: Add Timeout Entry if not present
  lineinfile:
    path: "/backup/backup.sh"
    insertbefore: BOF
    state: present
    line: 'export TMOUT=50000'
I'm getting the error below from the Ansible playbook:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 18] Cannot link to a file on another device.: b'/tmp/tmphtvszlfc' -> b'/wd/backup.sh'
fatal: [10.9.16.133]: FAILED! => {"changed": false, "msg": "The destination directory (/wd) is not writable by the current user. Error was: [Errno 13] The file access permissions do not allow the specified action.: b'/wd/.ansible_tmp3yhi8j7bbackup.sh'"}
Below is the drive information of the target server:
df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 6291456 3323144 48% 42948 6% /
/dev/hd2 6684672 3028820 55% 54304 8% /usr
/dev/hd9var 15728640 810228 95% 13874 7% /var
/dev/hd3 6422528 1856680 72% 4035 1% /tmp
/proc - - - - - /proc
/dev/wdlv 55574528 30603740 45% 315925 5% /wd
/dev/dcclv 37748736 10042348 74% 364803 14% /dcc
/dev/userslv 1048576 1039616 1% 59 1% /users
10.9.9.105:/ifs/data/NAS_RMANBKP 418298160992 14912895200 97% 589129885906 94% /backup
10.9.12.25:/ORACLE_1 32212254720 9509599696 71% 706 1% /ORACLE_1
The file is definitely editable:
/wd>ls -ltr /wd/backup.sh
-rwxr-xr-x 1 user1 dba 173 Apr 05 17:15 /wd/backup.sh
/wd>ls -lad /wd
drwxr-xr-x 1 root system 173 Apr 05 17:15 /wd/backup.sh
Kindly suggest how I can overcome this issue.
The error message is explicit: the user cannot write into the directory containing the file, /wd; you can even see that in your ls output: drwxr-xr-x 1 root system says the directory is writable only by root.
The fix is very simple: add become: yes to your task, to enable privilege escalation when running that lineinfile task:
- name: Add Timeout Entry if not present
  become: yes
  lineinfile:
    path: "/backup/backup.sh"
    insertbefore: BOF
    state: present
    line: 'export TMOUT=50000'

Why eight httpd processes?

I am using a Linux EC2 instance. I see 8 httpd processes immediately after I start Apache, 2 of which are owned by root and the rest by 'apache':
root 22966 1 0 08:03 ? 00:00:00 /usr/sbin/httpd
root 22968 22966 0 08:03 ? 00:00:00 /usr/sbin/httpd
apache 22969 22966 0 08:03 ? 00:00:00 /usr/sbin/httpd
apache 22970 22966 0 08:03 ? 00:00:00 /usr/sbin/httpd
apache 22971 22966 0 08:03 ? 00:00:00 /usr/sbin/httpd
apache 22972 22966 0 08:03 ? 00:00:00 /usr/sbin/httpd
apache 22973 22966 0 08:03 ? 00:00:00 /usr/sbin/httpd
apache 22974 22966 0 08:03 ? 00:00:00 /usr/sbin/httpd
Why are there 8 processes? I assumed this has something to do with the prefork/worker configuration (probably prefork), but my /etc/httpd/conf/httpd.conf file looks like this, and nowhere is 8 (or 6) defined. Maybe I am missing something?
# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers 5
MinSpareServers 3
MaxSpareServers 9
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
</IfModule>
# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers 4
MaxClients 300
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>
