Cannot connect to PostgreSQL DB running on EC2 instance

I have a simple PostgreSQL DB running on an EC2 instance.
ubuntu@ip-172-31-38-xx:~$ service postgresql status
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2020-06-19 14:04:12 UTC; 7h ago
Main PID: 11065 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 1152)
CGroup: /system.slice/postgresql.service
Jun 19 14:04:12 ip-172-31-38-xx systemd[1]: Starting PostgreSQL RDBMS...
Jun 19 14:04:12 ip-172-31-38-xx systemd[1]: Started PostgreSQL RDBMS.
ubuntu@ip-172-31-38-xx:~$ psql -U postgres
Password for user postgres:
psql (10.12 (Ubuntu 10.12-0ubuntu0.18.04.1))
Type "help" for help.
postgres=# SELECT *
postgres-# FROM pg_settings
postgres-# WHERE name = 'port';
name | setting | unit | category | short_desc | extra_desc | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | sourcefile | sourceline | pending_restart
------+---------+------+------------------------------------------------------+------------------------------------------+------------+------------+---------+--------------------+---------+---------+----------+----------+-----------+-----------------------------------------+------------+-----------------
port | 5432 | | Connections and Authentication / Connection Settings | Sets the TCP port the server listens on. | | postmaster | integer | configuration file | 1 | 65535 | | 5432 | 5432 | /etc/postgresql/10/main/postgresql.conf | 63 | f
(1 row)
The only Security Group that is associated with this EC2 instance has inbound rules wide open:
5432, TCP, 0.0.0.0/0
But when I use a client to connect to this DB with the correct hostname (public IP/DNS), port, DB name, username, and password, it always says:
could not connect to server: Connection refused, is the server running on host "ec2-dns.com(172.public.ip)" and accepting TCP/IP connections on port 5432?

All right, I've figured it out from this answer.
Two things I did to enable myself to connect (taken exactly from the link above; duplicated here for convenience):
Open this file: sudo vi /etc/postgresql/10/main/pg_hba.conf
Immediately below this line:
host all all 127.0.0.1/32 md5
add this line:
host all all 0.0.0.0/0 md5
Open this file: sudo vi /etc/postgresql/10/main/postgresql.conf
Find the line that starts with this:
#listen_addresses = 'localhost'
Uncomment the line by deleting the #, and change 'localhost' to '*'.
The line should now look like this:
listen_addresses = '*' # what IP address(es) to listen on;
Then restart the service:
sudo service postgresql restart
You should then be able to connect to your DB via a SQL client.
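To confirm the change took effect, a quick sanity check on the instance (assuming the default port 5432) is to verify the listener is now bound to all interfaces, then retry from the client machine:
sudo ss -tlnp | grep 5432    # should now show 0.0.0.0:5432 rather than 127.0.0.1:5432
psql -h <your-ec2-public-dns> -p 5432 -U postgres    # run from the client machine, not the instance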

Are you sure PostgreSQL is listening on the IP address and port number that you are using as the host and port parameters? Try modifying your postgresql.conf file and restarting the server.
sudo nano /etc/postgresql/{YOUR_POSTGRES_VERSION}/main/postgresql.conf
Find the connection settings and update the following values:
listen_addresses = {YOUR_IP_ADDRESS}
port = {YOUR_PORT_NUMBER}
Now save the file and restart the PostgreSQL server:
sudo systemctl restart postgresql
See the PostgreSQL documentation on connection settings for more detail.
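As a quick way to see what the running server actually uses, you can also ask PostgreSQL directly on the instance (a sketch; the psql shipped with PostgreSQL 10 accepts multiple -c flags):
sudo -u postgres psql -c "SHOW listen_addresses;" -c "SHOW port;"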

Related

Can't connect to MongoDB remotely after opening Ubuntu firewall and mongod.conf

Can't connect to MongoDB remotely on a fresh installation of MongoDB on an Ubuntu 20.10 server on Linode.
root@localhost:~# sudo ufw status
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
27017 ALLOW 0.0.0.0
22 (v6) ALLOW Anywhere (v6)
/etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0
The mongo server is up and running:
root@localhost:~# sudo service mongod status
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-04-04 18:34:05 UTC; 19min ago
Docs: https://docs.mongodb.org/manual
Main PID: 1332 (mongod)
Memory: 161.0M
CGroup: /system.slice/mongod.service
└─1332 /usr/bin/mongod --config /etc/mongod.conf
Apr 04 18:34:05 localhost systemd[1]: Started MongoDB Database Server.
netstat on the host running the mongo server:
root@localhost:~# sudo netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 1332/mongod
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 640/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 817/sshd: /usr/sbin
tcp6 0 0 :::22 :::* LISTEN 817/sshd: /usr/sbin
udp 0 0 127.0.0.53:53 0.0.0.0:* 640/systemd-resolve
nc -zv IP_ADDRESS 27017 times out, so mongo -u $DB_USERNAME -p $DB_PASSWORD IP_ADDRESS/admin just times out as well.
nc -zv IP_ADDRESS 22 works as expected.
Solved by running sudo ufw allow 27017 instead of sudo ufw allow from 0.0.0.0 to any port 27017:
root@localhost:~# sudo ufw status
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
27017 ALLOW Anywhere
22 (v6) ALLOW Anywhere (v6)
27017 (v6) ALLOW Anywhere (v6)
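The likely reason the original rule never matched: ufw seems to treat from 0.0.0.0 as a single host address rather than "anywhere" (note the first ufw status above lists 0.0.0.0 in the From column instead of Anywhere), and no real packet carries that source address. If you do want to restrict the source instead of opening the port to everyone, a CIDR range works; the network below is only a placeholder:
sudo ufw allow from 203.0.113.0/24 to any port 27017 proto tcp
sudo ufw status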

SSL_ERROR_SSL MSSQL handshake failure while seeding/migrating data in a dockerized C# microservice

Trying to seed/migrate data from a C# microservice into an MSSQL database container (image: mssql-server-linux:2017-latest).
The connection is successful, but then the exception below is thrown:
exampleapi_1 | fail: Microsoft.EntityFrameworkCore.Database.Connection[20004]
exampleapi_1 | An error occurred using the connection to database 'Domain.exampleManagement.Docker' on server 'DOMAIN-DB'.
exampleapi_1 | fail: Puma.exampleManagement.API.Program[0]
exampleapi_1 | An error occurred while migrating or seeding the database.
exampleapi_1 | Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught)
exampleapi_1 | ---> System.Security.Authentication.AuthenticationException: Authentication failed, see inner exception.
exampleapi_1 | ---> Interop+OpenSsl+SslException: SSL Handshake failed with OpenSSL error - SSL_ERROR_SSL.
exampleapi_1 | ---> Interop+Crypto+OpenSslCryptographicException: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
Can I pass an environment variable similar to ACCEPT_EULA to disable SSL?
In the connection string, I have set Encrypt to False. I could also raise the connection timeout to 600.
Do I need to create a custom Dockerfile using this Docker image as the base? Should I add the line below to that Dockerfile?
RUN sed -i "s|TLSv1.2|TLSv1.0|g" /etc/ssl/openssl.cnf
If there is already a tag with SSL enabled in the Docker Hub repository, kindly share its link.
This workaround finally helped me with the mcr.microsoft.com/dotnet/runtime:5.0-buster-slim Docker image:
RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/g' /etc/ssl/openssl.cnf
RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/g' /usr/lib/ssl/openssl.cnf
RUN sed -i 's/DEFAULT@SECLEVEL=2/DEFAULT@SECLEVEL=1/g' /etc/ssl/openssl.cnf
RUN sed -i 's/DEFAULT@SECLEVEL=2/DEFAULT@SECLEVEL=1/g' /usr/lib/ssl/openssl.cnf
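To confirm those sed lines actually took effect inside a running container (a quick check; the container name is just an example), grep the OpenSSL configs:
docker exec -it exampleapi_1 grep -n -e MinProtocol -e SECLEVEL /etc/ssl/openssl.cnf /usr/lib/ssl/openssl.cnf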

Docker-compose v3 not persisting postgres database

I'm having difficulty persisting postgres data after a docker-compose v3 container is brought down and restarted. This seems to be a common problem, but after a lot of searching I have not been able to find a solution that works.
My question is similar to here: How to persist data in a dockerized postgres database using volumes, but the solution does not work - so please don't close. I'm going to walk through all the steps below to replicate the problem.
Here is my docker-compose file:
version: "3"
services:
db:
image: postgres:latest
environment:
POSTGRES_DB: zennify
POSTGRES_USER: patientplatypus
POSTGRES_PASSWORD: SUPERSECRETPASSWORD
volumes:
- pgdata:/var/lib/postgresql/data:rw
ports:
- 5432:5432
app:
build: .
command: ["go", "run", "main.go"]
ports:
- 8081:8081
depends_on:
- db
links:
- db
volumes:
pgdata:
Here is the terminal output after I bring it up and write to my database:
patientplatypus:~/Documents/zennify.me/backend:08:54:03$docker-compose up
Starting backend_db_1 ... done
Starting backend_app_1 ... done
Attaching to backend_db_1, backend_app_1
db_1 | 2018-08-19 13:54:53.661 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-08-19 13:54:53.661 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-08-19 13:54:53.664 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-08-19 13:54:53.692 UTC [24] LOG: database system was shut down at 2018-08-19 13:54:03 UTC
db_1 | 2018-08-19 13:54:53.712 UTC [1] LOG: database system is ready to accept connections
app_1 | db init successful
app_1 | create_userinfo_table started
app_1 | create_userinfo_table finished
app_1 | inside RegisterUser in Golang
app_1 | here is the users email:
app_1 | %s pwoiioieind@gmail.com
app_1 | here is the users password:
app_1 | %s ANOTHERSECRETPASSWORD
app_1 | value of randSeq, 7NLHzuVRuTSxYZyNP6MxPqdvS0qy1L6k
app_1 | search_userinfo_table started
app_1 | value of OKtoAdd, %t true
app_1 | last inserted id = 1 //I inserted in database!
app_1 | value of initUserRet, added
I can also connect to postgres in another terminal tab and verify that the database was written to correctly using psql -h 0.0.0.0 -p 5432 -U patientplatypus zennify. Here is the output of the userinfo table:
zennify=# TABLE userinfo
;
email | password | regstring | regbool | uid
-----------------------+--------------------------------------------------------------+----------------------------------+---------+-----
pwoiioieind@gmail.com | $2a$14$u.mNBrITUJaVjly15BOV9.Q9XmELYRjYQbhEUi8i4vLWtOr9QnXJ6 | r33ik3Jtf0m9U3zBRelFoWyYzpQp7KzR | f | 1
(1 row)
So writing to the database once works!
HOWEVER
Let's now do the following:
$docker-compose stop
backend_app_1 exited with code 2
db_1 | 2018-08-19 13:55:51.585 UTC [1] LOG: received smart shutdown request
db_1 | 2018-08-19 13:55:51.589 UTC [1] LOG: worker process: logical replication launcher (PID 30) exited with exit code 1
db_1 | 2018-08-19 13:55:51.589 UTC [25] LOG: shutting down
db_1 | 2018-08-19 13:55:51.609 UTC [1] LOG: database system is shut down
backend_db_1 exited with code 0
From reading the other threads on this topic, using docker-compose stop as opposed to docker-compose down should persist the local database. However, if I again run docker-compose up and then, without writing a new value to the database, simply query the table in postgres, it is empty:
zennify=# TABLE userinfo;
email | password | regstring | regbool | uid
-------+----------+-----------+---------+-----
(0 rows)
I had thought that I might be overwriting the table in my code during the initialization step, but I only have (Go snippet):
_, err2 := db.Exec("CREATE TABLE IF NOT EXISTS userinfo(email varchar(40) NOT NULL, password varchar(240) NOT NULL, regString varchar(32) NOT NULL, regBool bool NOT NULL, uid serial NOT NULL);")
Which should, of course, only create the table if it has not been previously created.
Does anyone have any suggestions on what is going wrong? As far as I can tell this has to be a problem with docker-compose and not my code. Thanks for the help!
EDIT:
I've looked into using version 2 of the compose file format, following the format shown in this post (Docker compose not persisting data), with the following compose file:
version: "2"
services:
app:
build: .
command: ["go", "run", "main.go"]
ports:
- "8081:8081"
depends_on:
- db
links:
- db
db:
image: postgres:latest
environment:
POSTGRES_DB: zennify
POSTGRES_USER: patientplatypus
POSTGRES_PASSWORD: SUPERSECRETPASSWORD
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
volumes:
pgdata: {}
Unfortunately again I get the same problem - the data can be written but does not persist between shutdowns.
EDIT EDIT:
Just a quick note that using docker-compose up to instantiate the service and then relying on docker-compose stop and docker-compose start has no material effect on the persistence of the data. It still does not persist across restarts.
EDIT EDIT EDIT:
A couple more things I've found out. If you want to exec into the docker container to see the contents of the database, you can do the following:
docker exec -it backend_db_1 psql -U patientplatypus -W zennify
where backend_db_1 is the name of the database container, patientplatypus is my username, and zennify is the name of the database in that container.
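It can also help to check that the named volume exists and that the db container really mounts it at the data directory (names assume the backend_ project prefix used above):
docker volume inspect backend_pgdata
docker inspect -f '{{ json .Mounts }}' backend_db_1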
I've also tried, with no luck, to add a network bridge to the docker-compose file as follows:
version: "3"
services:
db:
build: ./db
image: postgres:latest
environment:
POSTGRES_USER: patientplatypus
POSTGRES_PASSWORD: SUPERSECRET
POSTGRES_DB: zennify
ports:
- 5432:5432
volumes:
- ./db/pgdata:/var/lib/postgresql/data
networks:
- mynet
app:
build: ./
command: bash -c 'while !</dev/tcp/db/5432; do sleep 5; done; go run main.go'
ports:
- 8081:8081
depends_on:
- db
links:
- db
networks:
- mynet
networks:
mynet:
driver: "bridge"
My current working theory is that, for whatever reason, my golang container is writing its postgres values to local storage rather than the shared volume, and I don't know why. Here is my latest idea of what the Go open command should look like:
data.InitDB("postgres://patientplatypus:SUPERSECRET@db:5432/zennify/?sslmode=disable")
...
func InitDB(dataSourceName string) {
    db, err := sql.Open("postgres", dataSourceName) // sql.Open takes the driver name and the DSN
    ...
}
Again this works, but it does not persist the data.
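One more diagnostic worth running, as a sketch using the names from the compose files above: ask the running postgres which data directory it actually uses, and compare it with the path the volume is mounted on; if they differ, writes never land on the volume.
docker exec backend_db_1 psql -U patientplatypus -d zennify -c "SHOW data_directory;"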
I had exactly the same problem with a postgres db and a Django app running with docker-compose.
It turns out that the Dockerfile of my app was using an entrypoint in which the following command was executed: python manage.py flush, which clears all data in the database. As this ran every time the app container started, the data was wiped on every restart. It had nothing to do with docker-compose.
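If you suspect something similar, you can inspect what a container actually runs at startup (the container name here is just an example):
docker inspect -f '{{ .Config.Entrypoint }} {{ .Config.Cmd }}' backend_app_1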
Named Docker volumes are persisted with the original docker-compose file you are using.
version: "3"
services:
db:
image: postgres:latest
environment:
POSTGRES_DB: zennify
POSTGRES_USER: patientplatypus
POSTGRES_PASSWORD: SUPERSECRETPASSWORD
volumes:
- pgdata:/var/lib/postgresql/data:rw
ports:
- 5432:5432
volumes:
pgdata:
How to prove it?
1) Run docker-compose up -d to create container and volume.
docker-compose up -d
Creating network "docker_default" with the default driver
Creating volume "docker_pgdata" with default driver
Pulling db (postgres:latest)...
latest: Pulling from library/postgres
be8881be8156: Already exists
bcc05f43b4de: Pull complete
....
Digest: sha256:0bbfbfdf7921a06b592e7dc24b3816e25edfe6dbdad334ed227cfd64d19bdb31
Status: Downloaded newer image for postgres:latest
Creating docker_db_1 ... done
2) Write a file in the volume location:
docker-compose exec db /bin/bash -c 'echo "File is persisted" > /var/lib/postgresql/data/file-persisted.txt'
3) Run docker-compose down.
Notice that down removes no volumes, just containers and the network, as per the documentation. You would need to run it with -v to remove volumes.
Stopping docker_db_1 ... done
Removing docker_db_1 ... done
Removing network docker_default
Also notice that your volume still exists:
docker volume ls | grep pgdata
local docker_pgdata
4) Run docker-compose up -d again to start containers and remount volumes.
5) See that the file is still in the volume:
docker-compose exec db /bin/bash -c 'ls -la /var/lib/postgresql/data | grep persisted '
-rw-r--r-- 1 postgres root 18 Aug 20 04:40 file-persisted.txt
Named volumes are not host volumes. Read the documentation or look up some articles that explain the difference. Also note that Docker manages where it stores a named volume's files and that you can use different drivers, but for the moment it is best to just learn the basic difference.
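For example, docker volume inspect shows where the local driver keeps a named volume's files on the host (the Mountpoint below is the usual default for the local driver; treat the exact path as illustrative):
docker volume inspect docker_pgdata
"Mountpoint": "/var/lib/docker/volumes/docker_pgdata/_data"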
Under Environment Variables:
PGDATA
This optional environment variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data, but if the data volume you're using is a fs mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata) be created to contain the data.
More info here: https://hub.docker.com/_/postgres/
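As a sketch of how PGDATA is used in practice, with the subdirectory name the docs suggest:
docker run -d --name pg \
  -e POSTGRES_PASSWORD=SUPERSECRETPASSWORD \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgdata:/var/lib/postgresql/data \
  postgres:latest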

Camel-Netty4 TCP not able to connect to remote server

I'm facing a problem while trying to connect remote server1 with remote server2 using camel-netty4.
When connecting to the remote server it throws the exception below, but it works for localhost.
leTCPNettyServerBootstrapFactory | 313 - org.apache.camel.camel-netty4 - 2.17.0.redhat-630187 | ServerBootstrap unbinding from :
NettyConsumer | 313 - org.apache.camel.camel-netty4 - 2.17.0.redhat-630187 | Netty consumer unbound from: :
BlueprintCamelContext | 234 - org.apache.camel.camel-blueprint - 2.17.0.redhat-630187 | Error occurred during starting Camel: CamelContext() due Cannot assign requested address
java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)[:1.8.0_131]
Please advise on how to resolve this issue, thank you.
I made a mistake while configuring the TCP client and server: a Netty consumer endpoint binds a listening socket, so it must be given a local address, and pointing it at the remote host is what caused java.net.BindException: Cannot assign requested address. I have now created a consumer that listens on the local host and a producer that sends messages to the remote server.

Unable to access Apache2 HTTPD server on browser from remote machine

I have a website deployed on Apache2. The Apache2 server is set up on a VM.
When I try to access the site using a browser from a remote machine (my laptop), I get a connection timed out error.
When I try to access something deployed on Tomcat on the same VM, it works fine, but Apache gives a problem.
Please let me know what I am missing.
Thanks.
1) Check that the httpd process is running:
ps -ef | grep httpd | grep -v grep
2) Make sure something is listening on port 80:
netstat -atn | grep :80
3) Verify in your conf files (/etc/httpd/conf.d/*.conf) that Apache is bound to port 80:
<VirtualHost *:80>
or
<VirtualHost xxx.xxx.xxx.xxx:80>
Your Tomcat process may be bound to port 80 and the socket is not available.
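If all three checks pass, it helps to confirm that Apache answers locally and that the config parses cleanly; that separates an Apache problem from a firewall problem. A quick sanity check (assuming apachectl is on the PATH):
sudo apachectl configtest   # expect "Syntax OK"
curl -I http://localhost/   # run on the VM itself; a response here points at the firewall, not Apache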
On CentOS, run these commands:
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
/etc/init.d/iptables save
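On newer CentOS/RHEL releases that ship firewalld instead of the iptables service (an assumption about your setup; adjust as needed), the equivalent would be:
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload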
