What does the -i option do in `impala-shell -i localhost`?

As per impala documentation, you start impala shell using this command
$ impala-shell -i localhost --quiet
http://www.cloudera.com/documentation/enterprise/latest/topics/impala_tutorial.html#tut_beginner
Any idea what -i does, and why it's needed?

Ok,
I think I found the answer.
By default, impala-shell connects to localhost. So running just
$ impala-shell
connects to the default Impala host on port 21000:
[localhost:21000] >
If you want to manage Impala in a cluster or on a remote server, you should specify a host with the -i option,
such as impala-shell -i host.yourdomain.com
[host.yourdomain.com:21000] >
Or to a different port
such as impala-shell -i host.yourdomain.com:26000
[host.yourdomain.com:26000] >
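For instance, combining -i with -q runs a single statement against a remote impalad and exits (a sketch; host.yourdomain.com is a placeholder):
impala-shell -i host.yourdomain.com:21000 -q 'SHOW DATABASES;' --quiet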

Related

How do I write data to a database on a remote host on another network through an SSH tunnel in a bash script?

I have three machines: PC_A has the Python code, PC_B is on a different network and has the database, and PC_C also has the database and is on the same network as PC_A.
I was able to load the data from PC_A to PC_C with a script like the following:
On PC_A:
#!/bin/sh
# echo "i am in bin testing"
/home/user/env/bin/python3.8 /home/user/load_to_db.py -e IP_OF_PC_C
To load the data to PC_B, I open an SSH tunnel between PC_A and PC_B.
I was able to load the data to PC_B if I do the following:
ssh -L 9200:127.0.0.1:9200 -L 5601:127.0.0.1:5601 user@IP_PC_B -p 30020
where 30020 is the SSH port and 9200/5601 are the Elasticsearch and Kibana ports,
and then run (on PC_A) the script:
#!/bin/sh
# echo "i am in bin testing"
/home/user/env/bin/python3.8 /home/user/load_to_db.py -e localhost
It fails if I write the following in the script:
#!/bin/sh
ssh user@PC_B -p 30020
/home/user/env/bin/python3.8 /home/user/load_to_db.py -e PC_B
I was wondering how to write the above as a script that transfers the data to PC_B over SSH.
Thanks
I googled a bit for "using local port forwarding in background" and "kill the ssh process", did a little test, and below is what I needed:
ssh -N -f -L 9200:127.0.0.1:9200 user@IP_OF_PC_B -p 30020
run the Python code on PC_A
kill $(ps aux | grep ssh | grep 9200 | awk '{print $2}')
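Putting the pieces together, a minimal sketch of the whole thing as one script (same placeholder paths and ports as above; the grep pattern '[s]sh' keeps the grep process itself out of the match):
#!/bin/sh
# open the tunnel in the background: -N runs no remote command, -f backgrounds ssh
ssh -N -f -L 9200:127.0.0.1:9200 user@IP_OF_PC_B -p 30020
# load the data through the tunnel
/home/user/env/bin/python3.8 /home/user/load_to_db.py -e localhost
# tear the tunnel down again
kill $(ps aux | grep '[s]sh' | grep 9200 | awk '{print $2}')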

Import an .sql file into a rds database from local machine

I'm trying to import an .sql file into a new database on my AWS RDS instance. The RDS instance can only be reached via a bastion host over SSH and is not publicly available.
Right now I copy the file to the Bastion Host like that:
scp -i key.pem ~/databases/Datenmodell_init.sql ubuntu@ec2-88-255-112-102.eu-west-1.compute.amazonaws.com:~/ubuntu/Datenmodell_init.sql
But I want to recreate the database directly without copying the file to the EC2 instance; the usual command doesn't work, obviously, since the SSH part is missing:
mysql -h mydb.co4qgzotzpzu.eu-west-1.rds.amazonaws.com -u masteruser -p new1 < ~/databases/Datenmodell_init.sql
How can I import the .sql file into the RDS instance through the bastion host via the terminal?
Merci A
You should be able to set up an SSH tunnel and then use that to connect to the db:
ssh -i key.pem -L 10000:mydb.co4qgzotzpzu.eu-west-1.rds.amazonaws.com:3306 ubuntu@ec2-88-255-112-102.eu-west-1.compute.amazonaws.com -N
mysql -h 127.0.0.1 -P 10000 -u masteruser -p new1 < ~/databases/Datenmodell_init.sql
(Use 127.0.0.1 rather than localhost so the client connects over TCP through the tunnel instead of the local Unix socket.)
There's a full explanation at e.g. https://medium.com/@michalisantoniou6/connect-to-an-aws-rds-using-an-ssh-tunnel-22f3bd597924
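If you would rather not keep a second terminal open for the tunnel, a sketch of the same idea with the tunnel in the background (-f backgrounds ssh, -N runs no remote command) would be:
ssh -i key.pem -f -N -L 10000:mydb.co4qgzotzpzu.eu-west-1.rds.amazonaws.com:3306 ubuntu@ec2-88-255-112-102.eu-west-1.compute.amazonaws.com
mysql -h 127.0.0.1 -P 10000 -u masteruser -p new1 < ~/databases/Datenmodell_init.sql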

scp through intermediate hosts

I am trying to scp files to and from a remote server through an intermediate host. I can successfully do the following:
Scp from remote server (lome.1470mad.mssm.edu) to local desktop through intermediate host (shell.mssm.edu):
scp -r -o 'Host lome.1470mad.mssm.edu' -o 'ProxyCommand ssh hernam13@shell.mssm.edu nc %h %p' matt@lome.1470mad.mssm.edu:/dir1/matt/ .
But I am having trouble copying files in the other direction (from the local host to lome.1470mad.mssm.edu through the intermediate host shell.mssm.edu).
Can someone please clarify how to do this?
Thanks!
It should just work the other way round (switching source and destination):
scp -r -o ProxyCommand="ssh -W %h:%p hernam13@shell.mssm.edu" local.file matt@lome.1470mad.mssm.edu:/dir1/matt/
The -o 'Host lome.1470mad.mssm.edu' is not useful. Instead of ProxyCommand ssh hernam13@shell.mssm.edu nc %h %p, it is better to use ssh's -W switch. If it does not work, what errors do you get?
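On OpenSSH 7.3 or newer you can also skip the ProxyCommand entirely and let ssh handle the hop with ProxyJump; a sketch with the same hosts and users:
scp -r -o ProxyJump=hernam13@shell.mssm.edu local.file matt@lome.1470mad.mssm.edu:/dir1/matt/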

Backup/Restore a dockerized PostgreSQL database

I'm trying to backup/restore a PostgreSQL database as is explained on the Docker website, but the data is not restored.
The volumes used by the database image are:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and the CMD is:
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
I create the DB container with this command:
docker run -it --name "$DB_CONTAINER_NAME" -d "$DB_IMAGE_NAME"
Then I connect another container to insert some data manually:
docker run -it --rm --link "$DB_CONTAINER_NAME":db "$DB_IMAGE_NAME" sh -c 'exec bash'
psql -d test -h $DB_PORT_5432_TCP_ADDR
# insert some data in the db
<CTRL-D>
<CTRL-D>
The tar archive is then created:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /etc/postgresql /var/log/postgresql /var/lib/postgresql
Now I remove the container used for the db and create another one, with the same name, and try to restore the data inserted before:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
But the tables are empty. Why is the data not properly restored?
Backup your databases
docker exec -t your-db-container pg_dumpall -c -U postgres > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
Restore your databases
cat your_dump.sql | docker exec -i your-db-container psql -U postgres
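If you want that backup to run on a schedule, a minimal cron sketch (container name and target directory are placeholders; note that % has to be escaped in a crontab):
0 2 * * * docker exec your-db-container pg_dumpall -c -U postgres > /backups/dump_$(date +\%d-\%m-\%Y).sql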
Backup Database
generate sql:
docker exec -t your-db-container pg_dumpall -c -U your-db-user > dump_$(date +%Y-%m-%d_%H_%M_%S).sql
to reduce the size of the sql you can generate a compress:
docker exec -t your-db-container pg_dumpall -c -U your-db-user | gzip > ./dump_$(date +"%Y-%m-%d_%H_%M_%S").gz
Restore Database
cat your_dump.sql | docker exec -i your-db-container psql -U your-db-user -d your-db-name
to restore a compressed sql:
gunzip < your_dump.sql.gz | docker exec -i your-db-container psql -U your-db-user -d your-db-name
PS: this is a compilation of what worked for me and what I gathered from here and elsewhere. I am beginning to make contributions; any feedback will be appreciated.
I think you can also use a Postgres backup container, which would back up your databases on a given schedule.
pgbackups:
  container_name: Backup
  image: prodrigestivill/postgres-backup-local
  restart: always
  volumes:
    - ./backup:/backups
  links:
    - db:db
  depends_on:
    - db
  environment:
    - POSTGRES_HOST=db
    - POSTGRES_DB=${DB_NAME}
    - POSTGRES_USER=${DB_USER}
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_EXTRA_OPTS=-Z9 --schema=public --blobs
    - SCHEDULE=@every 0h30m00s
    - BACKUP_KEEP_DAYS=7
    - BACKUP_KEEP_WEEKS=4
    - BACKUP_KEEP_MONTHS=6
    - HEALTHCHECK_PORT=81
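With that service in the compose file, a usage sketch would simply be:
docker-compose up -d pgbackups
The rotated dumps then appear under ./backup on the host, per the BACKUP_KEEP_* settings.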
The cat db.dump | docker exec ... approach didn't work for my dump (~2 GB). It took a few hours and ended with an out-of-memory error.
Instead, I copied the dump into the container and ran pg_restore from within it.
Assuming that container id is CONTAINER_ID and db name is DB_NAME:
# copy dump into container
docker cp local/path/to/db.dump CONTAINER_ID:/db.dump
# shell into container
docker exec -it CONTAINER_ID bash
# restore it from within
pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump
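The same restore can also be run without an interactive shell, as a single docker exec (same placeholders as above):
docker cp local/path/to/db.dump CONTAINER_ID:/db.dump
docker exec CONTAINER_ID pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump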
Okay, I've figured this out. PostgreSQL does not detect changes to the folder /var/lib/postgresql once it's launched, at least not the kind of changes I want it to detect.
The first solution is to start a container with bash instead of starting the postgres server directly, restore the data, and then start the server manually.
The second solution is to use a data container. I didn't get the point of it before, now I do.
A data container lets you restore the data before starting the Postgres container. Thus, when the Postgres server starts, the data is already there.
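A minimal sketch of that second approach, using the volumes from the question (the name pg_data is hypothetical):
# data-only container that owns the volumes
docker create -v /etc/postgresql -v /var/log/postgresql -v /var/lib/postgresql --name pg_data busybox true
# restore the tar archive into those volumes first
docker run --rm --volumes-from pg_data -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
# then start postgres on top of the already-restored data
docker run -d --volumes-from pg_data --name "$DB_CONTAINER_NAME" "$DB_IMAGE_NAME"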
The command below can be used to take a dump from a Docker Postgres container:
docker exec -t <postgres-container-name> pg_dump --no-owner -U <db-username> <db-name> > file-name-to-backup-to.sql
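The matching restore for such a single-database dump would be along the lines of:
cat file-name-to-backup-to.sql | docker exec -i <postgres-container-name> psql -U <db-username> -d <db-name>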
The top answer didn't work for me. I kept getting this error:
psql: error: FATAL: Peer authentication failed for user "postgres"
To get it to work I had to specify a user for the docker container:
Backup
docker exec -t --user postgres your-db-container pg_dumpall -c -U postgres > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
Restore
cat your_dump.sql | docker exec -i --user postgres your-db-container psql -U postgres
Another approach (based on docker-postgresql-workflow)
Locally running database (not in Docker, but the same approach would work) to export:
pg_dump -F c -h localhost -U postgres -f export.dmp mydb
Container database to import:
docker run -d -v /local/path/to/postgres:/var/lib/postgresql/data postgres  # runs the container; find its name (`CONTAINERNAME` below) via `docker ps`
docker run -it --link CONTAINERNAME:postgres --volume $PWD/:/tmp/ postgres bash -c 'exec pg_restore -h postgres -U postgres -d mydb -F c /tmp/export.dmp'
I had this issue while trying to use a db dump to restore a db. I normally use DBeaver to restore; however, I received a psql dump, so I had to figure out a method to restore it using the Docker container.
The methodology recommended by Forth and edited by Soviut worked for me:
cat your_dump.sql | docker exec -i your-db-container psql -U postgres -d dbname
(since this was a single-database dump and not multiple DBs, I included the name)
However, in order to get this to work, I also had to go into the virtualenv that the Docker container and project were in. This eluded me for a bit before I figured it out, as I was receiving the following Docker error:
read unix @->/var/run/docker.sock: read: connection reset by peer
This can be caused by the file /var/lib/docker/network/files/local-kv.db. I don't know the accuracy of this statement, but I believe I was seeing this because I do not use Docker locally, and therefore did not have this file, which it was looking for when using Forth's answer.
I then navigated to the correct directory (with the project), activated the virtualenv, and then ran the accepted answer. Boom, worked like a top. Hope this helps someone else out there!
dksnap (https://github.com/kelda/dksnap) automates the process of running pg_dumpall and loading the dump via /docker-entrypoint-initdb.d.
It shows you a list of running containers, and you pick which one you want to backup. The resulting artifact is a regular Docker image, so you can then docker run it, or share it by pushing it to a Docker registry.
(disclaimer: I'm a maintainer on the project)
This is the command that worked for me:
cat your_dump.sql | sudo docker exec -i {docker-postgres-container} psql -U {user} -d {database_name}
for example
cat table_backup.sql | docker exec -i 03b366004090 psql -U postgres -d postgres
Reference: Solution given by GMartinez-Sisti in this discussion.
https://gist.github.com/gilyes/525cc0f471aafae18c3857c27519fc4b
Solution for docker-compose users:
First, run the docker-compose file with either of the following commands: $ docker-compose -f local.yml up OR $ docker-compose -f local.yml up -d
For taking backup: $ docker-compose -f local.yml exec postgres backup
To see list of backups inside container: $ docker-compose -f local.yml exec postgres backups
Open another terminal and run following command: $ docker ps
Look for the CONTAINER ID of postgres image and copy the ID. Let's assume the CONTAINER ID is: ba78c0f9bcee
Now to bring that backup into your local file system, run the following command: $ docker cp ba78c0f9bcee:/backups ./local_backupfolder
Hope this will help someone who was lost just like me.
N.B.: The full details of this solution can be found here.
Another way to do it is to run the pg_restore command from the host machine (assuming, of course, you have Postgres set up on your host machine).
Assume that you have the port mapping "5436:5432" for the postgres service in your docker-compose file. This port mapping lets you access the container's Postgres (running on port 5432) via your host machine's port 5436:
pg_restore -h localhost -p 5436 -U <POSTGRES_USER> -d <POSTGRES_DB> /Path/to/the/.psql/file/in/your/host_machine
This way you do not have to dive into the container's terminal or copy the dump file to the container.
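The matching dump from the host works the same way through the mapped port (a sketch, same placeholders):
pg_dump -h localhost -p 5436 -U <POSTGRES_USER> -F c <POSTGRES_DB> > /Path/on/your/host_machine/backup.dump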
I would like to add the official Docker documentation for backups and restores. This applies to all kinds of data within a volume, not just Postgres.
Backup a container
Create a new container named dbstore:
$ docker run -v /dbdata --name dbstore ubuntu /bin/bash
Then in the next command, we:
Launch a new container and mount the volume from the dbstore container
Mount a local host directory as /backup
Pass a command that tars the contents of the dbdata volume to a backup.tar file inside our /backup directory.
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
When the command completes and the container stops, we are left with a backup of our dbdata volume.
Restore container from backup
With the backup just created, you can restore it to the same container, or another that you made elsewhere.
For example, create a new container named dbstore2:
$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash
Then un-tar the backup file in the new container's data volume:
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
You can use the techniques above to automate backup, migration and restore testing using your preferred tools.
Using a File System Level Backup on Docker Volumes
Example Docker Compose
version: "3.9"
services:
db:
container_name: pg_container
image: platerecognizer/parkpow-postgres
# restart: always
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin
POSTGRES_DB: admin
volumes:
postgres_data:
Backup Postgresql Volume
docker run --rm \
--user root \
--volumes-from pg_container \
-v /tmp/db-bkp:/backup \
ubuntu tar cvf /backup/db.tar /var/lib/postgresql/data
Then copy /tmp/db-bkp to the second host.
Restore Postgresql Volume
docker run --rm \
--user root \
--volumes-from pg_container \
-v /tmp/db-bkp:/backup \
ubuntu bash -c "cd /var && tar xvf /backup/db.tar --strip 1"

Mysqldump connecting issue

I'm trying to make a dump with the following command:
mysqldump -v -u root -p -h 127.0.0.1 -P 3308 -x --add-drop-table --add-locks --create-options -K -e -q -A > database.sql
The result (after entering the password) is the message "Connecting to 127.0.0.1...". After this there is nothing (no errors, just waiting).
database.sql is an empty file.
Why do I see no activity? Is it a bug?
From http://linuxcommand.org/man_pages/mysqldump1.html
The password to use when connecting to the server. If you use the
short option form (-p), you cannot have a space between the option and
the password. If you omit the password value following the --password
or -p option on the command line, you are prompted for one.
The system may be waiting for you to input a password.
If you want to avoid that, just add the password in the command. Assuming your password is "FLOWER":
mysqldump -v -u root -pFLOWER -h 127.0.0.1 -P 3308 -x --add-drop-table --add-locks --create-options -K -e -q -A > database.sql
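Keep in mind that a password on the command line is visible in the process list and in your shell history. A sketch of a safer variant stores it in a client option file instead (the path ~/.my.cnf and the FLOWER password are just the example values; this overwrites any existing ~/.my.cnf):
printf '[client]\npassword=FLOWER\n' > ~/.my.cnf && chmod 600 ~/.my.cnf
mysqldump -v -u root -h 127.0.0.1 -P 3308 -x --add-drop-table --add-locks --create-options -K -e -q -A > database.sql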
This problem, as you describe it, can be caused by the MySQL server not running or not being available on the host (in your case, localhost), or by it running but not on that port.
What kind of a system is it? If it is a flavor of linux/unix, you can run
ps -ef|egrep mysql
to see if the mysql server is running. Check the equivalent command on Windows or whatever else you may be running. Also, you can verify that this is the problem by seeing if this works:
mysql -u root -p -h 127.0.0.1 -P 3308
The solution is to start the server:
/etc/init.d/mysqld start
or the equivalent on your system.
Note: if it is running, determine what port it is on; it is possible that you are not specifying the right port number. The default is 3306, so it is unusual that you are using a non-standard port.
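A quick way to check whether anything is listening where you expect it is mysqladmin's ping (same host and port as above):
mysqladmin -h 127.0.0.1 -P 3308 -u root -p ping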
