disable NOTICES in psql output

How do I stop psql (PostgreSQL client) from outputting notices? e.g.
psql:schema/auth.sql:20: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "users_pkey" for table "users"
In my opinion, a program should be silent unless it has an error or some other important reason to produce output.

SET client_min_messages TO WARNING;
That could be set only for the session or made persistent with ALTER ROLE or ALTER DATABASE.
Or you could put that in your ".psqlrc".
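For example, to make the setting persistent for a role or a database (names are placeholders):
ALTER ROLE myrole SET client_min_messages = warning;
ALTER DATABASE mydb SET client_min_messages = warning;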

Probably the most comprehensive explanation is in Peter Eisentraut's blog entry here (Archive).
The original post is worth studying in full, but the final recommendation is something like this (set client_min_messages via PGOPTIONS; skip the .psqlrc, run quietly, echo each statement, wrap the script in a single transaction, stop on the first error, and disable the pager):
PGOPTIONS='--client-min-messages=warning' psql -X -q -a -1 -v ON_ERROR_STOP=1 --pset pager=off -d mydb -f script.sql

Use --quiet when you start psql.
A notice is not useless; but that's just my point of view.
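For example (database and script names are placeholders); note that --quiet suppresses psql's own informational output, such as the welcome banner, while server-side NOTICE messages are controlled by client_min_messages:
psql --quiet -d mydb -f script.sql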

It can also be set globally, in the postgresql.conf file, by modifying the client_min_messages parameter.
Example:
client_min_messages = warning
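A change in postgresql.conf takes effect only after the configuration is reloaded, for example:
SELECT pg_reload_conf();  -- from a superuser session; or run "pg_ctl reload" from the shell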

I tried the various solutions suggested in this thread (and permutations thereof), but I was unable to completely suppress the psql output and notifications.
I am executing a claws2postgres.sh Bash script that does some preliminary processing, then calls a psql .sql script to insert thousands of entries into PostgreSQL.
...
PGOPTIONS="-c client_min_messages=error"
psql -d claws_db -f claws2postgres.sql
(Note that, written this way, PGOPTIONS is assigned on its own line but never exported, so it may never reach psql's environment; that could be why the notices persisted.)
Output
[victoria@victoria bash]$ ./claws2postgres.sh
pg_terminate_backend
----------------------
DROP DATABASE
CREATE DATABASE
You are now connected to database "claws_db" as user "victoria".
CREATE TABLE
SELECT 1
INSERT 0 1
UPDATE 1
UPDATE 1
UPDATE 1
Dropping tmp_table
DROP TABLE
You are now connected to database "claws_db" as user "victoria".
psql:/mnt/Vancouver/projects/ie/claws/src/sql/claws2postgres.sql:33: NOTICE: 42P07: relation "claws_table" already exists, skipping
LOCATION: transformCreateStmt, parse_utilcmd.c:206
CREATE TABLE
SELECT 1
INSERT 0 1
UPDATE 2
UPDATE 2
UPDATE 2
Dropping tmp_table
DROP TABLE
[ ... snip ... ]
SOLUTION
Note this modified psql line, where I redirect the psql output:
psql -d claws_db -f $SRC_DIR/sql/claws2postgres.sql &>> /tmp/pg_output.txt
The &>> /tmp/pg_output.txt redirection appends all output (stdout and stderr) to an output file, which can also serve as a log file.
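If you still want to watch the output live while logging it, tee is an alternative; a sketch using the same paths:
psql -d claws_db -f $SRC_DIR/sql/claws2postgres.sql 2>&1 | tee -a /tmp/pg_output.txt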
BASH terminal output
[victoria@victoria bash]$ time ./claws2postgres.sh
pg_terminate_backend
----------------------
DROP DATABASE
CREATE DATABASE
2:40:54 ## 2 h 41 min
[victoria@victoria bash]$
Monitor progress:
In another terminal, execute
PID=$(pgrep -l -f claws2postgres.sh | grep claws | awk '{ print $1 }')
while kill -0 $PID >/dev/null 2>&1; do
  NOW=$(date)
  progress=$(wc -l < /tmp/pg_output.txt)
  printf "\t%s: %i lines\n" "$NOW" $progress
  sleep 60
done
for i in {1..5}; do aplay 2>/dev/null /mnt/Vancouver/programming/scripts/phaser.wav && sleep 0.5; done
...
Sun 28 Apr 2019 08:18:43 PM PDT: 99263 lines
Sun 28 Apr 2019 08:19:43 PM PDT: 99391 lines
Sun 28 Apr 2019 08:20:43 PM PDT: 99537 lines
[victoria@victoria output]$
pgrep -l -f claws2postgres.sh | grep claws | awk '{ print $1 }' gets the script PID, assigned to $PID
while kill -0 $PID >/dev/null 2>&1; do ... : while that script is running, do ...
wc -l < /tmp/pg_output.txt : use the output file's line count as a progress indicator
when done, notify by playing phaser.wav 5 times
phaser.wav: https://persagen.com/files/misc/phaser.wav
Output file:
[victoria@victoria ~]$ head -n22 /tmp/pg_output.txt
You are now connected to database "claws_db" as user "victoria".
CREATE TABLE
SELECT 1
INSERT 0 1
UPDATE 1
UPDATE 1
UPDATE 1
Dropping tmp_table
DROP TABLE
You are now connected to database "claws_db" as user "victoria".
psql:/mnt/Vancouver/projects/ie/claws/src/sql/claws2postgres.sql:33: NOTICE: 42P07: relation "claws_table" already exists, skipping
LOCATION: transformCreateStmt, parse_utilcmd.c:206
CREATE TABLE
SELECT 1
INSERT 0 1
UPDATE 2
UPDATE 2
UPDATE 2
Dropping tmp_table
DROP TABLE
References
[re: solution, above] PSQL: How can I prevent any output on the command line?
[re: this SO thread] disable NOTICES in psql output
[related SO thread] Postgresql - is there a way to disable the display of INSERT statements when reading in from a file?
[relevant to solution] https://askubuntu.com/questions/350208/what-does-2-dev-null-mean
The > operator redirects output, usually to a file, but it can also be to a device. You can also use >> to append.
If you don't specify a file descriptor number, the standard output stream is assumed, but you can also redirect errors:
> file redirects stdout to file
1> file redirects stdout to file
2> file redirects stderr to file
&> file redirects both stdout and stderr to file
&>> file appends both stdout and stderr to file
/dev/null is the null device: it takes any input you want and throws it away. It can be used to suppress any output.
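For example, to split or combine the streams when running a script (file names are hypothetical):
psql -d mydb -f script.sql 2> errors.log        # stdout on screen, stderr to a file
psql -d mydb -f script.sql &>> pg_output.log    # append both streams to one log (bash 4+)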

Offering a suggestion that is useful for a specific scenario I had:
A Windows command shell calls psql.exe to execute one essential SQL command.
I only want to see warnings or errors, and want to suppress NOTICEs.
Example:
psql.exe -c "SET client_min_messages TO WARNING; DROP TABLE IF EXISTS mytab CASCADE"
(I was unable to make things work with PGOPTIONS as a Windows environment variable; I couldn't work out the right syntax, despite trying multiple approaches from different posts.)
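(For reference, the cmd.exe syntax that would normally set such a variable is sketched below, with placeholder database and script names; as noted above, I could not get this working in my environment, so treat it as untested:)
set PGOPTIONS=-c client_min_messages=warning
psql.exe -d mydb -f script.sql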

Related

How to filter dbaccess output in Informix?

I want to run dbaccess <dbname> <sqlfile.sql> and store the output in a shell variable. I know there are two methods: (i) output to a pipe, and (ii) unload to a file. I want to use method (i) to store the query output in a shell variable, but along with the query output I get unwanted text (the connected-to-database message, the column headings, and the disconnected message). I don't want to use method (ii) because I need the query output in a shell variable, not a file. Please help me with this.
One way, though not the best for some cases, is sending stderr to /dev/null.
Let's create a table to test it:
[infx1210@tardis ~]$ dbaccess demo -
Database selected.
> CREATE TABLE starc (col1 INT, col2 INT);
Table created.
> INSERT INTO starc VALUES (1,1);
1 row(s) inserted.
> INSERT INTO starc VALUES (2,2);
1 row(s) inserted.
>
Database closed.
[infx1210@tardis ~]$
For one column and one row, or more, this is quite enough:
[infx1210@tardis ~]$ out_1r1c=`echo "select col1 FROM starc WHERE col1 = 1" | dbaccess demo 2>/dev/null | uniq`
[infx1210@tardis ~]$ echo $out_1r1c
col1 1
[infx1210@tardis ~]$ out_2r1c=`echo "select col1 FROM starc" | dbaccess demo 2>/dev/null | uniq`
[infx1210@tardis ~]$ echo $out_2r1c
col1 1 2
[infx1210@tardis ~]$
For more than one column, probably not the best option:
[infx1210@tardis ~]$ out_1r2c=`echo "select * FROM starc WHERE col1 = 1" | dbaccess demo 2>/dev/null | uniq`
[infx1210@tardis ~]$ echo $out_1r2c
col1 col2 1 1
[infx1210@tardis ~]$ out_2r2c=`echo "select * FROM starc" | dbaccess demo 2>/dev/null | uniq`
[infx1210@tardis ~]$ echo $out_2r2c
col1 col2 1 1 2 2
[infx1210@tardis ~]$
Follow-up question
For what you're doing, simply pass the information from the echo command into a SQL script file and execute it.
For example:
[infx1210@tardis ~]$ echo "CONNECT TO 'sysmaster@infx1210' USER 'starc' USING '${PASSWD}'; SELECT USER FROM sysdual;" | dbaccess -
32412: USING clause unsupported. DB-Access will prompt you for a password.
Error in line 1
Near character position 45
[infx1210@tardis ~]$ finderr 32412
-32412 USING clause unsupported. DB-Access will prompt you for a password.
DB-Access does not support the USING password clause in a CONNECT ...
USER statement when it violates security. For example, do not type a
password on the screen where it can be seen or include it in a command
file that someone other than the user can read. To maintain security,
DB-Access prompts you to enter the password on the screen and uses echo
suppression to hide it from view.
[infx1210@tardis ~]$ echo "CONNECT TO 'sysmaster@infx1210' USER 'starc' USING '${PASSWD}'; SELECT USER FROM sysdual;" > file.sql
[infx1210@tardis ~]$ dbaccess - file.sql 2>> test.log
(expression)
starc
[infx1210@tardis ~]$
I don't like this approach. You should consider using the SET SESSION AUTHORIZATION statement.
For a user to use it, the database-level DBA privilege must be granted, and the SETSESSIONAUTH access privilege is also required. Only a user who holds the DBSECADM role can grant the SETSESSIONAUTH privilege, and only a DBSA can grant the DBSECADM role to a user.
Normally the members of the OS group that owns $INFORMIXDIR/etc are DBSAs; in this case:
[infx1210@tardis ~]$ ls -ld $INFORMIXDIR/etc
drwxrwxr-x. 5 informix informix 4096 May 18 13:33 /opt/IBM/informix/V12.1/etc
[infx1210@tardis ~]$ grep informix /etc/group
informix:x:501:ricardo
[infx1210@tardis ~]$
So, besides the informix user, only ricardo is a DBSA. Let's stick with informix for simplicity.
The next step is to GRANT the DBSECADM role to informix. This is a special role that applies across all databases, so you don't have to grant it one by one:
[infx1210@tardis ~]$ echo "GRANT DBSECADM TO 'informix'" | dbaccess sysmaster
Database selected.
DBSECADM granted.
Database closed.
[infx1210@tardis ~]$
Now, the SETSESSIONAUTH privilege cannot be given to the user itself, so let's give it to ricardo:
[infx1210@tardis ~]$ echo "GRANT SETSESSIONAUTH ON 'starc' TO 'ricardo'" | dbaccess demo
Database selected.
SETSESSIONAUTH privilege granted.
Database closed.
[infx1210@tardis ~]$
Switching to the user ricardo (remember that it must hold the DBA privilege), we can now:
[infx1210@tardis ~]$ echo "SET SESSION AUTHORIZATION TO 'starc'; SELECT USER FROM systables WHERE tabid = 1;" | dbaccess demo 2>>/dev/null
(expression)
starc
[infx1210@tardis ~]$
As noted by Ricardo Henriques in his answer, you can do a certain amount by redirecting standard error.
Also consider the OUTPUT statement:
OUTPUT TO "/dev/stdout" WITHOUT HEADINGS
SELECT * FROM YourTable WHERE …
or the UNLOAD statement:
UNLOAD TO "/dev/stdout"
SELECT * FROM YourTable WHERE …
Using "/dev/stdout" is a trick — a useful one on occasion. You can specify any file name there. You may still want to redirect errors. Be aware that DB-Access blunders on after errors — you can stop it doing so by setting DBACCNOIGN=1 in the environment.
Also, consider checking out SQLCMD which I wrote because it behaves in shell scripting contexts and DB-Access doesn't. It dates back to 1986 (before there was dbaccess; in those days, you used isql instead — DB-Access was carved out of isql in an evening). The current version is SQLCMD 90.00 (2015-11-08). It bears no relation to Microsoft's johnny-come-lately program of the same name — except for the name.
I would add to what was stated previously: first redirect STDERR to /dev/null, then grep -v for ^$ to strip the blank lines.
For example, to select the current date and time from the database:
$> echo "output to /dev/stdout without headings select first 1 current from systables;" > query.sql
$> FOO=`dbaccess mydb query.sql 2>/dev/null | grep -v "^$"`
$> echo $FOO
2017-07-27 14:25:30.000
$>
You just need to make sure your query returns one row, one column only.

Suppress tempdb message when outputting result set

Using SQLCMD, I am running a script that outputs to STDOUT, then gzipping the output. When I look at the output file, I see this warning message:
Database name 'tempdb' ignored, referencing object in tempdb.
In my script, I have a check at the start of the script to drop the temp table if it exists:
IF OBJECT_ID('tempdb..#TheTable') IS NOT NULL
BEGIN
DROP TABLE tempdb..#TheTable
END
However, I also have SET NOCOUNT ON, but the file still captures the warning message.
SQLCMD Script:
sqlcmd -i TheScript.sql -h-1 -k1 -s"," -W -u | gzip > "C:\TheOutput.gz"
Is there a way to suppress a message like that?
Change your if condition to the following pattern:
IF 0 < OBJECT_ID('tempdb..#TheTable')
DROP TABLE #TheTable
This should not result in any error messages.
A simple, clean version that works on SQL Server 2016 and above, without any messages:
drop table if exists #TheTable

Insert SQL statements via command line without reopening connection to remote database

I have a large number of data files to process and store in the remote database. Each line of a data file represents a row in the database, but must be formatted before inserting into the database.
My first solution was to process the data files with bash scripts, producing SQL dump files, and then import those SQL files into the database. This solution seems too slow, and as you can see it involves the extra step of creating an intermediary SQL file.
My second solution was to write bash scripts that, while processing each line of the data file, create an INSERT INTO ... statement and send the SQL statement to the remote database:
echo sql_statement | psql -h remote_server -U username -d database
i.e. it does not create an SQL file. This solution, however, has one major issue on which I am seeking advice:
Each time I have to reconnect to the remote database to insert one single row.
Is there a way to connect to the remote database, stay connected and then "pipe" or "send" the insert-SQL-statement without creating a huge SQL file?
Answer to your actual question
Yes. You can use a named pipe instead of creating a file. Consider the following demo.
Create a schema x in my database event for testing:
-- DROP SCHEMA x CASCADE;
CREATE SCHEMA x;
CREATE TABLE x.x (id int, a text);
Create a named pipe (fifo) from the shell like this:
postgres@db:~$ mkfifo --mode=0666 /tmp/myPipe
Either 1) call the SQL command COPY using a named pipe on the server:
postgres@db:~$ psql event -p5433 -c "COPY x.x FROM '/tmp/myPipe'"
This will acquire an exclusive lock on the table x.x in the database. The connection stays open until the fifo gets data. Be careful not to leave this open for too long! You can call this after you have filled the pipe, to minimize blocking time. You can choose the sequence of events: the command executes as soon as two processes bind to the pipe; the first waits for the second.
Or 2) you can execute SQL from the pipe on the client:
postgres@db:~$ psql event -p5433 -f /tmp/myPipe
This is better suited for your case. Also, there are no table locks until the SQL is executed in one piece.
Bash will appear blocked; it is waiting for input to the pipe. To do it all from one bash instance, you can send the waiting process to the background instead, like this:
postgres@db:~$ psql event -p5433 -f /tmp/myPipe 2>&1 &
Either way, from the same bash or a different instance, you can fill the pipe now.
Demo with three rows for variant 1):
postgres@db:~$ echo '1 foo' >> /tmp/myPipe; echo '2 bar' >> /tmp/myPipe; echo '3 baz' >> /tmp/myPipe;
(Take care to use tabs as delimiters or instruct COPY to accept a different delimiter using WITH DELIMITER 'delimiter_character')
That will trigger the pending psql with the COPY command to execute and return:
COPY 3
Demo for variant 2):
postgres@db:~$ (echo -n "INSERT INTO x.x VALUES (1,'foo')" >> /tmp/myPipe; echo -n ",(2,'bar')" >> /tmp/myPipe; echo ",(3,'baz')" >> /tmp/myPipe;)
INSERT 0 3
Delete the named pipe after you are done:
postgres@db:~$ rm /tmp/myPipe
Check success:
event=# select * from x.x;
 id |  a
----+-----
  1 | foo
  2 | bar
  3 | baz
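Putting variant 2) together in one self-contained script, as a sketch (connection parameters are the placeholders used earlier in this thread):
#!/bin/bash
PIPE=/tmp/myPipe
mkfifo --mode=0666 "$PIPE"
# Start the reader first; psql blocks until the pipe is written to and closed.
psql -h remote_server -U username -d database -f "$PIPE" &
# Feed all statements through a single open/close of the pipe,
# so psql sees one stream ending in EOF.
{
  echo "INSERT INTO x.x VALUES (1,'foo');"
  echo "INSERT INTO x.x VALUES (2,'bar');"
} > "$PIPE"
wait            # wait for the background psql to finish
rm "$PIPE"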
Useful links for the code above
Reading compressed files with postgres using named pipes
Introduction to Named Pipes
Best practice to run bash script in background
Advice you may or may not need
For bulk INSERT you have better solutions than a separate INSERT per row. Use this syntax variant:
INSERT INTO mytable (col1, col2, col3) VALUES
(1, 'foo', 'bar')
,(2, 'goo', 'gar')
,(3, 'hoo', 'har')
...
;
Write your statements to a file and do one mass INSERT like this:
psql -h remote_server -U username -d database -p 5432 -f my_insert_file.sql
(5432 or whatever port the db-cluster is listening on)
my_insert_file.sql can hold multiple SQL statements. In fact, it's common practice to restore / deploy whole databases like that. Consult the manual about the -f parameter, or in bash: man psql.
Or, if you can transfer the (compressed) file to the server, you can use COPY to insert the (decompressed) data even faster.
You can also do some or all of the processing inside PostgreSQL. For that you can COPY TO (or INSERT INTO) a temporary table and use plain SQL statements to prepare and finally INSERT / UPDATE your tables. I do that a lot. Be aware that temporary tables live and die with the session.
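A sketch of that staging pattern, with hypothetical table and file names (\copy performs the copy from the client side, so the data file does not need to be on the server):
psql -h remote_server -U username -d database <<'SQL'
CREATE TEMP TABLE staging (LIKE mytable INCLUDING DEFAULTS);
\copy staging FROM 'data.tsv'
INSERT INTO mytable SELECT * FROM staging;
SQL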
You could use a GUI like pgAdmin for comfortable handling. A session in an SQL Editor window remains open until you close the window. (Therefore, temporary tables live until you close the window.)
I know I'm late to the party, but why couldn't you combine all your INSERT statements into a single string, with a semicolon marking the end of each statement? (Warning! Pseudocode ahead...)
Instead of:
while read -r line; do                       # each line of the (placeholder) data file
    sql_statement="INSERT whatever YOU want"
    echo "$sql_statement" | psql ...
done < data_file
Use:
sql_statements=""
for each line
sql_statement="INSERT whatever YOU want;"
sql_statements="$sql_statements $sql_statement"
done
echo $sql_statements | psql ...
That way you don't have to create anything on your filesystem, do a bunch of redirection, run any tasks in the background, remember to delete anything on your filesystem afterwards, or even remind yourself what a named pipe is.

On-the-fly compression of stdin failing?

From what was suggested here, I am trying to pipe the output from sqlcmd to 7zip so that I can save disk space when dumping a 200GB database. I have tried the following:
> sqlcmd -S <DBNAME> -Q "SELECT * FROM ..." | .\7za.exe a -si <FILENAME>
This does not seem to be working even when I leave the system for a whole day. However, the following works:
> sqlcmd -S <DBNAME> -Q "SELECT TOP 100 * FROM ..." | .\7za.exe a -si <FILENAME>
and even this one:
> sqlcmd -S <DBNAME> -Q "SELECT * FROM ..."
When I remove the pipe symbol, I can see the results and can even redirect them to a file, which finishes in 7 hours.
I am not sure what is going on when piping a large amount of output, but what I could understand up to this point is that 7zip seems to wait to consume the whole input before it creates an archive file (because I don't see a file being created to begin with), so I am not sure it is actually performing on-the-fly compression. So I tried gzip, and here's my experience:
> echo "Test" | .\gzip.exe > test.gz
> .\gzip.exe test.gz
gzip: test.gz: not in gzip format
I am not sure I am doing this the right way. Any suggestions?
Oh boy! It was PowerShell all along! I have no idea why this is happening, at least with gzip: gzip kept complaining that the input was not in gzip format. I switched over to the normal command prompt and everything started working.
I did observe this before. | and > behave differently in PowerShell and the Command Prompt: Windows PowerShell pipes and redirects text rather than raw bytes (it decodes and re-encodes the stream, and > writes UTF-16 by default), which corrupts binary data such as gzip or 7zip output, whereas cmd.exe passes the bytes through untouched.
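One workaround, offered as an untested sketch, is to delegate the entire pipeline to cmd.exe from within PowerShell, so the binary stream never passes through PowerShell's text-based pipeline:
cmd /c 'sqlcmd -S <DBNAME> -Q "SELECT * FROM ..." | .\7za.exe a -si <FILENAME>'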

How do I restore one database from a mysqldump containing multiple databases?

I have a mysql dump with 5 databases and would like to know if there is a way to import just one of those (using mysqldump or other).
Suggestions appreciated.
You can use the mysql command-line --one-database option (run from the shell, not from the mysql> prompt):
mysql -u root -p --one-database YOURDBNAME < YOURFILE.SQL
Of course, be careful when you do this.
You can also use a mysql dumpsplitter.
You can pipe the dumped SQL through sed and have it extract the database for you. Something like:
cat mysqldumped.sql | \
sed -n -e '/^CREATE DATABASE.*`the_database_you_want`/,/^CREATE DATABASE/ p' | \
sed -e '$d' | \
mysql
The two sed commands:
Only print the lines matching between the CREATE DATABASE lines (including both CREATE DATABASE lines), and
Delete the last line, the second CREATE DATABASE line, from the output, since we don't want MySQL to create a second database.
If your dump does not contain the CREATE DATABASE lines, you can also match against the USE lines.
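For example, a sketch of the same pipeline matched against the USE lines instead (the database name is a placeholder, and the same last-line caveat applies):
sed -n -e '/^USE `the_database_you_want`/,/^USE `/ p' mysqldumped.sql | \
sed -e '$d' | \
mysql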
