MySQL sql_mode has unwanted, extra modes - my.cnf

I'm on a Mac, using MySQL Workbench, which tells me my my.cnf file is in /etc/, where I'm editing it. I set the permissions on that file with chmod a-w.
In that my.cnf file, I have the following:
sql-mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
This is what I want. However, after restarting MySQL and logging into its command line, I get this:
mysql> SELECT @@sql_mode;
| @@sql_mode |
| ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
1 row in set (0.00 sec)
Additional modes are being added. If I run the following (non-persistent) command:
SET GLOBAL sql_mode = 'NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES';
This is what I'm seeing on console:
mysql> SELECT @@sql_mode;
| @@sql_mode |
| STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION |
Exactly what I want.
Can anyone explain to me how/why these extra, unwanted sql modes are being added? And/or how I can get just these two modes to persist without the others?
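One likely explanation, offered as an assumption since the full file isn't shown: MySQL only honors sql-mode under the [mysqld] group, and the server may be reading a different config file than the one Workbench reports; when the directive isn't picked up, you get the compiled-in default list shown above. A minimal sketch of the relevant part of /etc/my.cnf:
# The server only honors this under the [mysqld] group, not [client] or [mysql]
[mysqld]
sql-mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
You can check which config files the server actually reads with mysqld --verbose --help | grep -A1 "Default options". On MySQL 8.0+ there is also SET PERSIST sql_mode='NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES'; which survives restarts without editing my.cnf at all.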

Related

Camel sftp fails if there are a lot of files (more than 10,000) in the remote directory queried

Has anyone encountered this behavior and found a solution? soTimeout seems to be the parameter to increase, but I had no success with it.
In the log files I found "caused by: pipe closed".
A manual sftp with an 'ls *' command took more than 20 minutes to return a listing, so I suspect a Camel timeout. Can this be set per route?
2020-02-07T15:54:29,624 WARN [com.bank.fuse.filetransfer.config.bankFileTransferManagerLoggingNotifier] (Camel (rabobank-file-transfer-manager-core) thread #4494 - sftp://server.eu/outgoing/attachments) ExchangeFailedEvent | RouteName: SAPSF-ONE-TIME-MIGRATION-18 | OriginatingUri: sftp://server.eu/outgoing/attachments?antInclude=*.pgp&consumer.bridgeErrorHandler=true&delay=20000&inProgressRepository=%23inProgressRepo-SAPSF-ONE-TIME-MIGRATION&knownHostsFile=%2Fhome%2Fjboss%2F.ssh%2Fknown_hosts&move=sent&onCompletionExceptionHandler=%23errorStatusOnCompletionExceptionHandler&password=xxxxxx&privateKeyFile=%2Fhome%2Fjboss%2F.ssh%2Fid_rsa&readLock=none&soTimeout=1800000&streamDownload=true&throwExceptionOnConnectFailed=true&username=account | completedWithoutException: false | toEndpoint: | owner: [SAP] | sm9CI: [CI5328990] | priority: [Low] | BreadcrumbId: ID-system-linux-bank-com-42289-1580217016920-0-5929700 | exception: Cannot change directory to: ..
Maybe soTimeout=1800000 was too short; a manual sftp and 'ls *' took about 20 minutes.
Since this was a one-time action, I resolved it with a manual sftp.
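On the per-route question: soTimeout is a Camel FTP/SFTP endpoint URI option, so it can be set independently on each route's endpoint. A hedged sketch (the host, path, and the 2-hour value are placeholders, not tested against this setup):
sftp://server.eu/outgoing/attachments?antInclude=*.pgp&streamDownload=true&soTimeout=7200000
connectTimeout and timeout are separate URI options on the same endpoint, in case the failure turns out to be connection setup rather than the data socket.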

Generating unique log files when running SnowSQL

(submitting on behalf of a Snowflake User)
I understand that you can configure log files by following this documentation (https://docs.snowflake.net/manuals/user-guide/snowsql-config.html#configuration-options-section) and the following snippet:
| log_bootstrap_file | ~/.snowsql/log_... | SnowSQL bootstrap log file location |
| log_file | ~/.snowsql/log | SnowSQL main log file location |
BUT(!) is there a way to save the log files of different jobs under different paths?
Any recommendations would be greatly appreciated! THANKS!
I would do something like the following, where I add the log file location to the snowsql command and my config file has a connection named configName.
snowsql -c configName -o log_file=~/.snowsql/"$(date +'%Y%m%d_%H%M%S')"log
This example uses a pretty-close-to-unique name for the log file; assuming you don't have two processes starting in the same second, this should work.
If you need to modify the path (e.g. /tmp/log/uniqueName/logfile.log), you could use OS environment variables in the same fashion, as sketched below; note, however, that you'd likely have to create that folder/directory first.
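A minimal sketch of that variant (the LOG_DIR variable and paths are hypothetical; the mkdir -p is the "create that folder first" step):
# Hypothetical per-job log directory, created before snowsql runs
LOG_DIR="/tmp/log/$(date +'%Y%m%d_%H%M%S')"
mkdir -p "$LOG_DIR"
snowsql -c configName -o log_file="$LOG_DIR/logfile.log"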
I hope this helps...Rich

Display ASCII graph of Cleartool ls commands

In Git it is possible to show an ASCII graph of the log with git log --graph, which outputs a command-line graph something like:
* 040cc7c (HEAD, master) Mannual is NOT built by default
* a29ceb7 Removed offensive binary file that was compiled on my machine
| * 901c7dd (cvc3) cvc3 now configured before building
| * d9e8b5e More sane Yices SMT solver caller
| | * 5b98a10 (nullvars) All uninitialized variables get zero inits
| |/
| * 1cad874 CFLAGS for cvc3 to work succesfully
|/
* d642f88 Option -aliasstat, by default stats are suppressed
Is this also possible with ClearCase / ClearTool when using the lsstream or lsvtree commands, without the need to open a GUI?
Since I couldn't find anything that suited me, I created my own Python script with this ability. It is still a little rough, but it works for me.
For anyone interested, it is available here as a GitHub gist.
On the command line, you have cleartool lsvtree.
If you want the history to focus on the branch you are currently on (instead of starting by default at /main), you need to use the -bra/nch branch-pname option.
Starts the version tree listing at the specified branch.
You can also use an extended name as the pname argument (for example, foo.c@@\main\bug405) to start the listing at a particular branch.
But if you need additional information, such as the author, then you would need to fall back to cleartool lshistory: see "How to ask cleartool lsvtree to show the author's name"
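A couple of hedged examples of the above (foo.c and the branch name are placeholders):
cleartool lsvtree -branch \main\bug405 foo.c
cleartool lshistory -fmt "%u  %Nd  %n\n" foo.c
The first prints the version tree as text, starting at the given branch; the second falls back to lshistory with a -fmt string showing the author (%u), the date (%Nd), and the version name (%n).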

On-the-fly compression of stdin failing?

From what was suggested here, I am trying to pipe the output from sqlcmd to 7zip so that I can save disk space when dumping a 200GB database. I have tried the following:
> sqlcmd -S <DBNAME> -Q "SELECT * FROM ..." | .\7za.exe a -si <FILENAME>
This does not seem to be working even when I leave the system for a whole day. However, the following works:
> sqlcmd -S <DBNAME> -Q "SELECT TOP 100 * FROM ..." | .\7za.exe a -si <FILENAME>
and even this one:
> sqlcmd -S <DBNAME> -Q "SELECT * FROM ..."
When I remove the pipe symbol, I can see the results, and I can even redirect them to a file, which finishes in 7 hours.
I am not sure what is going on when piping a large amount of output, but what I understand so far is that 7zip seems to wait to consume the whole input before creating an archive file (I don't see a file being created at all), so I am not sure it is actually performing on-the-fly compression. So I tried gzip, and here's my experience:
> echo "Test" | .\gzip.exe > test.gz
> .\gzip.exe -d test.gz
gzip: test.gz: not in gzip format
I am not sure I am doing this the right way. Any suggestions?
Oh boy! It was PowerShell all along! I have no idea why this happens, at least with gzip: gzip kept complaining that the input was not in gzip format. I switched over to the normal command prompt and everything started working.
I had observed this before. It looks like | and > behave slightly differently in PowerShell and the Command Prompt. I'm not sure what exactly the difference is, but if someone knows, please add it here.
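A probable explanation, offered as an assumption rather than something verified against this setup: Windows PowerShell's pipeline and > operator treat a native command's output as text and re-encode it (by default as UTF-16 for >), which corrupts binary streams such as gzip or 7zip output; cmd.exe passes the raw bytes through. The same commands from cmd.exe:
> echo Test | .\gzip.exe > test.gz
> .\gzip.exe -d test.gz
> sqlcmd -S <DBNAME> -Q "SELECT * FROM ..." | .\7za.exe a -si <FILENAME>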

disable NOTICES in psql output

How do I stop psql (PostgreSQL client) from outputting notices? e.g.
psql:schema/auth.sql:20: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "users_pkey" for table "users"
In my opinion a program should be silent unless it has an error, or some other reason to output stuff.
SET client_min_messages TO WARNING;
That could be set only for the session or made persistent with ALTER ROLE or ALTER DATABASE.
Or you could put that in your ".psqlrc".
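For example (myrole and mydb are placeholder names; the statements are standard PostgreSQL):
-- current session only
SET client_min_messages TO WARNING;
-- persistent, per role or per database
ALTER ROLE myrole SET client_min_messages = warning;
ALTER DATABASE mydb SET client_min_messages = warning;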
Probably the most comprehensive explanation is in Peter Eisentraut's blog entry here (Archive).
I would strongly encourage that the original blog post be studied and digested, but the final recommendation is something like:
PGOPTIONS='--client-min-messages=warning' psql -X -q -a -1 -v ON_ERROR_STOP=1 --pset pager=off -d mydb -f script.sql
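For reference, the flags in that line are all standard psql options: -X skips ~/.psqlrc, -q runs quietly, -a echoes all input lines (useful for logging; drop it for fully silent runs), -1 wraps the script in a single transaction, -v ON_ERROR_STOP=1 aborts on the first error, and --pset pager=off disables the pager.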
Use --quiet when you start psql.
A notice is not useless, but that's my point of view.
It can be set in the global postgresql.conf file as well, by modifying the client_min_messages parameter.
Example:
client_min_messages = warning
I tried the various solutions (and permutations thereof) suggested in this thread, but I was unable to completely suppress psql output/notifications.
I am executing a claws2postgres.sh bash script that does some preliminary processing, then calls/executes a psql .sql script to insert thousands of entries into PostgreSQL.
...
PGOPTIONS="-c client_min_messages=error"
psql -d claws_db -f claws2postgres.sql
(Note: assigned on its own line like this, without export, PGOPTIONS remains a plain shell variable and never reaches psql's environment, which may be why the notices were not suppressed.)
Output
[victoria@victoria bash]$ ./claws2postgres.sh
pg_terminate_backend
----------------------
DROP DATABASE
CREATE DATABASE
You are now connected to database "claws_db" as user "victoria".
CREATE TABLE
SELECT 1
INSERT 0 1
UPDATE 1
UPDATE 1
UPDATE 1
Dropping tmp_table
DROP TABLE
You are now connected to database "claws_db" as user "victoria".
psql:/mnt/Vancouver/projects/ie/claws/src/sql/claws2postgres.sql:33: NOTICE: 42P07: relation "claws_table" already exists, skipping
LOCATION: transformCreateStmt, parse_utilcmd.c:206
CREATE TABLE
SELECT 1
INSERT 0 1
UPDATE 2
UPDATE 2
UPDATE 2
Dropping tmp_table
DROP TABLE
[ ... snip ... ]
SOLUTION
Note this modified PSQL line, where I redirect the psql output:
psql -d claws_db -f $SRC_DIR/sql/claws2postgres.sql &>> /tmp/pg_output.txt
The &>> /tmp/pg_output.txt redirect appends all output to an output file, which can also serve as a log file.
BASH terminal output
[victoria@victoria bash]$ time ./claws2postgres.sh
pg_terminate_backend
----------------------
DROP DATABASE
CREATE DATABASE
2:40:54 ## 2 h 41 min
[victoria@victoria bash]$
Monitor progress:
In another terminal, execute
PID=$(pgrep -l -f claws2postgres.sh | grep claws | awk '{ print $1 }'); while kill -0 $PID >/dev/null 2>&1; do NOW=$(date); progress=$(cat /tmp/pg_output.txt | wc -l); printf "\t%s: %i lines\n" "$NOW" $progress; sleep 60; done; for i in {1..5}; do aplay 2>/dev/null /mnt/Vancouver/programming/scripts/phaser.wav && sleep 0.5; done
...
Sun 28 Apr 2019 08:18:43 PM PDT: 99263 lines
Sun 28 Apr 2019 08:19:43 PM PDT: 99391 lines
Sun 28 Apr 2019 08:20:43 PM PDT: 99537 lines
[victoria@victoria output]$
pgrep -l -f claws2postgres.sh | grep claws | awk '{ print $1 }' gets the script PID, assigned to $PID
while kill -0 $PID >/dev/null 2>&1; do ... : while that script is running, do ...
cat /tmp/pg_output.txt | wc -l : use the output file line count as a progress indicator
when done, notify by playing phaser.wav 5 times
phaser.wav: https://persagen.com/files/misc/phaser.wav
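The same loop as a readable multi-line script (identical logic, except wc -l reads the file directly):
PID=$(pgrep -l -f claws2postgres.sh | grep claws | awk '{ print $1 }')
# while the script is still running, print a line count every minute
while kill -0 $PID >/dev/null 2>&1; do
    NOW=$(date)
    progress=$(wc -l < /tmp/pg_output.txt)
    printf "\t%s: %i lines\n" "$NOW" "$progress"
    sleep 60
done
# when done, play the notification sound five times
for i in {1..5}; do
    aplay 2>/dev/null /mnt/Vancouver/programming/scripts/phaser.wav && sleep 0.5
done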
Output file:
[victoria@victoria ~]$ head -n22 /tmp/pg_output.txt
You are now connected to database "claws_db" as user "victoria".
CREATE TABLE
SELECT 1
INSERT 0 1
UPDATE 1
UPDATE 1
UPDATE 1
Dropping tmp_table
DROP TABLE
You are now connected to database "claws_db" as user "victoria".
psql:/mnt/Vancouver/projects/ie/claws/src/sql/claws2postgres.sql:33: NOTICE: 42P07: relation "claws_table" already exists, skipping
LOCATION: transformCreateStmt, parse_utilcmd.c:206
CREATE TABLE
SELECT 1
INSERT 0 1
UPDATE 2
UPDATE 2
UPDATE 2
Dropping tmp_table
DROP TABLE
References
[re: solution, above] PSQL: How can I prevent any output on the command line?
[re: this SO thread] disable NOTICES in psql output
[related SO thread] Postgresql - is there a way to disable the display of INSERT statements when reading in from a file?
[relevant to solution] https://askubuntu.com/questions/350208/what-does-2-dev-null-mean
The > operator redirects output, usually to a file, but it can be to a device. You can also use >> to append.
If you don't specify a number, then the standard output stream is assumed, but you can also redirect errors:
> file redirects stdout to file
1> file redirects stdout to file
2> file redirects stderr to file
&> file redirects stdout and stderr to file
/dev/null is the null device: it takes any input you give it and throws it away. It can be used to suppress any output.
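Along the same lines, a blunt option when you only want a clean terminal (mydb and script.sql are placeholders; note this discards errors as well as notices, since both arrive on stderr):
psql -d mydb -f script.sql 2>/dev/null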
Offering a suggestion that is useful for a specific scenario I had:
Windows command shell calls psql.exe to execute one essential SQL command
Only want to see warnings or errors, and suppress NOTICEs
Example:
psql.exe -c "SET client_min_messages TO WARNING; DROP TABLE IF EXISTS mytab CASCADE"
(I was unable to make things work with PGOPTIONS as a Windows environment variable; I couldn't work out the right syntax, despite trying multiple approaches from different posts.)
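For completeness, an untested sketch of the cmd.exe syntax that should set PGOPTIONS as an environment variable (treat it as a guess, since this is exactly what I could not get working; mydb and script.sql are placeholders):
set "PGOPTIONS=-c client_min_messages=warning"
psql.exe -d mydb -f script.sql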
