When I use the !spool command, the results are appended to the target file.
Example:
$ touch current.spool
$ cat curr_ts.sql
!spool current.spool
select CURRENT_TIMESTAMP;
$ snowsql -f curr_ts.sql <--- 1st execution of the script
* SnowSQL * v1.1.86
Type SQL statements or !help
+-------------------------------+
| CURRENT_TIMESTAMP |
|-------------------------------|
| 2020-04-21 13:35:59.983 -0400 |
+-------------------------------+
1 Row(s) produced. Time Elapsed: 0.096s
Goodbye!
$ cat current.spool
+-------------------------------+
| CURRENT_TIMESTAMP |
|-------------------------------|
| 2020-04-21 13:35:59.983 -0400 |
+-------------------------------+
$ snowsql -f curr_ts.sql <--- 2nd (supposedly independent) execution of the script
* SnowSQL * v1.1.86
Type SQL statements or !help
+-------------------------------+
| CURRENT_TIMESTAMP |
|-------------------------------|
| 2020-04-21 13:36:17.629 -0400 |
+-------------------------------+
1 Row(s) produced. Time Elapsed: 0.098s
Goodbye!
$ cat current.spool
+-------------------------------+
| CURRENT_TIMESTAMP |
|-------------------------------|
| 2020-04-21 13:35:59.983 -0400 |
+-------------------------------+
+-------------------------------+
| CURRENT_TIMESTAMP | <--- file NOT replaced by 2nd execution!
|-------------------------------|
| 2020-04-21 13:36:17.629 -0400 |
+-------------------------------+
[edit to hopefully clarify the problem]
If each execution is supposed to create an hourly file, then each execution instead ADDS DATA to the hourly file ... so much for an independent hourly dataset.
I know plenty of workarounds, but this is a specific question about spool: I'd prefer an option that changes the behavior of the spool command itself. In other systems I'm used to the file being overwritten, and this keeps biting me!
A dtrace/strace run confirms that the spool feature always opens the file in O_APPEND mode, which produces the behaviour you're observing. There's no documented way to override this as of this post's date.
~> cat test.sql
!spool filename.txt
SELECT 1;
~> dtruss snowsql -f test.sql
…
open("filename.txt\0", 0x1000209, 0x1B6)
^^^^^^^^^
(0x1000209 on macOS decodes to O_WRONLY | O_APPEND | O_CREAT | O_CLOEXEC; O_APPEND is what makes every run add to the file)
…
~> dtruss snowsql -f test.sql
…
open("filename.txt\0", 0x1000209, 0x1B6)
…
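The effect of those open flags is easy to reproduce outside SnowSQL. A minimal Python sketch (the filename is invented for illustration) opens a file twice with the same O_CREAT | O_APPEND combination the trace shows, and demonstrates that the second open does not truncate:

```python
import os
import tempfile

# Open the file the way the trace shows SnowSQL does:
# O_WRONLY | O_APPEND | O_CREAT (plus O_CLOEXEC on macOS).
flags = os.O_WRONLY | os.O_APPEND | os.O_CREAT

path = os.path.join(tempfile.mkdtemp(), "current.spool")

# "First execution" of the script.
fd = os.open(path, flags, 0o666)
os.write(fd, b"run 1\n")
os.close(fd)

# "Second execution": O_APPEND means the file is NOT truncated,
# so the first run's output is still there.
fd = os.open(path, flags, 0o666)
os.write(fd, b"run 2\n")
os.close(fd)

with open(path) as f:
    print(f.read(), end="")  # both runs' output is present
```

Overwrite semantics would require O_TRUNC instead of O_APPEND, which is exactly the option SnowSQL does not expose.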
SnowSQL, being a custom client for Snowflake DB, does not follow any specific standard behaviour (as some of Snowflake DB's other connectors do - JDBC, ODBC, SQLAlchemy (Python), etc.). Given the comment comparing it with Oracle's SQL*Plus, though, it makes sense to request a similar feature. If you have a support account with Snowflake, I'd recommend raising a feature request.
Related
I'm working on a security-oriented project based on Erlang. This application needs to access some parts of the system restricted to root or other privileged users. Currently, this project will only work on Unix/Linux/BSD systems and should not alter files (read-only access).
I've thought about (and tested) some of these solutions, but I don't know which one to choose with Erlang. Which is the worst? Which is the best? Which is the easiest to maintain?
Thanks!
1 node (as root)
This solution is the worst, and I want to avoid it even on testing servers.
_____________________________
| |
| (root) |
| ___________ _______ |
| | | | | |
| | Erlang VM |<---| Files | |
| |___________| |_______| |
|_____________________________|
You can see here a big picture of what I don't currently want.
#!/usr/bin/env escript
main([]) ->
    ok;
main([H|T]) ->
    {ok, Data} = file:read_file(H),
    io:format("~p: ~p~n", [H, Data]),
    main(T).
Run it as root, and voilà.
su - root
${script_path}/readfile.escript /etc/shadow
1 node (as root) + 1 node (as restricted user)
I need to start 2 nodes: one running as root (or another privileged user) and one running as a restricted user, easily accessible from the outside world. This method works pretty well but has many issues. First, I can't safely connect to the privileged node with the standard Erlang distribution protocol, because connected nodes can make arbitrary remote procedure calls (the restricted node could execute arbitrary commands on the privileged node). I don't know if Erlang can actually filter RPCs before executing them. Second, I need to manage two nodes on one host.
________________ ____________________________
| | | |
| (r_user) | | (root) |
| ___________ | | ___________ _______ |
| | | | | | | | | |
| | Erlang VM |<===[socket]===>| Erlang VM |<---| Files | |
| |___________| | | |___________| |_______| |
|________________| |____________________________|
In the following examples, I will start two Erlang shells. The first shell will be in restricted mode:
su - myuser
erl -sname restricted -cookie ${mycookie}
The second one will run with a privileged user:
su - root
erl -sname privileged -cookie ${mycookie}
Standard Erlang RPC (not enough security)
Finally, on the restricted node (via the shell for this example), I have access to all files:
{ok, Data} = rpc:call(privileged, file, read_file, ["/etc/shadow"]).
With "Firewall" Method
I'm using a local Unix socket in this example, supported only since OTP 19/20. The restricted user needs access to this socket, stored somewhere in /var/run.
1 node (as restricted user) + external commands (with sudo)
I give the Erlang VM process the right to execute commands with sudo. I just need to execute a specific program, capture its stdout and parse it. In this case, I need to use existing programs available on my system or create a new one.
________________ _______________________
| | | |
| (r_user) | | (root) |
| ___________ | | ______ _______ |
| | | | | | | | | |
| | Erlang VM |<===[stdin]===>| sudo |<---| Files | |
| |___________| | | |______| |_______| |
|________________| |_______________________|
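A sketch of this approach in Python rather than Erlang (in Erlang you would drive the same command through os:cmd/1 or open_port/2): plain /bin/cat stands in for the privileged program, and the sudo prefix is left as a comment so the sketch runs without privileges.

```python
import subprocess
import tempfile

def read_via_external(path):
    """Run an external reader and capture its stdout.

    In the real setup the command would be ["sudo", "cat", path]
    (backed by a matching sudoers rule); plain cat is used here so
    the sketch is runnable as an ordinary user.
    """
    result = subprocess.run(
        ["cat", path],          # real use: ["sudo", "cat", path]
        capture_output=True,
        check=True,             # raise if the reader fails
    )
    return result.stdout

# Demo on an ordinary file standing in for a restricted one.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"secret\n")
    name = f.name

print(read_via_external(name))
```

The parsing burden this paragraph mentions is visible here: all you get back is a byte stream on stdout, so any structure has to be re-parsed on the Erlang side.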
1 node (as restricted user) + ports (setuid)
I create a port program with the setuid flag set. This program now has the right to read the files, but I need to place it somewhere secure on the server. If I want to make it dynamic, I should also define a strict protocol between the Erlang VM and this port. IMO, setuid is rarely a good answer.
________________ ________________________
| | | |
| (r_user) | | (root) [setuid] |
| ___________ | | _______/ _______ |
| | | | | | | | | |
| | Erlang VM |<===[stdin]===>| ports |<---| Files | |
| |___________| | | |_______| |_______| |
|________________| |________________________|
1 node (as restricted user) + NIF
I don't think I can give specific rights to a NIF inside the Erlang VM, except maybe with Capsicum or other non-portable/OS-specific kernel features.
_______________
| |
| (r_user) |
| ___________ |
| | | |
| | Erlang VM | |
| |___________| |
| | | |
| | NIF | |
| |___________| | _______
| | | | | |
| | ??? |<---| Files |
| |___________| | |_______|
|_______________|
1 node (as restricted user) + 1 daemon (as root)
I can create a daemon running as root, connected to the Erlang VM with a Unix socket or some other method. This solution is a bit like the port or sudo approaches, except that I need to manage a long-lived privileged daemon.
________________ _________________________
| | | |
| (r_user) | | (root) |
| ___________ | | ________ _______ |
| | | | | | | | | |
| | Erlang VM |<===[socket]==>| daemon |<---| Files | |
| |___________| | | |________| |_______| |
|________________| |_________________________|
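The daemon protocol above can be sketched in a few lines of Python (the one-line request/reply protocol and all names here are invented for illustration; the real daemon would run as root and listen on an AF_UNIX socket under /var/run, and a socketpair plus a thread stand in for that here):

```python
import socket
import tempfile
import threading

# Invented protocol: client sends a path, daemon answers with
# b"OK " + contents, or b"ERR denied" if the path isn't whitelisted.
WHITELIST = set()

def daemon(sock):
    """Privileged side: serve a single whitelisted-file request."""
    path = sock.recv(4096).decode()
    if path in WHITELIST:
        with open(path, "rb") as f:
            sock.sendall(b"OK " + f.read())
    else:
        sock.sendall(b"ERR denied")
    sock.close()

# A socketpair keeps the sketch self-contained.
client, server = socket.socketpair()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"top secret\n")
    allowed = f.name
WHITELIST.add(allowed)

t = threading.Thread(target=daemon, args=(server,))
t.start()
client.sendall(allowed.encode())
reply = client.recv(4096)
t.join()
print(reply)  # b'OK top secret\n'
```

The whitelist is the important part: unlike raw RPC between nodes, the daemon only ever does the one narrow operation you coded into it.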
Custom Erlang VM
OpenSSH and lots of other secure software run as root and create 2 interconnected processes linked by pipes. When the Erlang VM starts as root, 2 processes would be spawned: one as root, and another as a restricted user. When some action requires root privileges, the restricted process sends a request to the root process and waits for its answer. I guess it's the most complex solution presented here, and I don't know C and the Erlang VM internals well enough to make it work well.
______________ _______________
| | | |
| (root) | | (r_user) |
| __________ | | ___________ |
| | | | | | | |
| | PrivProc |<===[pipe]===>| Erlang VM | |
| |__________| | | |___________| |
|______________| |_______________|
From a security perspective your best option is to minimise the amount and complexity of code running with root privileges. So I would rule out all the options where you run a whole Erlang VM as root - there's simply too much code there to lock down safely.
As long as you only need to read some files, the best option would be to write a small C program that you run from the Erlang VM with sudo. All this program has to do is open the file for you and hand the file descriptor over to the Erlang process via a Unix socket. I used to work on a project that relied on this technique to open privileged TCP ports, and it worked like a charm. Unfortunately that wasn't an open source project, but with some googling I found this library that does exactly the same thing: https://github.com/msantos/procket
I'd advise you to fork procket and strip it down a bit (you don't seem to need ICMP support, only regular files opened in read-only mode).
Once you have the file descriptor in the Erlang VM, you can read from it in different ways:
Using a NIF like procket:read/2 does.
Access the file descriptor as an Erlang port, see the network sniffing example in the procket docs.
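The descriptor hand-over described above (which a helper like procket ultimately does with an SCM_RIGHTS control message) can be demonstrated in Python 3.9+ using socket.send_fds/recv_fds. Here the "privileged" side is just the other end of a socketpair rather than a root process:

```python
import os
import socket
import tempfile

# A file standing in for a root-only file such as /etc/shadow.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"shadow contents\n")
    path = f.name

parent, child = socket.socketpair()

# "Privileged" side: open the file, pass only the descriptor.
fd = os.open(path, os.O_RDONLY)
socket.send_fds(child, [b"fd"], [fd])
os.close(fd)

# "Restricted" side: receive the already-open descriptor over the
# socket (SCM_RIGHTS under the hood) and read from it directly,
# without ever having permission to open the file itself.
msg, fds, flags, addr = socket.recv_fds(parent, 16, 1)
data = os.read(fds[0], 4096)
os.close(fds[0])
print(data)  # b'shadow contents\n'
```

This is why the approach is so attractive: the privileged helper performs exactly one open() and exits, and the unprivileged VM ends up with nothing more than a read-only descriptor.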
I was trying to copy my ClearCase view from one server to another.
But the findmerge command is not working and gives an error.
Can anyone help me out with this?
Here is the error message in full:
/home/xkuldub 23>cleartool findmerge `cleartool find -avobs -ele 'brtype(xkuldub_vse_4.0_integration_sds_GIC)' -print -nxname`
cleartool: Error: Cannot get view info for current view: not a ClearCase object.
cleartool: Error: Pathname required.
Usage: findmerge -graphical
findmerge {pname ... | [pname ...] -all | -avobs}
{-ftag view-tag |-fversion version-selector | -flatest}
[-depth | -nrecurse | -directory] [-follow] [-visible]
<general options>
findmerge activity-selector ... -fcsets
[-ftag view-tag |-fversion version-selector | -flatest]
<general options>
[-user login-name] [-group group-name] [-type {f|d|fd}]
[-name 'pattern'] [-element query]
[-nzero] [-nback] [-whynot] [-log log-pname]
[-c checkout-comment | -cfile pname | -cq | -cqe | -nc]
[-unreserved [-nmaster]] [-query | -abort | -qall | -qntrivial]
[-serial] [{-btag | -fbtag } view-tag]
{-print [-long | -short | -nxname]
| {-merge | -okmerge} -blank_ignore
| {-gmerge | -okgmerge}
| -exec command-invocation
| -ok command-invocation
| -co
} ...
It really depends on:
what is your ClearCase version (7? 8? 9?)
what is your client OS? Your ClearCase server OS?
how you moved your view to another server (for instance, did you follow "Moving a view to a host with the same architecture or to a NAS device" or "Moving a view to a host with a different architecture"?)
Your view needs to be properly registered in order for the cleartool command to recognize the current folder as part of a view.
The main reason this isn't working is that your current directory is not in a ClearCase view. findmerge needs a view context to resolve the merges that need to be done.
Start with the embedded find command to get it working, and then branch out.
You also need an action for findmerge to perform, like -merge, -gmerge, or -print.
Be aware that passing a large changeset this way could fail depending on the maximum allowed command line length. If this is a UCM environment, you could do this cross-stream merge either through deliver or findmerge {activity list} -fcsets.
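The command-line length caveat is generic, not ClearCase-specific: when the embedded find returns many elements, you have to batch them. A hypothetical sketch of that batching in Python (xargs does the same job on the shell side; the element names and the 4096-byte budget are invented):

```python
def chunk_args(args, limit=4096):
    """Split a long argument list into chunks whose joined length
    stays under a byte budget, preserving order."""
    chunks, current, size = [], [], 0
    for a in args:
        # +1 accounts for the separating space on the command line.
        if current and size + len(a) + 1 > limit:
            chunks.append(current)
            current, size = [], 0
        current.append(a)
        size += len(a) + 1
    if current:
        chunks.append(current)
    return chunks

# Invented element names in the extended-path style cleartool prints.
elements = ["element%04d.c@@/main/7" % i for i in range(1000)]
batches = chunk_args(elements, limit=4096)
# Each batch would be passed to one findmerge invocation.
print(len(batches))
```

Each batch then becomes one `cleartool findmerge <batch> -ftag ... -print` run instead of a single oversized command line.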
I'm on a Mac, using MySQL Workbench, which tells me the location of my my.cnf file is /etc/, where I'm editing it. I set the permissions on that file with chmod a-w.
In that my.cnf file, I have the following:
sql-mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
Which is what I want. However, upon restarting MySQL and logging into its command line, I get this:
mysql> SELECT @@sql_mode;
| @@sql_mode |
| ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
1 row in set (0.00 sec)
Additional modes are being added. If I run the following (but not persistent) command:
set global sql_mode='NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES';
This is what I'm seeing on console:
mysql> SELECT @@sql_mode;
| @@sql_mode |
| STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION |
Exactly what I want.
Can anyone explain how/why these extra, unwanted SQL modes are being added? And/or how I can get just these two modes to persist without the others?
I have the following databases
sudo -u postgres psql -c "\list"
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+---------+-------+-----------------------
postgres | postgres | LATIN1 | en_US | en_US |
template0 | postgres | LATIN1 | en_US | en_US | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | LATIN1 | en_US | en_US | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)
How can I change encoding from LATIN1 to UTF8 in the database template1 or template0?
Since you don't appear to have any actual data here, just shut down and delete the cluster (server and set of databases) and re-create it. Which operating system are you using? The standard PostgreSQL command to create a new cluster is initdb, but on Debian/Ubuntu et al. you'd typically use pg_createcluster.
See also How do you change the character encoding of a postgres database?
Although you can try to tweak the encodings, it's not recommended. Even though I suggested it in that linked question, if you had data with latin1 characters here, you'd need to recode them to utf-8.
Just use:
update pg_database set encoding = pg_char_to_encoding('UTF8') where datname = 'template1';
I often have multiple viewports open in vim, using the :vsp and :sp commands. After I've been editing for a while, I'll often run the :make command from within vim. When I get errors, vim then shows me the lines that gcc says caused them. However, vim will often open the file containing the errors in another viewport, even if that file is already open. An example:
Before Make
--------------------
| | |
| file 1 | file 2 |
| | |
| | |
--------------------
Ok, assume there are errors in file 2
--------------------
| | |
| file 2 | file 2 |
| | |
| | |
--------------------
vim now jumps to the error line in the left viewport, even though the right viewport already had that file open.
Is there some way to tell vim to jump to the viewport that already has the file open, instead of loading the file into another viewport?
Try setting the option switchbuf=useopen (:set switchbuf=useopen). With useopen, quickfix jumps such as those from :make and :cnext go to a window where the buffer is already open instead of loading the buffer into the current window.