Snowflake SnowSQL !source tab complete

Is there a way to enable tab completion or some other form of auto-complete when using Snowflake's SnowSQL CLI?
I keep having to copy and paste the entire file path whenever I run a script from the command line.
[USER]#(Analytics_WH)#PROD.(data_white)>!source [file_name]
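A possible workaround (a sketch, not a path-completion feature of the !source prompt itself) is to let the shell's own tab completion resolve the path and hand the script to SnowSQL with the -f option instead of sourcing it interactively. The connection name below is a placeholder for whatever is defined in ~/.snowsql/config:
snowsql -c my_connection -f /full/path/to/script.sql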

Related

FileZilla Pro CLI Batch command file not executing

I am currently attempting to use FileZilla Pro CLI on a Windows machine to connect and upload to a site that is working in the Site Manager.
The issue is that the command below works perfectly when pasted directly into the cmd line. However, when it is saved as a batch file, it simply gets to the fzcli> prompt and then nothing happens.
The two line breaks are on purpose, to override the requirement for a password, and it works perfectly when pasted in.
Does anyone know if this is a cmd line issue, or if my commands need to be different to work in batch file mode?
fzcli
connect --site 0testsite01
put C:/inetpub/wwwroot/websites/sftp/files/customer/test-01.txt /test-sftp/testuser01/test/test-01-uploaded.txt
PAUSE
Your batch file executes fzcli in interactive mode. The fzcli then waits for you to enter commands interactively. Only once you exited fzcli would the batch file continue, and then fail, as it would try to execute connect as a batch file command. The fzcli does not know about the batch file, nor does the batch file interpreter know about the fzcli commands.
It's a common misconception. You will find plenty of similar questions about scripting basically any tool that has its own commands, for example: sftp, ftp, psftp, winscp.
To provide commands to fzcli, it seems that you need to use the --script switch. The fzcli documentation gives this example:
fzcli --mode standalone --script C:\Scripts\script-file
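Assuming fzcli reads its commands from a plain text file passed to --script (the script path below is a placeholder), the interactive commands would move into that file, e.g. C:\Scripts\upload-test.txt:
connect --site 0testsite01
put C:/inetpub/wwwroot/websites/sftp/files/customer/test-01.txt /test-sftp/testuser01/test/test-01-uploaded.txt
and the batch file itself would shrink to a single call:
fzcli --mode standalone --script C:\Scripts\upload-test.txt
PAUSE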

253006 (n/a): File doesn't exist:

I am trying to PUT a file into a Snowflake staging area using the PUT command.
When I run the command using snowsql, I am able to PUT the file into the stage area.
When I try to execute the same command from a shell script, it throws the error below.
253006 (n/a): File doesn't exist: ['/home/raghu/sample_pipeline_config.csv']
Any help will be highly appreciated. FYI, I am running the commands in an Ubuntu terminal.
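For comparison, a minimal shell-script invocation of the same PUT would typically look like the line below; the connection and stage names are placeholders, and the local path is written as an absolute file:// URI so it does not depend on whatever directory the script happens to run from:
snowsql -c my_connection -q "PUT file:///home/raghu/sample_pipeline_config.csv @my_stage;"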

SQL Server Agent Job and gsutil

I can successfully run a gsutil command with a Windows domain account from the command line in Windows (setting up the service account key etc.). When I try to run the same command from a SQL Agent Job using a CmdExec task, the job hangs and doesn't complete. I can't see any logging, so I have no clue what it's waiting for. I've set up the job to run with the same proxy user that I use to run the gsutil command manually.
Any ideas how I can get this to work, or how to see more logging?
Are you using standalone gsutil? Or did you get it as part of installing the Cloud SDK (gcloud)?
If the job hangs for a long time, it could be stuck retrying multiple times. To test whether this is the case, you can set the num_retries option to be very small but above 0 (e.g. 1), either in your .boto file or in the command arguments via this option:
gsutil -o 'Boto:num_retries=1' <rest of command here...>
A second thing to note (at least for the version of gsutil that doesn't come with gcloud) is that gsutil looks for your boto config file (which specifies the credentials it should use) in your home directory by default. If you're running gsutil as a different user (maybe your SQL Agent Job runs as its own dedicated user?), it will look for a .boto file in that user's home directory. The same should apply to the gcloud version: gcloud uses credentials based on the user executing it. You can avoid this by copying your .boto file to somewhere the job has permission to read from, and by setting the BOTO_CONFIG environment variable to that path before running gsutil. From the cmd shell, this would look something like:
set BOTO_CONFIG=C:\some\path\.boto && gsutil <rest of command here...>
Note: If you're not sure which boto config file you're normally using, you can find out by running gsutil version -l and looking at the line that displays your config path(s).
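Putting both suggestions together, the CmdExec command for the job step might look something like the line below (the .boto path, local file, and bucket name are placeholders):
set BOTO_CONFIG=C:\some\path\.boto && gsutil -o Boto:num_retries=1 cp C:\exports\backup.bak gs://my-bucket/backup.bak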

Automating putty with .bat file [duplicate]

I want to run a few shell commands every time I SSH to a server via PuTTY. I'm connecting to a production web server managed by someone else, and I don't want to store my own scripts there.
I see the option Connection > SSH > Remote Command, but if I put my initialization commands there, the session closes immediately after the commands execute. How can I run the Remote Command and then keep the session open so I can continue using it?
The SSH session closes (and PuTTY with it) as soon as the command finishes. By default the "command" is a shell. Since you have overridden this default "command" but still want to run the shell, you have to execute the shell explicitly yourself:
my-command ; /bin/bash
See also Executing a specific command on the server.
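Applied to the question, with the initialization commands standing in as placeholders for whatever you want to run on login, the Remote Command field might contain:
cd /var/www/myapp ; tail -n 20 logs/error.log ; /bin/bash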
Another option is to set up your PuTTY Remote Command like this:
ls > dir.ls & /bin/bash
In this example, the command you want to run is "ls > dir.ls", which creates the file dir.ls containing a listing of the directory.
And since you want to leave the shell open, you add the additional command "/bin/bash", or any other shell of your choice.

How to override an already existing workspace in RTC using the scm or lscm command

My requirement is to connect to RTC and automatically check out the files from the stream to the repository workspace.
I am using the following commands in the .bat file.
lscm login -r https://rtc.usaa.com/ccm -u uname -P password -n nickname -c
scm create workspace (workspacename) -r nickname -s (streamname)
lscm load workspace name -r nickname -d directorypath(c:codebase/rtc)
lscm logout -r nickname
When I execute the above batch file the first time, it creates the workspace and loads the project into the workspace path.
When I execute the batch file a second time, it creates a duplicate workspace with the same name and throws an exception while loading.
I want to override the already existing workspace every time while loading, but I didn't find a command for that.
Can you please suggest any other way of doing this, or any command that solves my problem?
It would be good to delete the existing local workspace sandbox before loading the new one. In my setup, we execute the following steps:
1. Delete the local sandbox (and, if it makes sense, delete the existing repository workspace too)
2. Create a new repository workspace
3. Load the new repository workspace into the local sandbox
Either create a uniquely named workspace (perhaps by sticking a time stamp into the name?) and then delete it when you're done, or use the workspace's UUID from the creation step.
Instead of deleting and rewriting the files into the workspace, you can try accepting the incoming changes before the load, and then, with the "--force" attribute, overwrite only the changed files.
Accept using: scm accept --flow-components -r <> -u <> -p <> --target
Use --force at the end of the load command you are using.
This should work fine.
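Folded into the original batch file, that suggestion would look roughly like the sketch below. The flag spelling follows the answer above, and the workspace name, repository nickname, and sandbox path are the ones from the question, so check the exact syntax against lscm help accept and lscm help load before relying on it:
lscm login -r https://rtc.usaa.com/ccm -u uname -P password -n nickname -c
lscm accept --flow-components -r nickname --target workspacename
lscm load workspacename -r nickname -d c:/codebase/rtc --force
lscm logout -r nickname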
