I want to start tracking my SnowSQL connections, since I find I am calling the same data file, with different data, multiple times a day. I see I can set a log file for my sessions in the config file, under the SnowSQL Config: Configuration Options section.
Is there a way to organize these log files, so I can track jobs I ran?
Something like Generating Unique Log Files... but with different file/folder names. Can I create different config files for the same SnowSQL installation?
./snowsql -configfile name_of_file_or_folder
You can use -o log_file=~/yourlogfilename, for example:
snowsql -a xy12345 -u jsmith -f /tmp/input_script.sql -o log_file=~/yourlogfilename
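For example, a minimal wrapper sketch that gives each job its own dated log file under a per-job folder (the script name, folder layout, and job-name convention are just illustrative; the account and user are the ones from the example above):

#!/bin/bash
# run_snowsql_job.sh - hypothetical wrapper: one log file per run, grouped by job name
JOB_NAME="$1"                                  # e.g. daily_load
SQL_FILE="$2"                                  # path to the .sql script to run
LOG_DIR="$HOME/snowsql_logs/$JOB_NAME"
mkdir -p "$LOG_DIR"
snowsql -a xy12345 -u jsmith -f "$SQL_FILE" \
        -o log_file="$LOG_DIR/$(date +%Y%m%d_%H%M%S).log"

Each run then lands in something like ~/snowsql_logs/daily_load/20190501_120000.log, which makes it easy to see which jobs ran and when.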
I am trying to download files with a particular datestamp as an extension from a folder through an FTP server. Since the folder contains all other files, I wanted to download only the files with a particular datestamp.
I tried using wget files_datestamp*.extension, which didn't work.
I also tried using wget -i files_datestamp*.extension, which downloads all.
My question is: What option to use with wget to download only particular files that I am interested in?
wget http://collaboration.cmc.ec.gc.ca/cmc/CMOI/NetCDF/NMME/1p0deg/#%23%23/CanCM3_201904_r4i1p1_20190501*.nc4
The link you've shared is over HTTP, not FTP. As a result, it is not possible to glob over the filenames; that is feasible only over FTP.
With HTTP, you need access to a directory listing page that tells you which files are available. Then use -r --accept-regex=<regex here> to download your files.
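For example, a sketch using the directory from the question (adjust the path to the actual listing page, and the regex to the files you want):

wget -r -np -nd -l 1 --accept-regex='CanCM3_201904_r4i1p1_20190501.*\.nc4' http://collaboration.cmc.ec.gc.ca/cmc/CMOI/NetCDF/NMME/1p0deg/

Here -np keeps wget from climbing to parent directories, -nd flattens the output into the current directory, and --accept-regex keeps only URLs matching the .nc4 pattern.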
I am new to ClearCase. Our organization's code is versioned using ClearCase, and I have to edit some code. The code consists of database .ddl files, so 2 .ddl files per package.
I have checked out the pieces of code that I have to use, but I cannot see them anywhere. I have checked the FTP client I am using, as well as my local machine.
Now I am confused about two parts:
After checking out, do I copy the .ddl files from their current location to my ClearCase home and then download them to my PC and use them? That is what I am doing right now.
Or is there any other way to generate the .ddl files from PL/SQL Developer?
I can see the package and package body but cannot find the .ddl files.
Here are the ClearCase terminal commands and responses:
denoad32:ddl $ cleartool lsco -me
--04-03T03:02 Sayan.Sikdar checkout version "XXONT_OH_REL_SC_HOLD_PB.ddl" from /main/R12/8 (reserved)
--04-03T03:02 Sayan.Sikdar checkout version "XXONT_OH_REL_SC_HOLD_PS.ddl" from /main/R12/3 (reserved)
What I am doing right now: I have checked the files out. Now that they are checked out, I am copying them from their current location to my view home. Then I am downloading them and using them.
Basically, you have checked out the files with the command "cleartool co". In order to access the files, you need to be inside your ClearCase view. If you are in the same session as when you performed the checkout, you should have access to the files you checked out.
The usual workflow is:
checkout the file
modify and save the file
checkin the file
All of this must be done inside a ClearCase view; a minimal sketch follows.
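Using one of the files from the question (the checkout/checkin flags are standard cleartool options, and the checkin comment is just illustrative):

cleartool checkout -nc XXONT_OH_REL_SC_HOLD_PB.ddl
# edit the file in place, inside the view
cleartool checkin -c "updated hold release logic" XXONT_OH_REL_SC_HOLD_PB.ddl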
download them to pc and use them?
If your PC has a ClearCase client, it can host a ClearCase view (snapshot or dynamic) and will automatically download checked-out files.
is there any other way to generate the ddl files from PL/SQL developer
If there is, that would explain why you don't see those files: they can be generated.
pg_dump -U user_name -h host database -s -t table_or_view_names -f table_or_view_names.sql
I was trying to restore an SEC form preloaded database from Arelle.org using postgres. Below is the link:
http://arelle.org/documentation/xbrl-database/
It's the one towards the bottom of the page where it says "Preloaded Database".
I was able to download the file, but was unable to gunzip it at first. So I copied the file and renamed it with a .gz extension instead of .gzip. Then I was able to gunzip it, but I'm not sure if that affects the file.
After that, I tried the following command in Postgres to restore the dump into the database that I created:
psql -U username -d mydb -f secfile.pg (no luck)
I also tried:
pg_restore -C -d mydb secfile.pg (also no luck)
I am not sure if it's because I copied and renamed the file, but I'd really appreciate it if anyone could help.
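One thing worth checking, since both commands were tried: psql -f expects a plain-text SQL dump, while pg_restore expects a custom- or tar-format archive (custom-format dumps start with the bytes "PGDMP"). A quick check on the gunzipped file (secfile.pg is the name used above):

head -c 5 secfile.pg

If that prints PGDMP, the pg_restore route applies; if it prints the start of plain SQL text, the psql -U username -d mydb -f secfile.pg route applies.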
I am trying to download an artifact from a Jenkins project using a DOS batch script. The reason this is more than trivial is that my artifact is a ZIP file that includes the Jenkins build number in its name, so I don't know the exact file name.
My current plan of attack is to use wget pointing at: /lastSuccessfulBuild/artifact/
to do some sort of recursive/mirror download.
If I do the following:
wget -r -np -l 1 -A zip --auth-no-challenge --http-user=**** --http-password=**** http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/
(*s are chars I've changed for posting to SO)
I never get a ZIP file. If I omit the -A zip option, I do get the index.html, so I think the authorisation is working, unless it's some sort of session caching issue?
With -A zip I get as part of the response:
Removing ...+8080/job/MyProject/lastSuccessfulBuild/artifact/index.html since it should be rejected.
So I'm not sure if maybe it's removing that file and so not following its links? But doing -A zip,html doesn't work either.
I've tried several wget options, and also curl, but I am getting nowhere.
I don't know if I have the wrong wget options or whether there is something special about Jenkins authentication.
You can add /*zip*/desired_archive_name.zip to any folder of the artifacts location.
If your ZIP file is the only artifact that the job archives, you can use:
http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/myfile.zip
where myfile.zip is just a name you assign to the downloadable archive; it could be anything.
If you have multiple artifacts archived, you can either still get a ZIP file of all of them and deal with the individual ones on extraction, or place the artifact that you want into a separate folder and apply /*zip*/ to that folder.
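For example, combining this with the authentication options from the question (host and credentials masked as in the question, and myfile.zip is just the name chosen for the download):

wget --auth-no-challenge --http-user=**** --http-password=**** -O myfile.zip "http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/myfile.zip"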
I have a requirement where I need to connect to RTC and automatically check out the files from the stream to the repository workspace.
I am writing the following commands in a .bat file.
lscm login -r https://rtc.usaa.com/ccm -u uname -P password -n nickname -c
scm create workspace (workspacename) -r nickname -s (streamname)
lscm load workspace name -r nickname -d directorypath(c:codebase/rtc)
lscm logout -r nickname
When I execute the above batch file for the first time, it creates the workspace and loads the project into the workspace path.
When I execute the batch file a second time, it creates a duplicate workspace with the same name and throws an exception while loading.
I want to overwrite the already existing workspace every time while loading, but I didn't find a command for that.
Can you please suggest any other way of doing this, or any command that solves my problem?
It would be good to delete the existing local workspace sandbox before loading the new one. In my setup, we execute the following steps (a rough sketch follows the list):
1. Delete the local sandbox (if it makes sense, delete the existing repository workspace too)
2. Create a new repository workspace
3. Load the new repository workspace into the local sandbox
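A rough batch sketch of those steps, reusing the placeholder names and the sandbox path from the question (the exact subcommand for deleting an old repository workspace varies by lscm version, so it is only noted as a comment):

rem 1. delete the local sandbox from the previous run
rmdir /S /Q c:\codebase\rtc
rem    (if appropriate, also delete the old repository workspace here; check your lscm version for the exact subcommand)
rem 2. create a new repository workspace from the stream
lscm create workspace workspacename -r nickname -s streamname
rem 3. load it into the sandbox
lscm load workspacename -r nickname -d c:\codebase\rtc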
Either create a uniquely named workspace (perhaps by sticking a time stamp into the name?) and then delete it when you're done, or use the workspace's UUID from the creation step.
Instead of deleting and re-writing the files into the workspace, you can try accepting incoming changes before the load, and then, using the "--force" option, overwrite only the changed files.
Accept using: scm accept --flow-components -r <> -u <> -p <> --target
Use --force at the end of the load command you are using.
This should work fine.
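Putting the two suggestions together with the commands from the question, the second run of the batch file might look roughly like this (workspace name, nickname, and path are the placeholders from the question; treat this as a sketch and check the exact flag spellings against your lscm version):

lscm login -r https://rtc.usaa.com/ccm -u uname -P password -n nickname -c
lscm accept --flow-components -r nickname -u uname -p password --target workspacename
lscm load workspacename -r nickname -d c:\codebase\rtc --force
lscm logout -r nickname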