My S3 path is
s3://datahub-processed-zone/pit/2022-03-25/Investor/
and my Snowflake-S3 storage integration points at s3://datahub-processed-zone, with a stage on top of it called com_stage. I am generating a dynamic stage listing and have set a variable for the date.
The dynamic stage listing should read all files in the Investor folder:
set investor_pattern2 = '/'||current_date()||'\/Investor/'
ls @com_stage/pit pattern=investor_pattern2
but it doesn't give me any files. Can someone take a look at why it is not returning any data?
Thanks,
Xi
The variable needs to be prefixed with $ when you use it, e.g.
ls @com_stage/pit pattern=$investor_pattern2
Alternatively, you can access string variables using identifier():
ls @com_stage/pit pattern=identifier('investor_pattern2')
I am knitting an R Markdown document, initiated from a database command button that uses a shell command to execute a batch file. The goal is to knit the file into a directory specific to the database record. The batch file execution currently looks like this:
"Rscript.exe" -e "library('knitr'); rmarkdown::render('MyCode.Rmd', output_file='MyRMD.html')"
Inside the rmd file I create a variable (say 'out_dir') that contains the directory string where I want the output file to be stored. Is there any way I can use this variable to direct where knitr will store the rendered file? Other than YAML parameters, can you control knitr output options from within the code?
This wasn't the approach I was trying to accomplish, but the workaround was to build a front-end script to the markdown document that did the necessary pre-processing and then passed the values using output_file, output_dir, and a params list matching the YAML header in the Rmd. (Note: this part is not necessary for solving the original question, but since I took this approach, it makes it handy to pull even more pre-processing code out of the Rmd and into the front end, passing values through params.) So the front end initiates the knit:
rmarkdown::render("MyCode.Rmd",
                  params = list(P1 = p1, P2 = p2),
                  output_file = 'MyRMD.html',
                  output_dir = 'Dir_From_Database_Record')
And just for completeness, the YAML header from MyCode.Rmd looks like:
output: html_document
params:
  P1: NA
  P2: NA
Is there a way to batch convert Collada dae files to Scenekit scn files?
My project uses approximately 50 models created in SketchUp that are updated regularly. These are exported to DAE, but also need to be converted to SCN files for use in Xcode. I know it can be done manually via Xcode and "Convert to SceneKit scene file format (scn)", but this takes too much manual labour.
Based on https://the-nerd.be/2014/11/07/dynamically-load-collada-files-in-scenekit-at-runtime/ I figured out that scntool is able to convert it via the command line, and wrote the following script:
find ./dae -name "*.dae" | while read -r f ; do
    inputfilename=$(basename "$f")
    echo "$inputfilename"
    ./scntool --convert "$f" --format scn --output "./scn/$inputfilename"
done

for file in ./scn/*.dae; do
    mv "$file" "./scn/$(basename "$file" .dae).scn"
done
@HixField has a good shell script for invoking scntool. Another way to do that is to leverage Xcode's build system, which does the same thing for any .dae files you put in a project's scnassets folder. Even if you're not bundling those files in your app, you can create a dummy Xcode target or project that contains all the assets you want to convert, and it'll convert them all whenever you build the target. (Which you could then integrate into a CI system or other automation.)
I agree with @HixField about everything, except you need to add one more option to scntool to get your materials correct without needing to re-add them all manually:
scntool --convert INPUT.dae --format scn --output OUT.scn --asset-catalog-path .
The dot at the end of the command line is very important: it means the resources are written to the same location.
If you don't set --asset-catalog-path . you will have no materials.
I have to read a file in my shell script. I was using PL/SQL's UTL_FILE to open the file.
But I have to handle a new change which appends a timestamp to the file name,
e.g. the file import.data becomes import_20152005101200.data.
The timestamp is the time at which the file arrives at the server.
Since the file name now changes, I can't use the old way of accessing the file.
I came up with below solution:
UTL_FILE.FOPEN ('path','import_${file_date}.data','r');
To achieve this I have to get the filename, trim it using SUBSTR to extract the timestamp, and pass that to the file_date variable.
However, I am not able to find out how to get the filename in a particular path. I could use basename, but my file name keeps changing because of the timestamp.
Any help/ alternate ideas are welcome.
PL/SQL isn't a good tool to solve this problem; UTL_FILE doesn't have any tools to list all the files in a folder.
A better solution is to define a stored procedure which uses UTL_FILE and pass the file name to process as an argument to the procedure. That way, you use the shell (which has many powerful commands and tools to examine folders and files) or a script language like Python to determine which file to process.
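Following that advice, the file-name part can be handled entirely in the shell before calling the procedure. A minimal sketch (the inbox directory and file name below are made up for illustration; the extracted value would be passed to the stored procedure as its argument):

```shell
#!/bin/sh
# Hypothetical inbox directory with one arrived file, for illustration only
dir=$(mktemp -d)
touch "$dir/import_20152005101200.data"

# Pick the most recently modified import_*.data file in the inbox
f=$(ls -t "$dir"/import_*.data | head -1)

# Strip the directory, the "import_" prefix and the ".data" suffix
name=$(basename "$f" .data)   # import_20152005101200
file_date=${name#import_}     # 20152005101200
echo "$file_date"
```

The full file name (or just the timestamp) can then be handed to the stored procedure, e.g. via sqlplus, so the PL/SQL side never has to guess at directory contents.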
Is there a function equivalent to git diff FILE in libgit2? In other words, how to efficiently retrieve the diff of a single file without having libgit2 to look at other files in the working directory?
git diff FILE will display the changes you've made relative to the index.
This can be achieved through the use of the libgit2 git_diff_index_to_workdir() function.
This function accepts a git_diff_options structure as a parameter into which you can provide a pathspec, or a list of pathspecs, you're specifically interested in.
More information about this:
Documentation of this function.
Unit test leveraging this use case.
I wonder if anyone knows how to write a batch script to edit some text in a .cs file.
What I want to do is change AssemblyVersion("1.0.0.0") to AssemblyVersion("1.0.0.x"), where x is incremented every time the Jenkins job is built.
Best Regards Jan
Do you want to use only a batch script for this? You could also use the Execute Groovy Script option and write a simple Groovy script to achieve this:
def file = new File("folder/path/myfile.cs")
def fileText = file.text
// e.g. match any existing version and splice in the Jenkins build number
def srcExp = /AssemblyVersion\("1\.0\.0\.\d+"\)/
def replaceText = "AssemblyVersion(\"1.0.0.${System.getenv('BUILD_NUMBER')}\")"
fileText = fileText.replaceAll(srcExp, replaceText)
file.write(fileText)
You can also use the available environment variables from your Jenkins job to construct your replacement text. These variables are listed at /env-vars.html on your Jenkins instance.
Stay away from batch-file automation; it will only cause you grief
(for starters, different versions of Windows support different sets of batch commands).
You should incorporate the build number in the script as an environment variable:
use either the built-in %BUILD_NUMBER% parameter, or set your own format with
the Formatted Version Number Plugin.
If you do need to edit that .cs file, I suggest using either Perl or PowerShell.
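If a Unix-style shell happens to be available on the build agent (e.g. via Git Bash or Cygwin), the same edit can also be sketched with sed. The file name, version pattern, and build number below are assumptions for illustration; in a real job BUILD_NUMBER comes from Jenkins:

```shell
#!/bin/sh
# Hypothetical AssemblyInfo fragment, for illustration only
BUILD_NUMBER=42   # provided by Jenkins in a real job
printf '[assembly: AssemblyVersion("1.0.0.0")]\n' > AssemblyInfo.cs

# Replace the last component of the version with the build number
sed -E "s/AssemblyVersion\(\"([0-9]+\.[0-9]+\.[0-9]+)\.[0-9]+\"\)/AssemblyVersion(\"\1.${BUILD_NUMBER}\")/" AssemblyInfo.cs > AssemblyInfo.cs.new
mv AssemblyInfo.cs.new AssemblyInfo.cs
cat AssemblyInfo.cs
```

Writing to a temporary file and renaming avoids the portability differences between GNU and BSD sed's in-place (-i) flag.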
Cheers