Correct use of arguments in camel exec - apache-camel

I can't figure out the syntax for doing the following:
bteq < /data/bteqs/test.bteq
Using camel-exec http://camel.apache.org/exec.html with blueprint XML.
I'm probably missing something pretty trivial here - so far I've tried to pass the whole "< /data/bteqs/test.bteq" part as an argument.
I've also tried to overcome the issue by using eval:
<to uri="exec:eval?args=bteq &lt; /data/bteqs/test.bteq"/>
But apparently eval doesn't work with exec, at least on my OS:
2018-05-23 12:50:15,017 | INFO | .xml-43_Worker-2 | bteq-test-route
| 43 - org.apache.camel.camel-core - 2.16.5 | ERROR :: Unable to execute
command ExecCommand [args=[bteq < /data/bteqs/test.bteq], executable=eval,
timeout=9223372036854775807, outFile=null, workingDir=null,
useStderrOnEmptyStdout=false]
STACKTRACE :: org.apache.camel.component.exec.ExecException: Unable to
execute command ExecCommand [args=[bteq < /data/bteqs/test.bteq],
executable=eval, timeout=9223372036854775807, outFile=null, workingDir=null,
useStderrOnEmptyStdout=false]

Just a guess, but perhaps you have to use the shell as "runtime" to execute the command.
<to uri="exec:sh?args=-c 'bteq &lt; /data/bteqs/test.bteq'"/>
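The underlying issue can be reproduced outside Camel: redirection is performed by the shell, not by the executed program, so an exec-style invocation passes `<` to the program as a literal argument. A minimal sketch of the `sh -c` wrapping (assuming any POSIX shell; the file path is illustrative):

```shell
# "<" is interpreted by the shell, not by the program being exec'd.
# Passing "< file" as an argument hands the program a literal "<" string.
# Wrapping the command in `sh -c` lets the shell perform the redirection:
printf 'hello\n' > /tmp/exec_demo_input.txt
sh -c 'cat < /tmp/exec_demo_input.txt'
```

This prints `hello`, which is why the Camel answer routes the command through a shell executable rather than invoking bteq directly.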

Related

Camel sftp fails if there are a lot of files (more than 10,000) in the queried remote directory

Has anyone else encountered this behavior and knows a solution? soTimeout seems to be the parameter to increase, but I had no success with it.
In the log files I found "caused by: pipe closed".
A manual sftp and 'ls *' command took more than 20 minutes to get a listing back, so I guess it is a Camel timeout. Can this be set per route?
2020-02-07T15:54:29,624 WARN [com.bank.fuse.filetransfer.config.bankFileTransferManagerLoggingNotifier] (Camel (rabobank-file-transfer-manager-core) thread #4494 - sftp://server.eu/outgoing/attachments) ExchangeFailedEvent | RouteName: SAPSF-ONE-TIME-MIGRATION-18 | OriginatingUri: sftp://server.eu/outgoing/attachments?antInclude=*.pgp&consumer.bridgeErrorHandler=true&delay=20000&inProgressRepository=%23inProgressRepo-SAPSF-ONE-TIME-MIGRATION&knownHostsFile=%2Fhome%2Fjboss%2F.ssh%2Fknown_hosts&move=sent&onCompletionExceptionHandler=%23errorStatusOnCompletionExceptionHandler&password=xxxxxx&privateKeyFile=%2Fhome%2Fjboss%2F.ssh%2Fid_rsa&readLock=none&soTimeout=1800000&streamDownload=true&throwExceptionOnConnectFailed=true&username=account | completedWithoutException: false | toEndpoint: | owner: [SAP] | sm9CI: [CI5328990] | priority: [Low] | BreadcrumbId: ID-system-linux-bank-com-42289-1580217016920-0-5929700 | exception: Cannot change directory to: ..
Maybe soTimeout=1800000 was too short; the manual sftp and 'ls *' took about 20 minutes.
Since this was a one-time action, I resolved it with a manual sftp.
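On the "per route" part of the question: soTimeout is an endpoint URI option in camel-ftp, so it applies only to the route that consumes that endpoint. A hedged config sketch (the value is illustrative, not a recommendation):

```xml
<!-- soTimeout is set on the endpoint URI, hence per route.
     3600000 ms (1 h) is only an illustration for a very slow listing. -->
<from uri="sftp://server.eu/outgoing/attachments?username=account&amp;streamDownload=true&amp;soTimeout=3600000"/>
```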

How to pass input parameter in stored procedure calling from single command line

I know how to execute a stored procedure from a single command line:
echo execute some_procedure | sqlplus username/password@databasename
But I am stuck on how to pass IN parameters; my procedure takes two parameters.
I tried this, but it's not working:
echo execute some_procedure(123,234) | sqlplus username/password@databasename
It would be great if someone could help me with this.
With what you've shown, you either need to escape the parentheses:
echo execute some_procedure\(123,234\) | sqlplus username/password@databasename
Or enclose your command in double quotes:
echo "execute some_procedure(123,234)" | sqlplus username/password@databasename
Either stops the shell trying to interpret the parentheses itself, which would otherwise give you a `syntax error: '(' unexpected` or similar. It's nothing to do with Oracle, really; it's just how the shell interpreter works, before it gets as far as piping the echoed string to SQL*Plus.
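Both forms can be checked in any POSIX shell without Oracle at all, since the failure happens before sqlplus is ever invoked (illustrated here with echo alone):

```shell
# Unquoted parentheses are shell syntax (subshell grouping), so
#   echo execute some_procedure(123,234)
# dies in the shell with: syntax error near unexpected token `('
# Quoting or escaping makes them literal characters:
echo "execute some_procedure(123,234)"
echo execute some_procedure\(123,234\)
```

Both lines print `execute some_procedure(123,234)`, exactly what you want piped to SQL*Plus.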
Incidentally, I'd generally use a heredoc for this sort of thing, and avoid putting the credentials on the command line so they aren't visible via ps; for example:
sqlplus -s /nolog <<!EOF
connect username/password@databasename
execute some_procedure(123,234)
exit
!EOF

How do I run a piped powershell command from command line?

I am trying to run the following powershell command from the windows 7 command line:
powershell ls 'C:/path to file/' | ForEach-Object {$_.LastWriteTime=Get-Date}
I have encountered several errors. When I run the above command, I get the error:
'ForEach-Object' is not recognized as an internal or external command,
operable program or batch file.
I changed the command to:
powershell ls 'C:/My Programs/CPU Analysis/data/test/' | powershell ForEach-Object {$_.LastWriteTime=Get-Date}
Now I am getting the error:
Property 'LastWriteTime' cannot be found on this object; make sure it exists
and is settable.
At line:1 char:17
+ ForEach-Object {$_.LastWriteTime=Get-Date}
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : PropertyNotFound
How can I modify this command to work from the command line?
Update
Both solutions are basically saying the same thing, but @Trevor Sullivan's answer is clearer.
cmd.exe doesn't understand ForEach-Object. Plus, you're trying to split execution across two separate PowerShell processes, which is not going to work in this scenario.
You'll need to run the whole command in PowerShell.
powershell "ls 'C:/My Programs/CPU Analysis/data/test/' | ForEach-Object {$_.LastWriteTime = Get-Date}"
I'm not sure what you are trying to achieve, but if you are after files and their last-modified times, use this:
powershell "ls 'C:\path' | ft name,LastWriteTime"
All you have to do is enclose your command in double quotes ".
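The same rule can be tested without PowerShell: in any outer shell, an unquoted `|` is consumed by that outer shell and splits the command line, while quoting hands the whole pipeline to the inner interpreter intact:

```shell
# Unquoted: the outer shell would split at "|", so the inner command never
# sees the pipeline. Quoted: the inner shell runs the whole pipeline itself.
sh -c 'printf "b\na\n" | sort'
```

This prints `a` then `b`, proving the pipe executed inside the inner shell, which is exactly what quoting the PowerShell command line achieves under cmd.exe.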

Unix File Descriptor Not Found

I am having some trouble using a file descriptor. The end goal is to use flock for locking, because this script updates a file and may be run multiple times in parallel, and I do not want any collisions. The script is called from another script and passed variables.
Calling script, "call.sh":
#!/bin/ksh
scriptDir=/home/Scripts
###other stuff happens####
#Call to replacement script
. $scriptDir/replacement.sh var1 var2
Replacement script, "replacement.sh":
#!/bin/ksh
var1=$1
var2=$2
file=/myfile.doc
exec 300>>$file
flock -x 300
##Replacement logic###
When I run call.sh regular or in debug (ksh) I get an error:
./call.sh: /replacement.sh[34]: 300: not found
At first I thought maybe the file descriptor needed to be in the first script too, so I added:
exec 300>>$file
to the call.sh, but that returned an error like:
./call.sh[28]: 300 : not found
It would be awesome if someone could explain to me what I am missing!
Thanks in advance!
You have an invalid space after the = in file= /myfile.doc.
ksh only supports single-digit fds in literal redirections, so use 9 instead of 300.
ksh also makes non-standard fds non-inheritable, so redirect the fd explicitly on the command that needs it.
In all:
#!/bin/ksh
file=./myfile.doc
exec 9>>$file     # single-digit fd: ksh rejects multi-digit fds in literal redirections
flock -x 9 9>&9   # the 9>&9 keeps fd 9 inheritable by flock's child process in ksh93
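The same pattern runs in plain sh as a sketch (assuming util-linux flock is installed; the extra `9>&9` redirection matters for ksh93 specifically, not for sh):

```shell
#!/bin/sh
# Open the file on a single-digit fd, take an exclusive lock, update, release.
lockfile=/tmp/flock_demo.$$
exec 9>>"$lockfile"
flock -x 9                        # blocks until the exclusive lock is granted
echo "locked update" >> "$lockfile"
flock -u 9                        # release; closing fd 9 would also release it
cat "$lockfile"
rm -f "$lockfile"
exec 9>&-                         # close the fd
```

A second copy of the script started while the first holds the lock will block at `flock -x 9` until the lock is released, which is the collision protection the question is after.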

How to make a pipe loop in Zsh?

In the thread, Penz says the problem could be solved with the multios and coproc features.
However, I am unsure about the solution.
I do know that you can use multios as
ls -1 > file | less
but I have never used it in a form like this, where the output goes to two places.
How can you use these features to have a pipe loop in Zsh?
I am having trouble understanding the question.
Are you trying to do the following:
(ls -1 && file) | less
where && chains multiple commands on a single line.
Or are you trying to do the following:
ls -1 | tee file | less
where tee writes the output both to the file and to standard out.
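The tee variant is easy to verify, and it is the portable equivalent of zsh's multios form `ls -1 > file | less` (demo data used in place of `ls` output):

```shell
# tee duplicates its stdin: one copy goes to the file, one copy goes
# downstream to the next command in the pipeline.
printf 'a\nb\n' | tee /tmp/multios_demo.txt | sort -r
cat /tmp/multios_demo.txt
rm -f /tmp/multios_demo.txt
```

The pipeline prints `b` then `a` (the sorted copy), and the cat shows `a` then `b` (the file copy), so both destinations received the full stream.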
