Is it possible for sshkit capture to not error when the command executed returns nothing - capistrano3

What I'm trying to achieve is a Capistrano 3 task that greps a log file on all servers. This would save a lot of time: we have many servers, so doing it manually, or even scripted but sequentially, takes ages.
I have a rough-around-the-edges task that actually works, except when one of the servers returns nothing for the grep. In that case the whole command falls over.
Hence I'm wondering if there is a way to tell capture to accept empty returns.
namespace :admin do
  task :log_grep, :command, :file do |t, args|
    command = args[:command] || 'ask for a command'
    file = args[:file] || 'log_grep_results'
    outs = {}
    on roles(:app), in: :parallel do
      outs[host.hostname] = capture(:zgrep, "#{command}")
    end
    File.open(file, 'w') do |fh|
      outs.each do |host, out|
        fh.write(out)
      end
    end
  end
end

Should anyone else come to this question, here's the solution: raise_on_non_zero_exit: false
What I wanted:
resp = capture %([ -f /var/run/xxx/xxx.pid ] && echo "ok")
error:
SSHKit::Command::Failed: [ -f /var/run/xxx/xxx.pid ] && echo "ok" exit status: 1
[ -f /var/run/xxx/xxx.pid ] && echo "ok" stdout: Nothing written
[ -f /var/run/xxx/xxx.pid ] && echo "ok" stderr: Nothing written
solution:
resp = capture %([ -f /var/run/xxx/xxx.pid ] && echo "ok"), raise_on_non_zero_exit: false
# resp => ""

So the workaround I used was to start adding what I'm calling Capistrano utility scripts to the repo; Capistrano then runs these scripts. Each script is just a wrapper around a grep, plus some logic to output something if the result is empty.
Capistrano code:
namespace :utils do
  task :log_grep, :str, :file, :save_to do |t, args|
    outs = {}
    on roles(:app), in: :parallel do
      outs[host.hostname] = capture(:ruby, "#{fetch(:deploy_to)}/current/bin/log_grep.rb #{args[:str]} #{args[:file]}")
    end
    file = args[:save_to] || 'log_grep_output'
    File.open(file, 'w') do |fh|
      outs.each do |host, out|
        fh.write("#{host} -- #{out}\n")
      end
    end
  end
end
Ruby script log_grep.rb:
a = `zgrep #{ARGV[0]} #{ARGV[1]}`
if a.empty?
  puts 'Nothing Found'
else
  puts a
end
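The same fallback logic can also be written directly in shell instead of Ruby (a sketch; it uses plain grep on a demo file rather than zgrep on a real log so it runs anywhere):

```shell
# Demo input standing in for a real log file.
logfile=$(mktemp)
printf 'INFO boot\nERROR disk full\n' > "$logfile"
pattern='ERROR'

# grep exits 1 on no match; '|| true' keeps that from failing the task.
out=$(grep "$pattern" "$logfile" || true)

# Always print something, so the remote capture never comes back empty.
if [ -z "$out" ]; then
  echo 'Nothing Found'
else
  echo "$out"
fi
rm -f "$logfile"
```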

Related

Best way to read ERRORLEVEL codes for a windows executable, executed from within a TCL script

I am pretty new to the Tcl world, so please excuse any naive questions.
I am trying to execute a Windows executable from a Tcl procedure. At the same time, I want to read the %errorlevel% output by the executable and print some meaningful messages to the Tcl shell.
Ex:
I have a Windows executable "test.exe arg1" that returns various error codes depending on how it exits:
0 - the script executed successfully
1 - the user interrupted the process manually, and the process exited
2 - the user login was not found; the process exited
3 - "arg1" was not specified; the process exited
In my Tcl script, I have the following:
set result [catch {exec cmd /c test.exe arg1}]
if { $result == 3 } {
    puts "Argument undefined"
} elseif { $result == 2 } {
    puts "Login Failed"
} elseif { $result == 1 } {
    puts "Process Cancelled by user"
} elseif { $result == 0 } {
    puts "Command successful"
}
It appears that the result of the catch command is either 1 or 0, and it does not pick up the appropriate %errorlevel% information from the Windows terminal.
What is the best way to trap the %errorlevel% info from the Windows executable and process appropriate error messages using Tcl?
The catch command takes two optional arguments: "resultVarName" and "optionsVarName". If you use those, you can examine the second one for the return code:
catch {exec cmd /c test.exe arg1} output options
puts [dict get $options -errorcode]
That would report something like: CHILDSTATUS 15567 1
The fields represent the error type, process ID, and the exit code. So you should check that error type is "CHILDSTATUS" before taking that last number as the exit code. Other error types will have different data. This is actually more easily done with the try command:
try {
    exec cmd /c test.exe arg1
} on ok {output} {
    puts "Command successful"
} trap {CHILDSTATUS} {output options} {
    set result [lindex [dict get $options -errorcode] end]
    if {$result == 3} {
        puts "Argument undefined"
    } elseif {$result == 2} {
        puts "Login Failed"
    } elseif {$result == 1} {
        puts "Process Cancelled by user"
    }
}
Note: I tested this on Linux, but it should work very similarly on Windows.
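For comparison, the same branch-on-exit-code logic looks like this in plain shell, with `sh -c 'exit 3'` as a stand-in for `test.exe` returning code 3:

```shell
# Run the command and save its exit status before anything else
# can overwrite $?.
sh -c 'exit 3' || rc=$?
rc=${rc:-0}

# Dispatch on the saved status, mirroring the Tcl try/trap above.
case $rc in
  0) msg="Command successful" ;;
  1) msg="Process Cancelled by user" ;;
  2) msg="Login Failed" ;;
  3) msg="Argument undefined" ;;
esac
echo "$msg"
```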

Getting a Windows command prompt contents to a text file

I want to write a batch utility to copy the contents of a command prompt window to a file. I run my command prompt windows with the maximum depth of 9999 lines, and occasionally I want to grab the output of a command whose output has scrolled off-screen. I can do this manually with the keys Ctrl-A, Ctrl-C and then pasting the result into Notepad - I just want to automate it in a batch file with a call to:
SaveScreen <text file name>
I know I can do it with redirection, but that would involve knowing that I will need to save the output of a batch command sequence beforehand.
So if I had a batch script:
call BuildPhase1.bat
if %ErrorLevel% gtr 0 goto :ErrorExit
call BuildPhase2.bat
if %ErrorLevel% gtr 0 goto :ErrorExit
call BuildPhase3.bat
if %ErrorLevel% gtr 0 goto :ErrorExit
I could write:
cls
call BuildPhase1.bat
if %ErrorLevel% gtr 0 call SaveScreen.bat BuildPhase1.err & goto :ErrorExit
call BuildPhase2.bat
if %ErrorLevel% gtr 0 call SaveScreen.bat BuildPhase2.err & goto :ErrorExit
call BuildPhase3.bat
if %ErrorLevel% gtr 0 call SaveScreen.bat BuildPhase3.err & goto :ErrorExit
or I could just type SaveScreen batch.log when I see that a run has failed.
My experiments have got me this far:
<!-- : Begin batch script
@cscript //nologo "%~f0?.wsf" //job:JS
@exit /b
----- Begin wsf script --->
<package>
<job id="JS">
<script language="JScript">
var oShell = WScript.CreateObject("WScript.Shell");
oShell.SendKeys("hi folks{Enter}");
oShell.SendKeys("^A");    // Ctrl-A (select all)
oShell.SendKeys("^C");    // Ctrl-C (copy)
oShell.SendKeys("% ES");  // Alt-space, E, S (select all via menu)
oShell.SendKeys("% EY");  // Alt-space, E, Y (copy via menu)
// ... invoke a Notepad session, paste the clipboard into it, save to a file
WScript.Quit();
</script>
</job>
</package>
My keystrokes are making it to the command prompt, so presumably I have the correct window focused - it just seems to be ignoring the Ctrl and Alt modifiers. It also recognises Ctrl-C but not Ctrl-A: because it has ignored the Ctrl-A to select all the text, the Ctrl-C makes the batch file think it has seen a break command.
I've seen the other answers like this one but they all deal with methods using redirection, rather than a way of doing it after the fact "on demand".
* UPDATE *
On the basis of @dxiv's pointer, here is a batch wrapper for the routine:
Get-ConsoleAsText.bat
:: save the contents of the screen console buffer to a disk file.
@set "_Filename=%~1"
@if "%_Filename%" equ "" set "_Filename=Console.txt"
@powershell Get-ConsoleAsText.ps1 >"%_Filename%"
@exit /b 0
The Powershell routine is pretty much as was presented in the link, except that:
I had to sanitise it to remove some of the more interesting character substitutions the select/copy/paste operation introduced.
The original saved the trailing spaces as well. Those are now trimmed.
Get-ConsoleAsText.ps1
# Get-ConsoleAsText.ps1 (based on: https://devblogs.microsoft.com/powershell/capture-console-screen/)
#
# The script captures console screen buffer up to the current cursor position and returns it in plain text format.
#
# Returns: ASCII-encoded string.
#
# Example:
#
# $textFileName = "$env:temp\ConsoleBuffer.txt"
# .\Get-ConsoleAsText | out-file $textFileName -encoding ascii
# $null = [System.Diagnostics.Process]::Start("$textFileName")
#
if ($host.Name -ne 'ConsoleHost')  # Check the host name and exit if the host is not the Windows PowerShell console host.
{
    Write-Host -ForegroundColor Red "This script runs only in the console host. You cannot run this script in $($host.Name)."
    exit -1
}

$textBuilder  = New-Object System.Text.StringBuilder  # Initialize string builder.
$bufferWidth  = $host.UI.RawUI.BufferSize.Width       # Grab the console screen buffer contents using the host console API.
$bufferHeight = $host.UI.RawUI.CursorPosition.Y
$rec    = New-Object System.Management.Automation.Host.Rectangle 0, 0, ($bufferWidth - 1), $bufferHeight
$buffer = $host.UI.RawUI.GetBufferContents($rec)

for ($i = 0; $i -lt $bufferHeight; $i++)              # Iterate through the lines in the console buffer.
{
    $line = ""
    for ($j = 0; $j -lt $bufferWidth; $j++)
    {
        $cell = $buffer[$i, $j]
        $line = $line + $cell.Character
    }
    $line = $line.TrimEnd(" ")                        # Remove trailing spaces.
    $null = $textBuilder.Append($line)
    $null = $textBuilder.Append("`r`n")
}
return $textBuilder.ToString()
The contents of the console buffer can be retrieved with the PS script from the PowerShell team's blog post Capture console screen, mentioned in a comment and now edited into the OP's question.
The last line could also be changed to copy the contents to the clipboard instead of returning it.
Set-Clipboard -Value $textBuilder.ToString()
As a side note, the reasons for using a StringBuilder rather than direct concatenation are discussed in How does StringBuilder work internally in C# and How the StringBuilder class is implemented.

shell mock --define from array: ERROR: Bad option for '--define' ("dist). Use --define 'macro expr'

I am currently writing a script which should make it easier for me to build some RPMs using mock.
The plan is to make it possible to add values for the mock (and therefore rpmbuild) --define parameter.
The error I get if I add such a define value is
ERROR: Bad option for '--define' ("dist). Use --define 'macro expr'
When I execute the script with something as simple as ./test.sh --define "dist .el7", the "debug" output is as follows:
/usr/bin/mock --init -r epel-7-x86_64 --define "dist .el7"
If I copy this and execute it in the shell directly, it actually works. Does anybody have an idea why this is the case?
My script can be cut down to the following:
#!/bin/bash
set -e
set -u
set -o pipefail

C_MOCK="/usr/bin/mock"
MOCK_DEFINES=()

_add_mock_define() {
  #_check_parameters_count_strict 1 ${#}
  local MOCK_DEFINE="${1}"
  MOCK_DEFINES+=("${MOCK_DEFINE}")
}

_print_mock_defines_parameter() {
  if [ ${#MOCK_DEFINES[@]} -eq 0 ]; then
    return 0
  fi
  printf -- "--define \"%s\" " "${MOCK_DEFINES[@]}"
}

_mock_init() {
  local MOCK_DEFINES_STRING="$(_print_mock_defines_parameter)"
  local MOCK_PARAMS="--init"
  MOCK_PARAMS="${MOCK_PARAMS} -r epel-7-x86_64"
  [ ! "${#MOCK_DEFINES_STRING}" -eq 0 ] && MOCK_PARAMS="${MOCK_PARAMS} ${MOCK_DEFINES_STRING}"
  echo "${C_MOCK} ${MOCK_PARAMS}"
  ${C_MOCK} ${MOCK_PARAMS}
  local RC=${?}
  if [ ${RC} -ne 0 ]; then
    _exit_error "Error while mock initializing ..." ${RC}
  fi
}

while (( ${#} )); do
  case "${1}" in
    -s|--define)
      shift 1
      _add_mock_define "${1}"
      ;;
  esac
  shift 1
done

_mock_init
exit 0
After asking a coworker about this, I was pointed to this question on Unix Stack Exchange: Unix Stackexchange question
The way this problem was solved boils down to the following lines:
DEFINES=()
DEFINES+=(--define "dist .el7")
DEFINES+=(--define "foo bar")
/usr/bin/mock --init -r epel-7-x86_64 "${DEFINES[@]}"
Just in case somebody else stumbles upon this kind of issue.
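The underlying issue is that quotes inside an expanded variable are not re-parsed, so the string version hands mock the literal word `"dist` (exactly what the error message shows), while an array expanded with `"${DEFINES[@]}"` keeps each element as one argument. A quick demonstration with `printf` standing in for `/usr/bin/mock` (requires bash, not plain sh):

```shell
# Build the argument list as an array; each quoted element stays one
# word no matter what whitespace it contains.
DEFINES=()
DEFINES+=(--define "dist .el7")
DEFINES+=(--define "foo bar")

# printf prints one bracketed line per argument it receives, making
# the word boundaries visible.
printf '[%s]\n' "${DEFINES[@]}"
```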

Is there a way to read stdin without blocking script's execution using a vbscript?

I am trying to find a way to read stdin without blocking my VBScript's execution, but so far no luck.
What I want to achieve is the following (written in sh shell script):
for i in {1..3}; do
  read input;
  echo $input;
  sleep 1;
  if [ "$input" == "done" ]; then
    echo "process done";
    exit;
  fi
done
I tried the following in VBScript, but the script hangs in the first iteration, waiting for Enter in order to proceed:
input=""
for i=1 to 3
WScript.Echo i
WScript.sleep (100);
If WScript.StdIn.AtEndOfStream Then
input = input & WScript.StdIn.Readline()
If input = "done" Then
WScript.Echo "process done"
End if
End If
Next
Is there a way not to block my script while reading stdin?

checking if owner/group match in files and subfolders

I need help writing a shell script to check whether the owner and group of every file match the names in my if statement. It needs to recursively check all files and folders, including the parent folder.
For example, my directory structure might look like this
/data
/data/folder1
/data/folder1/fileA
/data/folder2/fileB
I need to verify that data, folder1, folder2, fileA, fileB are all owned by the same owner and group.
#!/bin/bash
DIR="/data"
N=0
for each entry under $DIR:    # pseudocode
  if [ owner/group do NOT MATCH "username:groupname" ]; then
    N=1
  fi
done
if [ $N -gt 0 ]; then
  echo "all or some files and folders don't match"
else
  echo "all files match"
fi
#!/bin/bash
OWNER="user"
GROUP="group"
DIR="/data"
N=0

# Walk every file and folder under $DIR (including $DIR itself) and
# flag the first entry whose owner or group differs.
while IFS= read -r entry; do
  # stat -c %U:%G prints "owner:group" for the entry.
  if [ "$(stat -c '%U:%G' "$entry")" != "${OWNER}:${GROUP}" ]; then
    N=1
    break
  fi
done < <(find "$DIR")

if [ $N -gt 0 ]; then
  echo "all or some files and folders don't match"
else
  echo "all files match"
fi
My first draft had some pseudocode in it because I wasn't sure of the exact commands or syntax, but this is basically what you want to do: check each file in turn; if one doesn't match, record the mismatch and stop early, since you don't tell the user how many don't match; and if the end is reached without a mismatch, report success to the user.
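A more compact alternative to an explicit loop is to let find do the comparison: with `! -user` / `! -group` it prints only entries whose ownership differs, so an empty result means everything matches. A sketch (run here against a freshly created temp directory, which is owned by the current user, so it reports a match):

```shell
# A temp tree owned by the current user stands in for /data.
dir=$(mktemp -d)
mkdir "$dir/folder1"
touch "$dir/folder1/fileA"

# Print every entry whose owner OR group is not the expected one.
mismatches=$(find "$dir" ! -user "$(id -un)" -o ! -group "$(id -gn)")

if [ -z "$mismatches" ]; then
  echo "all files match"
else
  echo "all or some files and folders don't match"
fi
rm -rf "$dir"
```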
