issue running a batch script to kill a process - batch-file

I am using the following script on a command line to kill a hypothetical notepad process (using a KornShell (ksh) in Windows XP, if that matters):
kill $(tasklist | grep -i notepad.exe | awk '{print $2}')
Now I take this line, and put it into a batch file c:\temp\testkill.bat, thinking that I should just as well be able to kill the process by running the batch file. However, when I run the batch file, I get the following awk error about unbalanced parentheses:
C:/Temp> ./testkill.bat
C:\Temp>kill $(tasklist | grep -i notepad.exe | awk '{print $2}')
awk: unbalanced () Context is:
>>> {print $2}) <<<
C:/Temp>
So I am baffled: why do I get this error about unbalanced parentheses when I run the script via a batch file, but have no issues when I run the command directly from the command line?
(I am not necessarily tied to this way of killing a process - I am additionally wondering why if I write the following on the command line:
tasklist | grep -i notepad.exe | awk '{print $2}' | kill
The process ID that comes out of the tasklist/grep/awk calls does not seem to properly get piped to kill.)

Why are you making a batch file if you have a Korn shell? Write a shell script - that will probably help you out a lot.
I can answer your final question - kill doesn't take the PID to kill from the standard input, it takes it on the command line. You can use xargs to make it work:
tasklist | grep -i notepad.exe | awk '{print $2}' | xargs kill
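If you do take the shell-script route suggested above, a minimal sketch might look like the following (assuming the same tasklist/grep/awk/xargs tools are on your PATH; the file name testkill.ksh and the ksh path are just examples):
#!/bin/ksh
# testkill.ksh - kill any running notepad.exe processes
# tasklist prints the process list; column 2 is the PID, which xargs hands to kill
tasklist | grep -i notepad.exe | awk '{print $2}' | xargs kill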

Related

How to run a command on all .cs files in directory and store file path as a variable to be used as command on windows

I'm trying to run the following command on each file of a directory.
svn blame FILEPATH | gawk '{print $2}' | sort | uniq -c
It works well; however, it only works on individual files. For whatever reason, it won't run on the directory as a whole. I was hoping to create some form of batch script that would iterate through the directory, grab each file path, and store it as a variable to be used in the command. However, I've never written a batch script, nor do I know the first thing about them. I tried this loop but couldn't get it to work:
set codedirectory=%C:\Repo\Pineapple% for %codedirectory% %%i in (*.cs) do
but I'm not necessarily sure what to do next. Unfortunately, this all has to be run on Windows. Any help would be greatly appreciated. Thanks!
Use for and find, similar to the example at
https://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html
for i in $(find . -name "*.cs"); do
svn blame $i | gawk '{print $2}' | sort | uniq -c
done
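Note that for i in $(find ...) word-splits on whitespace, so paths containing spaces will break. A slightly more defensive sketch (bash-specific, using find -print0 and read -d ''):
find . -name "*.cs" -print0 | while IFS= read -r -d '' f; do
    # quote the path so spaces in file names survive
    svn blame "$f" | gawk '{print $2}' | sort | uniq -c
done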

Pipe the output of eval/source in fish shell

I want to pipe the output of eval to a file. That works as expected if the command execution is successful:
eval ls > log.txt 2>&1
cat log.txt # Documents Desktop
It also works if the command is not successful
eval rm Desktop > log.txt 2>&1
cat log.txt # rm: cannot remove 'Desktop': Is a directory
However, I do not manage to redirect stderr if the command does not exist
eval abcde > log.txt 2>&1 # fish: Unknown command abcde
cat log.txt # (empty)
How can I also redirect the output of the third case to a log file?
Something that works with source would also be very much appreciated:
echo abcde | source > log.txt 2>&1
However, I do not manage to redirect stderr if the command does not exist
That's because the output is not coming from eval or the command, it's coming from your command-not-found handler.
Try checking if the command exists before you try to execute it. If you absolutely can't, it's technically possible to silence the command-not-found error entirely by redefining __fish_command_not_found_handler:
function __fish_command_not_found_handler; end
You'd have to handle moving it back afterwards via functions --copy:
functions --copy __fish_command_not_found_handler oldcnf
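Put together, a rough sketch of that dance might look like this (oldcnf is just a hypothetical name for the saved copy):
# stash the real handler, then install an empty one
functions --copy __fish_command_not_found_handler oldcnf
function __fish_command_not_found_handler; end
# run the thing whose "Unknown command" noise you want silenced
eval abcde > log.txt 2>&1
# put the original handler back
functions --erase __fish_command_not_found_handler
functions --copy oldcnf __fish_command_not_found_handler
functions --erase oldcnf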
Overall I don't recommend any of this and suspect you might be overusing eval.
Something that works with source would also be very much appreciated:
That's what eval is for, quite literally. Up to the upcoming 3.1 release eval is a function that's just source with some support code that mostly boils down to handling these redirections.
You should only eval if the command exists; you can check with
test -f (whereis -b command | awk '{print $2}')
whereis -b searches for the command's binary on your system
awk filters the output to show only the first result
test -f verifies that the file exists
If the command exists, this returns status 0, so to finish you can write it like
test -f (whereis -b abcde | awk '{print $2}') && abcde > log.txt 2>&1
You can also use this form
test (whereis -b abcde | awk '{print $2}') != '' && abcde > log.txt 2>&1
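As an alternative to the whereis round-trip (my suggestion, not from the question), fish's builtin type -q returns status 0 when the name resolves to a command, function, or builtin, so the check can be done without leaving fish:
type -q abcde && abcde > log.txt 2>&1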

Create a bat file for git-bash script

Hi all. I have git-bash installed and want some automation. I've got a .bat file, which I want to run as
some.bat param | iconv -cp1251 > l.log | tail -f l.log
And I want to run it not in the Windows CMD but in the git-bash shell - please tell me how to do it.
Git Bash on Windows uses bash. You can use bash to create a quick script that takes in parameters and echoes them, so that you can pipe them to your next utility.
It could look something like this
File: some.sh
#!/bin/bash
#Lots of fun bash scripting here...
echo $1
# $1 here is the first parameter sent to this script.
# $2 is the second... etc. $0 is the script name
then by setting some.sh as executable
$ chmod +x some.sh
You'll then be able to execute it in the git-bash shell
./some.sh param | cat ... etc | etc...
You can read more about bash programming; I'd recommend looking at some bash scripting tutorials, such as this one.
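For the pipeline in the original question, one possible invocation from the git-bash prompt might be the following (this assumes iconv's usual -f/-t flags for the cp1251 conversion; adjust the encodings and file names to your setup):
# run the script in the background, converting its output into l.log, then follow the log
./some.sh param | iconv -f cp1251 -t utf-8 > l.log &
tail -f l.log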

reading latest from sftp server

I have a requirement to download the latest file from an sftp server. I have written the below code in a shell script but am not able to download the file.
After retrieving the file, I am getting the below error:
Invalid command
Please help me with how to download the file.
#!/bin/sh
HOST='xx.xx.xx.nxx'
USER='xx'
PASSWD='xx'
sftp $USER@$HOST <<EOF
cd /inbound
file=$(ls -ltr *.xml | tail -1 | awk '{print $NF}')
get $file
EOF
You are trying to run shell commands in sftp, but sftp is not a shell. The command ls happens to exist in sftp, but $(), tail, and awk do not. To see this, just type sftp $USER@$HOST to open an sftp session and type help to list all of the available commands.
What you need to do is execute the shell commands using ssh to get the filename, something like this:
file=$(ssh $USER@$HOST "ls -ltr /inbound/*.xml" | tail -1 | awk '{print $NF}')
This executes the command ls -ltr /inbound/*.xml remotely on the server; the output of that is then processed locally by your shell script. Or, maybe more efficiently, do the processing on the server:
file=$(ssh $USER@$HOST "ls -ltr /inbound/*.xml | tail -1 | awk '{print \$NF}'")
Now the shell variable file contains the name of the newest file. Then you can download that file with sftp as
sftp $USER@$HOST:$file .
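Putting the two steps together, a sketch of the whole script could look like this (reusing the HOST and USER variables from the question; key-based authentication is assumed to be set up, since neither ssh nor sftp will read PASSWD from a variable):
#!/bin/sh
HOST='xx.xx.xx.xx'
USER='xx'
# ask the server for the newest .xml in /inbound
file=$(ssh $USER@$HOST "ls -ltr /inbound/*.xml | tail -1 | awk '{print \$NF}'")
# download just that file to the current directory
sftp $USER@$HOST:$file .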

Faster grep function for big (27GB) files

I have a file (5 MB) containing specific strings, and I have to grep those same strings (and other information) out of a big file (27 GB).
To speed up the analysis I split the 27 GB file into 1 GB files and then applied the following script (with the help of some people here). However, it is not very efficient: it takes 30 hours to produce a 180 KB file!
Here's the script. Is there a more appropriate tool than grep? Or a more efficient way to use grep?
#!/bin/bash
NR_CPUS=4
count=0
for z in `echo {a..z}` ;
do
for x in `echo {a..z}` ;
do
for y in `echo {a..z}` ;
do
for ids in $(cat input.sam|awk '{print $1}');
do
grep $ids sample_"$z""$x""$y"|awk '{print $1" "$10" "$11}' >> output.txt &
let count+=1
[[ $((count%NR_CPUS)) -eq 0 ]] && wait
done
done #&
done
done
A few things you can try:
1) You are reading input.sam multiple times. It only needs to be read once before your first loop starts. Save the ids to a temporary file which will be read by grep.
2) Prefix your grep command with LC_ALL=C to use the C locale instead of UTF-8. This will speed up grep.
3) Use fgrep because you're searching for a fixed string, not a regular expression.
4) Use -f to make grep read patterns from a file, rather than using a loop.
5) Don't write to the output file from multiple processes as you may end up with lines interleaving and a corrupt file.
After making those changes, this is what your script would become:
awk '{print $1}' input.sam > idsFile.txt
for z in {a..z}
do
for x in {a..z}
do
for y in {a..z}
do
LC_ALL=C fgrep -f idsFile.txt sample_"$z""$x""$y" | awk '{print $1,$10,$11}'
done >> output.txt
done
done
Also, check out GNU Parallel which is designed to help you run jobs in parallel.
My initial thoughts are that you're repeatedly spawning grep. Spawning processes is very expensive (relatively), and I think you'd be better off with some sort of scripted solution (e.g. Perl) that doesn't require the continual process creation.
E.g. for each inner loop you're kicking off cat and awk (you won't need cat, since awk can read files; and in fact doesn't this cat/awk combination return the same thing each time?) and then grep. Then you wait for 4 greps to finish and you go around again.
If you have to use grep, you can use
grep -f filename
to specify the set of patterns to match from that file, rather than a single pattern on the command line. I suspect from the above that you can pre-generate such a list.
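A minimal sketch of that, assuming (as in the question's script) the IDs sit in column 1 of input.sam and sample_aaa stands in for one of the split files:
awk '{print $1}' input.sam > ids.txt   # pre-generate the pattern list once
grep -f ids.txt sample_aaa | awk '{print $1" "$10" "$11}' >> output.txt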
OK, I have a test file containing 4-character strings, i.e. aaaa, aaab, aaac, etc.
ls -lh test.txt
-rw-r--r-- 1 root pete 1.9G Jan 30 11:55 test.txt
time grep -e aaa -e bbb test.txt
<output>
real 0m19.250s
user 0m8.578s
sys 0m1.254s
time grep --mmap -e aaa -e bbb test.txt
<output>
real 0m18.087s
user 0m8.709s
sys 0m1.198s
So using the mmap option shows a clear improvement on a 2 GB file with two search patterns. If you take @BrianAgnew's advice and use a single invocation of grep, try the --mmap option.
Though it should be noted that mmap can be a bit quirky if the source file changes during the search.
from man grep
--mmap
If possible, use the mmap(2) system call to read input, instead of the default read(2) system call. In some situations, --mmap yields better performance. However, --mmap can cause undefined behavior (including core dumps) if an input file shrinks while grep is operating, or if an I/O error occurs.
Using GNU Parallel it would look like this:
awk '{print $1}' input.sam > idsFile.txt
doit() {
LC_ALL=C fgrep -f idsFile.txt sample_"$1" | awk '{print $1,$10,$11}'
}
export -f doit
parallel doit {1}{2}{3} ::: {a..z} ::: {a..z} ::: {a..z} > output.txt
If the order of the lines is not important this will be a bit faster:
parallel --line-buffer doit {1}{2}{3} ::: {a..z} ::: {a..z} ::: {a..z} > output.txt
