I would like to know if there is any way to scan a text file for a word and then run a command. I have tried grep and gotten nowhere. I have also tried find, which sounds promising, but I can't find a good explanation of how to use it. For context, here is what this will be used for: I have an iPhone app that sends a word over HTTP; the server-side application listens for that word and runs a command when it is received.
The following will cat every file that find locates containing "needle" and show its contents. Modify accordingly:
find . -type f -exec grep -q needle {} \; -exec cat {} \;
In bash, you could tail -f the file and pipe it to this script:
while read -r LINE; do
    grep -q word <<< "$LINE" && command_to_execute
done
But the best thing would be to place this logic in the web server instead of parsing a file (the log file, I am guessing).
UPDATE:
The above loop is expensive to run as grep is called at each iteration. This one is better:
# --line-buffered (GNU grep) keeps matches from being delayed by pipe buffering
tail -f file | grep --line-buffered word | while read -r LINE; do
    command_to_execute
done
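As suggested above, the cleaner option is to act on the word where the HTTP request is handled rather than tailing a log afterwards. Here is a minimal sketch assuming a plain-CGI setup; the trigger word 'magicword', command_to_execute, and the handler itself are placeholders, not part of the answers above:
#!/bin/bash
# Hypothetical CGI handler (sketch only): run a command when the POST body
# contains the trigger word. Assumes a plain-CGI setup where the web server
# passes the request body on stdin and CONTENT_LENGTH in the environment.
body=$(head -c "${CONTENT_LENGTH:-0}")
printf 'Content-Type: text/plain\r\n\r\n'
if grep -q 'magicword' <<< "$body"; then   # 'magicword' is a placeholder
    command_to_execute                     # placeholder for the real command
    printf 'ran\n'
else
    printf 'ignored\n'
fi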
How do I print each value appended to $arr on a new line? Below is the code.
arr=() |
hive -e 'show tables in database'|
while read line
do
echo "The name of the line is $line"
arr+="TABLE NAME : $line"
done
echo $arr
There are several issues in the code.
Piping the assignment to an array makes no sense. The assignment has no output, so there's nothing to pipe.
+= without parentheses does a string concatenation, so only ${arr[0]} gets changed. Use
arr+=("TABLE NAME : $line")
Commands in a pipeline run in subshells, which means the assignment only happens in a subshell; the array in the main shell is not updated. Use "process substitution" instead:
while ...
done < <(hive ...)
Also, I'd rather store just the table names in the array, as you can reuse the values later, instead of storing the whole messages. Fixing all these, we can get something like
#!/bin/bash
tables=()
while IFS= read -r line ; do
echo "The name of the line is $line"
tables+=("$line")
done < <(hive -e 'show tables in database')
printf 'TABLE NAME: %s\n' "${tables[@]}"
This is a good place for the mapfile command: it reads the output of a command and stores each line as an array element. It pairs well with process substitution here:
mapfile -t arr < <(hive ...)
for elem in "${arr[@]}"; do
echo "TABLE: $elem"
done
I have 5000 files on my hard disk named ip_file_1, ip_file_2, and so on.
I have an executable that can merge only 2 files. How can I write a script that takes all the files on the disk (which start with ip_file_*) and calls the executable to merge them all?
The 5000 files are binaries containing logging information (the time each function call has taken). I have another executable that takes only two files, merges them according to the timestamp, and gives the merged output.
I execute it in a format like the one below:
./trace ip_file1 ip_file2 mergefile # I'm not using the trace tool. It's an example
The executable can merge only two files at a time, so I thought of automating it to merge all the other files.
The merges have to be done in order (merged according to the timestamp). The logic to merge is already there, and the output of the merge is sent to a file.
My question is not about how to merge the files; it is about how to automate merging all the files instead of just two.
To avoid an excessive number of parameters, or an overly long command line, you want to write your merge command so that it can take a previously merged output and merge another file into it. The description of merge in the original problem statement is quite scant, so I'll assume you can do this:
merge -o output_file input_file
Where output_file can be a previously merged file or a new file. If you can do that, then it would be simple to merge all of them by:
find drive_path -name "ip_file_*" -exec merge -o output_file {} \;
The order here is directory order in the file system. If a different order is needed, that will need to be specified.
ADDENDUM
If you need the files in timestamp order, then I would revamp this approach and create a merge command that accepts as an input a text file which lists all of the files to merge. Create this list of files using the information given in this post: https://superuser.com/questions/294161/unix-linux-find-and-sort-by-date-modified
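Alternatively, here is a minimal sketch of merging in modification-time order with the hypothetical merge -o interface above; it assumes GNU find, sort, and cut for the null-delimited, mtime-sorted file list, and drive_path and merged_output are placeholders:
# list ip_file_* sorted by modification time (oldest first), NUL-delimited,
# then fold each file into the accumulating output
output=merged_output
while IFS= read -r -d '' file; do
    merge -o "$output" "$file"
done < <(find drive_path -name 'ip_file_*' -printf '%T@ %p\0' | sort -z -n | cut -z -d ' ' -f 2-)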
If your external merge tool is real_merge, and this tool writes the merged output of its two command-line arguments to stdout, the following recursive shell function will do the job:
merge_files() {
    next=$1; shift
    case $# in
        0) cat "$next" ;;
        1) real_merge "$next" "$1" ;;
        *) real_merge "$next" <(merge_files "$@") ;;
    esac
}
This approach is highly parallelized, which means that it'll use as much CPU and disk I/O as is available to it. Depending on your available resources, and your operating system's facility at managing those resources, this may or may not be a good thing.
The other approach is to use a temporary file:
swap() {
    local var_curr=$1
    local var_next=$2
    local tmp
    tmp="${!var_curr}"
    printf -v "$var_curr" '%s' "${!var_next}"
    printf -v "$var_next" '%s' "$tmp"
}
merge_files() {
    # note: in this variant real_merge is expected to take two input files and
    # an output file, like ./trace in the question
    local tempfile_curr=tempfile_A
    local tempfile_next=tempfile_B
    local tempfile_A="$(mktemp -t sort-wip-A.XXXXXX)"
    local tempfile_B="$(mktemp -t sort-wip-B.XXXXXX)"
    while (( $# )); do
        if [[ -s ${!tempfile_curr} ]]; then
            # we already populated our temporary file: merge the next input into it
            real_merge "${!tempfile_curr}" "$1" "${!tempfile_next}"
            swap tempfile_curr tempfile_next
        elif (( $# >= 2 )); then
            # nothing merged yet and at least two files remain: seed the temp file with the first pair
            real_merge "$1" "$2" "${!tempfile_curr}"
            shift
        else
            # nothing merged yet and only one file remains: just emit it
            cat "$1"
            rm -f "$tempfile_A" "$tempfile_B"
            return
        fi
        shift
    done
    # write output to stdout
    cat "${!tempfile_curr}"
    # ...and clean up.
    rm -f "$tempfile_A" "$tempfile_B"
}
You can invoke it as merge_files ip_file_* if the filenames' lexical sort order is accurate. (This will be true if their names are zero-padded, i.e. ip_file_00001, but not if they aren't padded.) If not, you'll need to sort the stream of names first. If you're using bash and have GNU stat and sort available, this could be done like so:
declare -a filenames=()
while IFS='' read -r -d ' ' timestamp && IFS='' read -r -d '' filename; do
filenames+=( "$filename" )
done < <(stat --printf '%Y %n\0' ip_file_* | sort -n -z)
merge_files "${filenames[#]}"
I want to delete all the inherited environment variables. Could you tell me how to do it?
To remove all environment variables on Linux with the GNU C Library, you can use clearenv(). When this function is not available (it is not in POSIX), you can set environ = NULL instead. Do this before calling execl() or any variant.
If you are calling an exec() variant whose name ends in e, you can set the environment directly through the call's last parameter. Example executing /bin/csh with an empty environment: execle("/bin/csh", "-csh", NULL, NULL)
If you want to unset all defined environment variables, you can do something like this:
# note: variable values containing whitespace will confuse this simple loop
for a in $(/usr/bin/env); do
    unset "$(echo "$a" | /usr/bin/cut -d = -f 1)"
done
Note that this will unset PATH as well, so you might want to initialize the shell with an environment afterwards.
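For example, a minimal follow-up (the PATH value here is only an illustration):
# restore a basic PATH after wiping the environment so common commands still resolve
export PATH=/usr/bin:/bin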
Edit
Shorter version inspired by @opentokix:
unset $(/usr/bin/env | /usr/bin/cut -d = -f 1 | /usr/bin/xargs)
unset `env | awk -F= '/^\w/ {print $1}' | xargs`
This is probably not a good idea, since it will remove your path etc.
You can unset individual variables with unset VARIABLE
From a terminal window:
When I use the rm command, it can only remove files.
When I use the rmdir command, it only removes empty folders.
If I have a directory nested with files and folders within folders with files and so on, is there a way to delete all the files and folders without all the strenuous command typing?
If it makes a difference, I am using the Mac Bash shell from a terminal, not Microsoft DOS or Linux.
rm -rf some_dir
-r "recursive"
-f "force" (suppress confirmation messages)
Be careful!
rm -rf *
Would remove everything (folders & files) in the current directory.
But be careful! Only execute this command if you are absolutely sure that you are in the right directory.
Yes, there is. The -r option tells rm to be recursive, and remove the entire file hierarchy rooted at its arguments; in other words, if given a directory, it will remove all of its contents and then perform what is effectively an rmdir.
The other two options you should know are -i and -f. -i stands for interactive; it makes rm prompt you before deleting each and every file. -f stands for force; it goes ahead and deletes everything without asking. -i is safer, but -f is faster; only use it if you're absolutely sure you're deleting the right thing. You can specify these with -r or not; it's an independent setting.
And as usual, you can combine switches: rm -r -i is just rm -ri, and rm -r -f is rm -rf.
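For example (some_dir is just a placeholder):
rm -ri some_dir   # recursive and interactive: asks before removing each file
rm -rf some_dir   # recursive and forced: removes everything under some_dir without asking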
Also note that what you're learning applies to bash on every Unix OS: OS X, Linux, FreeBSD, etc. In fact, rm is an external program rather than a shell feature, so its syntax is the same in pretty much every shell on every Unix OS. OS X, under the hood, is really a BSD Unix system.
I was looking for a way to remove all files in a directory except for some directories and files I wanted to keep around. I devised a way to do it using find:
find -E . -regex './(dir1|dir2|dir3)' -and -type d -prune -o -print -exec rm -rf {} \;
Essentially, it uses a regex to select the directories to exclude from the results, then removes the remaining files.
I have lots of files with multiple lines, and in most cases one of the lines contains a certain pattern. I would like to list every file that does not have a line with this pattern.
Use the "-L" option in order to have file WITHOUT the pattern. Per the man page:
-L, --files-without-match
Suppress normal output; instead print the name of each input file from which no output would normally have been printed. The scanning will stop on the first match.
grep's exit status indicates whether there was a match (0 = match, 1 = no match), so you can do something like this:
for f in *.txt; do
    if ! grep -q "some expression" "$f"; then
        echo "$f"
    fi
done
EDIT: You can also use the -L option:
grep -L "some expression" *
try "count" and filter where equals ":0":
grep -c [pattern] * | grep ':0$'
(if you use TurboGREP cough, you won't have a -L switch ;))
(EDIT: added '$' to end of regex in case there are files with ":0" in the name)
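A possible follow-up if you only want the file names (the pattern and the *.log glob are placeholders):
# print only the names of files with zero matches, stripping the trailing ":0"
grep -c 'pattern' *.log | grep ':0$' | sed 's/:0$//'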