Print the middle line of any file (UNIX)

I have to print the middle line of a text file without using sed or awk.
For example, the following file.txt:
line 1
line 2
line 3
line 4
line 5
I need something like:
$ command -flags file.txt
line 3
Is there any command?
Thanks.

Not the most efficient, but works in bash.
Use wc -l to count the lines, and divide by two. Then use tail -n +N | head -n 1 to print just the Nth line (where N starts at 1).
$ cat input.txt
A
B
C
D
E
$ tail -n +$(((`cat input.txt | wc -l` / 2) + 1)) input.txt | head -n 1
C
Note that a file with an even number of lines has no single "middle line".
I cat-ed the file to wc -l so it wouldn't print the filename.
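The same idea works the other way around; here is a sketch assuming the file.txt from the question: take the first half of the file with head and keep the last line of that with tail. Redirecting into wc -l (instead of cat-ing the file) also keeps the filename out of its output.
middle=$(( ($(wc -l < file.txt) + 1) / 2 ))
head -n "$middle" file.txt | tail -n 1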

sed -n $(((`cat input.txt| wc -l`/ 2) + 1))p input.txt

Related

Gnuplot: How to plot bash array without dumping it to a file

I am trying to plot a bash array using gnuplot without dumping the array to a temporary file.
Let's say:
myarray=$(seq 1 5)
I tried the following:
myarray=$(seq 1 5)
gnuplot -p <<< "plot $myarray"
I got the following error:
line 0: warning: Cannot find or open file "1"
line 0: No data in plot
gnuplot> 2
^
line 0: invalid command
gnuplot> 3
^
line 0: invalid command
gnuplot> 4
^
line 0: invalid command
gnuplot> 5''
^
line 0: invalid command
Why doesn't it interpret the array as a data block?
Any help is appreciated.
myarray=$(seq 1 5)
Here myarray is not a bash array; it is an ordinary scalar variable.
The easiest approach is to feed the data to stdin and plot "<cat":
seq 5 | gnuplot -p -e 'plot "<cat" w l'
Or with your variable and with using a here-string:
<<<"$myarray" gnuplot -p -e 'plot "<cat" w l'
Or with your variable with redirection with echo or printf:
printf "%s\n" "$myarray" | gnuplot -p -e 'plot "<cat" w l'
And if you want to plot an actual array, just print it one element per line and pipe that to gnuplot:
array=($(seq 5))
printf "%s\n" "${array[#]}" | gnuplot -p -e 'plot "<cat" w l'
Plot STDIN
gnuplot -p -e 'plot "/dev/stdin"'
Sample:
( seq 5 10; seq 7 12 ) | gnuplot -p -e 'plot "/dev/stdin"'
or
gnuplot -p -e 'plot "/dev/stdin" with steps' < <( seq 5 10; seq 7 12 )
A more tuned plot
gnuplot -p -e "set terminal wxt 0 enhanced;set grid;
set label \"Test demo with random values\" at 0.5,0 center;
set yrange [ \"-1\" : \"80\" ] ; set timefmt \"%s\";
plot \"/dev/stdin\" using 1:2 title \"RND%30+40\" with impulse;" < <(
paste <(
seq 2300 2400
) <(
for ((i=101;i--;)){ echo $((RANDOM%30+40));}
)
)
Note that this is still a single command line; you can copy and paste it into any terminal.

Use bash variable as array in awk and filter input file by comparing with array

I have bash variable like this:
val="abc jkl pqr"
And I have a file that looks smth like this:
abc 4 5
abc 8 8
def 43 4
def 7 51
jkl 4 0
mno 32 2
mno 9 2
pqr 12 1
I want to discard the rows of the file whose first field is not present in val:
abc 4 5
abc 8 8
jkl 4 0
pqr 12 1
My awk solution doesn't work at all, and I have no idea why:
awk -v var="${val}" 'BEGIN{split(var, arr)}$1 in arr{print $0}' file
Just slice the variable into array indexes:
awk -v var="${val}" 'BEGIN{split(var, arr)
for (i in arr)
names[arr[i]]
}
$1 in names' file
When you call split(), the words become the values of the array, while what the in operator checks are its indexes. The trick is to build a second array whose indexes are those values.
As you can see, $1 in names suffices as the condition; you don't have to spell out the action {print $0}, since printing the line is the default.
As a one-liner:
$ awk -v var="${val}" 'BEGIN{split(var, arr); for (i in arr) names[arr[i]]} $1 in names' file
abc 4 5
abc 8 8
jkl 4 0
pqr 12 1
grep -E "$( echo "${val}"| sed 's/ /|/g' )" YourFile
# or
awk -v val="${val}" 'BEGIN{gsub(/ /, "|",val)} $1 ~ val' YourFile
Grep:
It uses an extended regex (option -E) to keep every line that contains one of the values. The regex is built on the fly in a subshell, where sed replaces each space separator with a |, meaning OR.
Awk:
It uses the same principle as the grep, but everything happens inside awk (so no subshell).
The awk variable val is assigned from the shell variable of the same name.
At the start of the script (before the first line is read), BEGIN{gsub(/ /, "|",val)} replaces the spaces in val with |.
Then every line whose first field matches the pattern (the default field separator in awk is space/blank, so the first field is the letter group) is printed, printing being the default action of the filter $1 ~ val.
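If one of the names could be a substring of another (say ab and abc), a possible refinement, assuming the same val and file as above, is to anchor the pattern so that only whole-field matches pass:
awk -v val="${val}" 'BEGIN{gsub(/ /, "|", val); val="^(" val ")$"} $1 ~ val' file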

Print duplicate entries in a file using Linux commands

I have a file called foo.txt, which consists of:
abc
zaa
asd
dess
zaa
abc
aaa
zaa
I want the output to be stored in another file as:
this text abc appears 2 times
this text zaa appears 3 times
I have tried the following command, but it only writes the duplicate entries and their counts.
sort foo.txt | uniq --count --repeated > sample.txt
Example of output of above command:
abc 2
zaa 3
How do I add the "this text ... appears x times" wording?
Awk is your friend:
sort foo.txt | uniq --count --repeated | awk '{print("this text "$2" appears "$1" times")}' > sample.txt
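If you prefer to skip sort and uniq, a single awk pass can do the counting and the wording together; this is just a sketch assuming the same foo.txt and sample.txt (the order of the output lines from for (t in count) is unspecified):
awk '{count[$0]++} END{for (t in count) if (count[t] > 1) print "this text " t " appears " count[t] " times"}' foo.txt > sample.txt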

shell insert a line every n lines

I have two files and I am trying to insert a line from file2 into file1 before every group of 4 lines, starting at the beginning of file1. So for example:
file1:
line 1
line 2
line 3
line 4
line 5
line 6
line 7
line 8
line 9
line 10
file2:
50
43
21
output I am trying to get:
50
line 1
line 2
line 3
line 4
43
line 5
line 6
line 7
line 8
21
line 9
line 10
The code I have:
while read line
do
sed '0~4 s/$/$line/g' < file1.txt > file2.txt
done < file1.txt
I am getting the following error:
sed: 1: "0~4 s/$/$line/g": invalid command code ~
The following steps through both files without loading either one into an array in memory:
awk '(NR-1)%4==0{getline this<"file2";print this} 1' file1
This might be preferable if your actual file2 is larger than what you want to hold in memory.
This breaks down as follows:
(NR-1)%4==0 - a condition which matches every 4th line starting at 0
getline this<"file2" - gets a line from "file2" and stores it in the variable this
print this - prints ... this.
1 - shorthand for "print the current line", which in this case comes from file1 (awk's normal input)
It is easy to do this using awk:
awk 'FNR==NR{a[i++]=$0; next} !((FNR-1) % 4){print a[j++]} 1' file2 file1
50
line 1
line 2
line 3
line 4
43
line 5
line 6
line 7
line 8
21
line 9
line 10
While processing the first input file, i.e. file2, we store each line in an array keyed by an incrementing number starting at 0.
While processing the second input file, i.e. file1, we check whether the current record number minus 1 is divisible by 4; if it is, we print the next stored line from file2 and increment the index counter.
Finally, the action 1 prints every line of file1.
This might work for you (GNU sed):
sed -e 'Rfile1' -e 'Rfile1' -e 'Rfile1' -e 'Rfile1' file2
or just use cat and paste:
cat file1 | paste -d\\n file2 - - - -
Another alternative with the Unix toolchain:
$ paste file2 <(pr -4ats file1) | tr '\t' '\n'
50
line 1
line 2
line 3
line 4
43
line 5
line 6
line 7
line 8
21
line 9
line 10
Here's a goofy way to do it with paste and tr
paste file2 <(paste - - - - <file1) | tr '\t' '\n'
Assumes you don't have any actual tabs in your input files.
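As a side note, the original error almost certainly comes from BSD sed (e.g. on macOS), which does not support GNU sed's first~step addresses. For completeness, here is a plain-bash sketch, assuming file1 and file2 are named as above; it is slower than the awk and paste solutions but needs neither sed nor awk:
exec 3< file2                       # open file2 on file descriptor 3
i=0
while IFS= read -r line; do
    if (( i % 4 == 0 )) && IFS= read -r extra <&3; then
        printf '%s\n' "$extra"      # next insert line from file2
    fi
    printf '%s\n' "$line"           # current line from file1
    (( i++ ))
done < file1
exec 3<&-                           # close file descriptor 3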

Delete lines in a file containing argument passed on command line

I'm trying to delete specific lines based on the argument passed in.
My data.txt file contains
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
My del.sh contains
myvar=$1
sed '/$myvar/d' data.txt > temp.txt
mv temp.txt > data.txt
but it just prints every line in temp.txt to data.txt....however
sed '/64/d' data.txt > temp.txt
will do the correct data transfer (but I don't want to hardcode 64). I feel like there's some kind of syntax error with the argument. Any input, please?
It's because of the single quotes; change them to double quotes. Variables inside single quotes are not interpolated, so you are sending the literal string $myvar to sed, instead of the value of $myvar.
Change:
sed '/$myvar/d' data.txt
to:
sed "/$myvar/d" data.txt
Note: You will run into issues when $myvar contains regular expression meta characters or forward slashes as pointed out in this response from Ed Morton. If you are not in complete control of your input you will need to find another avenue to accomplish this.
Assuming this is undesirable behavior:
$ cat file
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
$ myvar=6
$ sed "/$myvar/d" file
Monitor 22 42 50
$ myvar=/
$ sed "/$myvar/d" file
sed: -e expression #1, char 3: unknown command: `/'
$ myvar=.
$ sed "/$myvar/d" file
$
Try this instead:
$ myvar=6
$ awk -v myvar="$myvar" '{for (i=1; i<=NF;i++) if ($i == myvar) next }1' file
Monitor 22 42 50
Game 32 64 128
$ myvar=/
$ awk -v myvar="$myvar" '{for (i=1; i<=NF;i++) if ($i == myvar) next }1' file
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
$ myvar=.
$ awk -v myvar="$myvar" '{for (i=1; i<=NF;i++) if ($i == myvar) next }1' file
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
And if you think you can just escape the /s and use sed, you can't, because you might be adding a second backslash to one that is already present:
$ foo='\/'
$ myvar=${foo//\//\\\/}
$ sed "/$myvar/d" file
sed: -e expression #1, char 5: unknown command: `/'
$ awk -v myvar="$myvar" '{for (i=1; i<=NF;i++) if ($i == myvar) next }1' file
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
This is simply NOT a job you can in general do with sed, due to its syntax and its restriction of only allowing REs in its search.
You can also use awk to do the same,
awk '!/'$myvar'/' data.txt > temp.txt && mv temp.txt data.txt
Use the -i option in addition to what @SeanBright proposed. Then you won't need > temp.txt and mv temp.txt data.txt.
sed -i "/$myvar/d" data.txt
