Count numbers between zeros using bash script [closed] - arrays

Using a bash script, I want to count the digits between the zeros. Is it possible to use awk? Sorry, I am new to this.
5000000000009228247152000000000000000000003
5000000000006338293700000000000000000000001

grep -Po '0+\K[^0]+(?=0)'
gives you:
9228247152
63382937
EDIT
If something can easily be done in a single process (as is the case with awk here), I won't start a second one.
awk one-liner with count and text:
awk -F'0+' 'NF>2{for(i=2;i<NF;i++)printf "text:%s count:%d\n",$i,length($i)}'
gives:
text:9228247152 count:10
text:63382937 count:8
awk one-liner only with count:
awk -F'0+' 'NF>2{for(i=2;i<NF;i++)print length($i)}'
gives:
10
8

With sed (tr strips the trailing newline so that wc -m counts only the digits):
echo 5000000000009228247152000000000000000000003 | \
sed -r 's/[^0]*0+([^0]+)0+.*/\1/' | tr -d '\n' | wc -m

$ grep -oP '0+\K[1-9]+(?=0+)' file
9228247152
63382937
To count the number of digits,
$ grep -oP '0+\K[1-9]+(?=0+)' file | awk -v FS="" '{print NF}'
10
8
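For completeness, a pure-bash sketch with no external processes at all (assuming the numbers are in a file named file, one per line, as above):
while read -r line; do
    # grab the run of non-zero digits sitting between the runs of zeros
    [[ $line =~ 0+([1-9]+)0+ ]] && echo "text:${BASH_REMATCH[1]} count:${#BASH_REMATCH[1]}"
done < file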

Related

Printing a # in c [closed]

From my program, I am trying to execute a command using popen which contains:
sprintf(buff, "echo -n cd %s; ls | awk -F'.' '{print $2"."$3"."$4'#'$5}'"
But the compiler says 'stray # in program'.
How do I print "#" in C?
Your # needs to be between quotation marks ("). If you want to have quotation marks as characters in a string you need to escape them with \ (e.g. "\"").
So the string should be "echo -n cd %s; ls | awk -F'.' '{print $2\".\"$3\".\"$4\"#\"$5}'".
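For reference, after the escaping fix the format string builds an ordinary shell command; with /tmp as a purely hypothetical %s argument, buff would contain:
echo -n cd /tmp; ls | awk -F'.' '{print $2"."$3"."$4"#"$5}'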

Save value in bash array gives array[] not found [closed]

I have this code:
#!/bin/bash
PIDS=$(ls -la /proc | awk '{print $9}' | grep "^[0-9]*$")
PIDLIST=$(echo $PIDS | tr " " "\n")
counter=0
for PID in $PIDLIST; do
KERNEL[$counter]=$(cat "/proc/$PID/stat" | awk '{print $14 }')
counter=$((counter + 1))
done
I'm trying to save the output of the cat "/proc/$PID/stat" | awk '{print $14 }' command in an array named KERNEL, at a position given by a counter.
I have this error:
mitop.sh: 8: mitop.sh: KERNEL[0]=26: not found
What am I doing wrong?
sistemas@DyASO:~$ bash --version
GNU bash, version 4.2.24(1)-release (i686-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
I am using sh ./mitop.sh
That is the problem. You're not executing the script with Bash.
You are executing it with /bin/sh, which is very different.
You need to run it like this:
./mitop.sh
Or like this:
bash ./mitop.sh
That last one is just a sanity check.
The recommended way to run shell scripts is ./the_script.sh,
letting the shebang line decide how the script is executed.
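You can see the difference directly (assuming /bin/sh is dash, as on Debian/Ubuntu; the array assignment is what dash chokes on):
sh -c 'KERNEL[0]=26'                        # dash: "KERNEL[0]=26: not found"
bash -c 'KERNEL[0]=26; echo "${KERNEL[0]}"' # bash: prints 26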
Also, the script can be written better; I would write it this way:
#!/bin/bash
kernel=()
for file in /proc/[0-9]*; do        # the numeric entries in /proc are PIDs
    read -a fields < "$file"/stat   # split the single stat line into an array
    kernel+=("${fields[13]}")       # field 14 of /proc/PID/stat (0-indexed here)
done
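To inspect what was collected (a hypothetical follow-up, not part of the original answer):
declare -p kernel                    # dump the whole array, index by index
echo "collected ${#kernel[@]} values"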

Running command when word appears in log? [closed]

I would like to know if there is any way to scan a text file and then run a command. I have tried grep and got nowhere. I have also tried find ., which sounds promising, but I can't find a good explanation of how to use it. In case you want to know what this will be used for: I have an iPhone app that sends a word over HTTP; the server-side application listens for that word and, when it is received, runs a command.
The following will cat every file that find locates which contains "needle", showing its contents. Modify accordingly:
find . -type f -exec grep -q needle {} \; -exec cat {} \;
In bash, you could tail -f the file and pipe it to this loop:
while read -r LINE; do
    grep -q word <<< "$LINE" && command_to_execute
done
But the best thing would be to place this logic in the web server itself instead of parsing a file (the log file, I am guessing).
UPDATE:
The above loop is expensive to run as grep is called at each iteration. This one is better:
tail -f file | grep word | while read LINE; do
command_to_execute
done
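One caveat with that pipeline (a sketch, assuming GNU grep): when grep writes to a pipe it block-buffers, so command_to_execute can lag well behind the log; --line-buffered makes it fire as soon as the word appears:
tail -f file | grep --line-buffered word | while read -r LINE; do
    command_to_execute
done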

Looking for language database and codes [closed]

I am looking for a table of language names and codes like the ISO 639-1 set: http://en.wikipedia.org/wiki/List_of_ISO_639-1_codes
Thanks
You'll want ISO 639-3 if you want the up-to-date list.
Enhancing Obalix's answer, I have created a bash script that takes a UTF-8 CSV file and inserts it into a database.
Note that the file provided by Obalix is in UTF-16, NOT UTF-8. The script below checks the encoding and tells the user how to convert it.
Of course you will need to modify the insert statement according to your schema.
#!/bin/bash
USAGE="Usage: $0 <csv-file>"
if [ $# -lt 1 ]
then
echo $USAGE
exit 1
fi
csv=$1
if [ ! -f "$csv" ]; then
echo "$csv: No such file"
exit 1
fi
file "$csv" | grep -q UTF-8
if [ $? -ne 0 ]
then
echo $csv: must be in UTF-8 format, use the following command to fix this:
echo "cat $csv | iconv -f UTF-16 -t UTF-8 | tr -d \"\r\" > utf8-$csv"
exit 1
fi
mysql=<PATH/TO/mysql/BINARY>
db=<DATABASE_NAME>
user=<USERNAME>
pass=<PASSWORD>
sql=insert-all-langs.sql
echo "-- Inserting all languages generated on `date`" > $sql
printf "Processing CSV file..."
# prepend _ to all lines so that no line starts by whitespace
sed 's/.*/_&/' $csv | while read l; do
iso6391=`echo "$l" | cut -f4`
name=`echo -e "$l" | cut -f3 | tr -d "\"" | sed 's/'\''/\\\\'\''/g'`
echo $iso6391:$name
# insert ignore suppresses errors for duplicate locales (the row is simply not inserted)
echo "insert ignore into languages (name, locale, rtl, created_at, updated_at) values ('$name', '$iso6391', 0, now(), now());" >> $sql
done
echo Done
printf "Executing SQL..."
cat $sql | $mysql -u$user -p$pass $db
echo Done
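A possible invocation, assuming the script above was saved as insert-langs.sh and the converted file is utf8-langs.csv (both names hypothetical), with the mysql placeholders filled in:
./insert-langs.sh utf8-langs.csv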

Grep: get all files that don't have a line that matches [closed]

I have lots of files with multiple lines, and in most cases one of the lines contains a certain pattern. I would like to list every file that does not have a line with this pattern.
Use the "-L" option in order to have file WITHOUT the pattern. Per the man page:
-L, --files-without-match
Suppress normal output; instead print the name of each input file from which no output would normally have been printed. The scanning will stop on the first match.
Grep returns 0/1 to indicate whether there was a match, so you can do something like this:
for f in *.txt; do
    if ! grep -q "some expression" "$f"; then
        echo "$f"
    fi
done
EDIT: You can also use the -L option:
grep -L "some expression" *
try "count" and filter where equals ":0":
grep -c [pattern] * | grep ':0$'
(if you use TurboGREP cough, you won't have a -L switch ;))
(EDIT: added '$' to end of regex in case there are files with ":0" in the name)
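If the files are spread across subdirectories, a recursive variant (a sketch, assuming GNU grep):
grep -rL "some expression" .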
