I use a .txt file as a database, like this:
is-program-installed= 0
is-program2-installed= 1
is-script3-running= 0
is-var5-declared= 1
But what if I uninstall program 2 and want to set its database value to "0"?
One way to do it with sed:
sed -e '/is-program2-/ s/.$/0/' -i file.txt
It works like this:
The s/.$/0/ replaces the last character with 0: the dot matches any character, and the $ matches the end of the line; hence .$ is the last character on the line.
The /is-program2-/ is a filter, so that the replacement is only executed for matching lines.
The filter pattern I used is a bit lazy: it's short but imprecise. A longer, stricter version would be:
sed -e '/^is-program2-installed= / s/.$/0/' -i file.txt
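For example, on the sample file above, an illustrative before-and-after run:
$ grep is-program2 file.txt
is-program2-installed= 1
$ sed -e '/^is-program2-installed= / s/.$/0/' -i file.txt
$ grep is-program2 file.txt
is-program2-installed= 0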
You can wrap the sed command (see janos' answer above) in a function for ease of use:
Define:
function markuninstalled () {
    PROGRAM=${1?Usage: markuninstalled PROGRAM [FILE]}
    FILE=${2:-file.txt}
    sed -e "/^is-$PROGRAM-/ s/.$/0/" -i.bak "$FILE"
}
and then, use it like this:
markuninstalled program2
and it will modify the default file file.txt and create a backup copy (file.txt.bak).
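For example, an illustrative session with the sample file.txt:
$ markuninstalled program2
$ grep is-program2 file.txt
is-program2-installed= 0
$ ls file.txt*
file.txt  file.txt.bak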
I'm trying to make a shell script to take an input file (thousands of lines) and produce an output file that is the same except that on certain lines there will be added text. When the text is added to the (middle of) the line, the exact added text will depend on a substring on the line. The correlation between the substring and the added text is complex and comes from an external program that I can call in the shell. I don't have the source for this converter program nor any control over how the mapping is done.
To explain further...
I have an input file of this general format:
Blah Blah Unimportant
Something Something
FIELD_INFO(field_name_1, output_1),
FIELD_INFO(field_name_2, output_2),
Yadda Yadda
The whole file needs to be copied, with added text, but the only important parts for me are the field names (e.g. field_name_1, field_name_2). I have a command line program called "converter" that can take a file of field names and output a list of corresponding actions. Converter cannot operate directly on the input file. The input to converter needs to be just field names and the output of converter has extra information I don't need:
converter_field_name_1 "action1" /* Use this action for field_name_1 */
converter_field_name_2 "action2" /* use this action for field_name_2 */
The desire is to create a second file that looks like this:
Blah Blah Unimportant
Something Something
FIELD_INFO(field_name_1, action1, output_1),
FIELD_INFO(field_name_2, action2, output_2),
Yadda Yadda
Here is the script I'm working on, but I've hit a wall (or two):
#!/bin/bash
filename="input_file"
# Let's create an array of the field names to feed to the converter program
field_array=($(sed -e '/^\s*FIELD_INFO/ s/FIELD_INFO(\(.*\),.*),/\1/' -e 't' -e 'd' < ${filename}))
# Save the array to a file, to be able to use the converter's file option
printf "%s\n" "${field_array[#]}" > script_field_names.txt
# Use converter on the whole file and extract only the actions into another array
action_array=($(converter -f script_field_names.txt | cut -d'"' -f 2))
# I will make and use an associative array and try to use
# sed to do the substitution
declare -A mapper
for i in ${!field_array[*]}
do
mapper[${field_array[i]}]=${action_array[i]}
done
#Now go back through the file and add action names (source file unchanged)
sed -e "s/FIELD_INFO(\(.*\),\(.*?),\)/FIELD_INFO(\1, ${mapper[\1], \2}/" < ${filename}
I know now that I can't use the sed group capture "\1" as an index into the mapper array like this. It is not working as a key and the output looks like this:
Blah Blah Unimportant
Something Something
FIELD_INFO(field_name_1, , output_1),
FIELD_INFO(field_name_2, , output_2),
Yadda Yadda
My actual script has debug statements scattered throughout and I know the field array, action array, and mapper array are all getting created correctly. But my idea of using the group capture substring from sed as the index into the mapper array is not working, because I now know that the shell expands the variables before sed runs, so the mapper[] array never sees the captured substring as an index.
What should I be doing instead? This script may only be used once, but it's too time consuming and error prone to do the addition of the action strings by hand. I want to come up with a way to make this work but I can't tell if I'm close or completely on the wrong path.
sed -e "s/FIELD_INFO(\(.*\),\(.*?),\)/FIELD_INFO(\1, ${mapper[\1], \2}/" < ${filename}
[...]
I now know that the shell expands the variables before sed runs, so the mapper[] array never sees the captured substring as an index.
Good job identifying the problem. Also, the non-greedy quantifier .*? does not work with sed and ${mapper[\1], \2} should probably be ${mapper[\1]}, \2.
If you want to keep your current approach I see two options.
Do the replacement line by line in bash, either by creating a giant sed command string that lists the action for each line (sketched below), or by executing sed inside a loop for each line, creating the command strings on the fly.
Instead of the array mapper, create a file that lists the actions to be inserted in the order they appear in the file. Then use GNU sed's R filename command, which inserts the next line from filename. You can use this to insert the correct action each time you come across a field. However, the line break is inserted too, so you have to fiddle with the hold space and so on to remove these line breaks afterwards.
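A rough sketch of the first option, reusing the mapper array and $filename from the question's script (fragile if an action contains sed metacharacters such as / or &):
script=''
for f in "${!mapper[@]}"; do
    # one substitution per field: insert its action after the field name
    script+="/^\s*FIELD_INFO($f,/ s/,/, ${mapper[$f]},/;"
done
sed -e "$script" "$filename"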
Both options are not that great. Therefore I'd switch to awk to insert the actions:
sed -En 's/^\s*FIELD_INFO\(([^,]*).*/\1/p' "$filename" > fields
converter -f fields | cut -d\" -f2 > actions
awk '/^\s*FIELD_INFO\(/ {getline a < "actions"; sub(",", ", " a ",")} 1' "$filename"
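For readability, here is the same awk program spelled out with comments:
awk '
/^\s*FIELD_INFO\(/ {          # only touch FIELD_INFO lines
    getline a < "actions"     # read the next action from the actions file
    sub(",", ", " a ",")      # insert it after the field name (first comma)
}
1                             # print every line, modified or not
' "$filename"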
With GNU grep you can simplify the first line to
grep -Po '^\s*FIELD_INFO\(\K[^,]*' "$filename" > fields
Why not try,
sed -n -e 's/^[ ]*FIELD_INFO(\(.*\),.*,/\1/p' -- input_file > script_field_names.txt
printf '/^[ ]*FIELD_INFO(%s,/ s/(\\(.[^,]*\\), \\(.[^)]*\\))/(\\1, %s, \\2)/\n' \
$(converter -f script_field_names.txt | cut -d'"' -f 2 |
paste -- script_field_names.txt -) |
sed -f /dev/stdin -- input_file
where
paste emits the map of fields (from file) and actions (from stdin)
printf emits a script read by sed from stdin
each script line becomes: /^[ ]*FIELD_INFO(fieldnameN,/ s/(\(.[^,]*\), \(.[^)]*\))/(\1, actionN, \2)/
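With the question's sample data (fields field_name_1 and field_name_2 mapped to action1 and action2), the generated script would be:
/^[ ]*FIELD_INFO(field_name_1,/ s/(\(.[^,]*\), \(.[^)]*\))/(\1, action1, \2)/
/^[ ]*FIELD_INFO(field_name_2,/ s/(\(.[^,]*\), \(.[^)]*\))/(\1, action2, \2)/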
I have a test file with around 20K lines. I want to change a specific string on specific lines; I am given the line numbers and the strings to change. In this scenario I want to change one string to another on multiple lines. Earlier I used something like
sed -i '12s/stringone/stringtwo/g' filename
but in this case I would have to run multiple commands for the same test file, like
sed -i '15s/stringone/stringtwo/g' filename
sed -i '102s/stringone/stringtwo/g' filename
sed -i '11232s/stringone/stringtwo/g' filename
Then I tried the following:
sed -i '12,15,102,11232/stringone/stringtwo/g' filename
but I am getting the error
sed: -e expression #1, char 5: unknown command: `,'
Can someone please help me achieve this?
The functionality you're trying to get from GNU sed would look like this in GNU awk:
awk -i inplace '
BEGIN {
    split("12 15 102 11232",tmp)
    for (i in tmp) lines[tmp[i]]
}
NR in lines { gsub(/stringone/,"stringtwo") }
1' filename
Just like with a sed script, the above will fail when the strings contain regexp or backreference metacharacters. If that's an issue then with awk you can replace gsub() with index() and substr() for string literal operations (which are not supported by sed).
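A sketch of what that could look like, with the old and new strings treated as literal text (a small loop emulates gsub(), since index() and substr() handle one occurrence at a time):
awk -i inplace '
BEGIN {
    split("12 15 102 11232",tmp)
    for (i in tmp) lines[tmp[i]]
    oldstr = "stringone"; newstr = "stringtwo"
}
NR in lines {
    out = ""; rest = $0
    # replace every literal occurrence of oldstr with newstr
    while ( (s = index(rest, oldstr)) > 0 ) {
        out = out substr(rest, 1, s-1) newstr
        rest = substr(rest, s + length(oldstr))
    }
    $0 = out rest
}
1' filename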
You get the error because N,M in sed is a range (from line N to line M); it does not accept a comma-separated list of individual line numbers.
An alternative is to use printf and sed:
sed -i "$(printf '%ds/stringone/stringtwo/g;' 12 15 102 11232)" filename
The printf statement repeats the pattern Ns/stringone/stringtwo/g; for each line number N given as an argument.
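With those line numbers, the command substitution expands to the single expression:
12s/stringone/stringtwo/g;15s/stringone/stringtwo/g;102s/stringone/stringtwo/g;11232s/stringone/stringtwo/g;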
This might work for you (GNU sed):
sed '12ba;15ba;102ba;11232ba;b;:a;s/pattern/replacement/' file
For each address, branch to a common label (in this case :a) and do the substitution; for all other lines, the bare b command jumps past it, so the line is printed unchanged.
If the addresses were in a file:
sed 's/.*/&ba/' fileOfAddresses | sed -f - -e 'b;:a;s/pattern/replacement/' file
While I've handled this task easily in other languages, I'm at a loss as to which commands to use when shell scripting (CentOS/bash).
I have some regex that provides many matches in a file I've read to a variable, and would like to take the regex matches to an array to loop over and process each entry.
I typically use https://regexr.com/ to form my capture groups, and then throw the regex at JS/Python/Go to get an array and loop over it, but with shell scripting I'm not sure what I can use.
So far I've played with sed to find all matches and replace them, but I don't know if it's capable of returning an array of matches to loop over.
Take regex, run on file, get array back. I would love some help with Shell Scripting for this task.
EDIT:
Based on comments, I put this together (not working, according to shellcheck.net):
#!/bin/sh
examplefile="
asset('1a/1b/1c.ext')
asset('2a/2b/2c.ext')
asset('3a/3b/3c.ext')
"
examplearr=($(sed 'asset\((.*)\)' $examplefile))
for el in ${!examplearr[*]}
do
echo "${examplearr[$el]}"
done
This works in bash on a mac:
#!/bin/sh
examplefile="
asset('1a/1b/1c.ext')
asset('2a/2b/2c.ext')
asset('3a/3b/3c.ext')
"
examplearr=(`echo "$examplefile" | sed -e '/.*/s/asset(\(.*\))/\1/'`)
for el in ${examplearr[*]}; do
echo "$el"
done
output:
'1a/1b/1c.ext'
'2a/2b/2c.ext'
'3a/3b/3c.ext'
Note the wrapping of $examplefile in quotes, and the use of sed to replace the entire line with the match. If there will be other content in the file, either on the same lines as the "asset" string or on other lines with no assets at all, you can refine it like this:
#!/bin/sh
examplefile="
fooasset('1a/1b/1c.ext')
asset('2a/2b/2c.ext')bar
foobar
fooasset('3a/3b/3c.ext')bar
"
examplearr=(`echo "$examplefile" | grep asset | sed -e '/.*/s/^.*asset(\(.*\)).*$/\1/'`)
for el in ${examplearr[*]}; do
echo "$el"
done
and achieve the same result.
There are several ways to do this. I'd do it with GNU grep's perl-compatible regexes (ah, delightful line noise):
mapfile -t examplearr < <(grep -oP '(?<=[(]).*?(?=[)])' <<<"$examplefile")
for i in "${!examplearr[@]}"; do printf "%d\t%s\n" $i "${examplearr[i]}"; done
0 '1a/1b/1c.ext'
1 '2a/2b/2c.ext'
2 '3a/3b/3c.ext'
This uses the bash mapfile command to read lines from stdin and assign them to an array.
The bits you're missing from the sed command:
$examplefile is text, not a filename, so you have to send it to sed's stdin
sed's a funny little language with 1-character commands: you've given it the "a" command, which is inappropriate in this case.
you only want to output the captured parts of the matches, not every line, so you need the -n option, and you need to print somewhere: the p flag in s///p means "print the [line] if a substitution was made".
sed -n 's/asset\(([^)]*)\)/\1/p' <<<"$examplefile"
# or
echo "$examplefile" | sed -n 's/asset\(([^)]*)\)/\1/p'
Note that this returns values like ('1a/1b/1c.ext') -- with the parentheses. If you don't want them, add the -r or -E option to sed: among other things, that flips the meaning of ( and \(
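For example, a sketch of the extended-regex variant, which captures only what is inside the parentheses:
sed -En 's/asset\(([^)]*)\)/\1/p' <<<"$examplefile"
This prints:
'1a/1b/1c.ext'
'2a/2b/2c.ext'
'3a/3b/3c.ext'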
I have many files in a directory with extensions like
.text(2) and .text(1).
I want to remove the numbers from the extension, so the output should be like
.text and .text.
Can anyone please help me with a shell script for that?
I am using CentOS.
A pretty portable way of doing it would be this:
for i in *.text*; do mv "$i" "$(echo "$i" | sed 's/([0-9]\{1,\})$//')"; done
Loop through all files which end in .text followed by anything. Use sed to remove any parentheses containing one or more digits from the end of each filename.
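For a single name, the sed part behaves like this (illustrative):
$ echo 'xxxx.text(1)' | sed 's/([0-9]\{1,\})$//'
xxxx.text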
If all of the numbers within the parentheses are single digits and you're using bash, you could also use built-in parameter expansion:
for i in *.text*; do mv "$i" "${i%([0-9])}"; done
The expansion removes any parentheses containing a single digit from the end of each filename.
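For example (illustrative):
$ i='xxxx.text(1)'
$ echo "${i%([0-9])}"
xxxx.text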
Another way, without loops but still with sed (and all the regexps inside), is to pipe the commands to sh:
ls *text* | sed 's/\(.*\)\..*/mv \1* \1.text/' | sh
Example:
[...]$ ls
xxxx.text(1) yyyy.text(2)
[...]$ ls *text* | sed 's/\(.*\)\..*/mv \1* \1.text/' | sh
[...]$ ls
xxxx.text yyyy.text
Explanation:
Everything between \( and \) is stored and can be pasted again by \1 (or \2, \3, ...: a consecutive number for each pair of parentheses used). Therefore, the code above stores all the characters before the last dot \. and then composes a sequence like this:
mv xxxx* xxxx.text
mv yyyy* yyyy.text
That is piped to sh
The simplest way, if the files are in the same folder, is the Perl rename utility:
rename 's/text\([0-9]+\)/text/' *.text*
Every example I was able to find demonstrating the w command of sed has it at the end of the script. What if I can't do that?
An example will probably demonstrate the problem better:
$ echo '123' | sed 'w tempfile; s/[0-9]/\./g'
sed: couldn't open file tempfile; s/[0-9]/\./g: No such file or directory
(How) can I change the above so that sed knows where the filename ends?
P.S. I'm aware that I can do
$ echo '123' | sed 'w tempfile
> s/[0-9]/\./g'
...
Are there prettier options?
P.P.S. People tend to suggest splitting it into two scripts. The question then is: is that safe? What if I was going to branch somewhere after the w command, and so on? Can someone confirm that any script can be split in two after any command without affecting the results?
Final edit: I checked that multiple -e expressions work just like concatenated commands. I thought it was more complex (e.g. that the first one would always have to finish before the second one starts). However, I tried splitting a {..} block of commands between two expressions and it still worked, so the w thing is really not a serious problem. Thanks to all.
You can give a two-line script to sed on one shell line:
echo '123' | sed -e 'w tempfile' -e 's/[0-9]/\./g'
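Illustratively, this prints the substituted line on stdout while tempfile receives the unmodified input:
$ echo '123' | sed -e 'w tempfile' -e 's/[0-9]/\./g'
...
$ cat tempfile
123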
This might work for you (if you're using BASH and probably GNU sed):
echo '123' | sed 'w tempfile'$'\n'';s/[0-9]/\./g'
Explanation:
The r, R and w commands need a newline to terminate the file name.
The answer to the question is "newline":
sed will treat a non-escaped literal newline as the end of the file name.
If your shell is bash, or supports the $'\n' syntax, you can solve the OP's original question this way:
echo '123' | sed 'w tempfile'$'\n''s/[0-9]/\./g'
In a more limited sh you can say
$ echo '123' | sed 'w tempfile
> s/[0-9]/\./g'
What I did here was hit Enter while still inside the single quotes and write the rest of the script on the next line; the newline is passed to sed literally and terminates the file name.
Reverse the two sed commands like this:
echo '123' | sed 's/[0-9]/\./g;w tempfile'
i.e. perform replacements first and then write pattern space into a file.
EDIT: There was some misunderstanding about whether the OP wants the replaced text in the final file or not. The command above puts the replaced text in tempfile. Since that is not what the OP wanted, here is one more version that avoids it:
echo '123' | sed -n 'h;s/[0-9]/\./g;p;x;w tempfile'
Here h saves the original line in the hold space, the substituted line is printed with p, and x brings the original back so that w writes the unmodified line to tempfile.