#define SHELLSCRIPT "\
#/bin/bash \n\
awk 'BEGIN { FS=\":\"; print \"User\t\tUID\n--------------------\"; } { print $1,\"\t\t\",$3;} END { print \"--------------------\nAll Users and UIDs Printed!\" }' /etc/passwd \n\
"
void displayusers()
{
system(SHELLSCRIPT);
}
The error message is:
awk: line 1: runaway string constant "User...
The bash command, which works when run in the terminal, is:
awk 'BEGIN { FS=":"; print "User\t\tUID\n--------------------"; } { print $1,"\t\t",$3;} END { print "--------------------\nAll Users and UIDs Printed!" }' /etc/passwd
I think that somewhere, while using \ to escape the various " characters for C, I messed up my awk. But I'm not sure where. Ideas?
I simply took your string and used it in a printf() statement, and then analyzed the output:
#include <stdio.h>
#define SHELLSCRIPT "\
#/bin/bash \n\
awk 'BEGIN { FS=\":\"; print \"User\t\tUID\n--------------------\"; } { print $1,\"\t\t\",$3;} END { print \"--------------------\nAll Users and UIDs Printed!\" }' /etc/passwd \n\
"
int main(void)
{
printf("[[%s]]\n", SHELLSCRIPT);
return 0;
}
Example run:
$ ./runaway
[[#/bin/bash
awk 'BEGIN { FS=":"; print "User UID
--------------------"; } { print $1," ",$3;} END { print "--------------------
All Users and UIDs Printed!" }' /etc/passwd
]]
$
When I made the line ends visible (^J marks the end of a line, ^I marks a tab), the problem is transparent:
[[#/bin/bash ^J
awk 'BEGIN { FS=":"; print "User^I^IUID^J
--------------------"; } { print $1,"^I^I",$3;} END { print "--------------------^J
All Users and UIDs Printed!" }' /etc/passwd ^J
]]^J
You have two occurrences of \n in the string which need to be \\n. It is up to you whether you change the appearances of \t to \\t; it works either way.
#define SHELLSCRIPT "\
#/bin/bash\n\
awk 'BEGIN { FS=\":\"; print \"User\t\tUID\\n--------------------\"; } { print $1,\"\t\t\",$3;} END { print \"--------------------\\nAll Users and UIDs Printed!\" }' /etc/passwd\n"
Using that in my program yields:
[[#/bin/bash^J
awk 'BEGIN { FS=":"; print "User^I^IUID\n--------------------"; } { print $1,"^I^I",$3;} END { print "--------------------\nAll Users and UIDs Printed!" }' /etc/passwd^J
]]^J
Note, in particular, the technique used to debug this. Print the data so you can see it precisely.
I haven't tested this, but there's a useful-looking article on just this topic, and it would appear that your choices are something along the following lines:
if you have different quotes surrounding the text, you don't need to escape the interior ones
you can use the same surrounding quotes and escape the interior ones
you can use octal sequences for the quotes, e.g. <\42>
if it gets too confusing, move the string into a separate file where quoting will not be an issue
I am running AIX 5.3.
I have two flat text files.
One is a "master" list of network devices, along with their communication settings(CLLIFile.tbl).
The other is a list of specific network devices that need to have one setting changed, within the main file(specifically, cn to le). The list file is called DDM2000-030215.txt.
I have gotten as far as looping through DDM2000-030215.txt, pulling the lines I need to change with grep from CLLIFile.tbl, changing cn to le with sed, and sending the output to a file.
The trouble is, all I get are the changed lines. I need to make the changes inside CLLIFile.tbl, because I cannot disturb the formatting or structure.
Here's what we tried so far:
for i in $(cat DDM2000-030215.txt)
do
grep -p "$i" CLLIFile.tbl | sed s/cn/le/g >> CLLIFileNew.tbl
done
Basically, I need to replace all instances of 'cn' with 'le' within 'CLLIFile.tbl', but only on lines that contain a network element name from 'DDM2000-030215.txt'.
Your sed (on AIX) will not have an -i option (edit the file in place), and you do not want to use a temporary file. You can try a here-document with vi:
vi CLLIFile.tbl >/dev/null <<END
:1,$ s/cn/le/g
:wq
END
You don't want grep here, because, as you've observed, it only outputs the matching lines. You want to just use sed and have it do the replacement only on the lines that match while passing the other lines through unchanged.
So instead of this:
grep 'pattern' | sed 's/old/new/'
just do this:
sed '/pattern/s/old/new/'
You will have to send the output into a new file, and then move that new file into place to replace the old CLLIFile.tbl. Something like this:
cp CLLIFile.tbl CLLIFile.tbl.bak # make a backup in case something goes awry
sed '/pattern/s/old/new/' CLLIFile.tbl >newclli && mv newclli CLLIFile.tbl
EDIT: Entirely new question, I see. For this, I would use awk:
awk 'NR == FNR { a[++n] = $0; next } { for(i = 1; i <= n; ++i) { if($0 ~ a[i]) { gsub(/cn/, "le"); break } } print }' DDM2000-030215.txt CLLIFile.tbl
This works as follows:
NR == FNR { # when processing the first file
# (DDM2000-030215.txt)
a[++n] = $0 # remember the tokens. This assumes that every
# full line of the file is a search token.
next # That is all.
}
{ # when processing the second file (CLLIFile.tbl)
for(i = 1; i <= n; ++i) { # check all remembered tokens
if($0 ~ a[i]) { # if the line matches one
gsub(/cn/, "le") # replace cn with le
break # and break out of the loop, because that only
# needs to be done once.
}
}
print # print the line, whether it was changed or not.
}
Note that if the contents of DDM2000-030215.txt are to be interpreted as fixed strings rather than regexes, you should use index($0, a[i]) instead of $0 ~ a[i] in the check.
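In that case the one-liner would become (same structure, only the matching test changes):
awk 'NR == FNR { a[++n] = $0; next } { for(i = 1; i <= n; ++i) { if(index($0, a[i])) { gsub(/cn/, "le"); break } } print }' DDM2000-030215.txt CLLIFile.tbl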
I have a file like this:
c
a
b<
d
f
I need to get the index number of the letter which has < as a suffix, in a bash script. I thought of reading the file into an array and then matching it against the regex .<$. But how do I get the index number of the element which matches this regex?
I need the index number because I want to modify this file: get the letter which is pointed to, move the < to the next line, and if it is on the last line, wrap around and place the < after the first line.
you need awk '/<$/ { print NR; }' <your-file>
grep could also be used:
grep -n \< infile
Then:
grep -n \< infile|cut -d : -f 1
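With the sample file from the question saved as infile, that would print:
$ grep -n \< infile
3:b<
$ grep -n \< infile|cut -d : -f 1
3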
So I built this source file:
$ cat file
c
a
b<
d
f<
With the awk below, it will move the < to the next line; but if it is on the last line, the < will be moved to the first line.
awk '{ if (/</) a[NR]
sub(/</,"")
b[NR]=$0 }
END{ for (i in a)
{ if (i==NR) { b[1]=b[1] "<" }
else{ b[i+1]=b[i+1] "<"}
}
for (i=1;i<=NR;i++) print b[i]
}' file
c<
a
b
d<
f
Warning, not an awk programmer.
I have a file, let's call it file.txt. It has a list of numbers which I will be using to find the information I need from the rest of the directory (which is full of files *.asc). The remaining files do not have the same lengths, but since I will be drawing data based on file.txt, the matrix I will be building will have the same number of rows. All files DO however contain the same number of columns, 3. The first column will be compared to file.txt, the second column of each *.asc file will be used to build the matrix. Here is what I have so far:
awk '
NR==FNR{
A[$1];
next}
$1 in A
{print $2 >> "data.txt";}' file.txt *.asc
This, however, prints the information from each file below the previous file. I want the information side by side, like a matrix. I looked up paste, but it seems to be called before awk, and all the examples used only a couple of files. I also tried it in place of print, but that did not work.
If anyone could help me out, this would be the last piece to my project. Thanks so much!
You could try:
awk -f ext.awk file.txt *.asc > data.txt
where ext.awk is
NR==FNR {
A[$1]++
next
}
FNR==1 {
if (ARGIND > 2)
print ""
}
$1 in A {
printf "%s ", $2
}
END {
print ""
}
Update
If you do not have GNU Awk, the ARGIND variable is not available. You could then try:
NR==FNR {
A[$1]++
next
}
FNR==1 {
if (++ai > 1)
print ""
}
$1 in A {
printf "%s ", $2
}
END {
print ""
}
I'd like to do a search for simple if statements in a collection of C source files.
These are statements of the form:
if (condition)
statement;
Any amount of white space or other sequences (e.g. "} else ") might appear on the same line before the if. Comments may appear between the "if (condition)" and "statement;".
I want to exclude compound statements of the form:
if (condition)
{
statement;
statement;
}
I've tried each of the following in awk:
awk '/if \(.*\)[^{]+;/ {print NR $0}' file.c # (A) No results
awk '/if \(.*\)[^{]+/ {print NR $0}' file.c # (B)
awk '/if \(.*\)/ {print NR $0}' file.c # (C)
(B) and (C) give different results. Both include items I'm looking for and items I want to exclude. Part of the problem, obviously, is how to deal with patterns that span multiple lines.
Edge cases (badly formed comments, odd indenting or curly braces in odd places, etc.) can be ignored.
How can I accomplish this?
Based on Al's answer, but with fixes for a couple of problems; plus I decided to check for simple else clauses too, and it prints the full if block:
#!/usr/bin/perl -w
my $line_number = 0;
my $in_if = 0;
my $if_line = "";
#ifdef NEW
my $block = "";
#endif /* NEW */
# Scan through each line
while(<>)
{
# Count the line number
$line_number += 1;
# If we're in an if block
if ($in_if)
{
$block = $block . $line_number . "+ " . $_;
# Check for open braces (and ignore the rest of the if block
# if there is one).
if (/{/)
{
$in_if = 0;
$block = "";
}
# Check for semi-colons and report if present
elsif (/;/)
{
print $if_line;
print $block;
$block = "";
$in_if = 0;
}
}
# If we're not in an if block, look for one and catch the end of the line
elsif (/(if \(.*\)|[^#]else)(.*)/)
{
# Store the line contents
$if_line = $line_number . ": " . $_;
# If the end of the line has a semicolon, report it
if ($2 =~ ';')
{
print $if_line;
}
# If the end of the line contains the opening brace, ignore this if
elsif ($2 =~ '{')
{
}
# Otherwise, read the following lines as they come in
else
{
$in_if = 1;
}
}
}
I'm not sure how you'd do this with a one-liner (I'm sure you could by using sed's 'n' command to read the next line, but it would be very complicated), so you probably want to use a script for this. How about:
perl parse_if.pl file.c
Where parse_if.pl contains:
#!/usr/bin/perl -w
my $line_number = 0;
my $in_if = 0;
my $if_line = "";
# Scan through each line
while(<>)
{
# Count the line number
$line_number += 1;
# If we're in an if block
if ($in_if)
{
# Check for open braces (and ignore the rest of the if block
# if there is one).
if (/{/)
{
$in_if = 0;
}
# Check for semi-colons and report if present
elsif (/;/)
{
print $if_line_number . ": " . $if_line;
$in_if = 0;
}
}
# If we're not in an if block, look for one and catch the end of the line
elsif (/^[^#]*\b(?:if|else|while) \(.*\)(.*)/)
{
# Store the line contents
$if_line = $_;
$if_line_number = $line_number;
# If the end of the line has a semicolon, report it
if ($1 =~ ';')
{
print $if_line_number . ": " . $if_line;
}
# If the end of the line contains the opening brace, ignore this if
elsif ($1 =~ '{')
{
}
# Otherwise, read the following lines as they come in
else
{
$in_if = 1;
}
}
}
I'm sure you could do something fairly easily in any other language (including awk) if you wanted to; I just thought that I could do it quickest in perl by way of an example.
In awk, each line is treated as a record and "\n" is the record separator. Since all the records are parsed line by line, you need to keep track of the line that follows an if; I don't know a clean way to do that in awk.
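For what it's worth, a rough, untested sketch of that next-line tracking in awk, using getline to read ahead, might look like this:
awk '
/if[ \t]*\(/ && !/\{/ {                  # an if line with no opening brace on it
    if_line = NR ": " $0
    if ((getline stmt) > 0) {            # read ahead to the following line
        if (stmt !~ /\{/ && stmt ~ /;/)  # a bare statement rather than a block
            print if_line "\n" NR ": " stmt
    }
}' file.c
Note that getline consumes the line it reads, so two one-line ifs in a row would not both be reported.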
In Perl, you can do this easily:
open(INFO,"<file.c");
$flag=0;
while($line = <INFO>)
{
if($line =~ m/if\s*\(/ )
{
print $line;
$flag = 1;
}
else
{
print $line && $flag ;
$flag = 0 if($flag);
}
}
Using Awk you can do this by:
awk '
BEGIN { flag=0 }
{
if($0 ~ /if[ \t]*\(/) {   # match "if (" rather than any line containing "if"
print $0;
flag=NR+1
}
if(flag==NR)
print $0
}' try.c
I'm working on a small academic research project about extremely long and complicated functions in the Linux kernel. I'm trying to figure out whether there is a good reason to write 600- or 800-line functions.
For that purpose, I would like to find a tool that can extract a function from a .c file, so I can run some automated tests on the function.
For example, if I have the function cifs_parse_mount_options() within the file connect.c, I'm seeking a solution that would work roughly like:
extract /fs/cifs/connect.c cifs_parse_mount_options
and return the 523 lines of code(!) of the function, from the opening brace to the closing brace.
Of course, any way of coaxing existing software packages like gcc into doing that would be most helpful too.
Thanks,
Udi
EDIT: The answers to "Regex to pull out C function prototype declarations?" convinced me that matching function declarations by regex is far from trivial.
Why don't you write a small Perl/PHP/Python script, or even a small C++, Java, or C# program, that does that?
I don't know of any ready-made tools to do that, but writing the code to parse the text file and extract a function body from a C/C++ source file should not take more than 20 lines of code. The only difficult part will be locating the beginning of the function, and that should be a relatively simple task using a regex. After that, all you need to do is iterate through the rest of the file, keeping track of opening and closing curly braces; when you reach the function body's closing brace, you're done.
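Not a full parser, but a rough awk sketch of that brace-counting idea (it assumes the function name first appears at its definition and that braces inside strings or comments do not throw the count off; the file and function names are the ones from the question):
awk -v fn="cifs_parse_mount_options" '
!infn && $0 ~ fn "[ \t]*\\(" { infn = 1 }    # crude: start at the first line mentioning the name
infn {
    print
    opens  = gsub(/\{/, "{")                 # count braces on this line
    closes = gsub(/\}/, "}")
    depth += opens - closes
    if (opens) seen = 1
    if (seen && depth == 0) exit             # closing brace of the body reached
}' fs/cifs/connect.c
Anything that confuses the brace count (strings containing braces, or definitions split oddly across lines) would need the source run through indent first, as the next answer does.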
indent -kr code -o code.out
awk -f split.awk code.out
You have to adapt split.awk a little bit; it is somewhat specific to my code and refactoring needs (for example, I have some structs that are not typedefs).
And I'm sure you can make a nicer script :-)
--
BEGIN { line=0; FS="";
out=ARGV[ARGC-1] ".out";
var=ARGV[ARGC-1] ".var";
ext=ARGV[ARGC-1] ".ext";
def=ARGV[ARGC-1] ".def";
inc=ARGV[ARGC-1] ".inc";
typ=ARGV[ARGC-1] ".typ";
system ("rm -f " out " " var " " ext " " def " " inc " " typ);
}
/^[ ]*\/\/.*/ { print "comment :" $0 "\n"; print $0 >> out ; next ;}
/^#define.*/ { print "define :" $0 ; print $0 >>def ; next;}
/^#include.*/ { print "define :" $0 ; print $0 >>inc ; next;}
/^typedef.*{$/ { print "typedef var :" $0 "\n"; decl="typedef";print $0 >> typ;infile="typ";next;}
/^extern.*$/ { print "extern :" $0 "\n"; print $0 >> ext;infile="ext";next;}
/^[^ }].*{$/ { print "init var :" $0 "\n";decl="var";print $0 >> var; infile="vars";
print $0;
fout=gensub("^([^ \\*])*[ ]*([a-zA-A0-9_]*)\\[.*","\\2","g") ".vars";
print "var decl : " $0 "in file " fout;
print $0 >fout;
next;
}
/^[^ }].*)$/ { print "func :" $0 "\n";decl="func"; infile="func";
print $0;
fout=gensub("^.*[ \\*]([a-zA-A0-9_]*)[ ]*\\(.*","\\1","g") ".func";
print "function : " $0 "in file " fout;
print $0 >fout;
next;
}
/^}[ ]*$/ { print "end of " decl ":" $0 "\n";
if(infile=="typ") {
print $0 >> typ;
}else if (infile=="ext"){
print $0 >> ext;
}else if (infile=="var") {
print $0 >> var;
}else if ((infile=="func")||(infile=="vars")) {
print $0 >> fout;
fflush (fout);
close (fout);
}else if (infile=="def") {
print $0 >> def;
}else if (infile=="inc"){
print $0 >> inc;
}else print $0 >> out;
next;
}
/^[a-zA-Z_]/ { print "extern :" $0 "\n"; print $0 >> var;infile="var";next;}
{ print "other :" $0 "\n" ;
if(infile=="typ") {
print $0 >> typ;
}else if (infile=="ext"){
print $0 >> ext;
}else if (infile=="var") {
print $0 >> var;
}else if ((infile=="func")||(infile=="vars")){
print $0 >> fout;
}else if (infile=="def") {
print $0 >> def;
}else if (infile=="inc"){
print $0 >> inc;
}else print $0 >> out;
next;
}
In case you find it difficult to extract the function names:
1. Use ctags (a program) to extract the function names:
ctags -x --c-kinds=fp path_to_file
2. Once you have the function names, write a simple Perl script to extract the contents of each function, passing it the function name as described above.
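For example, a quick sketch of step 1 (the exact column layout of ctags -x can vary between ctags implementations, but the first field is the tag name):
ctags -x --c-kinds=fp connect.c | awk '{ print $1 }'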
The Bash builtin declare appears to provide similar functionality, but I am not sure how it is implemented. In particular, declare -F lists the names of the functions in the present environment:
declare -f quote
declare -f quote_readline
declare -f outputs the definitions of the functions in the present environment:
quote ()
{
local quoted=${1//\'/\'\\\'\'};
printf "'%s'" "$quoted"
}
quote_readline ()
{
local ret;
_quote_readline_by_ref "$1" ret;
printf %s "$ret"
}
Finally, declare -f quote outputs the function definition for the quote function.
quote ()
{
local quoted=${1//\'/\'\\\'\'};
printf "'%s'" "$quoted"
}
Perhaps the underlying machinery can be repurposed to meet your needs.
You should use something like Clang, which will actually parse your source code and let you analyse it. It can find functions in the languages it supports, even when macros are involved. You have no chance of doing this correctly with regular expressions.