I have, for example, this line in my file (note that the numbers after the = are different every time):
abcd=1234
and I need to change it to:
abcd=9999
How can I do it?
Try sed -i.backup -e 's/abcd=.*/abcd=9999/' filename.txt
Example contents of filename.txt:
abcd=123
abcd=15652
Output as expected:
abcd=9999
abcd=9999
Note: It has been pointed out to me that this relies on GNU sed, whose -i (in-place editing) option is an extension not available in every sed implementation. I have no means to verify this myself. A portable alternative is to write to a temporary file and move it back into place, e.g. sed -e 's/abcd=.*/abcd=9999/' filename.txt > filename.txt.tmp && mv filename.txt.tmp filename.txt. If someone has a better non-GNU solution, please feel free to edit it in.
I'm trying to open a file with vi, but it says:
Line too long
I read the topic "vi: Line too long", but the solutions offered there (install Vim; use sed, AWK, fold, or less) aren't viable.
The file I'm trying to open has more than 400,000 lines, and commands like more, sed, fold, or view don't help, because I don't know the specific line number. Installing another program is ruled out.
I want to navigate the file, especially its last lines.
The operating system is SunOS 5.8, and any command, editor, or program you propose must already be installed on this version.
Initially I had discarded the tail command, but I now think it is the only solution.
In the end, the solution was tail with an argument giving the number of trailing lines to show.
With this command and more I can navigate the last lines and jump over the line with the too-long problem:
tail -1000 file-with-line-too-long.txt | more
It's a managed, locked-down machine on which I have no permission to install any programs.
:$ moves you to the beginning of the last line.
It also works from the command line (you may have to escape the $):
vi +$ /path/to/file
I have three programs that currently use YACC files to do configuration file parsing. For simplicity, they all read the same configuration file; however, each one responds to keys/values uniquely, so the same .y file can't be used for more than one program. It would be nice not to have to repeat the %token declarations for each one: if I want to add one token, I have to change three files? What year is it??
These methods aren't working or are giving me issues:
The C preprocessor is obviously run AFTER yacc processes the file, so using #include to pull in a #define or other macro will not work.
I've tried to script up something similar using sed:
REPLACE_DATA=$(cat <file>)
NEW_FILE=<file>.tmp
sed 's/$PLACEHOLDER/$REPLACE_DATA/g' <file> > $NEW_FILE
However, it seems to strip the newlines in REPLACE_DATA, and because of the single quotes sed searches for literal instances of $PLACEHOLDER instead of the contents of the PLACEHOLDER variable.
Is there a real include mechanism in YACC, or are there other solutions I'm missing? This is a maintenance nightmare and I'm hoping someone else has run into a similar situation. Thanks in advance.
Here's a sed version, from http://www.grymoire.com/Unix/Sed.html#uh-37:
#!/bin/sh
# Replaces every line of the form '#INCLUDE <file>' on stdin
# with the contents of that file; the filename is passed as $1.
# watch out for a '/' in the parameter
# use alternate search delimiter
sed -e '\_#INCLUDE <'"$1"'>_{
r '"$1"'
d
}'
But traditionally, we used the m4 preprocessor before yacc: keep the shared %token declarations in a single file, pull it into each grammar with m4's include() macro, and run m4 over each .y source before handing it to yacc.
I want to know which system call in Linux C programming can be used to tell whether a file has been modified.
I know that the make utility decides what to rebuild using modification dates alone.
I want to know how to find out whether a file has been modified or not.
Thanks in advance
Using md5sum or sha1sum will hash the contents of the file, which should give you a better indication of actual changes than modification dates.
stat(2) gives you file times and more.
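For instance, a minimal sketch (the path argument and the saved last_seen timestamp are hypothetical, recorded on some earlier run) that compares the current st_mtime against a previous one:
#include <sys/stat.h>
#include <time.h>

/* Returns 1 if the file's mtime is newer than the saved timestamp,
   0 if unchanged, -1 on error (e.g. the file no longer exists).
   'last_seen' is assumed to have been recorded on a previous run. */
int file_modified_since(const char *path, time_t last_seen)
{
    struct stat sb;
    if (stat(path, &sb) == -1)
        return -1;
    return sb.st_mtime > last_seen ? 1 : 0;
}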
Edit 0:
You can look into fcntl(2) and F_NOTIFY flag - you'd have to open the directory, not the file itself though. Or the newer Linux inotify(7) facility.
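A rough sketch of the inotify route (Linux-specific; /tmp/watched.txt is just an example path), blocking until the file is written to:
#include <sys/inotify.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = inotify_init();
    if (fd == -1) { perror("inotify_init"); return 1; }

    /* IN_MODIFY fires when the file's contents change */
    if (inotify_add_watch(fd, "/tmp/watched.txt", IN_MODIFY) == -1) {
        perror("inotify_add_watch");
        return 1;
    }

    char buf[4096];
    if (read(fd, buf, sizeof buf) > 0)   /* blocks until an event arrives */
        puts("file was modified");

    close(fd);
    return 0;
}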
You can use ls with various flags, like -l or -t, and pipe the output to grep. That will tell you when a file was last modified, but it doesn't really tell you whether the file was modified: to know that, you have to keep track of some earlier state to compare against (for example a saved timestamp, a checksum, or a backup copy).
I am not a native English speaker, so please excuse the awkward title of this question; I simply didn't know how to phrase it better.
I am on a FreeBSD box and I have a little filter tool written in C which reads a list of data via stdin and outputs a processed list via stdout. I invoke it somewhat like this: find . -type f | myfilter > /tmp/processed.txt.
Now I want to give my filter a little more exposure and publish it. Convention says that tools should allow something like this: find . -type f | myfilter -f - -o /tmp/processed.txt
This would force me to write code that simply is not needed since the shell can do the job, therefore I tend to leave it out.
My question is: am I missing some argument (other than convention) for why the reading and writing of files should be done in my code and not delegated to shell redirection?
There's absolutely nothing wrong with this. Your filter would have an interface similar to, say, c++filt.
You might consider file handling if you wanted to automatically choose an output file based on the name of an input file, or if you wanted special handling for processing multiple files in a single command.
If you don't want to do either of these, there's nothing wrong with being a simple filter. Anyone who wishes can provide a set of simple shell wrappers for a cmd infile outfile syntax.
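If you do want to offer both styles later, the classic pattern costs only a few lines: treat each argument as an input file and fall back to stdin when there are none. A minimal sketch, with the processing step left as a placeholder:
#include <stdio.h>

/* placeholder for the real per-stream processing */
static void process(FILE *in)
{
    int c;
    while ((c = getc(in)) != EOF)
        putchar(c);
}

int main(int argc, char **argv)
{
    if (argc == 1) {            /* no arguments: behave as a pure filter */
        process(stdin);
        return 0;
    }
    for (int i = 1; i < argc; i++) {
        FILE *in = fopen(argv[i], "r");
        if (!in) { perror(argv[i]); return 1; }
        process(in);
        fclose(in);
    }
    return 0;
}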
That's a needlessly limiting interface. Accepting arguments from the command line is more flexible,
grep foo file | myfilter > /tmp/processed.txt
and it doesn't preclude find from being used:
find . -type f -exec myfilter {} + > /tmp/processed.txt
Actually, to get the same effect as shell redirection you can do this:
freopen( "filename" , "wb" , stdout );
and then, if you have used printf throughout your code, output will be redirected to the file. So you don't need to modify any of the code you've written before, and you can easily adapt to the convention.
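The same trick works for stdin, so a -f/-o option pair can be added without touching the filter logic. A rough sketch, with deliberately naive option parsing (the option names simply mirror the example above):
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    /* naive scan: -f FILE rebinds stdin, -o FILE rebinds stdout */
    for (int i = 1; i + 1 < argc; i++) {
        if (strcmp(argv[i], "-f") == 0 && !freopen(argv[i + 1], "r", stdin)) {
            perror(argv[i + 1]); return 1;
        }
        if (strcmp(argv[i], "-o") == 0 && !freopen(argv[i + 1], "w", stdout)) {
            perror(argv[i + 1]); return 1;
        }
    }

    /* existing filter code, unchanged: reads stdin, writes stdout */
    int c;
    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}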
It is nice to offer the option of running the command with filename arguments, as in your example:
myfilter [-f ./infile] [-o ./outfile]   # or
myfilter [-o outfile] [filename]        # and (the best one)
myfilter [-f file] [-o file]            # when input and output are the same file, the filter should still work correctly
For a nice example, check the sort command. It is usually used as a filter in pipes, but it can take -o output and it correctly handles the same-input/output problem too (e.g. sort -o file file sorts a file in place).
And why is this good? For example, when you want to run the command from C via fork/exec and don't want to start a shell just to handle the I/O redirection. In that case it is much easier (and faster) to execve() the command directly with filename arguments than to start it through a shell wrapper.
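For illustration, a sketch of that fork/execve call; the filter path and arguments are made up, and note that execve() takes an explicit argv array, so no shell parsing is involved:
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

extern char **environ;

int main(void)
{
    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {
        /* child: run the filter directly, no shell in between */
        char *args[] = { "myfilter", "-f", "in.txt", "-o", "out.txt", NULL };
        execve("/usr/local/bin/myfilter", args, environ);
        perror("execve");       /* only reached if exec failed */
        _exit(127);
    }

    int status;
    waitpid(pid, &status, 0);   /* parent: wait for the filter to finish */
    return 0;
}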
I don't know if it is acceptable on Stack Overflow to ask such a specific question, but here I go:
I've found this wonderful piece of code which reads all the samples from a WAV file into an array.
The code compiles, but I can't figure out where to pass the argument saying where the file is and what its name is.
Any help?
PS If this code does what it says it does, I think it could be useful to a lot of people.
Thanks!
If you look at the code, it's reading from stdin (it even says so in the description above the code: "It reads from stdin"). Evidently it's designed to be used as a command-line tool (or "filter") that can be piped together with other tools, or run standalone using I/O redirection (e.g. yourpieceofcode < somefile.wav).
You should be able to pipe output from wget or curl.
wget -O - http://www.whatever.com/somefile.wav | yourpieceofcode