Find and replace multi-line content in files

What I want to do is:
find some_files -name '*.html' -exec sed -i "s/`cat old`/`cat new`/g" {} \;
with old and new containing newline characters, slashes and other special characters, which prevent sed from parsing the expression correctly.
I have read about how to escape newline characters for sed, and about the tr command and printf '%q', but I can't make these work properly, maybe because I don't fully understand how they function. Additionally, I don't know which special characters I still have to escape for sed to work.

I'm not sure exactly what you want to do, but if the old file contains newlines, you're probably going to run into trouble: sed works by applying its commands to each line, so trying to match a pattern that spans multiple lines will not work unless you load more lines explicitly.
My suggestion would be to load the whole file into sed's "buffer" before applying the substitute command. Then you have to make sure that old and new are escaped correctly. What can make this more confusing is that the escaping for the old file (the pattern) must be different from the escaping for the new file (the replacement).
Let's start by escaping the new file into a "new.tmp" file. For clarity, we'll create a sed script called "escape_new.sed":
#!/bin/sed -f
# Commas used as separators
s,\\,\\\\,g
s,$,\\,g
s,[/&],\\&,g
$ a/
Then run it: sed -f escape_new.sed new > new.tmp
There are three commands we use to escape:
Backslashes should be preceded by another backslash
Newlines should be preceded by a backslash (we do this by adding a backslash before the end of the line).
Ampersands and slashes should be preceded by a backslash (note that the & in the replacement text is an operator that expands to whatever was matched, so if it matched a slash it would insert the slash, and if it matched an ampersand it would insert the ampersand).
On the last line (referred to with the "$" symbol), we append (through the "a" command) a slash. This is the closing slash for the substitute command we will be using later. We have to put it here because the backticks will remove any extra newlines at the end of the input, and that could cause problems (for example, a backslash meant to quote a newline ending up quoting the terminating slash instead).
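For instance, suppose the new file contains these two hypothetical lines:
foo & bar
baz/qux
Running sed -f escape_new.sed new > new.tmp then produces this new.tmp:
foo \& bar\
baz\/qux\
/
Each line now ends with a backslash (quoting the newline inside the replacement text), the ampersand and the slash are escaped, and the final slash will close the substitute command.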
Now let's escape the old file. As above, we'll create an "escape_old.sed" script. Before we do it though, we need to load the whole file into the pattern space (sed's internal buffer) so we can replace newline characters. We can do that with the following commands:
: a
$! {
N
b a
}
The first command creates a label called "a". The second command ("{") starts a group of commands. The magic here is the "$!" address prefix: it tells sed to run the group only if the line that was just read is not the last line of the input ("$" means last line and "!" means not). The first command in the group, "N", appends the next input line to the pattern space. If "N" were executed on the last line it would terminate the script, so we must be careful not to run it there; that is exactly what the "$!" prefix prevents. The second command in the group, "b", branches ("jumps") back to the "a" label, and the closing bracket ends the group. Together, this loop reads all of the lines, concatenating them in the pattern space, and stops after the last line so that any further commands can be executed.
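To see the loop in action, here is a small hypothetical demonstration (with GNU sed) that loads three lines into the pattern space and then joins them with commas:
$ printf 'a\nb\nc\n' | sed ':a;$!{N;ba};s/\n/,/g'
a,b,c
Putting the loop together with the escaping commands, we get the final escape_old.sed script: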
#!/bin/sed -f
: a
$! {
N
b a
}
s,\\,\\\\,g
s,\n,\\n,g
s,[][/^$.*],\\&,g
As above, we need to escape the special characters. In this case an actual newline is now escaped as a backslash followed by the letter n. In the last command, more characters need to be prefixed by a backslash, because they are special on the pattern side of a substitution. Notice that to match a closing square bracket, it needs to be the first character inside the square brackets, to prevent sed from interpreting it as the end of our list of characters to match. Therefore, the characters listed in order between the square brackets are ][/^$.* (including the asterisk, which is also special in the pattern).
And again, we execute it with: sed -f escape_old.sed old > old.tmp
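Continuing the hypothetical example, if the old file contains:
foo & bar
baz/qux
then old.tmp ends up as the single line
foo & bar\nbaz\/qux
with the embedded newline written as \n and the slash escaped (the ampersand needs no escaping on the pattern side).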
Now we can use these escaped files in the sed command, but again we must load all of the lines into the pattern space. Using the same commands as before, but placed on a single line, we get the compact form :a;$!{N;ba}, which we can now use in the final expression (without the closing slash character, which is now at the end of new.tmp):
find some_files -name '*.html' -exec sed -e ":a;\$!{N;ba};s/`cat old.tmp`/`cat new.tmp`g" -i {} \;
And hopefully it will work =)
Notice that we have escaped the $ symbol with a backslash; otherwise the shell would think that we are trying to expand the $! variable (the process ID of the last background command).

Related

Case statement not properly comparing variable to string [duplicate]

I am making an NW.js app on macOS, and want to run the app in dev mode
by double-clicking on an icon.
In the first step, I'm trying to make my shell script work.
Using VS Code on Windows (I wanted to gain time), I have created a run-nw file at the root of my project, containing this:
#!/bin/bash
cd "src"
npm install
cd ..
./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &
but I get this output:
$ sh ./run-nw
: command not found
: No such file or directory
: command not found
: No such file or directory
Usage: npm <command>
where <command> is one of: (snip commands list)
(snip npm help)
npm#3.10.3 /usr/local/lib/node_modules/npm
: command not found
: No such file or directory
: command not found
Some things I don't understand.
It seems that it takes empty lines as commands.
In my editor (VS Code) I have tried to replace \r\n with \n
(in case the \r creates problems) but it changes nothing.
It seems that it doesn't find the folders
(with or without the dirname instruction),
or maybe it doesn't know about the cd command ?
It seems that it doesn't understand the install argument to npm.
The part that really weirds me out, is that it still runs the app
(if I did an npm install manually)...
Not able to make it work properly, and suspecting something weird with
the file itself, I created a new one directly on the Mac, using vim this time.
I entered the exact same instructions, and... now it works without any
issues.
A diff on the two files reveals exactly zero difference.
What can be the difference? What can make the first script not work? How can I find out?
Update
Following the accepted answer's recommendations, after the wrong line
endings came back, I checked multiple things.
It turns out that since I copied my ~/.gitconfig from my Windows
machine, I had autocrlf=true, so every time I modified the bash
file under Windows, it re-set the line endings to \r\n.
So, in addition to running dos2unix (which you will have to
install using Homebrew on a Mac), if you're using Git, check your
.gitconfig file.
Yes. Bash scripts are sensitive to line-endings, both in the script itself and in data it processes. They should have Unix-style line-endings, i.e., each line is terminated with a Line Feed character (decimal 10, hex 0A in ASCII).
DOS/Windows line endings in the script
With Windows or DOS-style line endings, each line is terminated with a Carriage Return followed by a Line Feed character. You can see the otherwise invisible carriage return in the output of cat -v yourfile:
$ cat -v yourfile
#!/bin/bash^M
^M
cd "src"^M
npm install^M
^M
cd ..^M
./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &^M
In this case, the carriage return (^M in caret notation or \r in C escape notation) is not treated as whitespace. Bash interprets the first line after the shebang (consisting of a single carriage return character) as the name of a command/program to run.
Since there is no command named ^M, it prints : command not found
Since there is no directory named "src"^M (or src^M), it prints : No such file or directory
It passes install^M instead of install as an argument to npm which causes npm to complain.
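If you want to reproduce this in isolation, here is a minimal sketch (the name crlf-demo.sh is made up for the demonstration; the exact error wording varies between Bash versions):
# Deliberately create a small script with DOS (CRLF) line endings
printf '#!/bin/bash\r\ncd "src"\r\n' > crlf-demo.sh
# cat -v shows the carriage returns as ^M
cat -v crlf-demo.sh
# Running it fails because bash looks for a directory literally named src^M
bash crlf-demo.sh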
DOS/Windows line endings in input data
Like above, if you have an input file with carriage returns:
hello^M
world^M
then it will look completely normal in editors and when writing it to screen, but tools may produce strange results. For example, grep will fail to find lines that are obviously there:
$ grep 'hello$' file.txt || grep -x "hello" file.txt
(no match because the line actually ends in ^M)
Appended text will instead overwrite the line because the carriage returns moves the cursor to the start of the line:
$ sed -e 's/$/!/' file.txt
!ello
!orld
String comparison will seem to fail, even though strings appear to be the same when writing to screen:
$ a="hello"; read b < file.txt
$ if [[ "$a" = "$b" ]]
then echo "Variables are equal."
else echo "Sorry, $a is not equal to $b"
fi
Sorry, hello is not equal to hello
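A handy way to see what is really in such a variable is to ask the shell to quote it (printf %q is a Bash feature):
$ printf '%q\n' "$b"
$'hello\r'
The stray \r makes it obvious why the comparison fails.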
Solutions
The solution is to convert the file to use Unix-style line endings. There are a number of ways this can be accomplished:
This can be done using the dos2unix program:
dos2unix filename
Open the file in a capable text editor (Sublime, Notepad++, not Notepad) and configure it to save files with Unix line endings, e.g., with Vim, run the following command before (re)saving:
:set fileformat=unix
If you have a version of the sed utility that supports the -i or --in-place option, e.g., GNU sed, you could run the following command to strip trailing carriage returns:
sed -i 's/\r$//' filename
With other versions of sed, you could use output redirection to write to a new file. Be sure to use a different filename for the redirection target (it can be renamed later).
sed 's/\r$//' filename > filename.unix
Similarly, the tr translation filter can be used to delete unwanted characters from its input:
tr -d '\r' <filename >filename.unix
Cygwin Bash
With the Bash port for Cygwin, there’s a custom igncr option that can be set to ignore the Carriage Return in line endings (presumably because many of its users use native Windows programs to edit their text files).
This can be enabled for the current shell by running set -o igncr.
Setting this option applies only to the current shell process so it can be useful when sourcing files with extraneous carriage returns. If you regularly encounter shell scripts with DOS line endings and want this option to be set permanently, you could set an environment variable called SHELLOPTS (all capital letters) to include igncr. This environment variable is used by Bash to set shell options when it starts (before reading any startup files).
Useful utilities
The file utility is useful for quickly seeing which line endings are used in a text file. Here's what it prints for each file type:
Unix line endings: Bourne-Again shell script, ASCII text executable
Mac line endings: Bourne-Again shell script, ASCII text executable, with CR line terminators
DOS line endings: Bourne-Again shell script, ASCII text executable, with CRLF line terminators
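For example, running it against the script from the question before fixing it would report something along these lines:
$ file run-nw
run-nw: Bourne-Again shell script, ASCII text executable, with CRLF line terminators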
The GNU version of the cat utility has a -v, --show-nonprinting option that displays non-printing characters.
The dos2unix utility is specifically written for converting text files between Unix, Mac and DOS line endings.
Useful links
Wikipedia has an excellent article covering the many different ways of marking the end of a line of text, the history of such encodings and how newlines are treated in different operating systems, programming languages and Internet protocols (e.g., FTP).
Files with classic Mac OS line endings
With Classic Mac OS (pre-OS X), each line was terminated with a Carriage Return (decimal 13, hex 0D in ASCII). If a script file was saved with such line endings, Bash would only see one long line like so:
#!/bin/bash^M^Mcd "src"^Mnpm install^M^Mcd ..^M./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &^M
Since this single long line begins with an octothorpe (#), Bash treats the line (and the whole file) as a single comment.
Note: In 2001, Apple launched Mac OS X which was based on the BSD-derived NeXTSTEP operating system. As a result, OS X also uses Unix-style LF-only line endings and since then, text files terminated with a CR have become extremely rare. Nevertheless, I think it’s worthwhile to show how Bash would attempt to interpret such files.
On JetBrains products (PyCharm, PHPStorm, IDEA, etc.), you'll need to click on CRLF/LF to toggle between the two types of line separators (\r\n and \n).
I was trying to start up my docker container from Windows and got this error from my Bash script:
/bin/bash^M: bad interpreter: No such file or directory
I was using Git Bash and the problem was the Git config, so I just did the steps below and it worked. The first step configures Git to not convert line endings on checkout:
git config --global core.autocrlf input
then delete your local repository
and clone it again.
Many thanks to Jason Harmon in this link:
https://forums.docker.com/t/error-while-running-docker-code-in-powershell/34059/6
Before that, I had tried the following, which didn't work:
dos2unix scriptname.sh
sed -i -e 's/\r$//' scriptname.sh
sed -i -e 's/^M$//' scriptname.sh
If you're using the read command to read from a file (or pipe) that is (or might be) in DOS/Windows format, you can take advantage of the fact that read will trim whitespace from the beginning and ends of lines. If you tell it that carriage returns are whitespace (by adding them to the IFS variable), it'll trim them from the ends of lines.
In bash (or zsh or ksh), that means you'd replace this standard idiom:
IFS= read -r somevar # This will not trim CR
with this:
IFS=$'\r' read -r somevar # This *will* trim CR
(Note: the -r option isn't related to this, it's just usually a good idea to avoid mangling backslashes.)
If you're not using the IFS= prefix (e.g. because you want to split the data into fields), then you'd replace this:
read -r field1 field2 ... # This will not trim CR
with this:
IFS=$' \t\n\r' read -r field1 field2 ... # This *will* trim CR
If you're using a shell that doesn't support the $'...' quoting mode (e.g. dash, the default /bin/sh on some Linux distros), or your script even might be run with such a shell, then you need to get a little more complex:
cr="$(printf '\r')"
IFS="$cr" read -r somevar # Read trimming *only* CR
IFS="$IFS$cr" read -r field1 field2 ... # Read trimming CR and whitespace, and splitting fields
Note that normally, when you change IFS, you should put it back to normal as soon as possible to avoid weird side effects; but in all these cases, it's a prefix to the read command, so it only affects that one command and doesn't have to be reset afterward.
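Putting that together, a minimal sketch that reads a DOS-format file line by line with the carriage returns stripped (the name dosfile.txt is just for illustration):
while IFS=$' \t\n\r' read -r line; do
    printf '%s\n' "$line"    # $line no longer ends in a carriage return
done < dosfile.txt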
Coming from a duplicate, if the problem is that you have files whose names contain ^M at the end, you can rename them with
for f in *$'\r'; do
mv "$f" "${f%$'\r'}"
done
You probably want to fix whatever caused these files to have broken names in the first place (perhaps a script which created them should be run through dos2unix and then rerun?) but sometimes this is not feasible.
The $'\r' syntax is Bash-specific; if you have a different shell, maybe you need to use some other notation. Perhaps see also Difference between sh and bash
Since VS Code is being used, we can see CRLF or LF in the bottom-right of the status bar, depending on which one the file uses, and clicking on it lets us switch between them.
We can also use the "Change End of Line Sequence" command from the command palette. Whichever is easier to remember; they're functionally the same.
One more way to get rid of the unwanted CR ('\r') character is to run the tr command, for example:
$ tr -d '\r' < dosScript.py > nixScript.py
I ran into this issue when I use git with WSL.
Git has a feature that changes the line endings of files according to the OS you are using: on Windows it makes sure the line endings are \r\n, which is not compatible with Linux, which uses only \n.
You can resolve this problem by adding a file named .gitattributes to your Git root directory with lines like the following:
config/* text eol=lf
run.sh text eol=lf
In this example, all files inside the config directory, as well as run.sh, will get line-feed-only line endings.
For Notepad++ users, this can be fixed from the Edit menu via EOL Conversion, by choosing the Unix (LF) format.
The simplest way on macOS / Linux: create a file using the touch command, open it with the vi or vim editor, paste your code and save. This gets rid of the Windows line-ending characters.
If you are using a text editor like BBEdit you can do it at the status bar. There is a selection where you can switch.
For IntelliJ users, here is the solution for writing a Linux script: set the line separator to LF - Unix and macOS (\n).
Since scripts may call each other, an even better solution is to convert all scripts in the folder and its subfolders:
find . -name "*.sh" -exec sed -i -e 's/\r$//' {} +
You can use dos2unix too but many servers do not have it installed by default.
For the sake of completeness, I'll point out another solution which can solve this problem permanently without the need to run dos2unix all the time: creating a symlink whose name is bash followed by a carriage return, so that scripts whose shebang line ends in ^M still find an interpreter. For the #!/bin/bash^M shebang to resolve, the link has to live in /bin:
sudo ln -s /bin/bash "/bin/$(printf 'bash\r')"

ksh: remove last extension from a multiple extension filename

I have a filename in the format dir1/dir2/filename.txt.org and I would like to rename it to dir1/dir2/filename.txt. How can this be done? I tried cut with '.' as the separator, but it also removes .txt.
You can use Korn shell variable expansion instead of a subprocess (e.g. cut). This can be much faster.
example:
var1=dir1/dir2/filename.txt.org
var2=${var1%.*}
If you now print $var2 its value will be dir1/dir2/filename.txt
The % tells it to delete the smallest matching suffix for .* (i.e. the rightmost period character and everything after it).
${variable%pattern} - return the value of variable without the smallest ending portion that matches pattern.
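Applied to the rename from the question, a minimal sketch (it works the same way in ksh and bash):
f=dir1/dir2/filename.txt.org
mv "$f" "${f%.*}"    # renames it to dir1/dir2/filename.txt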
Other variable expansion formats are available, it is worthwhile to study the docs.

Single Quotes or No quotes in file paths in Unix shells

I am new to Unix systems and trying to learn some things with the help of the terminal. I have the following question in my mind: if we can write file paths without single quotes in the terminal (for example: mv path1 path2), then why do we sometimes use single quotes to specify paths? What is the difference between the two?
This is not a question of the operating system, but of the shell you use. You can actually choose which shell you want to use on a unixoid system if multiple are installed (which is usually the case).
In general the shell has to interpret the input you make. It has to decide how to handle the tokens of the input. What to consider as the "command" you want to execute, what as arguments. For the arguments it has to decide if the string is meant as a single argument or multiple arguments.
Without quotes (single or double), whitespace characters are considered separators between words, and words are typically treated as separate arguments. That lets you specify multiple arguments for a single command. If that is not what you want, you can use quote characters to group several whitespace-separated words into a single argument, for example a folder name containing a space character. This works because the shell then knows that everything following the opening quote character, up to the next matching quote character (escaped quotes excepted), is to be treated as a single argument.
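A quick way to see the difference is to look at how many arguments the shell actually passes; printf here simply prints each argument it receives on its own line:
$ printf '<%s>\n' my file name
<my>
<file>
<name>
$ printf '<%s>\n' 'my file name'
<my file name>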
Quoting is used to escape spaces in file names; otherwise a backslash is needed before each space. For instance:
$ rm spaces\ in\ file\ name
$ rm 'spaces in file name'
If your file path does not have spaces, it's probably safe to omit the quotes.

How do I replace a word in a text file with another word and an action?

Hi all,
Suppose we have a text file (file1.txt).
file1.txt contains many words, spaces and enter characters (CR+LF).
I want to find a specific word that is followed by an enter character and replace it with just that word, i.e. eliminate the CR+LF characters.
How?
Thank you
I assume you're asking how to do it programmatically.
LF and CR are characters, and as such they have ASCII codes assigned (10 and 13). You'll need to load the text file, copy it to a new buffer word by word and, whenever you encounter the word you want to replace, check whether it is followed by 13,10 (CR then LF) and simply not copy those two characters if so.
Then write the new buffer back to the file.
Use of regular expressions should make short work of this:
replace word\r\n with word
How exactly this is done depends on your environment / editor / tools. You mentioned CR+LF, which hints that you're using Windows.
If you use Notepad++ for example, it has builtin regex support and you can use these facilities to obtain your goal.
Update: I have tried this variant and it works:
Download Vim for Windows.
Open your file in Vim.
In it, issue the following command:
%s/\v([[:digit:]]+NPN[[:alpha:]]+)\n/\1  /g
Explanation:
%s - work for all lines
\v - easier regex manipulation regarding backslashes
([[:digit:]]+NPN[[:alpha:]]+) - match some digits, then NPN, then letters and capture this
\n - match end of line
\1 - replace everything with first group and two spaces
g - do this many times for each line (this is basically optional)
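For example, on a hypothetical pair of input lines such as
123NPNabc
rest of text
the command joins them into the single line
123NPNabc  rest of text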
If you want to convert CRLF to LF:
sed 's/.$//' # assumes that all lines end with CR/LF
If you want to remove CRLF altogether
cat file1.txt | tr '\n' ' ' # join the lines with a space
cat file1.txt | tr -d '\n' # join the lines without a space
You might have to convert the line endings to unix (CRLF to LF) first and then do the translation.
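If you prefer doing it non-interactively, here is a hedged sketch using tr and GNU sed (FOO stands in for the specific word, which the question doesn't name): first strip the carriage returns, then join any line ending in FOO with the line that follows it:
tr -d '\r' < file1.txt | sed '/FOO$/{N;s/\n//}' > file1.new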
