\n characters before EOF in file causing problems - c

I have written some data to a file manually i.e. not by my application.
My code reads the data char by char and stores the characters in different arrays, but my program gets stuck when I add the check for EOF.
After some investigation I found out that my file contains three or four \n characters before EOF. I did not insert them, and I don't understand why they are there.

Want to remove those pesky extra characters? First, see how many of them there are at the end of your file:
od -c <filename> | tail
Then, remove however many characters you don't like. If it's 3:
truncate -s -3 <filename>
But overall, if it were me, I'd change my program to discard undesired newline characters, unless they're truly invalid according to the input file format specification.
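If it helps, here is a minimal sketch in C of that last approach (the file name data.txt and the storage step are placeholders). Note that getc() returns an int, and the result must be kept in an int so the EOF sentinel can be told apart from real characters; comparing a char against EOF is a classic cause of loops that never terminate, which may well be the "stuck" behaviour described above.

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("data.txt", "r");   /* placeholder file name */
    if (f == NULL)
        return 1;

    int c;                              /* int, not char: EOF must fit */
    while ((c = getc(f)) != EOF) {
        if (c == '\n')
            continue;                   /* discard unwanted newlines */
        /* ... store c into the appropriate array ... */
    }

    fclose(f);
    return 0;
}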

It is very easy to add extra newlines to the end of a file in any text editor, and you have to move the cursor around to even notice them. Open your file in your editor and navigate to the end; you'll see the extra newlines.
There is no such thing as an EOF character in general. Windows treats control-Z as EOF in some cases. Perhaps you are talking about the return value from some API that indicates that it has reached the end of file?

Related

“readline” when there is output at beginning of line

I am using the readline (version 6.3, default [non-vi] mode, Ubuntu 14.04) library from within my own program, running in a Terminal window (on a PC). There is a problem when readline() is called while there is previous output on the line not terminated by a newline.
#include <stdio.h>
#include <readline/readline.h>

int main(void)
{
    // Previous output from some other part of the application,
    // which *may* have printed something *not* terminated with a '\n'
    printf("Hello ");
    fflush(stdout);

    char *in = readline("OK> ");
}
So the line looks like:
Hello OK> <caret here>
If you type a small number of characters (up to 5?) and then, say, Ctrl+U (other keys may trigger this too) to delete your input so far, all seems well: readline() moves the caret back to just after its own prompt, i.e. it deletes 5 characters. However, try typing, say:
123456 <Ctrl+U>
Now it deletes back into the Hello, leaving just Hell on the line, followed by the caret, i.e. deleting 6+6==12. So you see:
Hello OK> 123456 <Ctrl+U>
Hell<caret here>
I need one of two possible solutions:
I have realised it depends on how many characters are typed on the line where it goes wrong. Any fix/workaround?
Alternatively, is there a readline library call I could make which would tell me what position/column the caret is at before I invoke readline()? Then at least I could recognise the fact that I am at the end of an existing line and output a \n so as to position myself at the start of a new line first.
I think I can sort of guess that for up to 5 characters typed it does up to 5 backspaces, but beyond that it chooses to do something else, which messes up if it did not start at the beginning of the line?
I see "GNU Readline: how to clear the input line?". Is this the same situation? The solutions seem pretty complex. Is it not possible to ask what column you are at when starting readline(), or to tell it not to try to be so clever at deleting and stick to only deleting as many characters as have actually been typed into it?
It turns out that readline cannot recognise if it is not starting at column #1, and thereby stop itself from messing up the previous output on the line.
The only way to deal with this is to recognise the starting column ourselves, and move to the start of the next line down if the current position is not column #1. Then readline() will always start from the left-most column, without outputting an unnecessary newline when it is already at column #1.
We can do this for the standard "Terminal" because it understands an ANSI escape sequence to query the current row & column of the terminal. The query is sent via characters to stdout and the response is read via characters the terminal inserts into stdin. We must put the terminal into "raw" input mode so that the response characters can be read immediately and will not be echoed.
So here is the code:
rl_prep_terminal(1);                      // put the terminal into "raw" mode
fputs("\033[6n", stdout);                 // <ESC>[6n is the ANSI sequence to query the cursor position
fflush(stdout);                           // stdout may be line-buffered; make sure the query is actually sent
int row, col;                             // terminal will reply with <ESC>[<row>;<col>R
fscanf(stdin, "\033[%d;%dR", &row, &col);
rl_deprep_terminal();                     // restore terminal "cooked" mode
if (col > 1)                              // if beyond the first column...
    fputc('\n', stdout);                  // ...output '\n' to move to the start of the next line
in = readline(prompt);                    // now we can invoke readline() with our prompt

error reading text file; lines read twice and with every other having extra line break

I'm reading a text file (written on a UNIX or Linux machine) that is supposed to have one entry on each line. When I read it with my program and print the file contents to the console, every other entry has an extra line break and each line is repeated twice. Here is the code:
FILE *fullList;
char sline[21];

fullList = fopen("fullList", "r");
if (fullList == NULL)
    exit(EXIT_FAILURE);

while (fgets(sline, sizeof(sline), fullList) != NULL)
{
    puts(sline);
    printf(sline);
}
fclose(fullList);
fclose(fullList);
So if the input file contains
apple
banana
orange
zucchini
cucumber
eggplant
the program would display it as
apple
apple
banana
banana
orange
orange
zucchini
zucchini
cucumber
cucumber
eggplant
eggplant
I'm not sure what's causing it. Must I somehow clear sline before using it again?
That's because you print each line twice - once through puts, and once through printf.
fgets captures newline \n, and puts appends a '\n' of its own, so there's an additional line break after the first printout.
The last line in the file ("eggplant") lacks the trailing '\n', so there's no extra blank line in between the two eggplant printouts.
To fix this problem, first drop one of the two printing calls. Next, make sure that the line you read does not have a \n at the end. You could either strip it off yourself, or use
while (fscanf(fullList, "%20s", sline) == 1) {
    ...
}
It is not advisable to pass your string to printf as the format string, because unexpected format specifiers in the data may lead to undefined behavior. If you decide to use printf, use it as follows:
printf("%s\n", sline);
What do you think this does?
puts(sline);
printf(sline);
The first one prints the line (followed by a newline!). The second one prints the line, but formats anything starting with % in a special way. So puts() gives you an extra newline, but printf() is even worse: look up the documentation and think about what would happen if your file contains "%s" or "%d".
So you want to use only a single output statement, and you don't want double newlines. You could remove the newline from each line before printing, but even better is to use fputs(sline, stdout) which does not add a newline.
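For illustration, here is a sketch of the corrected loop using fputs() (same fullList file as above); the newline that fgets() captured is kept, so each entry prints exactly once, one per line:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fullList = fopen("fullList", "r");
    if (fullList == NULL)
        exit(EXIT_FAILURE);

    char sline[21];
    while (fgets(sline, sizeof(sline), fullList) != NULL)
        fputs(sline, stdout);   /* unlike puts(), adds no extra '\n' */

    fclose(fullList);
    return 0;
}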
As an aside, a bit of advice: using C to process text files is going to be pretty painful for you (as a newcomer to C). I suggest using some other language, such as Python, Ruby, awk, sed, or anything else based on your needs and experience.

What exactly is \r in C language?

#include <stdio.h>
#include <conio.h>  /* getche() and getch() - DOS/Windows-specific */

int main()
{
    int countch = 0;
    int countwd = 1;
    printf("Enter your sentence in lowercase: ");
    char ch = 'a';
    while (ch != '\r')
    {
        ch = getche();
        if (ch == ' ')
            countwd++;
        else
            countch++;
    }
    printf("\n Words = %d ", countwd);
    printf("Characters = %d", countch - 1);
    getch();
}
This is the program where I came across \r. What exactly is its role here? I am a beginner in C and would appreciate a clear explanation.
'\r' is the carriage return character. The main times it would be useful are:
When reading text in binary mode, or which may come from a foreign OS, you'll find (and probably want to discard) it due to CR/LF line-endings from Windows-format text files.
When writing to an interactive terminal on stdout or stderr, '\r' can be used to move the cursor back to the beginning of the line, to overwrite it with new contents. This makes a nice primitive progress indicator.
The example code in your post is definitely a wrong way to use '\r'. It assumes a carriage return will precede the newline character at the end of a line entered, which is non-portable and only true on Windows. Instead the code should look for '\n' (newline), and discard any carriage return it finds before the newline. Or, it could use text mode and have the C library handle the translation (but text mode is ugly and probably should not be used).
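To illustrate the progress-indicator use mentioned above, a minimal sketch (the sleep() call from <unistd.h> assumes a POSIX system and merely slows the loop down so the effect is visible):

#include <stdio.h>
#include <unistd.h>   /* sleep() - assumes a POSIX system */

int main(void)
{
    for (int pct = 0; pct <= 100; pct += 10) {
        printf("\rProgress: %3d%%", pct);   /* '\r' returns to column 1 */
        fflush(stdout);                     /* push the partial line out */
        sleep(1);
    }
    putchar('\n');                          /* finish with a real newline */
    return 0;
}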
It's Carriage Return. Source: http://msdn.microsoft.com/en-us/library/6aw8xdf2(v=vs.80).aspx
The following repeats the loop until the user has pressed the Return key.
while (ch != '\r')
{
    ch = getche();
}
Once upon a time, people had terminals like typewriters (with only upper-case letters, but that's another story). Search for 'Teletype' and consider how 'tty' came to be used for 'terminal device'.
Those devices had two separate motions. The carriage return moved the print head back to the start of the line without scrolling the paper; the line feed character moved the paper up a line without moving the print head back to the beginning of the line. So, on those devices, you needed two control characters to get the print head back to the start of the next line: a carriage return and a line feed. Because this was mechanical, it took time, so you had to pause for long enough before sending more characters to the terminal after sending the CR and LF characters. One use for CR without LF was to do 'bold' by overstriking the characters on the line. You'd write the line out once, then use CR to start over and print twice over the characters that needed to be bold. You could also, of course, type X's over stuff that you wanted partially hidden, or create very dense ASCII art pictures with judicious overstriking.
On Unix, all the logic for this stuff was hidden in a terminal driver. You could use the stty command and the underlying functions (in those days, ioctl() calls; they were sanitized into the termios interface by POSIX.1 in 1988) to tweak all sorts of ways that the terminal behaved.
Eventually, you got 'glass terminals' where the speeds were greater and there were new idiosyncrasies to deal with - Hazeltine glitches and so on and so forth. These got enshrined in the termcap and later terminfo libraries, and then further encapsulated behind the curses library.
However, some other (non-Unix) systems did not hide things as well, and you had to deal with CRLF in your text files - and no, this is not just Windows and DOS that were in the 'CRLF' camp.
Anyway, on some systems, the C library has to deal with text files that contain CRLF line endings and presents those to you as if there were only a newline at the end of the line. However, if you choose to treat the text file as a binary file, you will see the CR characters as well as the LF.
Systems like the old Mac OS (version 9 or earlier) used just CR (aka \r) for the line ending. Systems like DOS and Windows (and, I believe, many of the DEC systems such as VMS and RSTS) used CRLF for the line ending. Many of the Internet standards (such as mail) mandate CRLF line endings. And Unix has always used just LF (aka NL or newline, hence \n) for its line endings. And most people, most of the time, manage to ignore CR.
Your code is rather funky in looking for \r. On a system compliant with the C standard, you won't see the CR unless the file is opened in binary mode; the CRLF or CR will be mapped to NL by the C runtime library.
There are a few characters which can indicate a new line. The usual ones are these two:
'\n' or 0x0A (10 in decimal) -> This character is called "Line Feed" (LF).
'\r' or 0x0D (13 in decimal) -> This one is called "Carriage Return" (CR).
Different Operating Systems handle newlines in a different way. Here is a short list of the most common ones:
DOS and Windows
They expect a newline to be the combination of two characters, namely '\r\n' (or 13 followed by 10).
Unix (and hence Linux as well)
Unix uses a single '\n' to indicate a new line.
Mac
Macs use a single '\r'.
That is not always true; looking for '\r' to detect the Enter key only works on Windows.
When interacting with a terminal (PuTTY, a Linux shell, ...), '\r' is used to return the cursor to the beginning of the line. The difference is easy to see in a PuTTY terminal:
Without '\r': the data arrives ending in just '\n', so the output continues on the next line, but at the current column rather than at the left margin.
With '\r': the string ends with '\r\n', so the cursor not only moves to the next line but also returns to the beginning of the line.
It depends upon which platform you're on as to how it will be translated and whether it will be there at all: Wikipedia entry on newline
\r is the escape sequence for the carriage return character. It brings the cursor back to the beginning of the current line so that the line can be overwritten with new content (the content written after the \r, as in \rhello):
#include <stdio.h>

int main(void)
{
    printf("Hello \rworld");
    return 0;
}
The output of the program will be world rather than Hello world, because \r has put the cursor at the beginning of the line and Hello has been overwritten with world.

Returning the terminal cursor to start-of-line with wrapping enabled

I'm writing a filter (in a pipe destined for a terminal output) that sometimes needs to "overwrite" a line that has just occurred. It works by passing stdin to stdout character-by-character until a \n is reached, and then invoking special behaviour. My problem regards how to return to the beginning of the line.
The first thing I thought of was using a \r or the ANSI sequence \033[1G. However, if the line was long enough to have wrapped on the terminal (and hence caused it to scroll), these will only move the cursor back to the current physical line.
My second idea was to track the length of the line (number of characters passed since previous \n), and then echo \b that many times. However, that goes wrong if the line contained control characters or escape sequences (and possibly Unicode?).
Short of searching for all special sequences and using this to adjust my character count, is there a simple way to achieve this?
Even if there were a "magic sequence" that when written to a console would reliably erase the last written line, you would STILL get the line and the sequence on the output (though hidden on a console). Think what would happen if somebody wrote the output to a file, or passed it down the pipe to other filters? Would they know how to handle such input? And don't tell me you rule out the possibility of writing somewhere else than directly to a console. Sooner or later, somebody WILL want to redirect the output - maybe even you!
The Right Way to do this is to buffer each line in memory as it is processed, and then decide whether to output it or not. There's really no way around this.
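A minimal sketch of that approach in C (the should_discard() rule is a made-up placeholder; a real filter would apply its own logic, and lines longer than the buffer would need extra handling):

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Placeholder predicate: decide whether a buffered line should be
   suppressed rather than emitted. */
static bool should_discard(const char *line)
{
    return strncmp(line, "DROP:", 5) == 0;   /* made-up example rule */
}

int main(void)
{
    char line[4096];
    while (fgets(line, sizeof line, stdin) != NULL) {
        if (!should_discard(line))
            fputs(line, stdout);   /* emit only the lines we keep */
    }
    return 0;
}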
$ cat >test.sh <<'EOF'
> #!/bin/sh
> tput sc
> echo 'Here is a really long multi-line string: .............................................................................................'
> tput rc
> echo 'I went back and overwrote some stuff!!!!'
> echo
> EOF
$ sh test.sh
I went back and overwrote some stuff!!!! .......................................
......................................................
Look for the save_cursor and restore_cursor string capabilities in the terminfo database.
You can query terminal dimensions with a simple ioctl:
#include <sys/types.h>
#include <sys/ioctl.h>
// ...
struct winsize ws;
ioctl(1, TIOCGWINSZ, &ws);   // returns -1 if fd 1 is not a terminal - worth checking
// ws.ws_col and ws.ws_row now contain the terminal dimensions
This way you can prevent printing anything beyond the end of line and simply use the \r method.
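A sketch combining the ioctl with the \r method (it assumes plain single-byte printable characters, i.e. no escape sequences or wide characters in the line):

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Print at most one physical row of `line`, so a later '\r'
   reliably returns to the start of what was printed. */
static void print_clipped(const char *line)
{
    struct winsize ws;
    int cols = 80;                            /* fallback width */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0 && ws.ws_col > 0)
        cols = ws.ws_col;
    printf("%.*s", cols, line);
    fflush(stdout);
}

int main(void)
{
    print_clipped("a long status line, clipped to the terminal width so it cannot wrap");
    printf("\roverwritten!\n");               /* '\r' stays on the same row */
    return 0;
}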

Why should text files end with a newline?

I assume everyone here is familiar with the adage that all text files should end with a newline. I've known of this "rule" for years but I've always wondered — why?
Because that’s how the POSIX standard defines a line:
3.206 Line
A sequence of zero or more non- <newline> characters plus a terminating <newline> character.
Therefore, lines not ending in a newline character aren't considered actual lines. That's why some programs have problems processing the last line of a file if it isn't newline terminated.
There's at least one hard advantage to this guideline when working on a terminal emulator: All Unix tools expect this convention and work with it. For instance, when concatenating files with cat, a file terminated by newline will have a different effect than one without:
$ more a.txt
foo
$ more b.txt
bar$ more c.txt
baz
$ cat {a,b,c}.txt
foo
barbaz
And, as the previous example also demonstrates, when displaying the file on the command line (e.g. via more), a newline-terminated file results in a correct display. An improperly terminated file might be garbled (second line).
For consistency, it’s very helpful to follow this rule – doing otherwise will incur extra work when dealing with the default Unix tools.
Think about it differently: If lines aren’t terminated by newline, making commands such as cat useful is much harder: how do you make a command to concatenate files such that
it puts each file’s start on a new line, which is what you want 95% of the time; but
it allows merging the last and first line of two files, as in the example above between b.txt and c.txt?
Of course this is solvable but you need to make the usage of cat more complex (by adding positional command line arguments, e.g. cat a.txt --no-newline b.txt c.txt), and now the command rather than each individual file controls how it is pasted together with other files. This is almost certainly not convenient.
… Or you need to introduce a special sentinel character to mark a line that is supposed to be continued rather than terminated. Well, now you’re stuck with the same situation as on POSIX, except inverted (line continuation rather than line termination character).
Now, on non POSIX compliant systems (nowadays that’s mostly Windows), the point is moot: files don’t generally end with a newline, and the (informal) definition of a line might for instance be “text that is separated by newlines” (note the emphasis). This is entirely valid. However, for structured data (e.g. programming code) it makes parsing minimally more complicated: it generally means that parsers have to be rewritten. If a parser was originally written with the POSIX definition in mind, then it might be easier to modify the token stream rather than the parser — in other words, add an “artificial newline” token to the end of the input.
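A sketch of that "artificial newline" idea in C (a hypothetical getc()-style wrapper, not any particular parser's API; it keeps one character of state, so it serves a single stream at a time):

#include <stdio.h>

/* Hypothetical reader: behaves like getc(), but guarantees the stream
   appears to end with '\n' before EOF is reported. */
static int getc_nl_terminated(FILE *f)
{
    static int last = '\n';       /* single-stream state, for brevity */
    int c = getc(f);
    if (c == EOF && last != '\n') {
        last = '\n';
        return '\n';              /* inject the artificial final newline */
    }
    if (c != EOF)
        last = c;
    return c;
}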
Each line should be terminated in a newline character, including the last one. Some programs have problems processing the last line of a file if it isn't newline terminated.
GCC warns about it not because it can't process the file, but because it has to as part of the standard.
The C language standard says
A source file that is not empty shall end in a new-line character, which shall not be immediately preceded by a backslash character.
Since this is a "shall" clause, we must emit a diagnostic message for a violation of this rule.
This is in section 2.1.1.2 of the ANSI C 1989 standard. Section 5.1.1.2 of the ISO C 1999 standard (and probably also the ISO C 1990 standard).
Reference: The GCC/GNU mail archive.
This answer is an attempt at a technical answer rather than opinion.
If we want to be POSIX purists, we define a line as:
A sequence of zero or more non- <newline> characters plus a terminating <newline> character.
Source: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_206
An incomplete line as:
A sequence of one or more non- <newline> characters at the end of the file.
Source: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_195
A text file as:
A file that contains characters organized into zero or more lines. The lines do not contain NUL characters and none can exceed {LINE_MAX} bytes in length, including the <newline> character. Although POSIX.1-2008 does not distinguish between text files and binary files (see the ISO C standard), many utilities only produce predictable or meaningful output when operating on text files. The standard utilities that have such restrictions always specify "text files" in their STDIN or INPUT FILES sections.
Source: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_397
A string as:
A contiguous sequence of bytes terminated by and including the first null byte.
Source: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_396
From this, then, we can derive that the only time we will potentially encounter issues is when we deal with the concept of a line of a file, or with a file as a text file (a text file being an organization of zero or more lines, and a line, as we know, must terminate with a <newline>).
Case in point: wc -l filename.
From the wc's manual we read:
A line is defined as a string of characters delimited by a <newline> character.
What are the implications for JavaScript, HTML, and CSS files, then, given that they are text files?
In browsers, modern IDEs, and other front-end applications there are no issues with skipping EOL at EOF; the applications will parse the files properly. They have to, since not all operating systems conform to the POSIX standard, so it would be impractical for non-OS tools (e.g. browsers) to handle files according to the POSIX standard (or any OS-level standard).
As a result, we can be relatively confident that a missing EOL at EOF will have virtually no negative impact at the application level - regardless of whether it is running on a UNIX OS.
At this point we can confidently say that skipping EOL at EOF is safe when dealing with JS, HTML, and CSS on the client side. In fact, we can state that minifying any one of these files so that it contains no <newline> at all is safe.
We can take this one step further and say that as far as Node.js is concerned, it too cannot assume POSIX compliance, since it can run in non-POSIX-compliant environments.
What are we left with then? System level tooling.
This means the only issues that may arise are with tools that try to align their functionality with the semantics of POSIX (e.g. the definition of a line as shown in wc).
Even so, not all shells automatically adhere to POSIX. Bash, for example, does not default to POSIX behavior; there is an environment variable to enable it: POSIXLY_CORRECT.
Food for thought on the value of EOL being <newline>: https://www.rfc-editor.org/old/EOLstory.txt
Staying on the tooling track, for all practical intents and purposes, let's consider this:
Let's work with a file that has no EOL. As of this writing the file in this example is a minified JavaScript with no EOL.
curl http://cdnjs.cloudflare.com/ajax/libs/AniJS/0.5.0/anijs-min.js -o x.js
curl http://cdnjs.cloudflare.com/ajax/libs/AniJS/0.5.0/anijs-min.js -o y.js
$ cat x.js y.js > z.js
-rw-r--r-- 1 milanadamovsky 7905 Aug 14 23:17 x.js
-rw-r--r-- 1 milanadamovsky 7905 Aug 14 23:17 y.js
-rw-r--r-- 1 milanadamovsky 15810 Aug 14 23:18 z.js
Notice that the concatenated file size is exactly the sum of its individual parts. If the concatenation of JavaScript files is a concern, the more appropriate concern would be to start each JavaScript file with a semicolon.
As someone else mentioned in this thread: what if you want to cat two files whose output becomes just one line instead of two? In other words, cat does what it's supposed to do.
The cat man page only mentions reading input up to EOF, not up to <newline>. Note that the -n switch of cat will also number a non-<newline>-terminated line (an incomplete line) as a line - the count starting at 1 (according to the man page):
-n Number the output lines, starting at 1.
Now that we understand how POSIX defines a line, this behavior becomes ambiguous, or really, non-compliant.
Understanding a given tool's purpose and compliance will help in determining how critical it is to end files with an EOL. In C, C++, Java (JARs), etc... some standards will dictate a newline for validity - no such standard exists for JS, HTML, CSS.
For example, instead of using wc -l filename one could use awk '{x++} END {print x}' filename, and rest assured that the task's success is not jeopardized by a file we may want to process that we did not write (e.g. a third-party library such as the minified JS we curl'd) - unless our intent truly was to count lines in the POSIX-compliant sense.
Conclusion
There will be very few real life use cases where skipping EOL at EOF for certain text files such as JS, HTML, and CSS will have a negative impact - if at all. If we rely on <newline> being present, we are restricting the reliability of our tooling only to the files that we author and open ourselves up to potential errors introduced by third party files.
Moral of the story: Engineer tooling that does not have the weakness of relying on EOL at EOF.
Feel free to post use cases as they apply to JS, HTML and CSS where we can examine how skipping EOL has an adverse effect.
It may be related to the difference between:
text file (each line is supposed to end in an end-of-line)
binary file (there are no true "lines" to speak of, and the length of the file must be preserved)
If each line does end in an end-of-line, this avoids, for instance, that concatenating two text files would make the last line of the first run into the first line of the second.
Plus, an editor can check at load whether the file ends in an end-of-line, saves it in its local option 'eol', and uses that when writing the file.
A few years back (2005), many editors (ZDE, Eclipse, Scite, ...) did "forget" that final EOL, which was not very appreciated.
Not only that, but they interpreted that final EOL incorrectly, as 'start a new line', and actually started to display another line as if it already existed.
This was very visible with a 'proper' text file: compare opening it in a well-behaved text editor like vim with opening it in one of the above editors, which displayed an extra line below the real last line of the file. You would see something like this:
1 first line
2 middle line
3 last line
4
Some tools expect this. For example, wc expects this:
$ echo -n "Line not ending in a new line" | wc -l
0
$ echo "Line ending with a new line" | wc -l
1
A separate use case: commit hygiene, when your text file is version controlled.
If content is added to the end of the file, then the line that was previously the last line will have been edited to include a newline character. This means that running blame on the file to find out when that line was last edited will show the newline addition, not the earlier commit you actually wanted to see.
(The example is specific to git, but the same approach applies to other version control systems too.)
Basically, there are many programs which will not process files correctly if they don't get a final EOL before EOF.
GCC warns you about this because it's required by the C standard (section 5.1.1.2, apparently).
"No newline at end of file" compiler warning
I've wondered this myself for years, but I came across a good reason today.
Imagine a file with a record on every line (e.g. a CSV file), being appended to by a computer that suddenly crashed. Was the last record written completely? (Not a nice situation.)
But if we always terminate the last line, then we would know: simply check whether the last line is terminated. Otherwise we would probably have to discard the last line every time, just to be safe.
This originates from the very early days when simple terminals were used. The newline char was used to trigger a 'flush' of the transferred data.
Today, the newline char isn't required anymore. Sure, many apps still have problems if the newline isn't there, but I'd consider that a bug in those apps.
If however you have a text file format where you require the newline, you get simple data verification very cheap: if the file ends with a line that has no newline at the end, you know the file is broken. With only one extra byte for each line, you can detect broken files with high accuracy and almost no CPU time.
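A minimal sketch of that check in C (it assumes a seekable, POSIX-style file; an empty file is reported as not newline-terminated here, which the caller may want to treat specially):

#include <stdio.h>
#include <stdbool.h>

/* true if the file's last byte is '\n' */
static bool ends_with_newline(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return false;
    if (fseek(f, -1L, SEEK_END) != 0) {   /* fails e.g. on an empty file */
        fclose(f);
        return false;
    }
    int last = fgetc(f);
    fclose(f);
    return last == '\n';
}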
In addition to the above practical reasons, it wouldn't surprise me if the originators of Unix (Thompson, Ritchie, et al.) or their Multics predecessors realized that there is a theoretical reason to use line terminators rather than line separators: With line terminators, you can encode all possible files of lines. With line separators, there's no difference between a file of zero lines and a file containing a single empty line; both of them are encoded as a file containing zero characters.
So, the reasons are:
Because that's the way POSIX defines it.
Because some tools expect it or "misbehave" without it. For example, wc -l will not count a final "line" if it doesn't end with a newline.
Because it's simple and convenient. On Unix, cat just works, and it works without complication: it simply copies the bytes of each file, with no need for interpretation. There's no true DOS equivalent to cat; using copy a+b c will end up merging the last line of file a with the first line of file b whenever a lacks a final newline.
Because a file (or stream) of zero lines can be distinguished from a file of one empty line.
Why should text files end with a newline?
Because that's the sanest choice to make.
Take a file with the following content,
one\n
two\n
three
where \n means a newline character, which on Windows is \r\n, a return character followed by line feed, because it's so cool, right?
How many lines does this file have? Windows says 3, we say 3, POSIX (Linux) says that the file is crippled because there should be a \n at the end of it.
Regardless, what would you say its last line is? I guess anybody agrees that three is the last line of the file, but POSIX says that's a crippled line.
And what is its second line? Oh, here we have the first strong separation:
Windows says two because a file is "lines separated by newlines" (wth?);
POSIX says two\n, adding that that's a true, honest line.
What's the consequence of Windows choice, then? Simple:
You cannot say that a file is made up of lines
Why? Try taking the last line from the previous file and replicating it a few times... What do you get? This:
one\n
two\n
threethreethreethree
Try, instead, to swap second and third line... And you get this:
one\n
threetwo\n
Therefore
You must say that a text file is an alternation of lines and \ns, which starts with a line and ends with a line
which is quite a mouthful, right?
And you want another strange consequence?
You must accept that an empty file (0 bytes, really 0 bits) is a one-line file, magically, always because they are cool at Microsoft
Which is quite crazy, don't you think?
What is the consequence of POSIX choice?
That the file on the top is just a bit crippled, and we need some hack to deal with it.
Being serious
I'm being provocative in the preceding text, for the reason that dealing with text files lacking the \n at the end forces you to treat them with ad-hoc tricks/hacks. You always need an if/else somewhere to make things work, where the branch dealing with the crippled line only deals with the crippled line, all the other lines taking the other branch. That last line always gets singled out for special treatment, no?
My conclusion
I'm in favour of POSIX definition of a line for the following reasons:
A file is naturally conceived as a sequence of lines
A line shouldn't be one thing or another depending on where it is in the file
An empty file is not a one-line file, come on!
You should not be forced to make hacks in your code
And yes, Windows does encourage you to omit the trailing \r\n: if you want a two-line file, you have to omit the trailing \r\n, otherwise text editors will show it as a three-line file.
Presumably simply that some parsing code expected it to be there.
I'm not sure I would consider it a "rule", and it certainly isn't something I adhere to religiously. Most sensible code will know how to parse text (including encodings) line-by-line (any choice of line endings), with-or-without a newline on the last line.
Indeed - if you end with a new line: is there (in theory) an empty final line between the EOL and the EOF? One to ponder...
There's also a practical programming issue with files lacking newlines at the end: The read Bash built-in (I don't know about other read implementations) doesn't work as expected:
printf $'foo\nbar' | while read line
do
    echo $line
done
This prints only foo! The reason is that when read encounters the last line, it writes the contents to $line but returns exit code 1 because it reached EOF. This breaks the while loop, so we never reach the echo $line part. If you want to handle this situation, you have to do the following:
while read line || [ -n "${line-}" ]
do
    echo $line
done < <(printf $'foo\nbar')
That is, do the echo if the read failed because of a non-empty line at end of file. Naturally, in this case there will be one extra newline in the output which was not in the input.
Why should (text) files end with a newline?
As well expressed by many, because:
Many programs do not behave well, or fail without it.
Even when a program handles a file lacking an ending '\n' well, the tool's behavior may not meet the user's expectations - which can be unclear in this corner case.
Programs rarely disallow final '\n' (I do not know of any).
Yet this raises the next question:
What should code do about text files without a newline?
Most important - Do not write code that assumes a text file ends with a newline. Assuming a file conforms to a format leads to data corruption, hacker attacks and crashes. Example:
// Bad code
while (fgets(buf, sizeof buf, instream)) {
    // If there is no '\n', the last real character is wrongly chopped off;
    // worse, if the first byte read was '\0', buf[-1] is accessed - who knows what happens
    buf[strlen(buf) - 1] = '\0'; // attempt to remove the trailing \n
    ...
}
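A safer variant (a sketch reusing the same buf and instream as above; strcspn() is from <string.h>): strcspn() finds the '\n' if present, or the terminating '\0' if not, so nothing is wrongly chopped off.

// Safer: strip the '\n' only when it is actually there
while (fgets(buf, sizeof buf, instream)) {
    buf[strcspn(buf, "\n")] = '\0';   // harmlessly rewrites the '\0' when there is no '\n'
    ...
}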
If the final trailing '\n' is needed, alert the user to its absence and the action taken. In other words, validate the file's format. Note: this may include a limit on the maximum line length, the character encoding, etc.
Define clearly, document, the code's handling of a missing final '\n'.
Where possible, do not generate files that lack the ending '\n'.
It's very late here, but I just faced a bug in file processing, and it came about because the files did not end with a newline. We were processing text files with sed, and sed was omitting the last line from its output, which produced an invalid JSON structure and sent the rest of the process into a failed state.
All we were doing was:
Take one sample file, say foo.txt, with some JSON content inside it:
[{
someProp: value
},
{
someProp: value
}] <-- No newline here
The file was created on a Windows machine, and Windows scripts were processing it using PowerShell commands. All good.
When we processed the same file using the sed command sed 's|value|newValue|g' foo.txt > foo.txt.tmp
The newly generated file was
[{
someProp: value
},
{
someProp: value
and boom, it failed the rest of the processing because of the invalid JSON.
So it's always good practice to end your file with a final newline.
I was always under the impression the rule came from the days when parsing a file without an ending newline was difficult. That is, you would end up writing code where an end of line was defined by the EOL character or EOF. It was just simpler to assume a line ended with EOL.
However I believe the rule is derived from C compilers requiring the newline. And as pointed out on “No newline at end of file” compiler warning, #include will not add a newline.
Imagine that the file is being processed while it is still being generated by another process.
It might have to do with that: the final newline acts as a flag indicating that the file is complete and ready to be processed.
I personally like new lines at the end of source code files.
It may have its origin with Linux, or with all UNIX systems for that matter. I remember there being compilation errors (from gcc, if I'm not mistaken) because source code files did not end with a newline. Why it was made this way, one is left to wonder.
IMHO, it's a matter of personal style and opinion.
In olden days, I didn't put that newline. A character saved means more speed through that 14.4K modem.
Later, I put that newline so that it's easier to select the final line using shift+downarrow.
