What is wrong with adding null character to non null-terminated string? - c

Why shouldn't I add a null character to the end of a non-null-terminated string, like in this answer? I mean, if I have a non-null-terminated string and add a null character to the end, I then have a null-terminated string, which should be good, right?
Is there any security problem I don't see?
Here's the code in case the answer gets deleted:
char letters[SIZE + 1]; // Leave room for the null-terminator.
// ...
// Populate letters[].
// ...
letters[SIZE] = '\0'; // Null-terminate the array.

To know where a string ends, you must have a null-terminated string; otherwise there is no way to find the end of the string.

There is nothing technically wrong with terminating the string with \0 this way. However, the approaches you can use to populate the array before adding \0 are error-prone. Consider some situations:
Suppose you decide to populate letters char by char. What happens if you forget to add some letters? What if you add more letters than the expected size?
What if there are thousands of letters to populate the array?
What if you need to populate letters with Unicode characters that (often) require more than one byte per symbol?
Of course you can address these situations very carefully, but they will still be prone to error when the code is maintained.

To be clear: a string in C always has one and only one null character - it is the last character of the string. A string is an array of characters. If an array of characters does not have a null character, it is not a string.
A string is a contiguous sequence of characters terminated by and including the first null character. (C11 §7.1.1 ¶1)
There is nothing wrong with adding a null character to an array of characters as OP coded.
This is a fine way to form a string if:
All the preceding characters are defined.
String functions are not called until after a null character is written.

You shouldn't use it, to avoid errors (or security holes) due to mixing C and Pascal strings.
C-style string: an array of char, terminated by the null character ('\0').
Pascal-style string: a kind of structure, with an int holding the size of the string and an array holding the string itself.
The Pascal style doesn't use in-band control, so the string can contain any char, including '\0'. C strings can't, as they use that character as a sentinel.
The problem is when you mix them, assume one style when it's the other, or try to convert between them.
Converting a C string to Pascal style does no harm. But a legitimate Pascal string can contain a null character, and converting it to C style will cause problems, as C style can't represent it.
A good example of this is the X.509 null-character exploit, where an attacker who owns mysimplesite.com could request an SSL certificate for:
www.bigbank.com\0.mysimplesite.com
X.509 uses Pascal-style strings, so this is a valid name. The CA validates the domain the attacker actually owns and signs the certificate; but a browser using C-style string handling stops at the first null character and treats the certificate as valid for www.bigbank.com.
So, you CAN use it, but you SHOULDN'T, as it risks causing a bug or even a security breach.
More details and info:
https://www.blackhat.com/presentations/bh-usa-09/MARLINSPIKE/BHUSA09-Marlinspike-DefeatSSL-SLIDES.pdf
https://sites.google.com/site/cse825maninthemiddle/odds-and-ends/x-509-null-char-exploit

In general, there are two ways of keeping track of an array of some variable number of things:
Use a terminator. Of course, this is the C approach to representing strings: an array of characters of some unknown size, with the actual string length given by a null terminator.
Use an explicit count stored somewhere else. (As it happens, this is how Pascal traditionally represents strings.)
If you have an array containing a known but not null-terminated sequence of characters, and if you want to turn it into a proper null-terminated string, and if you know that the underlying array is allocated big enough to contain the null terminator, then yes, explicitly setting array[N] to '\0' is not only acceptable, it is the way to do it.
Bottom line: it's a fine technique (if the constraints are met). I don't know why that earlier answer was criticized and downvoted.


Does '\0' appear naturally in text files?

I encountered a somewhat annoying bug today where a string (stored as a char[]) would be printed with junk at the end. The string that was supposed to be printed (using Arduino print/write functions) was correct (it correctly included \r and \n). However, there would be junk printed at the end.
I then allocated an extra element to store a '\0' after the '\r' and '\n' (the last two characters in the string to be printed). Then, print() printed the string correctly. It seems '\0' indicates to the print() function that the string has terminated (I remember reading this in Kernighan's C book).
This bug appeared in code of mine that reads from a text file. It occurred to me that I never encountered '\0' when I designed my code. This leads me to believe that '\0' has no practical use in text editors and is merely used by print functions. Is this correct?
C strings are terminated by the NUL byte ('\0') - this is implicitly appended to any string literals in double quotes, and used as the terminator by all standard library functions operating on strings. From this it follows that C strings can not contain the '\0' terminator in between other characters, since there would be no way to tell whether it is the actual end of string or not.
(Of course you could handle strings in the C language other than as C strings - e.g., simply adding an integer to record the length of the string would make the terminator unnecessary, but such strings would not be fully interoperable with functions expecting C strings.)
A "text file" in general is not governed by the C standard, and a user of a C program could conceivably give a file containing a NUL byte as input to a C program (which would be unable to handle it "correctly" for the above reasons if it read the file into C strings). However, the NUL byte has no valid reason for existing in a plain text file, and it may be considered at least a de facto standard for text files that they do not contain the NUL byte (or certain other control characters, which might break transmission of that text through some terminals or serial protocols).
I would argue that it is an acceptable (though not necessary!) limitation for a program working on plain text input to not guarantee correct output if there are NUL bytes in the input. However, the programmer should be aware of this possibility regardless of whether it will be treated correctly, and not allow it to cause undefined behaviour in their program. Like all user input, it should be considered "unsafe" in the sense that it can contain anything (e.g., it could be maliciously formed on purpose).
This leads me to believe that '\0' has no practical use in text editors and is merely used by print functions. Is this correct?
This is wrong. In C, the end of a character string is designated by the \0 character, commonly known as the null terminator. Almost all string functions declared in the C library's <string.h> use this criterion to check for or find the end of a string.
A text file, on the other hand, will not typically have any \0 characters in it. So, when reading text from a file, you have to null-terminate your character buffer yourself before you print it.
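As a minimal sketch of that (the function name and the buffer cap are my own), read with fread and write the terminator by hand, since fread does not append one:

```c
#include <stdio.h>

/* Read up to cap - 1 bytes from fp into buf and null-terminate it.
   Returns the number of bytes actually read. */
size_t read_text(FILE *fp, char *buf, size_t cap) {
    size_t got = fread(buf, 1, cap - 1, fp);
    buf[got] = '\0';   /* fread does not do this for us */
    return got;
}
```

After this, buf is safe to pass to printf("%s", ...) and other string functions; without the final assignment, printing the buffer walks off into whatever bytes happen to follow it.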
\0 is the C escape sequence for the null character (ASCII code 0) and is widely used to represent the end of a string in memory. The character normally doesn't appear explicitly in a text file, however, by convention, most C strings contain a null terminator at the end. Functions that read a string into memory will generally append a \0 to denote the end of the string, and functions that output a string from memory will similarly expect a \0.
Note that there are other ways of representing strings in memory, for example as a (length, content) pair (Pascal notably used this representation), which do not require a null terminator since the length of the string is known ahead of time.
Common Text Files
The null character '\0', even if rare, can appear in a text file, and code should be prepared to handle reading it. The same goes for other char values outside the typical ASCII range, which may be negative when char is signed.
UTF-16
Some "text" files use UTF-16 encoding and code encountering that, but expecting a typical "text" file will encounter many null characters.
Line Length
Lines can be too long, or too short (only "\n"), or other "text" problems may exist.
Robust code does not trust user/file input until it has been validated and meets expectations. It does not assume null characters are absent.

What happens when a string contains only '\0'? C

In my program I'm reading words from a .txt file and I will be inserting them into both a linked list and a hash table.
If two '\n' characters are read in a row after a word, then the second "word" the program reads will be "\n"; I then overwrite that with '\0', so the string essentially contains only '\0'.
Is it worth putting in an if statement so the next part of my program only executes if the word is a real word (i.e. word[0] != '\0')? Would the string '\0' use up space in the hash table/linked list?
In C, a character array whose first element is \0 is an empty string, i.e. a string of length zero. There's not much sense in keeping empty strings in containers, if that's what you are asking.
It depends if you consider an empty string a valid entry. You seem to be storing words so I would guess that an empty string is of no interest, but that is application specific.
For example, an environment variable can be present (getenv returns a valid pointer) but the value can be "unset": an empty string. In that case the fact that the value is an empty string might be significant.
So, if an empty string is not significant is it worth adding an if statement to ignore it? Generally that would be a "yes", since the overhead of storing and maintaining the empty string could be significantly more than one if statement per entry. But of course that is only a guess, I don't know what your overheads are, how many times that if would get executed, and how many empty string entries you would be saving. You might not know that either, so my fallback position would be only to store data that is significant.

The terminating NULL in an array in C

I have a simple question. Why is it necessary to account for the terminating null in an array of chars (i.e. a string) but not in an array of integers? When I want a string to hold 20 characters I need to declare char string[21];. When I want an array of integers holding 5 values, int digits[5]; is enough. What is the reason for this?
You don't have to terminate a char array with a null character if you don't want to, but when using it to represent a string, you need to, because C uses null-terminated strings to represent its strings. When you use functions that operate on strings (like strlen for the string length, or printf to output a string), those functions read through the data until a '\0' is encountered. If one isn't present, you will likely run into a buffer over-read or a similar access violation/segmentation fault.
In short: that's how C represents string data.
Null terminators are required at the end of strings (or character arrays) because:
Most standard library string functions expect the null character to be there. It's put there in lieu of passing an explicit string length (though some functions require that instead.)
By design, the NUL character (ASCII 0x00) is used to designate the end of a string.
Technically, if you're doing your own string manipulation with your own coded functions, you don't need a null terminator; you just need to keep track of how long the string is. But, if you use just about anything standardized, it will expect it.
It is only by convention that C strings end in the ASCII NUL character. (That's actually something different from NULL, the null pointer constant.)
If you like, you can begin your strings with a nul byte, or randomly include nul bytes in the middle of strings. You will then need your own library.
So the answer is: all arrays must allocate space for all of their elements. Your "20 character string" is simply a 21-character string, including the nul byte.
The reason is that it was a design choice of the original implementors. A null-terminated string gives you a way to pass an array into a function without also passing its size; with an integer array you must always pass the size. It's a convention of the language, nothing more: you could rewrite every string function in C without using a null terminator, but you would always have to keep track of your array sizes.
The purpose of null termination in strings is so that the parser knows when to stop iterating through the array of characters.
So, when you use printf with the %s format character, it's essentially doing this:
int i = 0;
while (input[i] != '\0') {
    output(input[i]);
    i++;
}
This concept is commonly known as a sentinel.
It's not about declaring an array that's one-bigger, it's really about how we choose to define strings in C.
C strings by convention are considered to be a series of characters terminated by a final NUL character, as you know. This is baked into the language in the form of interpreting "string literals", and is adopted by all the standard library functions like strcpy and printf and etc. Everyone agrees that this is how we'll do strings in C, and that character is there to tell those functions where the string stops.
Looking at your question the other way around, the reason you don't do something similar in your arrays of integers is because you have some other way of knowing how long the array is-- either you pass around a length with it, or it has some assumed size. Strings could work this way in C, or have some other structure to them, but they don't -- the guys at Bell Labs decided that "strings" would be a standard array of characters, but would always have the terminating NUL so you'd know where it ended. (This was a good tradeoff at that time.)
It's not absolutely necessary to have the character array be 21 elements. It's only necessary if you follow the (nearly always assumed) convention that the twenty characters be followed by a null terminator. There is usually no such convention for a terminator in integer and other arrays.
Because of technical reasons in how C strings are implemented, compared to other conventions.
Actually - you don't have to NUL-terminate your strings if you don't want to!
The only problem is that you'd have to rewrite all the string libraries, because they depend on the terminator. It's just a matter of doing things the way the library expects if you want to use its functionality.
Just like I have to bring home your daughter at midnight if I wish to date her - just an agreement with the library (or in this case, the father).

Why do strings in C need to be null terminated?

Just wondering why this is the case. I'm eager to know more about low level languages, and I'm only into the basics of C and this is already confusing me.
Do languages like PHP automatically null terminate strings as they are being interpreted and / or parsed?
From Joel's excellent article on the topic:
Remember the way strings work in C: they consist of a bunch of bytes followed by a null character, which has the value 0. This has two obvious implications:
There is no way to know where the string ends (that is, the string length) without moving through it, looking for the null character at the end.
Your string can't have any zeros in it. So you can't store an arbitrary binary blob like a JPEG picture in a C string.
Why do C strings work this way? It's because the PDP-7 microprocessor, on which UNIX and the C programming language were invented, had an ASCIZ string type. ASCIZ meant "ASCII with a Z (zero) at the end."
Is this the only way to store strings? No, in fact, it's one of the worst ways to store strings. For non-trivial programs, APIs, operating systems, class libraries, you should avoid ASCIZ strings like the plague.
Think about what memory is: a contiguous block of byte-sized units that can be filled with any bit patterns.
2a c6 90 f6
A character is simply one of those bit patterns. Its meaning as a string is determined by how you treat it. If you looked at the same part of memory, but using an integer view (or some other type), you'd get a different value.
If you have a variable which is a pointer to the start of a bunch of characters in memory, you must know when that string ends and the next piece of data (or garbage) begins.
Example
Let's look at this string in memory...
H e l l o , w o r l d ! \0
^
|
+------ Pointer to string
...we can see that the string logically ends after the ! character. If there were no \0 (or any other method to determine its end), how would we know when seeking through memory that we had finished with that string? Other languages carry the string length around with the string type to solve this.
I asked this question when my underlying knowledge of computers was limited, and this is the answer that would have helped many years ago. I hope it helps someone else too. :)
C strings are arrays of chars, and in most expressions a C array decays to a pointer to its first element, the start location of the array. But the length (or end) of the array must also be expressed somehow; in the case of strings, null termination is used. An alternative would be to carry the length of the string alongside the pointer, or to put the length in the first array location, or whatever; it's just a matter of convention.
Higher-level languages like Java or PHP store the size information with the array automatically and transparently, so the user needn't worry about it.
C has no notion of strings by itself. Strings are simply arrays of chars (or wchar_t for Unicode and such).
Because of that, C has no built-in way to check, e.g., the length of a string: there is no "mystring->length"; no length value is stored anywhere. The only way to find the end of the string is to iterate over it and check for the \0.
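That iteration is all strlen itself can do; a minimal sketch of an equivalent function (the name my_strlen is my own):

```c
#include <stddef.h>

/* Walk the array until the terminator; the count of characters
   before the '\0' is the string's length. */
static size_t my_strlen(const char *s) {
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}
```

my_strlen("hello") returns 5, exactly as strlen does, and at the same O(n) cost per call; a length-prefixed representation would make this O(1).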
There are string-libraries for C which use structs like
struct string {
    int length;
    char *data;
};
to remove the need for the \0-termination but this is not standard C.
Languages like C++, PHP, Perl, etc. have their own internal string libraries, which often carry a separate length field that speeds up some string functions and removes the need for the \0.
Some other languages (like Pascal) use a string type that is called (surprisingly) a Pascal string; it stores the length in the first byte of the string, which is why those strings are limited to a length of 255 characters.
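That layout can be sketched in C terms (illustrative, not real Pascal; the function name is my own): the first byte stores the length, so no terminator is needed, but the length cannot exceed 255.

```c
#include <string.h>

/* Build a Pascal-style string: pstr[0] holds the length, the
   characters follow, and no null terminator is required. */
static void make_pstring(unsigned char pstr[256], const char *src) {
    size_t len = strlen(src);
    if (len > 255)
        len = 255;              /* one length byte caps us at 255 */
    pstr[0] = (unsigned char)len;
    memcpy(pstr + 1, src, len);
}
```

Reading the length back is just pstr[0], an O(1) operation, unlike strlen's scan; the trade-off is the 255-character ceiling mentioned above.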
Because in C, strings are just a sequence of characters accessed via a pointer to the first character.
There is no space in a pointer to store the length so you need some indication of where the end of the string is.
In C it was decided that this would be indicated by a null character.
In Pascal, for example, the length of a string is recorded in a byte immediately preceding the character data, which is why Pascal strings have a maximum length of 255 characters.
It is a convention - one could have implemented it with another algorithm (e.g. length at the beginning of the buffer).
In a "low level" language such as assembler, it is easy to test for "NULL" efficiently: that might have ease the decision to go with NULL terminated strings as opposed of keeping track of a length counter.
They need to be null terminated so you know how long they are. And yes, they are simply arrays of char.
Higher level languages like PHP may choose to hide the null termination from you or not use it at all - they may maintain a length, for example. C doesn't do it that way because of the overhead involved. High level languages may also not implement strings as an array of char - they could (and some do) implement them as lists of arrays of char, for example.
In C, strings are represented by an array of characters allocated in a contiguous block of memory, and thus there must be either an indicator marking the end of the block (i.e. the null character) or a way of storing the length (like Pascal strings, which are prefixed by a length).
In languages like PHP, Perl, C#, etc., strings may be backed by more complex data structures, so you cannot assume they end in a null character. As a contrived example, you could have a language that represents a string like so:
class string
{
    int length;
    char[] data;
}
but you only see it as a regular string with no length field, as this can be calculated by the runtime environment of the language and is only used internally by it to allocate and access memory correctly.
They are null-terminated because plenty of standard library functions expect them to be.
