Is there any practical difference between using wcscpy_s and using wcsncpy? The only difference seems to be the order of parameters and return value:
errno_t wcscpy_s(wchar_t *strDestination,
size_t numberOfElements,
const wchar_t *strSource);
wchar_t *wcsncpy(wchar_t *strDest,
const wchar_t *strSource,
size_t count );
And if there is no practical difference, why did Microsoft need to add wcscpy_s to Visual Studio, when wcsncpy was already available and a standard function?
Is it OK to replace wcscpy_s with wcsncpy when porting from Visual Studio to gcc?
These two functions do not have the same behavior.
From the MSDN documentation of wcscpy_s:
Upon successful execution, the destination string will always be null terminated.
From the specification of wcsncpy (C11 7.29.4.2.2/1-3):
#include <wchar.h>
wchar_t *wcsncpy(wchar_t * restrict s1,
const wchar_t * restrict s2,
size_t n);
The wcsncpy function copies not more than n wide characters (those that follow a null
wide character are not copied) from the array pointed to by s2 to the array pointed to by
s1.
If the array pointed to by s2 is a wide string that is shorter than n wide characters, null wide characters are appended to the copy in the array pointed to by s1, until n wide characters in all have been written.
and the footnote (#346):
Thus, if there is no null wide character in the first n wide characters of the array pointed to by s2, the result will not be null-terminated.
Note that strncpy and wcsncpy are not designed for use with null-terminated strings. They are designed for use with null-padded, fixed-width strings.
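If you do replace wcscpy_s with wcsncpy when porting to gcc, you therefore have to force the terminator yourself. A minimal sketch, assuming silent truncation is acceptable (the helper name and error convention are made up for illustration, and unlike wcscpy_s it never invokes an invalid parameter handler):
#include <wchar.h>

/* Sketch of a wcscpy_s-like helper for toolchains without Annex K.
 * It truncates instead of reporting a constraint violation. */
static int wcscpy_trunc(wchar_t *dst, size_t dst_elems, const wchar_t *src)
{
    if (dst == NULL || src == NULL || dst_elems == 0)
        return -1;                        /* nothing sensible to do */
    wcsncpy(dst, src, dst_elems - 1);     /* may leave dst unterminated... */
    dst[dst_elems - 1] = L'\0';           /* ...so terminate explicitly */
    return 0;
}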
Another difference (that just cost me a couple hours of staring at code, wondering what was going on) is that the wcscpy_s function will, by default, terminate the application if you're going to overrun the buffer.
I expected it to behave like one of the strncpy variants. That is not the case!
You can apparently change this behavior with the _set_invalid_parameter_handler function.
wcscpy_s is more secure: it can detect your error and trigger the invalid parameter handler routine. To handle such errors without crashing, Microsoft provides _set_invalid_parameter_handler.
The functions with the appended _s are the more secure variants. The functions without the trailing _s are usually marked as "deprecated" by, for example, VS2012, and you will get a warning. MSDN has plenty of additional information on this.
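If you want to keep the _s functions but avoid the default process termination on a bad argument, you can install your own handler on the Microsoft CRT. A minimal sketch, assuming the documented handler signature; the handler body and message are purely illustrative, and in release builds the expression/function/file arguments are typically NULL:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Report the problem instead of terminating the process. */
static void report_invalid_param(const wchar_t *expression,
                                 const wchar_t *function,
                                 const wchar_t *file,
                                 unsigned int line,
                                 uintptr_t reserved)
{
    (void)expression; (void)function; (void)file; (void)line; (void)reserved;
    fputs("Invalid parameter passed to a CRT function\n", stderr);
}

/* Somewhere early in main(): */
/* _set_invalid_parameter_handler(report_invalid_param); */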
Related
I have to concatenate wide C-style strings, and based on my research, it seems that something like _memccpy is most ideal (in order to avoid Shlemiel's problem). But I can't seem to find a wide-character version. Does something like that exist?
Does something like that exist?
The C standard library does not contain a wide-character version of Microsoft's _memccpy(). Neither does it contain _memccpy() itself, although POSIX specifies the memccpy() function on which MS's _memccpy() appears to be modeled.
POSIX also defines wcpcpy() (a wide version of stpcpy()), which copies a wide string and returns a pointer to the end of the result. That's not as fully featured as memccpy(), but it would suffice to avoid Shlemiel's problem, if only Microsoft's C library provided a version of it.
You can, however, use swprintf() to concatenate wide strings without suffering from Shlemiel's problem, with the added advantage that it is in the standard library, since C99. It does not provide the memccpy behavior of halting after copying a user-specified (wide) character, but it does return the number of wide characters written, which is equivalent to returning a pointer to the end of the result. Also, it can directly concatenate an arbitrary fixed number of strings in a single call. swprintf does have significant overhead, though.
But of course, if the overhead of swprintf puts you off then it's pretty easy to write your own. The result might not be as efficient as a well-tuned implementation from your library vendor, but we're talking about a scaling problem, so you mainly need to win on the asymptotic complexity front. Simple example:
/*
* Copies at most 'count' wide characters from 'src' to 'dest', stopping after
* copying a wide character with value 0 if that happens first. If no 0 is
* encountered in the first 'count' wide characters of 'src' then the result
* will be unterminated.
* Returns 'dest' + n, where n is the number of non-zero wide characters copied to 'dest'.
*/
wchar_t *wcpncpy(wchar_t *dest, const wchar_t *src, size_t count) {
for (wchar_t *bound = dest + count; dest < bound; ) {
if ((*dest++ = *src++) == 0) return dest - 1;
}
return dest;
}
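A usage sketch: because each call returns the position just past what it copied, the next copy starts there instead of rescanning the whole destination, which is exactly what avoids Shlemiel's problem (the buffer size and the strings below are arbitrary):
wchar_t buf[128];
wchar_t *end = buf;
wchar_t *const limit = buf + (sizeof buf / sizeof buf[0]) - 1; /* reserve one slot */

end = wcpncpy(end, L"Hello, ", (size_t)(limit - end));
end = wcpncpy(end, L"wide ",   (size_t)(limit - end));
end = wcpncpy(end, L"world!",  (size_t)(limit - end));
*end = L'\0';   /* the copies above may leave the result unterminated */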
As we know, different encodings map different representations to the same characters. Using setlocale we can specify the encoding of strings that are read from input, but does this apply to string literals as well? I'd find this surprising, since string literals are fixed at compile time!
This matters for tasks as simple as, for example, determining whether a string read from input contains a specific character. When reading strings from input, it seems sensible to set the locale to the user's locale (setlocale(LC_ALL, "");) so that the string is read and processed correctly. But when we're comparing this string with a character literal, won't problems arise due to mismatched encoding?
In other words: The following snippet seems to work for me. But doesn't it work only because of coincidence? Because - for example? - the source code happened to be saved in the same encoding that is used on the machine during runtime?
#include <stdio.h>
#include <wchar.h>
#include <stdlib.h>
#include <locale.h>
int main()
{
setlocale(LC_ALL, "");
// Read line and convert it to wide string so that wcschr can be used
// So many lines! And that's even though I'm omitting the necessary
// error checking for brevity. Ah I'm also omitting free's
char *s = NULL; size_t n = 0;
getline(&s, &n, stdin);
mbstate_t st = {0}; const char* cs = s;
size_t wn = mbsrtowcs(NULL, &cs, 0, &st);
wchar_t *ws = malloc((wn+1) * sizeof(wchar_t));
st = (mbstate_t){0};
mbsrtowcs(ws, &cs, (wn+1), &st);
int contains_guitar = (wcschr(ws, L'🎸') != NULL);
if(contains_guitar)
printf("Let's rock!\n");
else
printf("Let's not.\n");
return 0;
}
How to do this correctly?
Using setlocale we can specify the encoding of strings that are read from input, but does this apply to string literals as well?
No. String literals use the execution character set, which is defined by your compiler at compile time.
Execution character set does not have to be the same as the source character set, the character set used in the source code. The C compiler is responsible for the translation, and should have options for choosing/defining them. The default depends on the compiler, but on Linux and most current POSIXy systems, is usually UTF-8.
The following snippet seems to work for me. But doesn't it work only because of coincidence?
The example works because the character set of your locale, the source character set, and the execution character set used when the binary was constructed, all happen to be UTF-8.
How to do this correctly?
Two options. One is to use wide characters and wide string literals. The other is to use UTF-8 everywhere.
For wide input and output, see e.g. this example in another answer here.
Do note that getwline() and getwdelim() are not in POSIX.1, but in C11 Annex K. This means they are optional, and as of this writing, not widely available at all. Thus, a custom implementation around fgetwc() is recommended instead. (One based on fgetws(), wcslen(), and/or wcscspn() will not be able to handle embedded nuls, L'\0', correctly.)
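A minimal sketch of such a reader built on fgetwc(), so embedded L'\0' characters survive. The calling convention (a caller-owned growable buffer, with the wide-character count returned) is just one reasonable choice, and error reporting is kept deliberately short:
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

/* Read one line (up to, but not including, L'\n') from 'in' into a growing
 * buffer. Returns the number of wide characters stored, or (size_t)-1 on
 * EOF with nothing read or on allocation failure. The length is returned
 * rather than relying on wcslen(), so embedded L'\0' are preserved. */
size_t read_wide_line(wchar_t **line, size_t *cap, FILE *in)
{
    size_t len = 0;
    wint_t wc = WEOF;

    for (;;) {
        if (len + 1 >= *cap) {                    /* room for wc and L'\0' */
            size_t ncap = (*cap < 16) ? 16 : *cap * 2;
            wchar_t *tmp = realloc(*line, ncap * sizeof *tmp);
            if (tmp == NULL)
                return (size_t)-1;
            *line = tmp;
            *cap = ncap;
        }
        wc = fgetwc(in);
        if (wc == WEOF || wc == L'\n')
            break;
        (*line)[len++] = (wchar_t)wc;
    }
    (*line)[len] = L'\0';
    return (len == 0 && wc == WEOF) ? (size_t)-1 : len;
}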
In a typical wide I/O program, you only need mbstowcs() to convert command-line arguments and environment variables to wide strings.
Using UTF-8 everywhere is also a perfectly valid practical approach, at least if it is well documented, so that users know the program inputs and outputs UTF-8 strings, and developers know to ensure their C compiler uses UTF-8 as the execution character set when compiling those binaries.
Your program can even use e.g.
if (!setlocale(LC_ALL, ""))
fprintf(stderr, "Warning: Your C library does not support your current locale.\n");
if (strcmp("UTF-8", nl_langinfo(CODESET)))
fprintf(stderr, "Warning: Your locale does not use the UTF-8 character set.\n");
to verify the current locale uses UTF-8.
I have used both approaches, depending on the circumstances. It is difficult to say which one is more portable in practice, because as usual, both work just fine on non-Windows OSes without issues.
If you're willing to assume UTF-8,
strstr(s,"🎸")
Or:
strstr(s,u8"🎸")
The latter avoids some assumptions but requires a C11 compiler. If you want the best of both and can sacrifice readability:
strstr(s,"\360\237\216\270")
Is snprintf always null terminating the destination buffer?
In other words, is this sufficient:
char dst[10];
snprintf(dst, sizeof (dst), "blah %s", somestr);
or do you have to do like this, if somestr is long enough?
char dst[10];
somestr[sizeof (dst) - 1] = '\0';
snprintf(dst, sizeof (dst) - 1, "blah %s", somestr);
I am interested both in what the standard says and what some popular libc might do which is not standard behavior.
As the other answers establish, it should. Quoting the description of snprintf:
snprintf ... Writes the results to a character string buffer. (...)
will be terminated with a null character, unless buf_size is zero.
So all you have to take care of is that you don't pass a zero-size buffer to it, because (obviously) it cannot write a zero to "nowhere".
However, beware that Microsoft's library does not have a function called snprintf but instead historically only had a function called _snprintf (note the leading underscore) which does not append a terminating null. Here are the docs (VS 2012; VS 2013 is similar):
http://msdn.microsoft.com/en-us/library/2ts7cx93%28v=vs.110%29.aspx
Return Value
Let len be the length of the formatted data string (not including the
terminating null). len and count are in bytes for _snprintf, wide
characters for _snwprintf.
If len < count, then len characters are stored in buffer, a
null-terminator is appended, and len is returned.
If len = count, then len characters are stored in buffer, no
null-terminator is appended, and len is returned.
If len > count, then count characters are stored in buffer, no
null-terminator is appended, and a negative value is returned.
(...)
Visual Studio 2015 (VC14) apparently introduced the conforming snprintf function, but the legacy one with the leading underscore and the non null-terminating behavior is still there:
The snprintf function truncates the output when len is greater
than or equal to count, by placing a null-terminator at
buffer[count-1]. (...)
For all functions other than snprintf, if len = count, len
characters are stored in buffer, no null-terminator is appended,
(...)
According to the snprintf(3) manpage:
The functions snprintf() and vsnprintf() write at most size bytes (including the trailing null byte ('\0')) to str.
So, yes, no need to terminate if size >= 1.
According to the C standard, unless the buffer size is 0, vsnprintf() and snprintf() null terminates its output.
The snprintf() function shall be equivalent to sprintf(), with the addition of the n argument which states the size of the buffer referred to by s. If n is zero, nothing shall be written and s may be a null pointer. Otherwise, output bytes beyond the n-1st shall be discarded instead of being written to the array, and a null byte is written at the end of the bytes actually written into the array.
So, if you need to know how big a buffer to allocate, use a size of zero, and you can then use a null pointer as the destination. Note that I linked to the POSIX pages, but these explicitly say that there is not intended to be any divergence between Standard C and POSIX where they cover the same ground:
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.
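A hedged sketch of that measure-then-allocate pattern (the helper name and format string are purely illustrative):
#include <stdio.h>
#include <stdlib.h>

/* Format "name = value" into a freshly allocated buffer of exactly the
 * right size, using the size-zero call to measure first. */
char *format_alloc(const char *name, int value)
{
    int needed = snprintf(NULL, 0, "%s = %d", name, value);
    if (needed < 0)
        return NULL;                            /* encoding error */

    char *buf = malloc((size_t)needed + 1);     /* +1 for the terminator */
    if (buf != NULL)
        snprintf(buf, (size_t)needed + 1, "%s = %d", name, value);
    return buf;
}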
Be wary of the Microsoft version of vsnprintf(). It definitely behaves differently from the standard C version when there is not enough space in the buffer (it returns -1 where the standard function returns the required length). It is not entirely clear that the Microsoft version null terminates its output under error conditions, whereas the standard C version does.
However, note that Microsoft has changed the rules (vsnprintf()) since this answer was originally written:
Beginning with the UCRT in Visual Studio 2015 and Windows 10, vsnprintf is no longer identical to _vsnprintf. The vsnprintf function conforms to the C99 standard; _vsnprintf is kept for backward compatibility with older Visual Studio code.
Similar comments apply to snprintf() and sprintf().
Note also the answers to Do you use the TR 24731 safe functions? (see MSDN for the Microsoft version of the vsprintf_s()) and the Mac solution for the safe alternatives to unsafe C standard library functions?
Some older versions of SunOS did weird things with snprintf, might not have NUL-terminated the output, and had return values that didn't match what everyone else was doing, but anything that has been released in the past 10 years has been doing what C99 says.
The ambiguity starts from the C Standard itself. Both C99 and C11 have identical description of snprintf function. Here is the description from C99:
7.19.6.5 The snprintf function
Synopsis
1 #include <stdio.h>
int snprintf(char * restrict s, size_t n, const char * restrict format, ...);
Description
2 The snprintf function is equivalent to fprintf, except that the output is written into an array (specified by argument s) rather than to a stream. If n is zero, nothing is written, and s may be a null pointer. Otherwise, output characters beyond the n-1st are
discarded rather than being written to the array, and a null character is written at the end of the characters actually written into the array. If copying takes place between objects that overlap, the behavior is undefined.
Returns
3 The snprintf function returns the number of characters that would have been written had n been sufficiently large, not counting the terminating null character, or a negative value if an encoding error occurred. Thus, the null-terminated output has been completely written if and only if the returned value is nonnegative and less than n.
On the one hand the sentence
Otherwise, output characters beyond the n-1st are discarded rather than being written to the array, and a null character is written at the end of the characters actually written into the array
says that
if (s points to a 3-character-long array and) n is 3, then 2 characters will be written and the characters beyond the 2nd one are discarded; then the null character is written after those 2 (and the null character will be the 3rd character written).
And this I believe answers the original question.
THE ANSWER:
If copying takes place between objects that overlap, the behavior is undefined.
If n is 0, then nothing is written to the output;
otherwise, if no encoding errors are encountered, the output is ALWAYS null-terminated (regardless of whether the output fits in the output array or not; if it does not fit, some characters are discarded so that the output array is never overflowed);
otherwise (if encoding errors are encountered) the output may remain non-null-terminated.
On the other hand
The last sentence
Thus, the null-terminated output has been completely written if and only if the returned value is nonnegative and less than n
leaves an ambiguity (or my English is not good enough). I can interpret this sentence in at least two ways:
1. The output is null-terminated if and only if the returned value is nonnegative and less than n (which means that if the returned value is not less than n, i.e. the output (including the terminating null character) does not fit in the array, then the output is not null-terminated).
2. The output is complete (no characters have been discarded) if and only if the returned value is nonnegative and less than n.
I believe that the interpretation 1 above contradicts THE ANSWER, causes misunderstanding and lengthy discussions. That is why the last sentence describing the snprintf function needs a change in order to remove any ambiguity (which gives grounds for writing a Proposal to the C language Standard).
An example of unambiguous wording can, I believe, be taken from http://en.cppreference.com/w/c/io/fprintf (see 4)); thanks to @Martin Ba for the link.
See also the question "snprintf: Are there any C Standard Proposals/plans to change the description of this func?".
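Whichever way the last sentence is read, the practical check based on the standard's "Returns" clause looks like this sketch (the buffer size and format are arbitrary):
#include <stdio.h>

int main(void)
{
    char buf[16];
    int rc = snprintf(buf, sizeof buf, "value = %d", 123456789);

    if (rc < 0)
        puts("encoding error: do not rely on buf");
    else if ((size_t)rc >= sizeof buf)
        puts("truncated, but buf is still null-terminated");
    else
        puts("complete, null-terminated output");
    return 0;
}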
strncpy() supposedly protects from buffer overflows. But if it prevents an overflow without null terminating, in all likelihood a subsequent string operation is going to overflow. So to protect against this I find myself doing:
strncpy( dest, src, LEN );
dest[LEN - 1] = '\0';
man strncpy gives:
The strncpy() function is similar, except that not more than n bytes of src are copied. Thus, if there is no null byte among the first n bytes of src, the result will not be null-terminated.
Without null terminating something seemingly innocent like:
printf( "FOO: %s\n", dest );
...could crash.
Are there better, safer alternatives to strncpy()?
strncpy() is not intended to be used as a safer strcpy(), it is supposed to be used to insert one string in the middle of another.
All those "safe" string handling functions such as snprintf() and vsnprintf() are fixes that have been added in later standards to mitigate buffer overflow exploits etc.
Wikipedia mentions strncat() as an alternative to writing your own safe strncpy():
*dst = '\0';
strncat(dst, src, LEN);
EDIT
I missed that strncat() writes past LEN characters when it appends the terminating null, if the source is LEN characters or longer.
Anyway, the point of using strncat() instead of any homegrown solution such as memcpy(..., strlen(...))/whatever is that the implementation of strncat() might be target/platform optimized in the library.
Of course you need to check that dst holds at least the nullchar, so the correct use of strncat() would be something like:
if (LEN) {
*dst = '\0'; strncat(dst, src, LEN-1);
}
I also admit that strncpy() is not very useful for copying a substring into the middle of another string: if the src is shorter than n characters, the null padding will cut the destination string off at that point.
Originally, the 7th Edition UNIX file system (see DIR(5)) had directory entries that limited file names to 14 bytes; each entry in a directory consisted of 2 bytes for the inode number plus 14 bytes for the name, null padded to 14 characters, but not necessarily null-terminated. It's my belief that strncpy() was designed to work with those directory structures - or, at least, it works perfectly for that structure.
Consider:
A 14 character file name was not null terminated.
If the name was shorter than 14 bytes, it was null padded to full length (14 bytes).
This is exactly what would be achieved by:
strncpy(inode->d_name, filename, 14);
So, strncpy() was ideally fitted to its original niche application. It was only coincidentally about preventing overflows of null-terminated strings.
(Note that null padding up to length 14 is not a serious overhead; but if the length of the buffer is 4 KB and all you want is to safely copy 20 characters into it, then the extra 4075 nulls are serious overkill, and can easily lead to quadratic behaviour if you are repeatedly adding material to a long buffer.)
There are already open source implementations like strlcpy that do safe copying.
http://en.wikipedia.org/wiki/Strlcpy
In the references there are links to the sources.
strncpy is safer against buffer overflow attacks by the user of your program; it doesn't protect you against errors that you, the programmer, make, such as printing a non-null-terminated string the way you've described.
You can avoid crashing from the problem you've described by limiting the number of chars printed by printf:
char my_string[10];
//other code here
printf("%.9s",my_string); //limit the number of chars to be printed to 9
Some new alternatives are specified in ISO/IEC TR 24731 (Check https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/coding/317-BSI.html for info). Most of these functions take an additional parameter that specifies the maximum length of the target variable, ensure that all strings are null-terminated, and have names that end in _s (for "safe" ?) to differentiate them from their earlier "unsafe" versions.1
Unfortunately, they're still gaining support and may not be available with your particular tool set. Later versions of Visual Studio will throw warnings if you use the old unsafe functions.
If your tools don't support the new functions, it should be fairly easy to create your own wrappers for the old functions. Here's an example:
errCode_t strncpy_safe(char *sDst, size_t lenDst,
const char *sSrc, size_t count)
{
// No NULLs allowed.
if (sDst == NULL || sSrc == NULL)
return ERR_INVALID_ARGUMENT;
// Validate buffer space.
if (count >= lenDst)
return ERR_BUFFER_OVERFLOW;
// Copy and always null-terminate
memcpy(sDst, sSrc, count);
*(sDst + count) = '\0';
return OK;
}
You can change the function to suit your needs, for example, to always copy as much of the string as possible without overflowing. In fact, the VC++ implementation can do this if you pass _TRUNCATE as the count.
1Of course, you still need to be accurate about the size of the target buffer: if you supply a 3-character buffer but tell strcpy_s() it has space for 25 chars, you're still in trouble.
Use strlcpy(), specified here: http://www.courtesan.com/todd/papers/strlcpy.html
If your libc doesn't have an implementation, then try this one:
size_t strlcpy(char* dst, const char* src, size_t bufsize)
{
size_t srclen =strlen(src);
size_t result =srclen; /* Result is always the length of the src string */
if(bufsize>0)
{
if(srclen>=bufsize)
srclen=bufsize-1;
if(srclen>0)
memcpy(dst,src,srclen);
dst[srclen]='\0';
}
return result;
}
(Written by me in 2004 - dedicated to the public domain.)
Instead of strncpy(), you could use
snprintf(buffer, BUFFER_SIZE, "%s", src);
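Unlike strncpy, this always null-terminates (for BUFFER_SIZE > 0), and the return value tells you whether truncation happened, e.g.:
/* The return value is the length the full copy would have needed. */
if (snprintf(buffer, BUFFER_SIZE, "%s", src) >= (int)BUFFER_SIZE) {
    /* src did not fit: buffer now holds a truncated, terminated copy */
}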
Here's a one-liner which copies at most size-1 non-null characters from src to dest and adds a null terminator:
static inline void cpystr(char *dest, const char *src, size_t size)
{ if(size) while((*dest++ = --size ? *src++ : 0)); }
strncpy works directly with the string buffers available; if you are working directly with your memory, you MUST know your buffer sizes, and you can set the '\0' manually.
I believe there is no better alternative in plain C, but it's not really that bad if you are as careful as you should be when playing with raw memory.
Without relying on newer extensions, I have done something like this in the past:
/* copy N "visible" chars, adding a null in the position just beyond them */
#define MSTRNCPY( dst, src, len) ( strncpy( (dst), (src), (len)), (dst)[ (len) ] = '\0')
and perhaps even:
/* pull up to size - 1 "visible" characters into a fixed size buffer of known size */
#define MFBCPY( dst, src) MSTRNCPY( (dst), (src), sizeof( dst) - 1)
Why the macros instead of newer "built-in" (?) functions? Because there used to be quite a few different unices, as well as other non-unix (non-windows) environments that I had to port to back when I was doing C on a daily basis.
I have always preferred:
memset(dest, 0, LEN);
strncpy(dest, src, LEN - 1);
to the fix it up afterwards approach, but that is really just a matter of preference.
These functions have evolved more than being designed, so there really is no "why".
You just have to learn "how". Unfortunately the linux man pages at least are
devoid of common use case examples for these functions, and I've noticed lots
of misuse in code I've reviewed. I've made some notes here:
http://www.pixelbeat.org/programming/gcc/string_buffers.html
Edit: I've added the source for the example.
I came across this example:
char source[MAX] = "123456789";
char source1[MAX] = "123456789";
char destination[MAX] = "abcdefg";
char destination1[MAX] = "abcdefg";
char *return_string;
int index = 5;
/* This is how strcpy works */
printf("destination is originally = '%s'\n", destination);
return_string = strcpy(destination, source);
printf("after strcpy, dest becomes '%s'\n\n", destination);
/* This is how strncpy works */
printf( "destination1 is originally = '%s'\n", destination1 );
return_string = strncpy( destination1, source1, index );
printf( "After strncpy, destination1 becomes '%s'\n", destination1 );
Which produced this output:
destination is originally = 'abcdefg'
after strcpy, dest becomes '123456789'
destination1 is originally = 'abcdefg'
After strncpy, destination1 becomes '12345fg'
Which makes me wonder why anyone would want this effect. It looks like it would be confusing. This program makes me think you could basically copy over someone's name (eg. Tom Brokaw) with Tom Bro763.
What are the advantages of using strncpy() over strcpy()?
The strncpy() function was designed with a very particular problem in mind: manipulating strings stored in the manner of original UNIX directory entries. These used a short fixed-sized array (14 bytes), and a nul-terminator was only used if the filename was shorter than the array.
That's what's behind the two oddities of strncpy():
It doesn't put a nul-terminator on the destination if it is completely filled; and
It always completely fills the destination, with nuls if necessary.
For a "safer strcpy()", you are better off using strncat() like so:
if (dest_size > 0)
{
dest[0] = '\0';
strncat(dest, source, dest_size - 1);
}
That will always nul-terminate the result, and won't copy more than necessary.
strncpy combats buffer overflow by requiring you to put a length in it. strcpy depends on a trailing \0, which may not always occur.
Secondly, why you chose to copy only 5 characters onto a 7-character string is beyond me, but it's producing the expected behavior: it copies over only the first n characters, where n is the third argument.
The n functions are all used as defensive coding against buffer overflows. Please use them in lieu of older functions, such as strcpy.
While I know the intent behind strncpy, it is not really a good function. Avoid both. Raymond Chen explains.
Personally, my conclusion is simply to avoid strncpy and all its friends if you are dealing with null-terminated strings. Despite the "str" in the name, these functions do not produce null-terminated strings. They convert a null-terminated string into a raw character buffer. Using them where a null-terminated string is expected as the second buffer is plain wrong. Not only do you fail to get proper null termination if the source is too long, but if the source is short you get unnecessary null padding.
See also Why is strncpy insecure?
strncpy is NOT safer than strcpy; it just trades one type of bug for another. In C, when handling C strings, you need to know the size of your buffers; there is no way around it. strncpy was justified for the directory thing mentioned by others, but otherwise you should never use it:
if you know the length of your string and buffer, why use strncpy? At best it is a waste of computing power (adding useless zero padding)
if you don't know the lengths, then you risk silently truncating your strings, which is not much better than a buffer overflow
What you're looking for is the function strlcpy(), which always terminates the string with 0 and initializes the buffer. It is also able to detect overflows. The only problem is that it's not (really) portable and is present only on some systems (BSD, Solaris). A further issue is that it opens another can of worms, as can be seen from the discussions on
http://en.wikipedia.org/wiki/Strlcpy
My personal opinion is that it is vastly more useful than strncpy() and strcpy(). It has better performance and is a good companion to snprintf(). For platforms which do not have it, it is relatively easy to implement.
(For the development phase of an application I substitute these two functions (snprintf() and strlcpy()) with a trapping version that brutally aborts the program on buffer overflows or truncations. This allows you to quickly catch the worst offenders, especially if you work on a codebase from someone else; a sketch of such a wrapper follows the strlcpy() implementation below.)
EDIT: strlcpy() can be implemented easily:
size_t strlcpy(char *dst, const char *src, size_t dstsize)
{
size_t len = strlen(src);
if(dstsize) {
size_t bl = (len < dstsize-1 ? len : dstsize-1);
((char*)memcpy(dst, src, bl))[bl] = 0;
}
return len;
}
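And here is a sketch of the trapping development-time wrapper mentioned earlier, built on the strlcpy() above; the name is made up, and it simply aborts loudly instead of truncating silently:
#include <stdio.h>
#include <stdlib.h>

size_t strlcpy_or_die(char *dst, const char *src, size_t dstsize)
{
    size_t len = strlcpy(dst, src, dstsize);
    if (len >= dstsize) {              /* strlcpy reports truncation this way */
        fprintf(stderr, "string truncated: needed %zu bytes, had %zu\n",
                len + 1, dstsize);
        abort();
    }
    return len;
}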
The strncpy() function is the safer one: you have to pass the maximum length the destination buffer can accept. Otherwise it can happen that the source string is not correctly 0-terminated, in which case the strcpy() function could write more characters to the destination, corrupting anything that is in memory after the destination buffer. This is the buffer-overrun problem used in many exploits.
The same goes for POSIX API functions like read(), which does not put the terminating 0 in the buffer but returns the number of bytes read: you either put the 0 in manually, or copy the data using strncpy().
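A minimal sketch of that read() idiom, assuming an already-open descriptor and a non-empty buffer:
#include <unistd.h>

/* read() does not add the terminating 0, so reserve one byte and add it. */
ssize_t read_string(int fd, char *buf, size_t bufsize)
{
    if (bufsize == 0)
        return -1;
    ssize_t got = read(fd, buf, bufsize - 1);
    if (got >= 0)
        buf[got] = '\0';    /* now buf is usable as a C string */
    return got;
}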
In your example code, index is actually not an index but a count: it tells how many characters at most to copy from source to destination. If there is no null byte among the first n bytes of source, the string placed in destination will not be null-terminated.
strncpy fills the destination up with '\0' until n characters in total have been written, even if the destination buffer is smaller than that....
manpage:
If the length of src is less than n, strncpy() pads the remainder of
dest with null bytes.
and not only the remainder: padding continues after the copied string until n characters in total have been
written. So if n is larger than the destination buffer, you get an overflow... (see the implementation in the
man page)
This may be used in many other scenarios where you need to copy only a portion of your original string to the destination. Using strncpy() you can copy a limited portion of the original string, as opposed to strcpy(). I see the code you have put up comes from publib.boulder.ibm.com.
That depends on your requirements.
For Windows users
We use strncpy whenever we don't want to copy the entire string, or want to copy only the first n characters. strcpy, by contrast, copies the entire string, including the terminating null character.
These links will help you learn more about strcpy and strncpy, and where each can be used:
about strcpy
about strncpy
strncpy is a safer version of strcpy; as a matter of fact, you should never use strcpy, because its potential buffer overflow vulnerability makes your system vulnerable to all sorts of attacks.