How do the execution speeds of snprintf and strncat compare? - c

I have a program which concatenates some number of strings in a buffer.
I was using strncat before. After looking into some suggestions on the web to use snprintf instead of strncat, I switched to snprintf. However, I have noticed a delay in the execution of this part of the program (string concatenation) compared to before. Is it possible that snprintf can slow down the program's execution speed? If not, then I will look for some other factors in my program.
if (iOffset < MAX_BUFF_SIZE && iOffset > 0)
{
    .......
    iOffset += snprintf(cBuff + iOffset, sizeof(cBuff) - iOffset, "%s", "String1");
    ......
}
I repeat the above code snippet like 12 times in order to append strings to cBuff.
Edit: The 12 repetitions happen once every second.

There are a couple of suggestions that come to mind:
don't optimize your program too early
when you're ready to optimize, use a tool like a profiler to discover where the real slowdowns are
If you're not familiar with profiling, or are really eager to know the timing details, you can always include timers around your code. See references on gettimeofday() or clock() and the appropriate caveats about those calls. Essentially you can get the time before execution, the time after, and compare them. Comment out your line of code, put in the old code, and time that as well.
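For instance, a minimal timing sketch using clock() might look like the following; build_buffer() is just a hypothetical stand-in for the concatenation code being measured, and the iteration count is arbitrary so the measured interval is large enough for clock()'s resolution:

#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_BUFF_SIZE 256

/* Hypothetical stand-in for the code being measured. */
static void build_buffer(void)
{
    char cBuff[MAX_BUFF_SIZE];
    int iOffset = 0;
    for (int i = 0; i < 12; i++)
        iOffset += snprintf(cBuff + iOffset, sizeof(cBuff) - iOffset, "%s", "String1");
}

int main(void)
{
    clock_t start = clock();
    for (int i = 0; i < 100000; i++)        /* repeat enough times to be measurable */
        build_buffer();
    clock_t end = clock();
    printf("elapsed: %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}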
With all that said ... that's part of what a profiler does for you. Figuring out "why" something is slow is sometimes complex because there may be other considerations going on (ie at the hardware level) that you're not aware of.
The execution of a function 12 times is probably trivial compared to the total execution time of the program.

strncat suffers from the "Schlemiel the Painter" problem. The way you're using snprintf does not (though it's possible to use strncat in a way that sidesteps the problem too).

Copying the "repeated concatenation" idiom and just using snprintf to do it is not genuinely following the suggestion to avoid strcat. You should be doing something like:
snprintf(buf, sizeof buf, "%s%s%s%s", str1, str2, str3, str4);
This will be a lot faster than repeated calls to snprintf, which incur per-call overhead, and it actually uses the idiom correctly.

Assuming that you account for the "Schlemiel the Painter" problem that Chris Jester-Young referred to, you are likely to find that in a head-to-head comparison strncat() is quicker than snprintf() - in the way that you are using it.
snprintf() does have the (albeit minor, in this case) overhead of having to parse the format string that strncat() does not.
The snippet of code that you provided is different enough from the example given in the linked question to potentially change the comparison.
As Stephen has mentioned, for most platforms running strncat() 12 times versus snprintf() should produce a negligible difference. The use of a profiler would be useful here to make sure that you're concentrating on the right area.
In the example that you have provided you are trying to append a const string to your buffer. If that is what you're doing in your real code (rather than just as a simple example) and the string copying is really a significant area of execution time, you might be able to optimise that area.
One common method of optimisation is to find calculations that can be pre-computed at compile time, rather than at execution time. If you are only interested in handling const strings you may be able to effectively precompute the length of the strings and then use memcpy(), instead of strncpy() to perform the string append. memcpy() is typically very well optimised to your platform.
Of course such a solution would be at the expense of having to take more care of buffer overflows and pointer arithmetic.
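As a rough sketch of that idea (the APPEND_LIT macro and the bounds check are mine, not from the answer; it assumes the appended strings are literals, so sizeof yields their length plus the terminator at compile time):

#include <string.h>

#define MAX_BUFF_SIZE 256

/* Append a string literal at offset; LIT must be a literal so that
 * sizeof(LIT) is its length + 1. Skips the append if it would not fit. */
#define APPEND_LIT(buf, offset, LIT)                                        \
    do {                                                                    \
        if ((offset) + sizeof(LIT) <= MAX_BUFF_SIZE) {                      \
            memcpy((buf) + (offset), (LIT), sizeof(LIT)); /* copies '\0' too */ \
            (offset) += sizeof(LIT) - 1;                                    \
        }                                                                   \
    } while (0)

void example(void)
{
    char cBuff[MAX_BUFF_SIZE];
    size_t iOffset = 0;

    APPEND_LIT(cBuff, iOffset, "String1");
    APPEND_LIT(cBuff, iOffset, "String2");
}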

I haven't run a profiler on this, so I can't say for sure what is slow.
As Chris mentions, strncat can do more work than it needs to, finding the end of the destination string over and over. You might be able to mitigate that, since it appears you are tracking the space consumed: you can start from the end on each subsequent call.
On the other hand, snprintf likely has to do extra work compared to a strcat or strcpy, because it must parse the "%s" on each call to figure out how to deal with the variable argument list.
I'm not surprised that strncat could run faster, especially in the absence of aggressive optimization levels.
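To illustrate (a sketch only; the helper name and signature are mine): since the caller already tracks the offset, the append can write directly at the end, avoiding both the rescans strncat would do and the format-string parsing snprintf does.

#include <string.h>

/* Hypothetical helper: append src into buf (capacity cap) at *offset,
 * starting from the tracked end instead of rescanning the destination. */
static void append_at(char *buf, size_t cap, size_t *offset, const char *src)
{
    size_t len = strlen(src);

    if (*offset + len + 1 <= cap) {
        memcpy(buf + *offset, src, len + 1);   /* the +1 copies the '\0' */
        *offset += len;
    }
}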

People who mentioned not to optimize early after you noticed a slowdown are being silly. In this case, you don't even need to run a profiler: you already have empirical evidence that switching to snprintf slowed down your program.
I work in the financial industry, where performance is key, and I almost always prefer sprintf to snprintf. It is possible to write fast and stable apps with it within a tightly controlled environment.

Related

How to construct a string where every char is doubled?

TL;DR:
I am asking you to tell me what would be the most efficient approach to double my strings and print them out?
Full story:
I had trouble with the title, and the actual problem may be a bit different than you expect.
Imagine I have a main buffer.
At some index determined by the program, I want to insert a string.
But every char in that string needs to be doubled.
So "abc", inserted at index 10 of buffer[999], needs to be "aabbcc".
Now, the second part of the problem - this needs to be as efficient as possible. I could make this easily, but I need the fastest option.
I thought I had devised several approaches, but it boils down to:
fill buffer(1000) with single chars and double the chars when printing (pushing to stdout)
fill buffer(2000) with double chars and print like normal
The variations on the second approach come down to when to double the chars (while copying into the buffer, or by generating "aabbcc" from the start and copying the full thing).
The first approach would be the most intuitive, but I fear I would need to devise a low-level char-doubling function, because putc and printf and any large number of function calls will have much overhead. (There are allegedly very efficient functions in libc with bitshifting and pointer magic, but I couldn't find them. I can only find the very disappointing versions where fgets() is just a wrapper for getc() - which can't be efficient.)
The second approach obviously wastes a lot of memory and requires a lot of copying, but it could probably put everything into stdout more efficiently as a chunk without the overhead of copying single chars.
I am unsure whether everything ultimately comes down to a system write call, and I also lack the knowledge of how that works. I am just going by my research, which says that fgets is about 12 times faster than fgetc for equal data, and I assume the same holds for all single-char versus line functions.
So in conclusion, I am asking you to tell me what would be the most efficient approach to double my strings and print them out?
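To make the second approach concrete, here is a minimal sketch (doubling the characters while inserting into the larger buffer and writing the whole thing out in one chunk); the buffer contents, sizes and insertion index are placeholders, and there is no bounds checking:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buffer[2000] = "0123456789";   /* placeholder contents of the main buffer */
    const char *src = "abc";
    size_t at = 10;                     /* insertion index chosen by the program */

    /* Double each char of src directly into the buffer: "abc" -> "aabbcc". */
    for (size_t i = 0; src[i] != '\0'; i++) {
        buffer[at + 2 * i]     = src[i];
        buffer[at + 2 * i + 1] = src[i];
    }
    buffer[at + 2 * strlen(src)] = '\0';

    fwrite(buffer, 1, strlen(buffer), stdout);   /* one chunked write, not per-char putc */
    putchar('\n');
    return 0;
}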

How to buffer a line in a file by using System Calls in C?

Here is my approach:
int linesize = 1;
int ReadStatus;
char buff[200];

ReadStatus = read(file, buff, linesize);
while (buff[linesize-1] != '\n' && ReadStatus != 0)
{
    linesize++;
    ReadStatus = read(file, buff, linesize);
}
Is this idea right?
I think my code is a bit inefficient because the run time is O(FileWidth); however I think it can be O(log(FileWidth)) if we exponentially increase linesize to find the linefeed character.
What do you think?
....
I just saw a new problem. How do we read the second line? Is there any way to delimit the bytes?
Is this idea right?
No. At the heart of a comment written by Siguza, lies the summary of an issue:
1) read doesn't read lines, it just reads bytes. There's no reason buff should end with \n.
Additionally, there's no reason buff shouldn't contain multiple newline characters, and as there's no [posix] tag here, there's no telling what read does, let alone whether it's a syscall. Assuming you're referring to the POSIX function: there's no error handling. Where's your logic to handle the return values reserved for errors?
I think my code is a bit inefficient because the run time is O(FileWidth); however I think it can be O(log(FileWidth)) if we exponentially increase linesize to find the linefeed character.
Providing you fix the issues mentioned above (more on that later), if you were to test this theory, you'd likely find, also at the heart of the comment by Siguza,
Disks usually work on a 512-byte basis and file system caches and even CPU/memory caches are a lot larger than that.
To an extent, you can expect your idea to approach O(log n), but your bottleneck will be one of those cache lines (likely the one closest to your keyboard/the filesystem/whatever is feeding the stream with information). At that point, you should stop guzzling memory which other programs might need because your optimisation becomes less and less effective.
What do you think?
I think you should just STOP! You're guessing!
Once you've written your program, decide whether or not it's too slow. If it's not too slow, it doesn't need optimisation, and you probably won't shave enough nanoseconds to make optimisation worthwhile.
If it is too slow, then you should:
Use a profiler to determine what the most significant bottleneck is,
apply optimisations based on what your profiler tells you, then
use your profiler again, with the same inputs as before, to measure the effect your optimisation had.
If you don't use a profiler, your guess-work could result in slower code, or you might miss opportunities for more significant optimisations...
How do we read the second line?
Naturally, it makes sense to read character by character, rather than two hundred characters at a time, because there's no other way to stop reading the moment you reach a line terminating character.
Is there anyway to delimit the bytes?
Yes. The most sensible tools to use are provided by the C standard, and syscalls are managed automatically to be most efficient based on configurations decided by the standard library devs (who are likely much better at this than you are). Those tools are:
fgets to attempt to read a line (by reading one character at a time), up to a threshold (the size of your buffer). You get to decide how large a line should be, because it's more often the case that you won't expect a user/program to input huge lines.
strchr or strcspn to detect newlines from within your buffer, in order to determine whether you read a complete line.
scanf("%*[^\n]"); to discard the remainder of an incomplete line, when you detect those.
realloc to reallocate your buffer, if you decide you want to resize it and call fgets a second time to retrieve more data rather than discarding the remainder. Note: this will have an effect on the runtime complexity of your code, not that I think you should care about that...
Other options are available for the first three. You could use fgetc (or even read one character at a time) like I did at the end of this answer, for example...
In fact, that answer is highly relevant to your question, as it does make an attempt to exponentially increase the size. I wrote another example of this here.
It should be pointed out that the reason to address these problems is not so much optimisation, but the need to read a large chunk of memory whose size isn't known in advance. Remember, if you haven't yet written the code, it's likely you won't know whether it's worthwhile optimising it!
Suffice to say, it isn't the read function you should try to reduce your dependence upon, but the malloc/realloc/calloc function... That's the real kicker! If you don't absolutely need to store the entire line, then don't!
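To make the fgets/strchr/scanf combination above concrete, here is a hedged sketch (reading from stdin into a fixed 200-byte buffer and discarding the tail of over-long lines):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[200];

    while (fgets(line, sizeof line, stdin) != NULL) {
        char *nl = strchr(line, '\n');
        if (nl != NULL) {
            *nl = '\0';               /* complete line: strip the newline */
        } else if (!feof(stdin)) {
            scanf("%*[^\n]");         /* incomplete line: discard the remainder */
            scanf("%*c");             /* ...and the newline itself */
        }
        printf("got: %s\n", line);
    }
    return 0;
}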

C fgets versus fgetc for reading line

I need to read a line of text (terminated by a newline) without making assumptions about the length. So I now face two possibilities:
Use fgets and check each time if the last character is a newline and continuously append to a buffer
Read each character using fgetc and occasionally realloc the buffer
Intuition tells me the fgetc variant might be slower, but then again I don't see how fgets can do it without examining every character (also my intuition isn't always that good). The lines are quite large so the performance is important.
I would like to know the pros and cons of each approach. Thank you in advance.
I suggest using fgets() coupled with dynamic memory allocation - or you can investigate the interface to getline() that is in the POSIX 2008 standard and available on more recent Linux machines. That does the memory allocation stuff for you. You need to keep tabs on the buffer length as well as its address - so you might even create yourself a structure to handle the information.
Although fgetc() also works, it is marginally fiddlier - but only marginally so. Underneath the covers, it uses the same mechanisms as fgets(). The internals may be able to exploit speedier operations - analogous to strchr() - that are not available when you call fgetc() directly.
Does your environment provide the getline(3) function? If so, I'd say go for that.
The big advantage I see is that it allocates the buffer itself (if you want), and will realloc() the buffer you pass in if it's too small. (So this means you need to pass in something gotten from malloc()).
This gets rid of some of the pain of fgets/fgetc, and you can hope that whoever wrote the C library that implements it took care of making it efficient.
Bonus: the man page on Linux has a nice example of how to use it in an efficient manner.
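Along those lines, a minimal usage sketch (assuming getline() is available, i.e. POSIX 2008) might look like this:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(void)
{
    char *line = NULL;    /* getline allocates and grows this for us */
    size_t cap = 0;
    ssize_t len;

    while ((len = getline(&line, &cap, stdin)) != -1)
        printf("read %zd bytes: %s", len, line);

    free(line);
    return 0;
}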
If performance matters much to you, you generally want to call getc instead of fgetc. The standard tries to make it easier to implement getc as a macro to avoid function call overhead.
Past that, the main thing to deal with is probably your strategy in allocating the buffer. Most people use fixed increments (e.g., when/if we run out of space, allocate another 128 bytes). I'd advise instead using a constant factor, so if you run out of space allocate a buffer that's, say, 1 1/2 times the previous size.
Especially when getc is implemented as a macro, the difference between getc and fgets is usually quite minimal, so you're best off concentrating on other issues.
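A rough sketch of that strategy (getc into a buffer grown by roughly a factor of 1.5; the function name is mine and error handling is kept minimal):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: read one line of arbitrary length from fp.
 * Returns a malloc'd string (caller frees) or NULL on EOF/allocation failure. */
static char *read_line(FILE *fp)
{
    size_t cap = 128, len = 0;
    char *buf = malloc(cap);
    int c;

    if (buf == NULL)
        return NULL;
    while ((c = getc(fp)) != EOF && c != '\n') {
        if (len + 1 >= cap) {                     /* grow by ~1.5x, not a fixed step */
            size_t newcap = cap + cap / 2;
            char *tmp = realloc(buf, newcap);
            if (tmp == NULL) { free(buf); return NULL; }
            buf = tmp;
            cap = newcap;
        }
        buf[len++] = (char)c;
    }
    if (len == 0 && c == EOF) { free(buf); return NULL; }
    buf[len] = '\0';
    return buf;
}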
If you can set a maximum line length, even a large one, then one fgets would do the trick. If not, multiple fgets calls will still be faster than multiple fgetc calls because the overhead of the latter will be greater.
A better answer, though, is that it's not worth worrying about the performance difference until and unless you have to. If fgetc is fast enough, what does it matter?
I would allocate a large buffer and then use fgets, checking, reallocing and repeating if you haven't read to the end of the line.
Each time you read (either via fgetc or fgets) you are making a system call which takes time, you want to minimize the number of times that happens, so calling fgets fewer times and iterating in memory is faster.
If you are reading from a file, mmap()ing in the file is another option.

String-handling practices in C [closed]

I'm starting a new project in plain C (c99) that is going to work primarily with text. Because of external project constraints, this code has to be extremely simple and compact, consisting of a single source-code file without external dependencies or libraries except for libc and similar ubiquitous system libraries.
With that understanding, what are some best-practices, gotchas, tricks, or other techniques that can help make the string handling of the project more robust and secure?
Without any additional information about what your code is doing, I would recommend designing all your interfaces like this:
size_t foobar(char *dest, size_t buf_size, /* operands here */)
with semantics like snprintf:
dest points to a buffer of size at least buf_size.
If buf_size is zero, null/invalid pointers are acceptable for dest and nothing will be written.
If buf_size is non-zero, dest is always null-terminated.
Each function foobar returns the length of the full non-truncated output; the output has been truncated if buf_size is less than or equal to the return value.
This way, when the caller can easily know the destination buffer size that's required, a sufficiently large buffer can be obtained in advance. If the caller cannot easily know, it can call the function once with either a zero argument for buf_size, or with a buffer that's "probably big enough" and only retry if you ran out of space.
You can also make a wrapped version of such calls analogous to the GNU asprintf function, but if you want your code to be as flexible as possible I would avoid doing any allocation in the actual string functions. Handling the possibility of failure is always easier at the caller level, and many callers can ensure that failure is never a possibility by using a local buffer or a buffer that was obtained much earlier in the program so that the success or failure of a larger operation is atomic (which greatly simplifies error handling).
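As an illustration only (not R.'s code), a string-copy routine following that interface might look like this:

#include <string.h>

/* Hypothetical example of the interface above: copy src into dest, truncating
 * to fit buf_size, and return the length of the full untruncated output. */
size_t str_copy(char *dest, size_t buf_size, const char *src)
{
    size_t len = strlen(src);

    if (buf_size != 0) {
        size_t n = (len < buf_size - 1) ? len : buf_size - 1;
        memcpy(dest, src, n);
        dest[n] = '\0';            /* always null-terminated when buf_size > 0 */
    }
    return len;                    /* truncated iff buf_size <= return value */
}

A caller that doesn't know the required size in advance can call str_copy(NULL, 0, src) once to learn it, then allocate and call again.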
Some thoughts from a long-time embedded developer, most of which elaborate on your requirement for simplicity and are not C-specific:
Decide which string-handling functions you'll need, and keep that set as small as possible to minimize the points of failure.
Follow R.'s suggestion to define a clear interface that is consistent across all string handlers. A strict, small-but-detailed set of rules allows you to use pattern-matching as a debugging tool: you can be suspicious of any code that looks different from the rest.
As Bart van Ingen Schenau noted, track the buffer length independently of the string length. If you'll always be working with text it's safe to use the standard null character to indicate end-of-string, but it's up to you to ensure the text+null will fit in the buffer.
Ensure consistent behavior across all string handlers, particularly where the standard functions are lacking: truncation, null inputs, null-termination, padding, etc.
If you absolutely need to violate any of your rules, create a separate function for that purpose and name it appropriately. In other words, give each function a single unambiguous behavior. So you might use str_copy_and_pad() for a function that always pads its target with nulls.
Wherever possible, use safe built-in functions (e.g. memmove() per Jonathan Leffler) to do the heavy lifting. But test them to be sure they're doing what you think they're doing!
Check for errors as soon as possible. Undetected buffer overruns can lead to "ricochet" errors that are notoriously difficult to locate.
Write tests for every function to ensure it satisfies its contract. Be sure to cover the edge cases (off by 1, null/empty strings, source/destination overlap, etc.) And this may sound obvious, but be sure you understand how to create and detect a buffer underrun/overrun, then write tests that explicitly generate and check for those problems. (My QA folks are probably sick of hearing my instructions to "don't just test to make sure it works; test to make sure it doesn't break.")
Here are some techniques that have worked for me:
Create wrappers for your memory-management routines that allocate "fence bytes" on either end of your buffers during allocation and check them upon deallocation. You can also verify them within your string handlers, perhaps when a STR_DEBUG macro is set. Caveat: you'll need to test your diagnostics thoroughly, lest they create additional points of failure. (A rough sketch of such a wrapper appears after this list.)
Create a data structure that encapsulates both the buffer and its length. (It can also contain the fence bytes if you use them.) Caveat: you now have a non-standard data structure that your entire code base must manage, which may mean a substantial re-write (and therefore additional points of failure).
Make your string handlers validate their inputs. If a function forbids null pointers, check for them explicitly. If it requires a valid string (like strlen() should) and you know the buffer length, check that the buffer contains a null character. In other words, verify any assumptions you might be making about the code or data.
Write your tests first. That will help you understand each function's contract--exactly what it expects from the caller, and what the caller should expect from it. You'll find yourself thinking about the ways you'll use it, the ways it might break, and about the edge cases it must handle.
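For the fence-byte wrappers mentioned above, a rough sketch (my own illustration, not the poster's code; a production version would also keep the returned pointer maximally aligned) could be:

#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define FENCE_LEN 8
static const unsigned char FENCE[FENCE_LEN] =
    { 0xDE, 0xAD, 0xBE, 0xEF, 0xDE, 0xAD, 0xBE, 0xEF };

/* Layout: [size_t size][front fence][caller's n bytes][back fence] */
void *dbg_malloc(size_t n)
{
    unsigned char *raw = malloc(sizeof(size_t) + 2 * FENCE_LEN + n);

    if (raw == NULL)
        return NULL;
    memcpy(raw, &n, sizeof n);                                       /* remember the size */
    memcpy(raw + sizeof(size_t), FENCE, FENCE_LEN);                  /* front fence */
    memcpy(raw + sizeof(size_t) + FENCE_LEN + n, FENCE, FENCE_LEN);  /* back fence */
    return raw + sizeof(size_t) + FENCE_LEN;                         /* caller's buffer */
}

void dbg_free(void *p)
{
    if (p != NULL) {
        unsigned char *user = p;
        size_t n;

        memcpy(&n, user - FENCE_LEN - sizeof(size_t), sizeof n);
        assert(memcmp(user - FENCE_LEN, FENCE, FENCE_LEN) == 0);  /* underrun check */
        assert(memcmp(user + n, FENCE, FENCE_LEN) == 0);          /* overrun check */
        free(user - FENCE_LEN - sizeof(size_t));
    }
}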
Thanks so much for asking this question! I wish more developers would think about these issues--especially before they start coding. Good luck, and best wishes for a robust, successful product!
Have a look at strlcpy and strlcat; see the original paper for details.
Two cents:
Always use the "n" version of the string functions: strncpy, strncmp, (or wcsncpy, wcsncmp etc.)
Always allocate using the +1 idiom: e.g. char* str[MAX_STR_SIZE+1], and then pass MAX_STR_SIZE as the size for the "n" version of the string functions and finish with str[MAX_STR_SIZE] = '\0'; to make sure all strings are properly finalized.
The final step is important since the "n" version of the string functions won't append '\0' after copying if the maximum size was reached.
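In code, the idiom looks like this (MAX_STR_SIZE being whatever limit the program uses):

#include <string.h>

#define MAX_STR_SIZE 63

void copy_example(const char *src)
{
    char str[MAX_STR_SIZE + 1];          /* +1 for the terminator */

    strncpy(str, src, MAX_STR_SIZE);     /* may stop before writing a '\0' */
    str[MAX_STR_SIZE] = '\0';            /* so always terminate explicitly */
}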
Work with arrays on the stack whenever this is possible, and initialize them properly. You don't have to keep track of allocations, sizes and initializations.
char myCopy[] = { "the interesting string" };
For medium sized strings C99 has VLA. They are a bit less usable since you can't initialize them. But you still have the first two of the above advantages.
char myBuffer[n];
myBuffer[0] = '\0';
Some important gotchas are:
In C, there is no relation at all between string length and buffer size. A string always runs up to (and including) the first '\0'-character. It is your responsibility as a programmer to make sure this character can be found within the reserved buffer for that string.
Always explicitly keep track of buffer sizes. The compiler keeps track of array sizes, but that information will be lost to you before you know it.
When it comes to time vs space, don't forget to pick up the standard bit twiddling tricks from here.
During my early firmware projects, I used lookup tables to count the set bits with O(1) efficiency.
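As an example of the technique (a sketch, not the poster's firmware code): an 8-bit lookup table for counting set bits, applied byte by byte.

#include <stdint.h>

/* 256-entry table: popcount of every possible byte value. Build once at startup. */
static uint8_t bit_count_table[256];

static void init_bit_count_table(void)
{
    for (int i = 0; i < 256; i++)
        bit_count_table[i] = (uint8_t)((i & 1) + bit_count_table[i / 2]);
}

static unsigned popcount32(uint32_t v)
{
    return bit_count_table[v & 0xFF]
         + bit_count_table[(v >> 8) & 0xFF]
         + bit_count_table[(v >> 16) & 0xFF]
         + bit_count_table[(v >> 24) & 0xFF];
}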

Why are strlcpy and strlcat considered insecure?

I understand that strlcpy and strlcat were designed as secure replacements for strncpy and strncat. However, some people are still of the opinion that they are insecure, and simply cause a different type of problem.
Can someone give an example of how using strlcpy or strlcat (i.e. a function that always null terminates its strings) can lead to security problems?
Ulrich Drepper and James Antill state this is true, but never provide examples or clarify this point.
Firstly, strlcpy has never been intended as a secure version of strncpy (and strncpy has never been intended as a secure version of strcpy). These two functions are totally unrelated. strncpy is a function that has no relation to C-strings (i.e. null-terminated strings) at all. The fact that it has the str... prefix in its name is just a historical blunder. The history and purpose of strncpy is well-known and well-documented: it is a function created for working with so-called "fixed width" strings (not with C-strings) used in some historical versions of the Unix file system. Some programmers today get confused by its name and assume that strncpy is somehow supposed to serve as a limited-length C-string copying function (a "secure" sibling of strcpy), which in reality is complete nonsense and leads to bad programming practice.
The C standard library in its current form has no function for limited-length C-string copying whatsoever. This is where strlcpy fits in. strlcpy is indeed a true limited-length copying function created for working with C-strings, and it correctly does everything a limited-length copying function should do. The only criticism one can aim at it is that it is, regretfully, not standard.
Secondly, strncat, on the other hand, is indeed a function that works with C-strings and performs a limited-length concatenation (it is indeed a "secure" sibling of strcat). In order to use this function properly the programmer has to take some special care, since the size parameter this function accepts is not really the size of the buffer that receives the result, but rather the size of its remaining part (also, the terminator character is counted implicitly). This can be confusing, since in order to tie that size to the size of the buffer, the programmer has to remember to perform some additional calculations, which is often used to criticize strncat. strlcat takes care of these issues, changing the interface so that no extra calculations are necessary (at least in the calling code). Again, the only basis I see one can criticise this on is that the function is not standard. Also, functions from the strcat group are something you won't see in professional code very often, due to the limited usability of the very idea of rescan-based string concatenation.
As for how these functions can lead to security problems... They simply can't. They can't lead to security problems in any greater degree than the C language itself can "lead to security problems". You see, for quite a while there was a strong sentiment out there that the C++ language had to move in the direction of developing into some weird flavor of Java. This sentiment sometimes spills into the domain of the C language as well, resulting in rather clueless and forced criticism of C language features and the features of the C standard library. I suspect that we might be dealing with something like that in this case as well, although I surely hope things are not really that bad.
Ulrich's criticism is based on the idea that a string truncation that is not detected by the program can lead to security issues, through incorrect logic. Therefore, to be secure, you need to check for truncation. To do this for a string concatenation means that you are doing a check along the lines of this:
if (destlen + sourcelen > dest_maxlen)
{
    /* Bug out */
}
Now, strlcat does effectively do this check, if the programmer remembers to check the result - so you can use it safely:
if (strlcat(dest, source, dest_bufferlen) >= dest_bufferlen)
{
    /* Bug out */
}
Ulrich's point is that since you have to have destlen and sourcelen around (or recalculate them, which is what strlcat effectively does), you might as well just use the more efficient memcpy anyway:
if (destlen + sourcelen > dest_maxlen)
{
    goto error_out;
}
memcpy(dest + destlen, source, sourcelen + 1);
destlen += sourcelen;
(In the above code, dest_maxlen is the maximum length of the string that can be stored in dest - one less than the size of the dest buffer. dest_bufferlen is the full size of the dest buffer).
When people say, "strcpy() is dangerous, use strncpy() instead" (or similar statements about strcat() etc., but I am going to use strcpy() here as my focus), they mean that there is no bounds checking in strcpy(). Thus, an overly long string will result in buffer overruns. They are correct. Using strncpy() in this case will prevent buffer overruns.
I feel that strncpy() really doesn't fix bugs: it solves a problem that can be easily avoided by a good programmer.
As a C programmer, you must know the destination size before you are trying to copy strings. That is the assumption in strncpy() and strlcpy()'s last parameters too: you supply that size to them. You can also know the source size before you copy strings. Then, if the destination is not big enough, don't call strcpy(). Either reallocate the buffer, or do something else.
Why do I not like strncpy()?
strncpy() is a bad solution in most cases: your string is going to be truncated without any notice—I would rather write extra code to figure this out myself and then take the course of action that I want to take, rather than let some function decide for me about what to do.
strncpy() is very inefficient. It writes to every byte in the destination buffer. You don't need those thousands of '\0' at the end of your destination.
It doesn't write a terminating '\0' if the destination is not big enough. So, you must do so yourself anyway. The complexity of doing this is not worth the trouble.
Now, we come to strlcpy(). The changes from strncpy() make it better, but I am not sure if the specific behavior of strl* warrants their existence: they are far too specific. You still have to know the destination size. It is more efficient than strncpy() because it doesn't necessarily write to every byte in the destination. But it solves a problem that can be solved by doing: *((char *)mempcpy(dst, src, n)) = 0;.
I don't think anyone says that strlcpy() or strlcat() can lead to security issues; what they (and I) are saying is that they can result in bugs, for example, when you expect the complete string to be written instead of a part of it.
The main issue here is: how many bytes to copy? The programmer must know this and if he doesn't, strncpy() or strlcpy() won't save him.
strlcpy() and strlcat() are not standard, neither ISO C nor POSIX. So, their use in portable programs is impossible. In fact, strlcat() has two different variants: the Solaris implementation is different from the others for edge cases involving length 0. This makes it even less useful than otherwise.
I think Ulrich and others think it'll give a false sense of security. Accidentally truncating strings can have security implications for other parts of the code (for example, if a file system path is truncated, the program might not be performing operations on the intended file).
There are two "problems" related to using strl functions:
You have to check return values to avoid truncation.
The C1X standard draft writers and Drepper argue that programmers won't check the return value. Drepper says we should somehow know the length, use memcpy, and avoid string functions altogether. The standards committee argues that the secure strcpy should return nonzero on truncation unless otherwise stated by the _TRUNCATE flag. The idea is that people are more likely to use if(strncpy_s(...)).
Cannot be used on non-strings.
Some people think that string functions should never crash, even when fed bogus data. This affects standard functions such as strlen, which under such conditions will segfault. The new standard will include many such functions. The checks, of course, have a performance penalty.
The upside over the proposed standard functions is that you can know how much data you missed with strl functions.
I don't think strlcpy and strlcat are considered insecure, or at least that isn't the reason why they're not included in glibc - after all, glibc includes strncpy and even strcpy.
The criticism they got was that they are allegedly inefficient, not insecure.
According to the Secure Portability paper by Damien Miller:
The strlcpy and strlcat API properly check the target buffer’s bounds,
nul-terminate in all cases and return the length of the source string,
allowing detection of truncation. This API has been adopted by most
modern operating systems and many standalone software packages,
including OpenBSD (where it originated), Sun Solaris, FreeBSD, NetBSD,
the Linux kernel, rsync and the GNOME project. The notable exception
is the GNU standard C library, glibc [12], whose maintainer
steadfastly refuses to include these improved APIs, labelling them
“horribly inefficient BSD crap” [4], despite prior evidence that they
are faster in most cases than the APIs they replace [13]. As a result,
over 100 of the software packages present in the OpenBSD ports tree
maintain their own strlcpy and/or strlcat replacements or equivalent
APIs - not an ideal state of affairs.
That is why they are not available in glibc, but it is not true that they are not available on Linux. They are available on Linux in libbsd:
https://libbsd.freedesktop.org/
They're packaged in Debian and Ubuntu and other distros. You can also just grab a copy and use it in your project - it's short and under a permissive license:
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/lib/libc/string/strlcpy.c?rev=1.11
Security is not a boolean. C functions are not wholly "secure" or "insecure", "safe" or "unsafe". When used incorrectly, a simple assignment operation in C can be "insecure". strlcpy() and strlcat() may be used safely (securely) just as strcpy() and strcat() can be used safely when the programmer provides the necessary assurances of correct usage.
The main point with all of these C string functions, standard and not-so-standard, is the level to which they make safe/secure usage easy. strcpy() and strcat() are not trivial to use safely; this is proven by the number of times that C programmers have gotten it wrong over the years and nasty vulnerabilities and exploits have ensued. strlcpy() and strlcat() and for that matter, strncpy() and strncat(), strncpy_s() and strncat_s(), are a bit easier to use safely, but still, non-trivial. Are they unsafe/insecure? No more than memcpy() is, when used incorrectly.
strlcpy may trigger SIGSEGV if src is not NUL-terminated.
/* Not enough room in dst, add NUL and traverse rest of src */
if (n == 0) {
    if (siz != 0)
        *d = '\0';      /* NUL-terminate dst */
    while (*s++)
        ;
}

return(s - src - 1);    /* count does not include NUL */
