Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
I've been reading through a lot of the bigger, more popular threads here on SO and found the thread about casting malloc() returns particularly interesting. I'm guilty of casting my returns simply because that is how I was taught.
What I'm wondering is: if casting the return from malloc() can hide bugs caused by not including stdlib.h, why is the advice "don't cast" rather than "always make sure stdlib.h is included"?
Isn't failing to include stdlib.h lazy or bad practice, or am I missing something? I realise there are other reasons for not casting, but this one stands out for me in particular, since it seems like bad practice is in some small way being promoted or accepted here.
So are there any particular instances where one would willingly not include stdlib.h when it's actually required? I see a lot of people being put down for casting, yet it seems that nobody really has a problem with this negligence. Can someone explain why casting the return is frowned upon, yet neglecting to include necessary headers is not?
I know it's a contentious issue here and has been the subject of various threads in the past. I'm trying to get back up to speed with things here and break old habits.
Lastly, are there any good sources of information that are more up to date with current standards? I'm still finding various examples online where the casting is being done, and some of them are quite recent.
There's a lot of conflicting information out there. Why would you be so cavalier about including stdlib.h, yet so pedantic about the casting?
The "strong-typing" idea says that the compiler should be able to catch most of the programmer's errors before the program is run. Not doing the proper #include is an error, which the compiler can catch (unless you inadvertently suppress it by casting).
"Don't make this error" is not a solution; mistakes always happen.
This sort of error is easy to make, because it's annoying to check whether your code already has the proper #include whenever you add a dynamic memory allocation. People tend to forget (or "forget") to do annoying things.
Especially in situations like this:
void addSomeData(someType **data) {
    ...
    ... many lines ...
    data = (someType *) malloc(sizeof(someType) * n);
}
where it should have been:
*data = (someType *) malloc(sizeof(someType)*n);
Here the cast enables the compiler to check whether you really did what you intended to do. Furthermore, adding redundant information to code can either improve or worsen readability; it depends on the situation and on the personal style of the code's writers and readers.
On the other hand, in code lines like this:
struct foo *bar = malloc(sizeof(struct foo));
a cast probably indeed would be of no benefit.
However, I think it is too simple and undifferentiated a rule to say that a cast of malloc() is always an error (!), as a much-cited answer here on SO claims.
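For completeness, here is a minimal sketch of the scenario the "don't cast" advice is aimed at; the type and sizes are illustrative only:

#include <stdlib.h>   /* the include the whole discussion is about */
#include <stdio.h>

int main(void)
{
    double *p = malloc(100 * sizeof *p);   /* no cast needed in C */
    if (p == NULL)
        return 1;

    /* Had <stdlib.h> been forgotten, a C89 compiler would implicitly
       declare malloc as returning int; without a cast the int-to-pointer
       assignment requires a diagnostic, but a (double *) cast silences it. */
    p[0] = 3.14;
    printf("%f\n", p[0]);
    free(p);
    return 0;
}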
Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
This code can be compiled in C:
#include <stdio.h>

int main()
{
    printf("Hello World");
    return 0;
    printf("shouldnt allow this line");
    return 1; // also this line
}
The lines printf("shouldnt allow this line"); and return 1; are unreachable. Is there any way to check this during compilation with warning messages? Also, why does the compiler allow this?
Unreachable code is not an error because:
It's often useful, especially as the result of macro expansion, or in functions that are only ever called in a way that makes some paths unreachable because some of their arguments are constant or limited to a particular range. For instance, with an inline version of isdigit that's only ever called with non-negative arguments, the code path for an EOF argument would be unreachable (see the sketch after this list).
In general, determining whether code is unreachable is equivalent to the halting problem. Sure there are certain cases like yours that are trivial to determine, but there is no way you can specify something like "trivial cases of unreachable code are errors, but nontrivial ones aren't".
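To illustrate the first point, here is a small sketch (the helper name my_isdigit is made up): after inlining, the EOF branch is unreachable at call sites that only ever pass ordinary characters, yet the code is perfectly legitimate. As for detecting cases like the one in the question, some compilers can warn about trivially unreachable code; Clang's -Wunreachable-code flag is one example.

#include <stdio.h>   /* for EOF */

/* Hypothetical inline helper: the callers below only ever pass character
   constants, so after inlining the EOF branch is unreachable there,
   but it is still useful, correct code. */
static inline int my_isdigit(int c)
{
    if (c == EOF)                   /* unreachable for the callers below */
        return 0;
    return c >= '0' && c <= '9';
}

int main(void)
{
    printf("%d\n", my_isdigit('7'));   /* prints 1 */
    printf("%d\n", my_isdigit('x'));   /* prints 0 */
    return 0;
}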
Broadly speaking, C does not aim to help a developer catch mistakes; rather, C trusts the developer to do a perfect job, just as it trusts the compiler to do a perfect job.
Many newer languages take a more active stance, aiming to protect developers from themselves, and plenty of C compilers will emit compile-time warnings (which can typically be "promoted" to errors via command-line flags), but the C community has never wanted the language to stop trusting developers. It's just a philosophical difference. (If you've ever run into a case where a language prevents you from doing something that seems wrong but that you actually have a good reason for, you'll probably understand where they're coming from, even if you don't agree.)
Closed. This question is opinion-based. It is not currently accepting answers. Closed 9 years ago.
I am prone to writing code like this:
if (*t) while (*++t);
It reads: if string t does not start with \0, then move to the end.
Note the while loop has no body, so the semicolon terminates it.
I'd like to know whether it is good practice to do this. Why or why not?
C is one of the oldest popular languages in use today. I believe there's a good chance of finding one or more established style guides.
I know that Google has one for their C++ open source projects - http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml
Can anyone point me to resources on why or why not write code in certain manner?
Usually it is good practice to write statements on separate lines. In large pieces of code, debugging is clearer when each line does one thing.
It depends! Who is going to have to read and maintain this code? Coding standards exist for two major reasons:
To make code more readable and maintainable. When there are multiple developers, it makes code more consistent (which is more readable).
To discourage common errors. For example, a standard might require putting literals first in conditionals to discourage the assignment-as-comparison bug (see the sketch after this list).
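As an aside, here is a small sketch of that literal-first convention; the variable name is made up for illustration:

#include <stdio.h>

int main(void)
{
    int count = 5;

    /* Literal-first ("Yoda") condition: mistyping `==` as `=` here would
       give `5 = count`, which is a compile error rather than a silent bug. */
    if (5 == count)
        printf("count is five\n");

    /* The conventional order still compiles if `==` is mistyped as `=`,
       quietly turning the comparison into an assignment. */
    if (count == 5)
        printf("count is still five\n");

    return 0;
}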
How do these goals apply to your specific code? Are you prone to making mistakes? If this is Linux kernel code, it's a lot more tolerable to have code like this than if it's a web app maintained by entry level programmers.
It reads: if string t does not start with \0, then move to the end.
Then consider putting a comment on it that says that.
Surprisingly, it is usually more expensive to maintain code over time than to write it in the first place. Maintenance costs are minimized when code is more readable.
There are three audiences for your code. You should think of how valuable their time is while you are formatting:
Fellow coders, including your co-workers and code reviewers. You want these people to have a high opinion of you, so write code that is easy for them to understand.
Your future self. Convoluted code may be obvious while you are writing it, but pick it up again in two weeks and you will not remember what it means. The 'concise' statement that you wrote in 10 minutes will someday take you 20 minutes to decipher.
The optimizing compiler, which will produce efficient code whether your line is concise or not. The compiler does not care, so try to save time for the other two. (Cue angry remarks about this item. I am in favor of writing efficient code, but concise styles like the one we are describing here will not affect compiler efficiency.)
Bad practice, because it's not easy to parse. I'd write
while (*t) ++t;
and let the compiler do the tiny bit of optimization.
The textual translation of it reads even shorter than yours:
advance t until it points to a 0
Although you can write some pretty clever code in one line in C, it's usually not good practice in terms of readability and ease of maintenance. What's straightforward for you to understand may look completely foreign to someone maintaining your code in future.
You need to strike a balance between conciseness and readability. To this end, it's usually better to separate the code out so each line does one thing.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 9 years ago.
I have been told by more senior, experienced and better-educated programmers than myself that the use of function pointers in C should be avoided. I have seen the fact that some code contains function pointers used as a rationale not to re-use that code, even when the only alternative is complete re-implementation. Upon further discussion I haven't been able to determine why this would be. I am happy to use function pointers where appropriate, and I like the interesting and powerful things they allow you to do, but am I throwing caution to the wind by using them?
I see the pros and cons of function pointers as follows:
Pros:
Great opportunity for code modularity
OO-like features in non-OO C (i.e. code and data in the same object)
How else could you reasonably implement a callback?
Cons:
Negative impact to code readability - not always obvious what function is actually called when a function pointer is invoked
Minor performance hit compared to a direct function call
I think Con #1 can usually be reasonably mitigated by well-chosen symbol names and good comments, and Con #2 will in general not be a big deal. Am I missing something? Are there other reasons to avoid function pointers like the plague?
This question looks a little discussion-y, but I'm looking for good reasons why I shouldn't use function pointers, not opinions.
Function pointers are not evil. The main times you "shouldn't" use them are when either:
The use is gratuitous, i.e. not actually needed for what you're doing, or
In situations where you're writing hardened code and the function pointer might be stored at a location you're concerned may be a likely candidate for buffer overflow attacks.
As for when function pointers are needed, Adam's answer provided some good examples. The common theme in all those examples is that the caller needs to be able to provide part of the code that runs from the called function. Without function pointers, the only way you could do this would be to copy-and-paste the implementation of the function and change part of it to call a different function, for every individual usage case. For qsort and bsearch, which can be implemented portably, this would just be a nuisance and hideously ugly. For thread creation, on the other hand, without function pointers you would have to copy and paste part of the system implementation for the particular OS you're running on, and adapt it to call the function you want called. This is obviously unacceptable; your program would then be completely non-portable.
As such, function pointers are absolutely necessary for some tasks, and for other tasks, they are a major convenience which allows general code to be reused. I see no reason why they should not be used in such cases.
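As a concrete sketch of that caller-supplies-the-code idea, here is the classic qsort example; the comparator name is arbitrary:

#include <stdio.h>
#include <stdlib.h>

/* The caller supplies the ordering by passing this function's address
   to qsort, which is exactly the "part of the code that runs from the
   called function" described above. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    int v[] = { 4, 1, 3, 2 };
    size_t n = sizeof v / sizeof v[0];

    qsort(v, n, sizeof v[0], cmp_int);

    for (size_t i = 0; i < n; ++i)
        printf("%d ", v[i]);
    putchar('\n');
    return 0;
}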
No, they're not evil. They're absolutely necessary in order to implement various features such as callback functions in C.
Without function pointers, you could not implement:
qsort(3)
bsearch(3)
Window procedures
Threads
Signal handlers
And many more.
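For instance, registering a signal handler is nothing more than handing the library a function pointer; here is a minimal sketch (the handler name is made up):

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t got_sigint = 0;

/* signal() takes a pointer to a function of type void (*)(int). */
static void on_interrupt(int sig)
{
    (void)sig;
    got_sigint = 1;   /* only touch a sig_atomic_t flag inside the handler */
}

int main(void)
{
    signal(SIGINT, on_interrupt);   /* the handler is passed by pointer */

    puts("Press Ctrl-C to stop...");
    while (!got_sigint)
        ;                           /* busy-wait, for demonstration only */

    puts("caught SIGINT");
    return 0;
}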
Closed. This question needs details or clarity. It is not currently accepting answers. Closed 8 years ago.
c89
gcc (GCC) 4.7.2
Hello,
I am looking at some string functions as I need to search for different words in a sentence.
I am just wondering whether the C standard library functions are fully optimized.
For example, functions like these:
memchr, strstr, strspn, strchr, etc
I'm asking in terms of high performance, as that is what I need. Is there anything better?
Regards,
You will almost certainly find that the standard library functions have been optimised as much as they can, and they will probably outdo anything you code up in C.
Note that this is for the general case. If there is some restriction you're able to put on the functions, or some extra piece of information you have on the data itself, you may be able to get your code to run faster, since you have the advantage of that restriction or information.
For example, I've written C code for malloc that blew a library-supplied malloc away because I knew the application would never ask for more than 256 bytes so I just gave 256 bytes on every request. Yes, that was a waste of memory but it allowed speed improvements beyond the general case.
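A hedged sketch of that idea follows; all names and sizes here are made up, there is no free() and no thread safety, and it only illustrates how a restriction can buy speed:

#include <stddef.h>
#include <stdlib.h>

/* Every request is satisfied with a fixed 256-byte block from a static
   pool, so allocation is just a bounds check and an increment. Wasteful
   but fast, which is exactly the trade-off described above. */
#define BLOCK_SIZE  256
#define POOL_BLOCKS 1024

static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
static size_t next_block = 0;

void *fast_alloc(size_t n)
{
    if (n > BLOCK_SIZE || next_block >= POOL_BLOCKS)
        return malloc(n);           /* fall back to the real allocator */
    return pool[next_block++];
}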
But, for the general case, you're better off sticking with the supplied stuff.
Fully optimized? Optimized for what?
Yes, the C standard library functions are written to be very efficient and have been tested and debugged for years, so you definitely shouldn't worry about most of them.
Assuming that you always align your data to 16-byte boundaries and always allocate about 16 bytes extra, it's definitely possible to speed up most stdlib routines.
But if, for example, the string length is not known in advance, or reading just one byte too many can cause a segmentation fault, I wouldn't bother.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
We are planning to use Splint as a code analyzer for our C code base, but we have never tried the tool before, so we would like your input on its benefits, pros and cons.
Lint tools are useful for finding common problems and errors that code reviews tend to miss. My opinion is that you have nothing to lose when doing static code analysis. The only downside is that you might get a lot of false positives or warnings that might be unimportant (e.g. coding style recommendations). You just have to develop good filtering skills. Static analyzers might also not catch everything, but hey, it is better than nothing.
Here is a white paper from the SANS institute that might interest you:
http://www.sans.org/reading_room/whitepapers/securecode/secure-software-development-code-analysis-tools_389
Read this blog post and these slides for a quick overview of what it can do for you.
Splint excels at making your code more idiomatic (and therefore easier to read, easier for various compilers to parse, more portable, and easier to refactor). It can find subtle bugs such as implicit casts between ints and floats, and it tracks down memory leaks and other security vulnerabilities.
Try it: splint hello.c.
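If you want something for it to chew on, a deliberately sloppy hello.c like the sketch below works; the exact wording of Splint's warnings varies by version, but a possibly-NULL buffer and a missing free are the sort of thing it is designed to flag.

/* hello.c -- a small file to try `splint hello.c` on. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);
    strcpy(buf, "Hello World");   /* buf may be NULL: a typical finding  */
    printf("%s\n", buf);
    return 0;                     /* buf is never freed: another finding */
}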
As waffleman suggested, static analysers do produce a lot of false alarms. I have found Prevent to give better alarms than Sparrow; those are the two we use for static analysis.
An example of a typical false alarm and good alarm is:
#include <stdlib.h>

void bar(char **output)
{
    *output = malloc(100);
}

void foo(void)
{
    char *output = NULL;
    bar(&output);
}
In the function bar it would report a memory leak for the pointer output. In the function foo it reports a NULL dereference when bar is called. Nevertheless, it's a trade-off: finding a true alarm among hundreds of false alarms.
So we can find memory leaks that would be missed during code reviews. A Prevent license is expensive, and once an alarm is marked as false it doesn't appear in subsequent analyses; you should check whether Splint does the same.
The tool looks for patterns that could possibly be errors. The advantage is that it may find latent bugs; the disadvantage is that it may find a whole bunch of false positives as well.