Why do people define their own offsetof?

Having just seen "What does the following macro do?", I gotta ask my own question: why do so many applications' headers define offsetof themselves? Is there some reason why <stddef.h> is not to be relied upon?

I don't think it's that they distrust the standard offsetof -- at least from what I've seen, it's usually that they're just unaware of it.

Is there some reason why <stddef.h> is not to be relied upon?
I know one of the reasons. GCC produces a warning when the standard offsetof() is used on fields of C++ classes. That leads some people to roll their own version, which doesn't trigger the warning.
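For reference, such hand-rolled versions typically look like the classic sketch below (the name my_offsetof is mine; by the letter of the standard this is undefined behavior, since it fabricates a member access through a null pointer, but traditional compilers accept it):

#include <stddef.h>   /* for size_t */

#define my_offsetof(type, member) ((size_t)&(((type *)0)->member))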

Or maybe it's legacy code from a C compiler that wasn't ANSI compliant and didn't have offsetof?

Related

Are there targets where `va_end` does anything non-trivial?

All versions of varargs.h and/or stdarg.h I've seen either define va_end as an empty macro or some opaque compiler-specific function that I guess doesn't do anything either. The rationale in the C standard states "some implementations might need it", but gives no more details.
When would there be a real need for a va_end()? Any examples of ABI that would require such, preferably with an explanation?
C has been around for a long, long time and has many standards. These standards tend to give compilers some leeway, letting them implement their own padding/headers/etc. for performance and other reasons. I assume va_end was added because, as the rationale says, "some implementations might need it". Though languages try to abstract away their implementation details, some things inevitably slip through the cracks... Hope this was helpful =]
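Whatever va_end expands to on a given ABI, the portable contract is simply that every va_start is balanced by a va_end before the function returns. A minimal sketch (sum_ints is made up for illustration):

#include <stdarg.h>

int sum_ints(int count, ...)
{
    va_list ap;
    int i, total = 0;
    va_start(ap, count);
    for (i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);   /* often a no-op, but required for portability */
    return total;
}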

Why it is good practice to define custom datatypes in larger projects [duplicate]

Duplicate of: Why does everybody typedef over standard C types?
Please help me understand the reason for defining custom C types in some projects.
In my current company I found that someone made definitions of types that are equivalent to those already defined in <stdint.h>.
Such an approach makes it harder to integrate third-party code into the project, and makes the programmer's work a bit more frustrating.
But I can also see that some projects, like GNOME, do the same. There is a gchar, gsize, and gint32, for example.
Because I don't see ANY reason for such an approach, I kindly ask for an explanation.
What is the reason that <stdint.h> isn't sufficient?
This is not good practice. It only leads to less compatibility and portability. I don't see any reason that it should be done. stdint.h exists to avoid this.
<stdint.h> was standardized in C99. Perhaps the codebase predates <stdint.h>, or it needs to compile on platforms that don't have it. It was an extremely common thing to do in C89 and K&R C when there were no portable fixed size typedefs. Even modern projects may keep around these compatibility shims if they still aim to be compilable on decades-old platforms.
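A minimal sketch of the kind of compatibility shim such codebases carry around (the my_* names are hypothetical, and the fallback widths assume a platform where int is 32 bits, so they would need per-platform adjustment):

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#include <stdint.h>
typedef int32_t       my_int32;
typedef uint32_t      my_uint32;
#else
typedef signed int    my_int32;    /* assumes a 32-bit int */
typedef unsigned int  my_uint32;
#endif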
In my current company I found that someone made definitions of types that are equivalent of those that are already defined in <stdint.h>.
If your codebase targets C99 or later, then there's no need.

Status of cerf, cerfc, ctgamma and clgamma functions?

If we look at the draft of C11, the following names are reserved:
7.31 Future library directions
The following names are grouped under individual headers for convenience. All external
names described below are reserved no matter what headers are included by the program.
7.31.1 Complex arithmetic <complex.h>
The function names
cerf cerfc cexp2 cexpm1 clog10 clog1p clog2 clgamma ctgamma
and the same names suffixed with f or l may be added to the declarations in the
<complex.h> header.
As I would very much like to see the complex gamma functions as part of standard C (because they are a basis for a lot of other complex functions), I wonder what the real significance of the 7.31.1 clause is.
Why only reserve the declarations and not provide their definitions?
Can we expect them in the next C standard or in a minor release? (And if the answer is yes, when is the next standard expected?)
Are there any implementations already available as non-standard extensions of compilers?
A couple of months ago, I published a library, libcerf, that provides the missing functions cerf and cerfc, based on numerical code by Steven G. Johnson. Our implementation is accurate to 13-14 digits, which is good enough for almost every practical use - but in achieving this, one understands how much more work needs to be done to write an acceptable standard implementation: it is not likely that this will be undertaken by anybody anytime soon.
So concerning your question about clgamma and ctgamma: don't wait for the standard. Search for code that just works. Ideally, wrap this code and provide a library like libcerf that is almost as good as a standard implementation.
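Usage is straightforward; here is a minimal sketch, assuming libcerf's C99-complex interface (the header name cerf.h and the double complex signature should be checked against the library's documentation):

#include <stdio.h>
#include <complex.h>
#include <cerf.h>    /* from libcerf; link with -lcerf */

int main(void)
{
    double complex z = 1.0 + 2.0 * I;
    double complex w = cerf(z);    /* complex error function */
    printf("cerf(1+2i) = %g%+gi\n", creal(w), cimag(w));
    return 0;
}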
The glibc maintainers almost never want to add new functions that aren't standardized in any way, and cerf is NOT part of the C99 standard; it is merely reserved for possible future use, which makes it especially unlikely that glibc will accept an implementation until the required behavior of the function has been standardized.
It sure would be nice, though, if it were incorporated: working just like erf() does in C++ code, but named cerf().
As per the manual:
cerf[f|l] (C99): An optional complex error function that, if provided, must be declared in <complex.h>.
As per the above statement, the functions will be declared only if provided.

Should I use ANSI C (C89)?

It's 2012. I'm writing some code in C. Should I still be using C89? Are there still compilers that do not support C99?
I don't mind using /* */ instead of //.
I'm not sure about C89 forbidding mixing declarations and code. I'm kind of leaning towards the idea that it's actually more readable to have all the declarations in one place, and if it isn't, the function is too long.
VLAs look useful but I haven't needed them yet.
Should I stick with C89 if I don't have a compelling reason not to? Are there other things I haven't considered?
Unless you know that you cannot use a C99-compatible compiler (the Visual Studio C compiler is the most prominent candidate), there is no good reason for not using the nice things C99 gives you.
However, even if you need to support that compiler you can use some C99 features - just not all of them.
One feature of C99 that is incredibly handy is being able to write for (int i = ...) instead of having to declare your loop variable at the top of the function, especially since C actually has block scope. That's the kind of declaration where having it at the top really doesn't improve readability.
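For illustration (walk and the loop bodies are placeholders):

void walk(int n)
{
    /* C99: the counter is scoped to the loop itself */
    for (int i = 0; i < n; i++) {
        /* ... */
    }

    /* C89: the declaration must sit at the top of an enclosing block */
    {
        int i;
        for (i = 0; i < n; i++) {
            /* ... */
        }
    }
}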
There is a reason (or many) why C89 was superseded by C99. If you know for sure that no C99 compiler is available for your particular piece of work (unlikely, unless you are stuck with Visual Studio, which has never treated C as a first-class citizen anyway), then you need to stay with C89; otherwise you should certainly put yourself in a position where you can benefit from the last 20+ years of improvement. There is nothing inherently slower about C99.
Perhaps you should even consider looking at the newest C11 standard. There have been some important fixes for dealing with Unicode that any C programmer could benefit from (other changes in the standard are absolutely minimal)...
Good code is a mixture of performance, scalability, readability, and maintainability.
In my opinion, C99 makes code easier to read and maintain. Very, very few compilers don't support C99, so I say go with it. Use the tools you have available, unless you are certain you will need to compile your project with a compiler that requires the earlier standard.
Check out these links to learn more about the advantages to C99:
http://www.kuro5hin.org/?op=displaystory;sid=2001/2/23/194544/139
http://en.wikipedia.org/wiki/C99#Design
Note that C99 also adds library functions such as snprintf, which are very useful, and it has better floating-point support. Also, I find macros to be extremely helpful, especially when working with math-intensive applications (like cryptographic algorithms).
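For example, unlike sprintf, snprintf bounds the write to the buffer. A minimal sketch (format_x is a made-up name):

#include <stdio.h>

void format_x(int x)
{
    char buf[16];
    /* snprintf never writes past buf and NUL-terminates it
       whenever the size argument is nonzero */
    int needed = snprintf(buf, sizeof buf, "x = %d", x);
    if (needed >= (int)sizeof buf) {
        /* output was truncated; 'needed' is the full length required */
    }
}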
I disagree with Paul R's "bottom line" comment. There are multiple cases where C89 is advantageous for portability.
Targeting embedded systems, which may or may not have compilers supporting C99:
https://groups.google.com/forum/#!topic/comp.arch.embedded/WNvhw3T_9pI%5B1-25%5D
Targeting the TinyCC compiler, as might be required in a restricted environment where installing a gigantic toolchain is either impractical or not allowed. (TCC is no longer being developed, and Bellard's last statement as to ISO C99 support was that it was "heading towards" full compliance.)
Supporting dynamic compilation via libtcc (see above).
Targeting MSVC, as others have noted.
For source-compatibility with projects that may be required by their company to use the C89 standard. This is especially relevant if you're writing an open source library, and want to maximize its application in some industry.
As cegfault noted, some of the C99 features as listed on Wikipedia can be very useful, but none I would consider indispensable if your priority is portability, or any of the above reasons apply.
It appears Microsoft hasn't budged on C99 compliance. SimonRev from Beijer Electronics commented on a related MSDN thread in November 2016:
In broad strokes, the only parts of the C99 compiler that were implemented are those parts that they needed to keep the C++ compiler up to date.
Microsoft has done basically nothing to the C compiler since VC6, and they haven't made much secret that C++ is their vision of the future of native code, not C.
In conclusion, if you want portability for embedded or restricted systems, dynamic compilation, MSVC, or compatibility with proprietary source code, I would say C89 is advantageous.

Alternatives to C "inline" keyword

My course instructor has repeatedly emphasized and asked us not to use the "inline" keyword for functions. He says it is not "portable" across compilers and is not "standard". Considering this, are there any "standard" alternatives that allow for "inline expansion"?
Your course instructor is wrong. It is standard. It's actually in the current standard, right there in section 6.7.4 Function specifiers (C99). The fact that it's a suggestion to the compiler that may be totally ignored does not make it any less standard.
I don't think it was in C89/90, which may be what some embedded compilers use, but I would give serious consideration to upgrading in that case.
However, even where inline is available, I generally leave those decisions up to the compiler itself since most modern ones are more than capable of figuring out how best to optimise code (and usually far better than I). The inline keyword, like register and auto, is not something I normally worry about at all.
You can use macros instead, since that's relatively simple text substitution that generally happens before the compile phase, but you should be aware of the limitations and foibles.
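The best-known foible is double evaluation, sketched here with a made-up SQUARE macro:

#define SQUARE(x) ((x) * (x))

void demo(void)
{
    int a = 3;
    int b = SQUARE(a + 1);   /* fine: expands to ((a + 1) * (a + 1)) */
    int c = SQUARE(a++);     /* expands to ((a++) * (a++)): undefined behavior */
    (void)b; (void)c;
}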
Or you can manually inline code (i.e., duplicate it), although I wouldn't suggest this as an option since it may quickly become a maintenance nightmare.
Myself, I would write the code using normal functions without any of those tricks and then introduce them where necessary (and only if you can demonstrate that they're needed, such as a specific performance issue).
You should always assume that the coder who has to maintain your code is a psychopathic killer who knows where you live :-)
As others have said, inline was integrated into the C standard 11 years ago.
Contrary to what was indicated above, inline does make a difference, since it changes the visibility properties of the function. In particular, for large libraries with a lot of functions declared only static, you might end up with one version of each of these functions in every object file (e.g., when you compile with debugging switched on).
Please have a look at this post: Myth and reality about inline in C99
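The C99 pattern that post describes looks roughly like this (file and function names are hypothetical): the definition goes in the header with plain inline, and exactly one translation unit supplies the external definition via an extern declaration.

/* mymath.h */
inline int max_int(int a, int b) { return a > b ? a : b; }

/* mymath.c -- forces the single external definition */
#include "mymath.h"
extern inline int max_int(int a, int b);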
As evil as they may be, macros are still king (although specific compilers may support extra capabilities).
Here, now it's "portable across compilers":
#if (__STDC_VERSION__ < 199901L)
#define inline
#endif
static inline int foobar(int x) { /* ... */ }
By the way, as others have said, the inline keyword is just a hint and rather useless, but the important keyword is static. Unless your function is declared static, it will have external linkage, and the compiler is unlikely to consider it a candidate for inlining when it makes its own decisions about which functions to inline.
Also note that, unlike in C++, plain inline in C does not by itself provide an external definition: unless the function is static (or one translation unit supplies an extern declaration), any call the compiler chooses not to inline will fail to link.
