Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
See http://pubs.opengroup.org/onlinepubs/009696699/basedefs/sys/socket.h.html (the Issue 7 edition at http://pubs.opengroup.org/onlinepubs/9699919799, from 2013, still says the same thing).
sockaddr_storage is meant to be cast to other structure types, but as far as I can tell that contradicts the aliasing rules of the ANSI and ISO C standards. (Objects may not be accessed through pointers to incompatible types, with the exceptions that anything may be accessed through the three character types and that a structure and its first member are interchangeable.)
I know that this practice of working with sockets existed long before C was standardised, but POSIX is supposed to conform to ISO C, and yet its manual contradicts the standard (even in the newer versions of POSIX).
Why did they make it like this in the first place?
Why didn't they change it?
The strict-aliasing rules in the standard constrain user code, not implementation code. Since the POSIX headers and libraries are part of the implementation, there is no actual conflict between the POSIX and C standards.
On an open-source platform, and in particular on Linux, where the C library and compiler are developed by different teams, this makes life difficult for implementors, but that is their concern, not yours. For example, the implementors could:
refrain from exposing the potential conflict between the standards (that is, disable strict-aliasing optimizations);
admit that their implementation is not POSIX compliant (and note that, for example, there are no POSIX-certified Linux distributions);
provide facilities to ensure that the potentially conflicting facilities do not actually conflict. From the point of view of the C standard, this would be an extension.
This last option is how the gcc and glibc teams are working to resolve the sockaddr issue; see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71255
As a matter of fact, I do not think there is a violation of the strict aliasing rule here. Yes, you cast the pointer to a different type when you call a function, but who said the object has to be accessed through a pointer of that type?
Protocol implementations know the proper type of the structure, so when they access it, they convert the pointer back to the proper type. The conversion is only used to pass the pointer from one routine to another; the converted type is not used to access the data.
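A minimal sketch of the pattern the answers describe (the helper name bind_ipv4 is mine, and error handling is omitted): the application fills in a struct sockaddr_in, the cast to struct sockaddr * only converts the pointer so it can be passed to bind(), and the implementation reads the data back through the correct type, selected by sa_family.

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int bind_ipv4(int fd, uint16_t port)
{
    struct sockaddr_in addr;                 /* the "proper" type for IPv4 */

    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    /* The cast is only used to pass the pointer across the interface;
       the data is still accessed through the IPv4 structure inside the
       implementation, based on sa_family. */
    return bind(fd, (struct sockaddr *)&addr, sizeof addr);
}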
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
ISO/IEC 9899 standardizes the prototypes of the functions of the C standard library and describes their behavior. It specifies the identifier, the return type and the parameters, with their matching types, of each C standard library function.
But why does it not specify the definitions - the core of how the specific functions actually work?
Why can a C standard library function X differ in its actual source code between, for example, the gcc compiler suite on Linux (GNU C Library), the clang suite on macOS and the core system dynamic libraries for Microsoft Visual C++ on Windows? Why does it depend on the implementation, the operating system and the respective compiler design?
Edit:
I know the question seems bad to most of you at first sight, but it definitely deserves an answer, since I do not know the reason for this yet.
Don't get me wrong: since the question was closed as opinion-based, I am not suggesting that ISO should standardize the definitions. I am just asking why things are this way and want to learn from your knowledge and experience.
Take strlen for example. If the ISO C standard standardized the definition of this function, it would probably look like this:
#include <stddef.h>

size_t strlen(const char *s)
{
    size_t l = 0;
    while (s[l]) l++;   /* count characters up to the terminating '\0' */
    return l;
}
This is highly inefficient. The GNU C library has implementations written in assembly and C that are very fast, but aren't portable.
Some functions may be impossible to standardize. For example, how would the standard define putchar, vfprintf, and fwrite? What about functions that need assembly, like longjmp? Or "macros" like setjmp?
Other definitions could be exploited. For example, if the C standards committee standardized the definition of memcpy, two things would happen:
people could come to rely on the copy order (see the sketch after this list), and
existing implementations would be invalidated.
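To make the first point concrete (my illustration, not from the answer): if the standard pinned down a byte-by-byte, front-to-back copy, code that overlaps source and destination could start relying on the exact result, and every implementation that copies in a different order or in wider chunks would become non-conforming. Today such overlap is undefined, and memmove is the tool for it.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[] = "abcdef";

    /* memcpy(buf + 1, buf, 5); */   /* overlapping regions: undefined behaviour today */
    memmove(buf + 1, buf, 5);        /* well-defined; buf becomes "aabcde" */
    puts(buf);
    return 0;
}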
This question already has answers here:
Why does everybody typedef over standard C types?
(4 answers)
Closed 3 years ago.
Please help me understand the reason for defining custom C types in some projects.
In my current company I found that someone made definitions of types that are equivalent to those already defined in <stdint.h>.
Such an approach makes it harder to integrate third-party code into the project, and makes the programmer's work a bit more frustrating.
But I can also see that some projects, like GNOME, do the same: there is a gchar, a gsize and a gint32, for example.
Because I don't see ANY reason for such an approach, I kindly ask for an explanation.
Why isn't <stdint.h> sufficient?
This is not good practice. It only leads to less compatibility and portability. I don't see any reason that it should be done. stdint.h exists to avoid this.
<stdint.h> was standardized in C99. Perhaps the codebase predates <stdint.h>, or it needs to compile on platforms that don't have it. It was an extremely common thing to do in C89 and K&R C when there were no portable fixed size typedefs. Even modern projects may keep around these compatibility shims if they still aim to be compilable on decades-old platforms.
In my current company I found that someone made definitions of types that are equivalent to those already defined in <stdint.h>.
If your codebase targets C99 or later then there's no need.
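For illustration, a typical compatibility shim might look like this (a sketch; the header and type names are made up). On C99 and later it just forwards to <stdint.h>; only on older compilers does it fall back to hand-chosen typedefs, which must be verified for each supported platform.

/* my_types.h - hypothetical project header */
#ifndef MY_TYPES_H
#define MY_TYPES_H

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#include <stdint.h>
typedef uint8_t  u8;
typedef uint16_t u16;
typedef uint32_t u32;
#else
/* C89 fallback: these sizes are assumptions that hold on the
   project's supported platforms and must be checked per platform. */
typedef unsigned char  u8;
typedef unsigned short u16;
typedef unsigned long  u32;
#endif

#endif /* MY_TYPES_H */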
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Basically, I have an inline function in C:
struct array {
unsigned long size;
void* items;
};
typedef struct array* Array;
inline Array array_create(unsigned long initsize);
inline void array_free(Array this);
Am I free to use the this keyword in this kind of situation, or is it better to avoid it, and why (not)?
EDIT: This question originated from a bug in my code where I used inline void array_free(Array array);, which changed the result of sizeof(array); that gave me the idea to use this instead of adapting to the (in my opinion ugly) sizeof(struct array);.
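For context, a minimal sketch of the pitfall the EDIT describes (the body of array_free here is my reconstruction, not the original code): once the parameter is named array, sizeof(array) measures the pointer rather than the structure.

#include <stdio.h>
#include <stdlib.h>

struct array {
    unsigned long size;
    void *items;
};
typedef struct array *Array;

void array_free(Array array)
{
    /* Pitfall: sizeof(array) is the size of a pointer, not of struct array. */
    printf("sizeof(array)        = %zu\n", sizeof(array));
    printf("sizeof(struct array) = %zu\n", sizeof(struct array));

    free(array->items);
    free(array);
}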
It's technically correct because C is not C++, so this is not a keyword in C.
Whether it's wise, now that's a different question. If there is any chance that this piece of code will ever be compiled as C++, then an identifier called this will break the code.
Using this in any fashion you want is totally valid C.
What you have to ask yourself (and these considerations apply to any C++ reserved word, like class, final, etc.) is:
Do I use a tool that highlights this as a keyword, conveying the wrong message? For example, Visual Studio highlights this even when you're in a .c file, so some confusion may arise.
Do I plan to promote my code to C++ in the future? In that case you'll have some extra work to do that could be avoided.
Does my code interact with C++? This is not a problem per se, but you have to bear in mind that your code will be read by C++ programmers as well, and you don't want to confuse people who may not know C in great detail (even though someone may say it's their duty to be aware of what they're doing when reading another language).
Since this is something that can be avoided, I find using it ill-advised, though not incorrect.
I'm not saying you have to study C++ before writing C, but once you know something, it's good practice to make good use of it.
A subtle problem you may cause is to make your code 'uncallable' from C++. For example, a
#define this some_definition
in a C header file that is later included from C++ may have weird effects on your code.
The reasons not to use this in standard C are that it makes your code:
Slightly less readable, due to the well-known meaning of this in C++. Your code can look as if it were C++, and an uninformed reader can easily get confused.
Unportable to C++. If you (or someone else) ever want to take this code and compile it as C++, it will fail. That's a downside and an upside at the same time, since getting an error can indicate that care must be taken, whereas not getting one might let important issues slip.
It depends what your objective is.
If your C program will always be built with a C compiler that is not a C++ compiler, then it makes no difference whether you use this as an identifier in your code. It is not a reserved identifier, so you are free to use it.
The potential problem with that premise is that a number of mainstream C compilers are actually C++ compilers, or support some C++ features as extensions, so they may reject your code (or, less likely, do something with it that you don't expect). It is not possible to predict with absolute certainty that the vendor of your compiler of choice will never release a future version of their C compiler that rejects or mishandles your code, even if they give you a promise in writing. The likelihood of this happening is relatively low, but non-zero.
In the end you need to decide what risk you are willing to take with maintaining your code in future.
Of course, if you are a C fanatic who wants your code to be incompatible with C++ (yes, such people do exist), then deliberately using keywords or reserved identifiers that are specific to C++, as well as those specific to more recent versions of C, may be a worthwhile approach.
As has already been mentioned, you can use any non-reserved identifier as a variable name.
However, I suggest using something like pThis, or [struct name]This, or similar, to express your intent: a C struct used together with functions that take a pointer to a [struct name] instance as their first argument and are meant to be used much like member functions of a C++ class.
This way your code may be more readable and your intent more understandable by someone who is using it.
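Applied to the question's declarations, that suggestion might look like this (pThis is just a conventional name, nothing in the language requires it; the struct is restated for completeness):

struct array {
    unsigned long size;
    void *items;
};
typedef struct array *Array;

/* The first argument plays the role of C++'s implicit "this". */
Array array_create(unsigned long initsize);
void  array_free(Array pThis);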
C does not have a this keyword; it exists only in C++, inside a class. Your code is C, so this is simply a local parameter name through which you access the array struct.
Yes, you can. If the code happens to get compiled as C++, that is not your fault. However, if that happens, other things won't be accepted either; for instance, you likely assign the result of malloc without a cast, which C allows but C++ does not.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I'm starting a pet project aimed at portability. It's a simple platform game, and I'm planning to compile it for many different platforms with different toolchains. The video/input/system stuff is already abstracted by having multiple video drivers, which I include based on #ifdefs around my code. Each platform makefile defines the platform (DC, NDS, PSP, etc.), and then I include the proper video drivers, which are C files with various functions called around my code.
However, I'm not sure about other caveats of portable applications in C. Should I redefine stuff from the standard library: u8, u16, u32 and s8, s16, s32, etc.? What knowledge can you share with me for this project?
A portable program is a program that:
only uses the features of the language and library defined in the C Standard
does not invoke undefined behavior
does not depend on unspecified or implementation defined behavior.
For a list of undefined, unspecified and implementation-defined behaviors, you can consult the C11 Standard, Annex J (Portability issues). A small example of the implementation-defined case follows.
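As an example of the third point (my code, not from the answer): whether plain char is signed is implementation-defined, so the portable version spells out the signedness it needs.

#include <stdio.h>

int main(void)
{
    /* Implementation-defined: plain char may be signed or unsigned,
       so this prints either 200 or -56 depending on the platform. */
    char c = (char)200;
    printf("%d\n", c);

    /* Portable: the required signedness is stated explicitly. */
    unsigned char u = 200;
    printf("%d\n", u);    /* always 200 */
    return 0;
}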
Writing in C is more or less portable as long as you make no assumptions about the sizes of your types and the pointers you use to access them. I personally prefer using the types defined in stdint.h (http://pubs.opengroup.org/onlinepubs/7999959899/basedefs/stdint.h.html), which provides uint8_t, uint16_t and so on, but feel free to research alternatives, such as <sys/types.h> (from the POSIX Standard: 2.6 Primitive System Data Types), which defines u_int8_t etc.
Possibly, you will end up defining your own types based on what you manage to piece together from the various sources found on the net, such as game_int_16, game_int_32, ...
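If you do end up with project-specific names like that, one low-effort approach (a sketch; the game_int_* names come from the answer, the unsigned variants and the mapping onto <stdint.h> are my illustration) is to build them on the standard fixed-width types rather than guessing sizes per platform:

/* game_types.h - hypothetical project header */
#ifndef GAME_TYPES_H
#define GAME_TYPES_H

#include <stdint.h>

typedef int16_t  game_int_16;
typedef int32_t  game_int_32;
typedef uint16_t game_uint_16;
typedef uint32_t game_uint_32;

#endif /* GAME_TYPES_H */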
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I usually write C code in C89, but some features of C99 (like intxx_t, __VA_ARGS__ or snprintf) are very useful and can even be vital.
Before I move my requirements from C89 to C99, I wanted to know which C99 features are widely supported and which ones are not widely supported or even considered harmful.
I know we could just check what our target compilers support, but this would narrow our support a lot, and as this is open source software, I'd prefer wider support.
For example, we use the Solaris compiler (suncc) and gcc, but there might be other compilers we would rule out even though we could keep compatibility with very little effort.
For example, I have never worked on Windows, nor do I know anything about Windows compilers, but it would be good to keep Windows compatibility.
goto is still considered harmful.
Somehow I have collected four down votes. I presented the statement above to add levity, and am only 30% serious about the concept behind it.
I expect the down votes are from youngsters who do not understand the history of programming languages. Not every single goto is evil, but, compared to the 100% unadulterated spaghetti code I have worked on (millions of lines of FORTRAN 66), it is reasonable and productive to replace as many goto statements as possible with structured statements (for, while, do..while, switch). Sometimes, though, a goto is just fine when it avoids complexity, such as extra flag variables to break out of multiple nested loops; see the sketch below.
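A sketch of that nested-loop case (my illustration): with goto, the search leaves both loops in one step, whereas the structured alternative needs an extra found flag and repeated tests.

/* Find value in a 3x3 table; returns 1 and fills *row/*col on success. */
static int find(const int t[3][3], int value, int *row, int *col)
{
    int i, j;

    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            if (t[i][j] == value)
                goto found;          /* leave both loops at once */
    return 0;                        /* not found */

found:
    *row = i;
    *col = j;
    return 1;
}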
Well, gcc is basically going to be gcc regardless of which desktop OS you're targeting.
Visual C++, being primarily a C++ compiler, isn't quite as concerned with the C99 spec. stdint.h does declare your favorite intxx_t typedefs. __VA_ARGS__ is available. _Bool, _Complex, and _Pragma aren't implemented in the Microsoft Visual C++ compiler. I'm pretty sure the %a conversions in printf/scanf haven't been implemented, though maybe VC2010 handles them. snprintf is present, but has a leading underscore and slightly different semantics.
Short answer: The "easier" a C99 feature is to implement without changing compiler grammars or replumbing the standard library, the more likely VC++ is to support it. If there's a conflict between C99 and C++, expect C++ to win.
A number of C99 features are optional, so their lack isn't technically non-conforming. I won't distinguish below.
Hmm, Windows doesn't have <stdint.h>, although there is an open-source version of stdint.h for Microsoft compilers. Even where the header is provided, many of the individual types are missing.
Complex and imaginary support is often missing or broken.
Extended identifiers and wide characters can be problem points.
See this list of C99 feature issues in gcc.
Runtime sizeof (sizeof applied to a variable-length array, which has to be evaluated at run time) is a nightmare for compiler writers, so I consider it harmful.
glibc does not implement a C99-conforming realloc, so realloc(ptr, 0) is not portable.
http://sourceware.org/bugzilla/show_bug.cgi?id=12547
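One common way to sidestep the problem (my sketch, not from the linked report) is to never hand a size of 0 to realloc, so that a null return can always be treated as an allocation failure:

#include <stdlib.h>

/* Hypothetical wrapper: size 0 is treated as an explicit free, so the
   glibc-versus-C99 disagreement about realloc(ptr, 0) never arises. */
void *xrealloc(void *ptr, size_t size)
{
    if (size == 0) {
        free(ptr);
        return NULL;
    }
    return realloc(ptr, size);
}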
restrict became a keyword in C99. That is the implementation encroaching on the users' namespace. If you have a valid C89 program that contains the word restrict, you must change your program to make it work with C99. In other words: no backward compatibility. If they were going to break backward compatibility, they should have removed gets from the standard first.
The type-generic maths functions from <tgmath.h> are not necessarily widely implemented, though they do seem to be provided with GCC 4.2.1 on Mac OS X 10.6.2.
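Where the header is available, a minimal usage sketch looks like this: the single sqrt spelling resolves to sqrtf, sqrt or sqrtl according to the argument type.

#include <stdio.h>
#include <tgmath.h>

int main(void)
{
    float       f  = 2.0f;
    double      d  = 2.0;
    long double ld = 2.0L;

    /* One spelling, three precisions, thanks to <tgmath.h>. */
    printf("%f %f %Lf\n", sqrt(f), sqrt(d), sqrt(ld));
    return 0;
}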