Is there a C standard conforming way to do struct alignment?
I know that one can put the largest members first, but I am in a situation where I am about to implement a protocol.
No, not in general.
You absolutely cannot change the order of members in a structure; they have to appear in the exact order declared.
But you also cannot know in advance what some future processor will prefer, in terms of alignment.
Protocols (which are "external" representations) should never involve directly copying to/from a struct in memory; instead you must serialize/deserialize each struct member on its own.
For implementing a protocol, it is best to serialize the values as needed and deserialize them as well.
This retains compatibility across architectures with varying data field sizes, alignment requirements and endianness.
You can specify alignment in C11 with the _Alignas keyword (see also stdalign.h). You can look it up in the draft of the C11 standard that is freely available.
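For instance, a minimal C11 sketch (the struct and its field are made up for illustration):

    #include <stdalign.h>
    #include <stdio.h>

    struct buffer {
        alignas(16) unsigned char data[64];  /* force 16-byte alignment */
    };

    int main(void) {
        printf("alignof(struct buffer) = %zu\n", alignof(struct buffer));
        return 0;
    }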
Depending on some kind of compiler directive is probably not a good idea. I would not do it. Custom serializers/deserializers are the standard here.
I suggest writing a simple parser that would process a struct declaration and prepare a set of serialize/deserialize functions. If it is a simple plain old data structure, the parser and code generator will be very simple and efficient.
It sounds like you plan to do something like a union overlay to encode / decode the protocol. This is a bad idea. It is much better to do a proper serialise / deserialise, field by field.
This is much more portable.
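To make the field-by-field approach concrete, here is a minimal sketch in C. The message layout (a uint32_t id plus a uint8_t flags, serialized big-endian) is invented purely for illustration:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical protocol message; the field names are illustrative only. */
    struct msg {
        uint32_t id;
        uint8_t  flags;
    };

    /* Serialize field by field in big-endian order. No struct is copied
       wholesale, so padding and host endianness never reach the wire. */
    size_t msg_serialize(const struct msg *m, unsigned char buf[5]) {
        buf[0] = (unsigned char)(m->id >> 24);
        buf[1] = (unsigned char)(m->id >> 16);
        buf[2] = (unsigned char)(m->id >> 8);
        buf[3] = (unsigned char)(m->id);
        buf[4] = m->flags;
        return 5;
    }

    size_t msg_deserialize(struct msg *m, const unsigned char buf[5]) {
        m->id = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
              | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
        m->flags = buf[4];
        return 5;
    }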
I have a disagreement with my colleague about sending/receiving a data structure between two machines (with different compilers, too) over UART.
Our data structure has several simple variable types as its fields (like int32, uint8, etc.).
In his opinion, to get a data structure with the same sequence and alignment of its fields on both sides, we have to use a serializer and deserializer. Otherwise, our code has the potential for different struct layouts between the two sides.
But I have done it without a serializer/deserializer many times and never saw any problem.
I think using #pragma pack(...) guarantees what we want, because most of the differences between compilers (when they lay out data structures) are in field alignment, due to padding added for speed or size optimization. (Ignore endianness differences.)
For more detail, we currently want to send/receive a struct between a Cortex-M4 (IAR) and a PC (Qt on Windows) over UART.
Am I wrong, or is my colleague?
This is, I'm afraid, fundamentally a question of opinion, that can never be fully resolved.
For what it's worth, I am adamantly, vociferously with your colleague. I believe in writing explicit serializers and deserializers. I don't believe in blatting out an in-memory data structure and hoping that the other side can slurp it down without error. I don't believe in ignoring endianness differences. I believe that "blatting it out" will inevitably fail, in the end, somewhere, and I don't want to run that risk. I believe that although the explicit de/serializers may seem to be more trouble to write up front, they save time in the long run because of all the fussing and debugging you don't have to do later.
But there are also huge swaths of programmers (I suspect a significant majority) who agree entirely with you: that given enough hacking, and suitable pragmas and packing directives, you can get the "blat it out" technique to work at least most of the time, and it may be more efficient, to boot. So you're in good company, and with as many people as there are out there who obviously agree with you, I can't tell you that you're wrong.
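For what it's worth, if you do go the "blat it out" route, you can at least turn silent layout mismatches into build failures. A sketch, assuming both toolchains support #pragma pack(push/pop) and C11 _Static_assert (the field names are made up):

    #include <stddef.h>
    #include <stdint.h>

    #pragma pack(push, 1)
    struct wire_msg {           /* illustrative fields only */
        uint32_t timestamp;
        int16_t  value;
        uint8_t  status;
    };
    #pragma pack(pop)

    /* Fail the build, not the field test, if a compiler disagrees. */
    _Static_assert(sizeof(struct wire_msg) == 7, "unexpected padding");
    _Static_assert(offsetof(struct wire_msg, status) == 6, "unexpected layout");

Note that this still does nothing about endianness, which is one reason the explicit de/serializers remain the safer bet.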
All of my initial programming experience has been in object-oriented languages (Java, Python). I am now learning C, and it seems that there are still things that resemble objects.
Say for example, a FILE pointer created with the standard C library. This is just a pointer to the memory location of a struct. Is this essentially the same thing as an object in OO languages?
There are existing questions asking about the difference between a struct and a class, but I'm asking about how a program uses those things. A struct is used or accessed via a pointer, while an object is a particular instance of a class. In this sense, it seems that a class is more general than a struct. Though I really only added this paragraph to prevent this question from being marked as a duplicate, and it strays from my original question.
If a FILE pointer is, indeed, comparable to some FILE object in a different language, then where is the key difference between how that "thing" called FILE is handled in an object-oriented language vs a non-object-oriented language? It seems like the line starts to blur.
In the C programming language, an object is a “region of data storage in the execution environment, the contents of which can represent values” (cf ISO 9899:2011 §3.15). Almost everything is an object, including pointers, arrays, structures, and integers (but not functions).
This notion however is different from what you understand as an “object” in most object-oriented languages. Notably, objects in C don't have behaviour associated with them, don't have classes and don't have any guarantees whatsoever. There isn't even a guarantee that an object may represent any value at all. An object is only a bit of memory.
The typical API design pattern in procedural programming languages like C is that there is a set of functions (like fopen, fprintf, fclose, fwrite, etc.) and a set of structure types (like FILE) that collect data required for these functions. The association between these structures and the corresponding behaviour is made by passing structures to functions.
You can build all the things you have in an object-oriented language like that, including virtual function calls and classes, you just have to build all of this manually. I believe this is a strength of C as you are not forced into a certain program structure, you can apply the design pattern of object-orientation where appropriate and use other approaches where not.
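As a minimal sketch of what building it manually looks like (every name here is invented for illustration), hand-rolled dynamic dispatch is just a struct of function pointers:

    #include <stdio.h>

    struct shape;                       /* forward declaration */

    /* The "vtable": one function pointer per virtual method. */
    struct shape_ops {
        double (*area)(const struct shape *self);
    };

    /* The "base class": every object starts with its vtable pointer. */
    struct shape {
        const struct shape_ops *ops;
    };

    /* A "derived class": the base comes first, then its own data. */
    struct square {
        struct shape base;
        double side;
    };

    static double square_area(const struct shape *self) {
        const struct square *sq = (const struct square *)self;
        return sq->side * sq->side;
    }

    static const struct shape_ops square_ops = { square_area };

    int main(void) {
        struct square sq = { { &square_ops }, 3.0 };
        struct shape *s = &sq.base;
        printf("area = %f\n", s->ops->area(s));  /* a virtual call, by hand */
        return 0;
    }

This is the same functions-plus-structure pattern described above, with the extra step of dispatching through function pointers stored with the object.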
It might be a silly question, but I'm interested in it very much. Is it possible to implement operator new, dynamically expanding arrays, and classes in pure C?
Any links or code examples will be appreciated.
new: #define new(type) malloc(sizeof(type)) (have to call it using function syntax, like struct stat *st = new(struct stat))
dynamically expanding arrays: realloc plus some custom array-manipulation functions (like push_back, etc.) - this is commonly implemented by third-party C utility libraries (and, as @Mgetz points out, some compilers have built-in extensions for it); see the sketch after this list
classes: structs with function pointer members (this is very common in several projects, such as the Linux kernel)
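Regarding the second item, here is a minimal sketch of such a growable array built on realloc (the int_vec name and API are hypothetical):

    #include <stdlib.h>

    /* A minimal growable array of ints. */
    struct int_vec {
        int    *data;
        size_t  len;
        size_t  cap;
    };

    /* Append one element, doubling the capacity as needed
       (amortized O(1) per push). Returns 0 on success. */
    int int_vec_push(struct int_vec *v, int value) {
        if (v->len == v->cap) {
            size_t new_cap = v->cap ? v->cap * 2 : 8;
            int *p = realloc(v->data, new_cap * sizeof *p);
            if (!p)
                return -1;      /* old buffer is still valid */
            v->data = p;
            v->cap = new_cap;
        }
        v->data[v->len++] = value;
        return 0;
    }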
You might want to look at GObject, which is a C library providing some object-oriented features to C. Also see the dozens of hits you get for googling "Object-Oriented C".
A quick google search revealed this:
http://ooc-coding.sourceforge.net/
Haven't read it through but it sounds like what you're after.
Yes, it is possible (common, even?) to implement object-orientedness in C - or at least the bits that are especially needed.
An example is a garbage collector I once created by storing the pointers to malloc'ed memory, together with their free functions, in linked lists.
The best thing about C is that it just works and there is almost zero overhead. The more work a language does for you automatically, the more overhead there can be - though this is not always the case.
It depends on whether it is OK for you to reimplement the compiler.
If it is, you can do whatever you wish; otherwise:
new - as an operator - no, but you can define a function plus macros that will simulate it.
classes - yes, you can simulate them pretty closely with static functions and arrays of pointers to functions, but there will be no overloading.
expanding arrays - yes, with the class simulation above.
So, across my programming experience I have come across two types of type annotations for statically typed languages; I call them 'before' and 'after'. C-style languages use the format
int i = 5
While most non-c-family languages use the format
var c:int = 5
Examples of the former category would be C, C++, Java; examples of the latter category would be Scala, Haxe, Go.
This may seem to some to be superficial, but my question is: what are the advantages of each style? Why use one over the other? Why did C adopt that style in the first place?
The machine doesn't care - it's just that the people who designed certain languages felt that some types of syntax are better or more easily readable than others. Modern compilers usually have several stages of processing, and almost all of these syntactic differences are lost after the first stage, which parses the text and converts it into compiler-internal structures (an AST - abstract syntax tree).
There are some historical precedents, e.g. "prefix" vs "infix" vs "postfix" notation (http://en.wikipedia.org/wiki/Polish_notation, http://en.wikipedia.org/wiki/Infix_notation, http://en.wikipedia.org/wiki/Reverse_Polish_notation), which mattered in edge cases in the history of computer engineering: infix notation is usually harder to parse and requires more memory than postfix/RPN notation, so it was avoided where resources were really scarce (several KiB of memory or less). Most of those reasons are now obsolete, as hardware is powerful enough.
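To illustrate why postfix is so cheap to evaluate, here is a toy sketch (single-digit operands only, no error checking): one value stack and a single left-to-right pass, with no parse tree at all:

    #include <ctype.h>
    #include <stdio.h>

    /* Evaluate a single-digit RPN expression, e.g. "34+2*" == (3+4)*2. */
    static int rpn_eval(const char *s) {
        int stack[64], top = 0;
        for (; *s; s++) {
            if (isdigit((unsigned char)*s)) {
                stack[top++] = *s - '0';
            } else {
                int b = stack[--top], a = stack[--top];
                switch (*s) {
                case '+': stack[top++] = a + b; break;
                case '-': stack[top++] = a - b; break;
                case '*': stack[top++] = a * b; break;
                case '/': stack[top++] = a / b; break;
                }
            }
        }
        return stack[0];
    }

    int main(void) {
        printf("%d\n", rpn_eval("34+2*"));  /* prints 14 */
        return 0;
    }

An infix evaluator would additionally need to handle operator precedence and parentheses, which is exactly the extra parsing work and memory referred to above.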
Today, when designing a language, the choice of such syntax details is influenced by trying to make the language similar to some other popular language or group of languages for which there are already existing programmers, to avoid making a "language from Mars" which few people will use.
tl;dr: It depends on the person who created the language and what they thought was more readable or "the right thing to do".
c89
gcc (GCC) 4.7.2
I am looking at some string functions as I need to search for different words in a sentence.
I am just wondering whether the C standard library functions are fully optimized.
For example the functions like these:
memchr, strstr, strspn, strchr, etc
I mean in terms of high performance, as that is what I need. Is there anything better?
You will almost certainly find that the standard library functions have been optimised as much as they can, and they will probably outdo anything you code up in C.
Note that this is for the general case. If there is some restriction you're able to put on the functions, or some extra piece of information you have on the data itself, you may be able to get your code to run faster, since you have the advantage of that restriction or information.
For example, I've written C code for malloc that blew a library-supplied malloc away because I knew the application would never ask for more than 256 bytes so I just gave 256 bytes on every request. Yes, that was a waste of memory but it allowed speed improvements beyond the general case.
But, for the general case, you're better off sticking with the supplied stuff.
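A sketch of that kind of restriction-exploiting allocator (the fixed 256-byte free-list scheme described above; all details are invented for illustration):

    #include <stdlib.h>

    /* Every request is served with a fixed 256-byte block; freed blocks
       go on a free list, so the common path is two pointer moves. */
    #define BLOCK_SIZE 256

    union block {
        union block *next;      /* valid only while on the free list */
        char payload[BLOCK_SIZE];
    };

    static union block *free_list = NULL;

    void *fast_alloc(size_t n) {
        if (n > BLOCK_SIZE)
            return NULL;        /* this allocator never serves more */
        if (free_list) {
            union block *b = free_list;
            free_list = b->next;
            return b;
        }
        return malloc(sizeof(union block));
    }

    void fast_free(void *p) {   /* p must have come from fast_alloc */
        union block *b = p;
        b->next = free_list;
        free_list = b;
    }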
Fully optimized? Optimized for what?
Yes, the C standard library functions are written to be very efficient and have been tested/debugged for years, so you definitely shouldn't worry about most of them.
Assuming that you always align your data to 16-byte boundaries and always allocate about 16 bytes extra or so, it's definitely possible to speed up most stdlib routines.
But given that, e.g., the length strlen is looking for is not known in advance, and that reading just one byte too many can cause a segmentation fault, I wouldn't bother.