An experienced programmer claims that passing values by pointer can slow down the program, or at least the compiler. Is that true?
https://www.youtube.com/watch?feature=player_embedded&v=w7ay7QXmo_o#t=288
I watched the given segment of the video.
Situation:
A guy has a small third-party struct and passes it by value.
Why is it good:
1. A small struct doesn't take up much space, so passing it on the stack doesn't slow the call down, and you can (theoretically) get better memory/cache usage since you don't go through a pointer to reach the data. As the speaker mentions, it's possible the compiler/optimizer couldn't do this transformation for you.
2. Because it is a third-party struct, it is not very probable that its size will change during the development of the program.
3. The function signature says something different about access/ownership of the struct depending on whether it takes a const pointer, a non-const pointer, or a value, ...
What is questionable:
1. The speaker doesn't really explain in depth what is going on or why he made this optimization. Why do it, and talk about it, at all then?
2. I don't see how this would slow down a compiler/optimizer in any way, but I'm not an expert on this matter.
Why this shouldn't be a general programming rule:
1. If you're not using a third-party struct, it is quite probable that your struct will change during development, and you will end up with either inefficient code or a lot to rewrite. Or the compiler will probably do the job for you anyway, in which case there's no point in starting with it in the first place.
2. During development, when you are writing new code, the only thing you should think about performance-wise is the efficiency of the core algorithms and data structures. If you write a terrible sort algorithm, you won't save it by passing a struct by value. As mentioned in the comments, it depends on the consequences. I doubt that anyone can really foresee that something as marginal (performance-wise) as passing by value vs. passing by pointer, when it comes to small structs, will make a significant performance impact. Such a decision should be based either on knowing the consequences very well (ideally having solved this exact issue before) or on a profiler report stating that there is a performance problem here.
Taking that into account, a function that updates the game(?) window and runs 60 or possibly even 120 times per second can be assumed to be part of the program's core and should be optimized as much as possible. And it seems that the speaker was working on exactly that and found that he got better results by passing the struct by value instead of by pointer.
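For illustration, here is a minimal sketch of the two signatures being contrasted. The struct and function names are hypothetical, not taken from the video:
#include <stdio.h>

/* Hypothetical small third-party struct (e.g. a 2D point). */
typedef struct {
    float x;
    float y;
} Vec2;

/* Pass by value: a struct this small typically travels in registers
   on most ABIs, and the callee reads it without an extra indirection. */
static float length_sq_by_value(Vec2 v) {
    return v.x * v.x + v.y * v.y;
}

/* Pass by const pointer: the callee must dereference the pointer,
   and the compiler has to be more careful about aliasing. */
static float length_sq_by_pointer(const Vec2 *v) {
    return v->x * v->x + v->y * v->y;
}

int main(void) {
    Vec2 v = { 3.0f, 4.0f };
    printf("%f %f\n", length_sq_by_value(v), length_sq_by_pointer(&v));
    return 0;
}
Whether the by-value version is actually faster depends on the ABI and the optimizer, which is exactly why a profiler report should drive the decision.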
I have a disagreement with my colleague about sending/receiving a data structure between two machines (also with different compilers) over UART.
Our data structure has several simple variable types as its fields (like int32, uint8, etc.).
In his opinion, to get a data structure with the same field order and alignment on both sides, we have to use a serializer and deserializer. Otherwise, our code has the potential for a different struct layout on the two sides.
But I have done this many times without a serializer/deserializer and never seen any problem.
I think using #pragma pack(...) guarantees our purpose, because most differences between compilers (when compiling data structures) occur in field alignment, due to padding inserted for speed or size optimization. (Ignore the differences in endianness.)
For more detail: we currently want to send/receive a struct between a Cortex-M4 (IAR) and a PC (Qt on Windows) over UART.
Am I on the wrong track, or is my colleague?
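For concreteness, the kind of packed definition I have in mind looks roughly like this (the field names are only an example, not our real protocol):
#include <stdint.h>
#include <stdio.h>

/* Hypothetical message layout; the field names are only an example.
   #pragma pack(1) removes the padding the compiler would otherwise
   insert, so both sides agree on the byte offsets -- but it does
   nothing about endianness differences. */
#pragma pack(push, 1)
typedef struct {
    uint8_t msg_id;
    int32_t value;
    uint8_t flags;
} Message;
#pragma pack(pop)

int main(void) {
    /* Without packing this would typically be 12 bytes; packed it is 6. */
    printf("sizeof(Message) = %zu\n", sizeof(Message));
    return 0;
}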
This is, I'm afraid, fundamentally a question of opinion, that can never be fully resolved.
For what it's worth, I am adamantly, vociferously with your colleague. I believe in writing explicit serializers and deserializers. I don't believe in blatting out an in-memory data structure and hoping that the other side can slurp it down without error. I don't believe in ignoring endianness differences. I believe that "blatting it out" will inevitably fail, in the end, somewhere, and I don't want to run that risk. I believe that although the explicit de/serializers may seem to be more trouble to write up front, they save time in the long run because of all the fussing and debugging you don't have to do later.
But there are also huge swaths of programmers (I suspect a significant majority) who agree entirely with you: that given enough hacking, and suitable pragmas and packing directives, you can get the "blat it out" technique to work at least most of the time, and it may be more efficient, to boot. So you're in good company, and with as many people as there are out there who obviously agree with you, I can't tell you that you're wrong.
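For what it's worth, here is a minimal sketch of what an explicit serializer/deserializer might look like for a struct holding an int32 and a uint8, assuming a little-endian wire format. The struct and field names are made up purely for illustration:
#include <stdint.h>
#include <stddef.h>

typedef struct {
    int32_t value;   /* hypothetical fields, for illustration only */
    uint8_t flags;
} Message;

/* Serialize into a caller-provided buffer, little-endian byte order.
   Returns the number of bytes written. */
size_t message_serialize(const Message *m, uint8_t buf[5]) {
    uint32_t v = (uint32_t)m->value;
    buf[0] = (uint8_t)(v & 0xFF);
    buf[1] = (uint8_t)((v >> 8) & 0xFF);
    buf[2] = (uint8_t)((v >> 16) & 0xFF);
    buf[3] = (uint8_t)((v >> 24) & 0xFF);
    buf[4] = m->flags;
    return 5;
}

/* Deserialize from the wire format back into the struct. */
void message_deserialize(Message *m, const uint8_t buf[5]) {
    uint32_t v = (uint32_t)buf[0]
               | ((uint32_t)buf[1] << 8)
               | ((uint32_t)buf[2] << 16)
               | ((uint32_t)buf[3] << 24);
    m->value = (int32_t)v;
    m->flags = buf[4];
}
The point is that the byte order and offsets are spelled out explicitly, so compiler padding and host endianness simply stop mattering.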
Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 6 years ago.
Improve this question
I have some old C programs to maintain. For some functions (at least 10) with exactly the same parameters, the programmer utilized a macro to avoid typing the same parameters again and again. Here is the macro definition:
#define FUNC_DECL(foo) int foo(int p1, int p2, ....)
Then, if I want to define a function with the same parameters, I need only type:
FUNC_DECL(func1)
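For illustration, here is a hypothetical three-int-parameter version of the macro and what the compiler actually sees after preprocessing (the real parameter list is elided above):
/* Hypothetical version of the macro with three int parameters. */
#define FUNC_DECL(foo) int foo(int p1, int p2, int p3)

/* Writing this ... */
FUNC_DECL(func1)
{
    return p1 + p2 + p3;   /* placeholder body */
}

/* ... is exactly equivalent, after preprocessing, to:
   int func1(int p1, int p2, int p3) { return p1 + p2 + p3; } */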
Besides avoiding the tedious work of typing same parameters many times, are there any other advantages of this implementation?
And this kind of implementation confuses me a little bit. Are there other disadvantages to it?
Is this kind of implementation a good one?
As I noted in comments to the main question, the advantage of using a macro to declare the functions with the same argument list is that it ensures the definitions do have the same argument list.
The primary disadvantage is that it doesn't look like regular C, so people reading the code have to search more code to work out what it means.
On the whole, I don't like that sort of macro-based scheme, but occasionally there are good enough reasons to use it — this might be a borderline example.
There are at least ten functions with the same parameters. Currently, every function only has 3 parameters.
Oh, only 3 parameters? No excuse for using the macro then — I thought it was 10 parameters. Clarity is more important. I don't think that the code will be clearer using the macro. The chances that you'll need to change 10 functions to use 4 parameters instead of 3 is rather limited — and you'd have to change the code to use the extra parameter anyway. The saving of typing is not relevant; the saving of time spent puzzling over the meaning of the macro is relevant. And the first person who has to puzzle over the code will spend longer doing that than you'd save typing the function declarations out — even if you hunt and peck when typing.
Away with it — off with its head! Expunge the macro. Make your code happy again.
#define is handled by the preprocessor, which is essentially a text-substitution step. So whether you write out the full function declaration or use the macro, the compiler sees the same thing and the result runs in the same time. Using #define can make a program more readable/shorter and doesn't affect the end result at all; more #defines only mean more preprocessing work at compile time and nothing else. But generally, programs are run far more often than they are compiled, so the use of #define doesn't hamper your production environment at all.
I have been told by more senior, experienced, and better-educated programmers than myself that the use of function pointers in C should be avoided. I have seen the fact that some code contains function pointers used as a rationale not to reuse that code, even when the only alternative is complete re-implementation. Upon further discussion I haven't been able to determine why this would be. I am happy to use function pointers where appropriate, and I like the interesting and powerful things they allow you to do, but am I throwing caution to the wind by using them?
I see the pros and cons of function pointers as follows:
Pros:
Great opportunity for code modularity
OO-like features in non-OO C (i.e. code and data in the same object)
How else could you reasonably implement a callback?
Cons:
Negative impact to code readability - not always obvious what function is actually called when a function pointer is invoked
Minor performance hit compared to a direct function call
I think con #1 can usually be reasonably mitigated by well-chosen symbol names and good comments. And con #2 will in general not be a big deal. Am I missing something - are there other reasons to avoid function pointers like the plague?
This question looks a little discussion-ey, but I'm looking for good reasons why I shouldn't use function pointers, not opinions
Function pointers are not evil. The main times you "shouldn't" use them are when either:
The use is gratuitous, i.e. not actually needed for what you're doing, or
In situations where you're writing hardened code and the function pointer might be stored at a location you're concerned may be a likely candidate for buffer overflow attacks.
As for when function pointers are needed, Adam's answer provided some good examples. The common theme in all those examples is that the caller needs to be able to provide part of the code that runs from the called function. Without function pointers, the only way you could do this would be to copy-and-paste the implementation of the function and change part of it to call a different function, for every individual usage case. For qsort and bsearch, which can be implemented portably, this would just be a nuisance and hideously ugly. For thread creation, on the other hand, without function pointers you would have to copy and paste part of the system implementation for the particular OS you're running on, and adapt it to call the function you want called. This is obviously unacceptable; your program would then be completely non-portable.
As such, function pointers are absolutely necessary for some tasks, and for other tasks, they are a major convenience which allows general code to be reused. I see no reason why they should not be used in such cases.
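To make the qsort example concrete, here is a minimal sketch of the callback pattern being described (the array contents and comparator name are arbitrary):
#include <stdio.h>
#include <stdlib.h>

/* The caller supplies the comparison logic through a function pointer;
   qsort itself stays completely generic. */
static int compare_ints(const void *a, const void *b) {
    int lhs = *(const int *)a;
    int rhs = *(const int *)b;
    return (lhs > rhs) - (lhs < rhs);
}

int main(void) {
    int values[] = { 42, 7, 19, 3, 25 };
    size_t n = sizeof values / sizeof values[0];

    qsort(values, n, sizeof values[0], compare_ints);

    for (size_t i = 0; i < n; i++)
        printf("%d ", values[i]);
    printf("\n");
    return 0;
}
qsort never needs to know anything about ints; the caller injects that knowledge through the function pointer.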
No, they're not evil. They're absolutely necessary in order to implement various features such as callback functions in C.
Without function pointers, you could not implement:
qsort(3)
bsearch(3)
Window procedures
Threads
Signal handlers
And many more.
I want to obfuscate code just for fun. I'm looking at code from the International Obfuscated C Code Contest: http://www.ioccc.org/ And I seriously just have no idea how to even start reverse-engineering some of this code to make any sense of it.
What are some common obfuscation techniques and how do you make sense of obfuscated code?
There are a lot of different techniques to obfuscate code; here is a small, very incomplete list:
1. Identifier mangling. Either you will find people using names like a, b, c exclusively, or you find identifiers that have absolutely nothing to do with the actual purpose of the variable/function. Deobfuscation would be to assign sensible names.
2. Heavy use of the conditional operator ?:, replacing all occurrences of if() else. In most cases that's a lot harder to read; deobfuscation would reinsert if().
3. Heavy use of the comma operator instead of ;. In combination with 2. and 4., this basically allows the entire program to be one single statement in main().
4. Recursive calls of main(). You can fold any function into main by having an argument that main can use to decide what to do. Combine this with replacing loops by recursion, and you end up with the entire program being the main function.
5. You can go in the exact opposite direction to 3. and 4., and hack everything into pieces by creating an insane number of functions that all do virtually nothing.
6. You can obfuscate the storage of an array by storing the values on the stack. Should you need to walk the data twice, there's always the fork() call handy to make a convenient copy of your stack.
As I said, this is a very incomplete list, but generally, obfuscation is usually the heavy, systematic abuse of any valid programming technique. If the IOCCC allowed C++ entries, I would bet on a lot of template code being entered, making heavy use of throwing exceptions as an if replacement, hiding structure behind polymorphism, etc.
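As a tiny, made-up illustration of points 1 to 3 (not taken from any IOCCC entry), here is a readable function and an "obfuscated" equivalent that mangles the name, swaps if/else for ?:, and chains statements with the comma operator:
#include <stdio.h>

/* Readable version. */
static void classify(int n) {
    if (n < 0)
        printf("negative\n");
    else
        printf("non-negative\n");
}

/* Same logic after identifier mangling and replacing if/else with ?:. */
static void c(int n) {
    n < 0 ? printf("negative\n") : printf("non-negative\n");
}

int main(void) {
    /* The comma operator folds several calls into one statement. */
    return classify(-1), c(1), 0;
}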
Just out of curiosity: assuming there exists a software life form, how would you detect it? What are your criteria for figuring out whether something/someone is intelligent or not?
It seems to me that it should be quite simple to create such software once you set the right target (not just following the naive "mimic a human -> pass the Turing Test" route).
When posting an answer, please also try to find a counterexample. I have real difficulty inventing anything consistent that I myself agree with.
Warmup
First we need to understand what a life form is.
Take this explanation, for example:
An entity which exists and tries to continue its existence through nourishment or procreation.
If we accept this explanation then in fact many programs represent a life form.
They exist; that's obvious. They attempt to continue their existence by spawning child processes, surviving in persistent data storage, and carrying on the next day.
So, here we are, among digital life forms around us.
On the other hand, there's the idea of evolving and being sentient.
With evolving, it's easy. Many programs have been written to be able to modify their own body to adapt to certain scenarios. Computer viruses are among the first examples of that.
With sentience, it is a different story. An entity needs to be aware of its own existence, understand itself and the environment around it, and take active decisions about its life activities.
A computer program has nothing of that kind. In fact, as far as it still applies, scientists haven't yet figured out a definition of "being aware of itself" or of consciousness. So until we know what that means, we can't attribute that quality to an entity, nor, the other way around, take it away.
The bottom line is, you can argue that a computer program is a life form, but it does not qualify as a sentient being.
Thinks humanly, acts humanly.
OR
Thinks rationally, acts rationally.