My main question is: is there any difference between int and int8_t in terms of execution time?
In a framework I am working on, I often read code where some parameters are set as int8_t in functions because "that particular parameter cannot be outside the -126,125 range".
In many places, int8_t is used for communication protocols, or to cut a packet into many fields in a __attribute__((packed)) struct.
But at some point, it was mainly put there because someone thought it would be better to use a type that matches the size of the data more closely, probably trying to think ahead of the compiler.
Given that the code is made to run on Linux, compiled with gcc using glibc, and that memory or portability is not an issue, I am wondering if it is actually a good idea, performance-wise.
My first impression comes from the rule "Trying to be smarter than the compiler is always a bad idea" (unless you know where and how you need to optimize).
However, I do not know whether using int8_t actually costs performance (more testing and computation to match the int8_t size, more operations needed to ensure the variable does not go out of bounds, etc.), or whether it improves performance in some way.
I am not good at reading even simple asm, so I did not compile test code into asm to find out which one is better.
I tried to find a related question, but every discussion I found on int<size>_t versus int is about portability rather than performance.
Thanks for your input. Assembly samples explained or sources about this issue would be greatly appreciated.
int is generally equivalent to the register size of the CPU. The C standard says that any smaller types must be converted to int before operators are applied to them.
These conversions (sign extension) can be costly.
int8_t a=1, b=2, c=3;
...
a = b + c; // This will translate to: a = (int8_t)((int)b + (int)c);
If you need speed, int is a safe bet, or use int_fast8_t (even safer). If exact size is important, use int8_t (if available).
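If you want to see the cost for yourself, one low-effort approach (assuming gcc on Linux, as in the question) is to compile two small test functions and compare the assembly that gcc -O2 -S emits. This is only a sketch; the function names here are made up for illustration:

#include <stdint.h>

/* Compile with: gcc -O2 -S promotion_test.c and inspect promotion_test.s */
int8_t add8(int8_t a, int8_t b)
{
    /* a and b are promoted to int, added, and the result is truncated
     * back to int8_t, which may cost extra sign-extension/truncation work */
    return (int8_t)(a + b);
}

int add32(int a, int b)
{
    /* already int: no promotion or truncation needed */
    return a + b;
}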
When you talk about code performance, there are several things you need to take into account which affect it:
CPU architecture; more to the point, which data types the CPU supports natively (does it support 8-bit operations? 16-bit? 32-bit? etc.)
compiler: working with a well-known compiler is not enough, you need to be familiar with it; the way you write your code influences the code it generates
data types and compiler intrinsics: these are always considered by the compiler when generating code; using the correct data type (even signed vs unsigned matters) can have a dramatic performance impact.
"Trying to be smarter than the compiler is always a bad idea" - that is not actually true; remember, the compiler is written to optimize the general case and you are interested in your particular case; it's always a good idea to try and be smarter than the compiler.
Your question is really too broad for me to give a "to the point" answer (i.e. what is better performance wise). The only way to know for sure is to check the generated assembly code; at least count the number of cycles the code would take to execute in both cases. But you need to understand the code to understand how to help the compiler.
Related
Assume we have a counter which counts from 0 to 100: is there an advantage to declaring the counter variable as uint16_t instead of uint8_t?
Obviously if I use uint8_t I could save some space. On a processor with natural wordsize of 16 bits access times would be the same for both I guess. I couldn't think why I would use a uint16_t if uint8_t can cover the range.
Using a wider type than necessary can allow the compiler to avoid having to mask the higher bits.
Suppose you were working on a 16-bit architecture; then using uint16_t could be more efficient. However, if you used uint16_t instead of uint8_t on a 32-bit architecture, you would still have the masking instructions, just masking a different number of bits.
The most efficient type to use in a cross-platform portable way is just plain int or unsigned int, which will always be the correct type to avoid the need for masking instructions, and will always be able to hold numbers up to 100.
If you are in a MISRA or similar regulated environment that forbids the use of native types, then the correct standard-compliant type to use is uint_fast8_t. This guarantees to be the fastest unsigned integer type that has at least 8 bits.
However, all of this is nonsense really. Your primary goal in writing code should be to make it readable, not to make it as fast as possible. Penny-pinching instructions like this makes code convoluted and more likely to have bugs. Also because it is harder to read, the bugs are less likely to be found during code review.
You should only try to optimize like this once the code is finished and you have tested it and found the particular part which is the bottleneck. Masking a loop counter is very unlikely to be the bottleneck in any real code.
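For completeness, the counter variants discussed above would look roughly like this (a sketch only; as argued, readability should come first, and the loop body is hypothetical):

#include <stdint.h>

/* Hypothetical example: zero 101 entries, counting 0..100 as in the question. */
void clear_plain(uint8_t table[101])
{
    for (unsigned int i = 0; i <= 100; i++)   /* plain unsigned int counter */
        table[i] = 0;
}

void clear_fast(uint8_t table[101])
{
    for (uint_fast8_t i = 0; i <= 100; i++)   /* uint_fast8_t: at least 8 bits, "fast" */
        table[i] = 0;
}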
Obviously if I use uint8_t I could save some space.
Actually, that's not necessarily obvious! A loop index variable is likely to end up in a register, and if it does there's no memory to be saved. Also, since the definition of the C language says that much arithmetic takes place using type int, it's possible that using a variable smaller than int might actually end up costing you space in terms of extra code emitted by the compiler to convert back and forth between int and your smaller variable. So while it could save you some space, it's not at all guaranteed that it will — and, in any case, the actual savings are going to be almost imperceptibly small in the grand scheme of things.
If you have an array of some number of integers in the range 0-100, using uint8_t is a fine idea if you want to save space. For an individual variable, on the other hand, the arguments are pretty different.
In general, I'd say that there are two reasons not to use type uint8_t (or, equivalently, char or unsigned char) as a loop index:
It's not going to save much data space (if at all), and it might cost code size and/or speed.
If the loop runs over exactly 256 elements (yours didn't, but I'm speaking more generally here), you may have introduced a bug (which you'll discover soon enough): your loop may run forever.
The interviewer was probably expecting #1 as an answer. It's not a guaranteed answer — under plenty of circumstances, using the smaller type won't cost you anything, and evidently there are microprocessors where it can actually save something — but as a general rule, I agree that using an 8-bit type as a loop index is, well, silly. And whether or not you agree, it's certainly an issue to be aware of, so I think it's a fair interview question.
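To make the second point concrete, here is the classic failure mode (a hypothetical sketch, not code from the question):

#include <stdint.h>

#define COUNT 256   /* exactly 256 elements */

void clear_buffer(uint8_t buf[COUNT])
{
    /* BUG: i wraps from 255 back to 0, so (i < COUNT) is always true
     * and the loop never terminates. */
    for (uint8_t i = 0; i < COUNT; i++)
        buf[i] = 0;
}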
See also this question, which discusses the same sorts of issues.
The interview question doesn't make much sense from a platform-generic point of view. If we look at code such as this:
for(uint8_t i=0; i<n; i++)
    array[i] = x;
Then the expression i<n will get carried out on type int or larger because of implicit promotion. Though the compiler may optimize it to use a smaller type if it doesn't affect the result.
As for array[i], the compiler is likely to use a type corresponding to whatever address size the system is using.
What the interviewer was probably fishing for is that uint32_t on a 32-bitter tends to generate faster code in some situations. For those cases you can use uint_fast8_t, but more likely the compiler will perform the optimization regardless.
The only optimization uint8_t blocks the compiler from doing is allocating a larger variable than 8 bits on the stack. It doesn't, however, block the compiler from optimizing out the variable entirely and using a register instead, such as storing it in an index register with the same width as the address bus.
Example with gcc x86_64: https://godbolt.org/z/vYscf3KW9. The disassembly is pretty painful to read, but the compiler just picked CPU registers to store everything regardless of the type of i, giving identical machine code between uint8_t and uint16_t. I would have been surprised if it didn't.
On a processor with natural wordsize of 16 bits access times would be the same for both I guess.
Yes, this is true for all mainstream 16-bitters. Some might even manage faster code if given 8 bits instead of 16. Some exotic systems like DSPs exist, but in the case of, let's say, a DSP where 1 byte = 16 bits, the compiler doesn't even provide you with uint8_t to begin with - it is an optional type. One generally doesn't bother with portability to wildly exotic systems, since doing so is a waste of everyone's time and money.
The correct answer: it is senseless to do manual optimization without a specific system in mind. uint8_t is perfectly fine to use for generic, portable code.
I am reviewing some source code and I was wondering if the following was thread safe? I have heard of compiler or CPU instruction/read reordering (would it have something to do with branch prediction?) and the Data->unsafe_variable variable below can be modified at any time by another thread.
My question is: depending on how the compiler/CPU reorder read/writes, would it be possible that the below code would allow the Data->unsafe_variable to be fetched twice? (see 2nd snippet)
Note: I do not worry about the first access; any data can be there as long as it does not pass the 'if'. I am just concerned by the possibility that the data would be fetched another time after the 'if'. I was also wondering if the cast to volatile here would help prevent a double fetch?
int function(void* Data) {
    // Data is allocated on the heap
    // What it contains at this point is not important
    size_t _varSize = ((volatile DATA *)Data)->unsafe_variable;

    if (_varSize > x * y)
    {
        return FALSE;
    }

    // I do not want Data->unsafe_variable to be fetched once this point is reached;
    // I want to use the value "supposedly" stored in _varSize.
    // Would any compiler/CPU reordering allow it to be double fetched?
    size_t size = _varSize - t * q;
    function_xy(size);
    return TRUE;
}
Basically I do not want the program to behave like this for security reasons:
_varSize = ((volatile DATA *)Data)->unsafe_variable;

if (_varSize > x * y)
{
    return FALSE;
}

size_t size = ((volatile DATA *)Data)->unsafe_variable - t * q;
function_xy(size);
I am simplifying here, and they cannot use a mutex. However, would it be safer to use _ReadWriteBarrier() or MemoryBarrier() after the first line instead of a volatile cast? (VS compiler)
Edit: Giving slightly more context to the code.
The code is broken for many reasons. I'll just point out one of the more subtle ones as others have pointed out the more obvious ones. The object is not volatile. Casting a pointer to a pointer to a volatile object doesn't make the object volatile, it just lies to the compiler.
But there's a much bigger point -- you are going about this totally the wrong way. You are supposed to be checking whether the code is correct, that is, whether it is guaranteed to work. You aren't clever enough, nobody is, to think of every possible way the system might fail to do what you assume it will do. So instead, just don't make those assumptions.
Thinking about things like CPU read re-ordering is totally wrong. You should expect the CPU to do what, and only what, it is required to do. You should definitely not think about specific mechanisms by which it might fail, but only whether it is guaranteed to work.
What you are doing is like trying to figure out if an employee is guaranteed to show up for work by checking if he had his flu shot, checking if he is still alive, and so on. You can't check for, or even think of, every possible way he might fail to show up. So if you find that you have to check those kinds of things, then it's not guaranteed and relying on it is broken. Period.
You cannot make reliable code by saying "the CPU doesn't do anything that can break this, so it's okay". You can make reliable code by saying "I make sure my code doesn't rely on anything that isn't guaranteed by the relevant standards."
You are provided with all the tools you need to do the job, including memory barriers, atomic operations, mutexes, and so on. Please use them.
You are not clever enough to think of every way something not guaranteed to work might fail. And you have a plethora of things that are guaranteed to work. Fix this code, and if possible, have a talk with the person who wrote it about using proper synchronization.
This sounds a bit ranty, and I apologize for that. But I've seen too much code that used "tricks" like this that worked perfectly on the test machines but then broke when a new CPU came out, a new compiler, or a new version of the OS. Fixing code like this can be an incredible pain because these hacks hide the actual synchronization requirements. The right answer is almost always to code clearly and precisely what you actually want, rather than to assume that you'll get it because you don't know of any reason you won't.
This is valuable advice from painful experience.
The standard(s) are clear. If any thread may be modifying the object, all accesses, in all threads, must be synchronized, or you have undefined behavior.
The only portable solution for C++ is C++11 atomics, which are available in the upcoming VS 2012.
As for C, I do not know whether recent C standards bring any portable facilities (I am not following that), but as you are using Visual Studio, it does not matter anyway, as Microsoft is not implementing recent C standards.
Still, if you know you are developing for Visual Studio, you can rely on the guarantees provided by this compiler, which apply to both C and C++. Some of them are implicit (accessing volatile variables also implies some memory barriers), some are explicit, like using the _MemoryBarrier intrinsic.
The whole topic of the memory model is discussed in depth in Lockless Programming Considerations for Xbox 360 and Microsoft Windows, this should give you a good overview. Beware: the topic you are entering is full of hard topics and nasty surprises.
Note: Relying on volatile is not portable, but if you are using old C / C++ standards, there is no portable solution anyway; therefore, be prepared to face the need to reimplement this for a different platform should the need ever arise. When writing portable threaded code, volatile is considered almost useless:
For multi-threaded programming, there are two key issues that volatile is often mistakenly thought to address:
atomicity
memory consistency, i.e. the order of a thread's operations as seen by another thread.
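For completeness, here is roughly what a single, well-defined fetch of the field looks like with C11 atomics. This is only a sketch, under the assumptions that the shared field can be declared atomic and that a C11 compiler is available (which the Visual Studio of that era was not); the surrounding names are taken from the question's simplified snippet:

#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    atomic_size_t unsafe_variable;   /* field assumed to be declared atomic */
    /* ... other fields ... */
} DATA;

void function_xy(size_t size);       /* from the question's snippet */

int function(void *p, size_t x, size_t y, size_t t, size_t q)
{
    DATA *data = p;

    /* Exactly one load of the shared field; everything below uses the local copy. */
    size_t var_size = atomic_load_explicit(&data->unsafe_variable,
                                           memory_order_relaxed);
    if (var_size > x * y)
        return 0;                    /* FALSE */

    function_xy(var_size - t * q);
    return 1;                        /* TRUE */
}

A relaxed load only guarantees that the value is read once and without tearing; if you also need ordering with respect to other shared data, a stronger memory order or the barriers discussed above is required.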
I already know that stdint is used when you need specific variable sizes for portability between platforms. I don't really have such an issue for now, but what are the cons and pros of using it besides the already shown fact above?
Looking for this on Stack Overflow and other sites, I found 2 links that treat the topic:
codealias.info - this one talks about the portability of the stdint.
stackoverflow - this one is more specific about uint8_t.
These two links are great, especially if one is looking to know more about the main reason for this header - portability. But for me, what I like most about it is that I think uint8_t is cleaner than unsigned char (for storing an RGB channel value, for example), int32_t looks more meaningful than simply int, etc.
So, my question is: exactly what are the cons and pros of using stdint besides portability? Should I use it just in some specific parts of my code, or everywhere? If everywhere, how can I use functions like atoi(), strtok(), etc. with it?
Thanks!
Pros
Using well-defined types makes the code far easier and safer to port, as you won't get any surprises when for example one machine interprets int as 16-bit and another as 32-bit. With stdint.h, what you type is what you get.
Using int etc also makes it hard to detect dangerous type promotions.
Another advantage is that by using int8_t instead of char, you know that you always get a signed 8-bit variable. char can be signed or unsigned; that is implementation-defined and varies between compilers. Therefore, plain char is dangerous to use in code that should be portable.
If you want to give the compiler hints that a variable should be optimized, you can use the uint_fastx_t types, which tell the compiler to use the fastest possible integer type that is at least as large as 'x'. Most of the time this doesn't matter; the compiler is smart enough to make optimizations on type sizes no matter what you have typed in. Between sequence points, the compiler can implicitly change the type to another one than specified, as long as it doesn't affect the result.
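As a small illustration of the char signedness point above (a sketch whose output depends on the implementation, which is exactly the problem):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    char   c = (char)0xFF;   /* signed or unsigned? implementation-defined */
    int8_t s = -1;           /* always a signed 8-bit value */

    /* Typically prints "-1 -1" where char is signed, "255 -1" where it is unsigned. */
    printf("%d %d\n", c, s);
    return 0;
}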
Cons
None.
Reference: MISRA-C:2004 rule 6.3: "typedefs that indicate size and signedness shall be used in place of the basic types".
EDIT : Removed incorrect example.
The only reason to use uint8_t rather than unsigned char (aside from aesthetic preference) is if you want to document that your program requires char to be exactly 8 bits. uint8_t exists if and only if CHAR_BIT==8, per the requirements of the C standard.
The rest of the intX_t and uintX_t types are useful in the following situations:
reading/writing disk/network (but then you also have to use endian conversion functions)
when you want unsigned wraparound behavior at an exact cutoff (but this can be done more portably with the & operator).
when you're controlling the exact layout of a struct because you need to ensure no padding exists (e.g. for memcmp or hashing purposes).
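For the last bullet, a typical sketch of an exact-layout struct (the fields are made up; the static assertion documents the no-padding assumption rather than guaranteeing it):

#include <stdint.h>
#include <assert.h>

/* Hypothetical wire-format header built from fixed-width types,
 * ordered largest-to-smallest so no padding is needed. */
struct wire_header {
    uint32_t magic;
    uint16_t length;
    uint8_t  version;
    uint8_t  flags;
};

/* Fails to compile if the layout is not the expected 8 bytes. */
static_assert(sizeof(struct wire_header) == 8, "unexpected padding in wire_header");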
On the other hand, the uint_least8_t, etc. types are useful anywhere that you want to avoid using wastefully large or slow types but need to ensure that you can store values of a certain magnitude. For example, while long long is at least 64 bits, it might be 128-bit on some machines, and using it when what you need is just a type that can store 64 bit numbers would be very wasteful on such machines. int_least64_t solves the problem.
I would avoid using the [u]int_fastX_t types entirely since they've sometimes changed on a given machine (breaking the ABI) and since the definitions are usually wrong. For instance, on x86_64, the 64-bit integer type is considered the "fast" one for 16-, 32-, and 64-bit values, but while addition, subtraction, and multiplication are exactly the same speed whether you use 32-bit or 64-bit values, division is almost surely slower with larger-than-necessary types, and even if they were the same speed, you're using twice the memory for no benefit.
Finally, note that the arguments some answers have made about the inefficiency of using int32_t for a counter when it's not the native integer size are technically mostly correct, but it's irrelevant to correct code. Unless you're counting some small number of things where the maximum count is under your control, or some external (not in your program's memory) thing where the count might be astronomical, the correct type for a count is almost always size_t. This is why all the standard C functions use size_t for counts. Don't consider using anything else unless you have a very good reason.
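A minimal sketch of that advice, mirroring how the standard library itself handles sizes and counts:

#include <stddef.h>

/* size_t is the type the standard library uses for object sizes and counts,
 * so it can always index any in-memory array. */
size_t count_nonzero(const int *array, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (array[i] != 0)
            count++;
    return count;
}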
Cons
The primary reason the C language does not specify the size of int or long, etc. is computational efficiency. Each architecture has a natural, most-efficient size, and the designers specifically empowered and intended the compiler implementor to use the natural, native data size for speed and code-size efficiency.
In years past, communication with other machines was not a primary concern—most programs were local to the machine—so the predictability of each data type's size was of little concern.
Insisting that a particular architecture use a particular size int to count with is a really bad idea, even though it would seem to make other things easier.
In a way, thanks to XML and its brethren, data type size is once again not much of a concern. Shipping machine-specific binary structures from machine to machine is again the exception rather than the rule.
I use stdint types for one reason only, when the data I hold in memory shall go on disk/network/descriptor in binary form. You only have to fight the little-endian/big-endian issue but that's relatively easy to overcome.
The obvious reason not to use stdint is when the code is size-independent, in maths terms everything that works over the rational integers. It would produce ugly code duplicates if you provided a uint*_t version of, say, qsort() for every expansion of *.
I use my own types in that case, derived from size_t when I'm lazy or the largest supported unsigned integer on the platform when I'm not.
Edit, because I ran into this issue earlier:
I think it's noteworthy that at least uint8_t, uint32_t and uint64_t are broken in Solaris 2.5.1.
So for maximum portability I still suggest avoiding stdint.h (at least for the next few years).
I understand most of the micro-optimizations out there but are they really useful?
Exempli gratia: does doing ++i instead of i++, or while(1) or for(;;) really result in performance improvements (either in memory fingerprint or CPU cycles)?
So the question is, what micro-optimizations can be done in C? Are they really useful?
You should rely on your compiler to optimise this stuff. Concentrate on using appropriate algorithms and writing reliable, readable and maintainable code.
The day tclhttpd, a webserver written in Tcl, one of the slowest scripting languages, managed to outperform Apache, a webserver written in C, one of the supposedly fastest compiled languages, was the day I was convinced that micro-optimizations pale in comparison to using a faster algorithm/technique*.
Never worry about micro-optimizations until you can prove in a debugger that it is the problem. Even then, I would recommend first coming here to SO and ask if it is a good idea hoping someone would convince you not to do it.
It is counter-intuitive, but very often code, especially tight nested loops or recursion, is optimized by adding code rather than removing it. The gaming industry has come up with countless tricks to speed up nested loops, using filters to avoid unnecessary processing. Those filters add significantly more instructions than the difference between i++ and ++i.
*note: We have learned a lot since then. The realization that a slow scripting language can outperform compiled machine code because spawning threads is expensive led to the development of lighttpd, NginX and Apache2.
There's a difference, I think, between a micro-optimization, a trick, and alternative means of doing something. It can be a micro-optimization to use ++i instead of i++, though I would think of it as merely avoiding a pessimization, because when you pre-increment (or decrement) the compiler need not insert code to keep track of the current value of the variable for use in the expression. If using pre-increment/decrement doesn't change the semantics of the expression, then you should use it and avoid the overhead.
A trick, on the other hand, is code that uses a non-obvious mechanism to achieve a result faster than a straightforward mechanism would. Tricks should be avoided unless absolutely needed. Gaining a small percentage of speed-up is generally not worth the damage to code readability unless that small percentage reflects a meaningful amount of time. Extremely long-running programs, especially calculation-heavy ones, or real-time programs are often candidates for tricks, because the amount of time saved may be necessary to meet the system's performance goals. Tricks should be clearly documented if used.
Alternatives are just that. There may be little or no performance gain; they just represent two different ways of expressing the same intent. The compiler may even produce the same code. In this case, choose the most readable expression. I would say to do so even if it results in some performance loss (though see the preceding paragraph).
I think you do not need to think about these micro-optimizations, because most of them are done by the compiler. These things can only make code more difficult to read.
Remember, premature optimization is evil.
To be honest, that question, while valid, is not relevant today - why?
Compiler writers are a lot smarter than they were 20 years ago. Rewind back in time and these optimizations would have been very relevant; we were all working with old 80286/386 processors, and coders would often resort to tricks to squeeze even more bytes out of the compiled code.
Today, processors are too fast, and compiler writers know the intimate details of operand instructions to make everything work. Consider that there is pipelining, multi-core processors, acres of RAM; remember, with an 80386 processor, there would be 4 MB of RAM, and if you were lucky, 8 MB was considered superior!
The paradigm has shifted: it used to be about squeezing every byte out of compiled code; now it is more about programmer productivity and getting the release out the door much sooner.
Above, I have described the nature of the processors and compilers I was talking about: the Intel 80x86 processor family and Borland/Microsoft compilers.
Hope this helps,
Best regards,
Tom.
If you can easily see that two different code sequences produce identical results, without making assumptions about the data other than what's present in the code, then the compiler can too, and generally will.
It's only when the transformation from one to the other is highly non-obvious or requires assuming something that you may know to be true but the compiler has no way to infer (eg. that an operation cannot overflow or that two pointers will never alias, even though they aren't declared with the restrict keyword) that you should spend time thinking about these things. Even then, the best thing to do is usually to find a way to inform the compiler about the assumptions that it can make.
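For example, the no-aliasing assumption mentioned above can be stated directly in the source, roughly like this sketch:

#include <stddef.h>

/* restrict tells the compiler that dst and src never overlap, so it is free
 * to reorder or vectorize the loads and stores in this loop. */
void add_arrays(float *restrict dst, const float *restrict src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];
}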
If you do find specific cases where the compiler misses simple transformations, 99% of the time you should just file a bug against the compiler and get on with working on more important things.
Keeping the fact that memory is the new disk in mind will likely improve your performance far more than applying any of those micro-optimizations.
For a slightly more pragmatic take on the question of ++i vs. i++ (at least in a C++ context) see http://llvm.org/docs/CodingStandards.html#micro_preincrement.
If Chris Lattner says it, I've got to pay attention. ;-)
You would do better to consider every program you write primarily as a language in which you communicate your ideas, intentions and reasoning to other human beings who will have to bug-fix, reuse and understand it. They will spend more time decoding garbled code than any compiler or runtime system will spend executing it.
To summarise, say what you mean in the clearest way, using the common idioms of the language in question.
For these specific examples in C, for(;;) is the idiom for an infinite loop and "i++" is the usual idiom for "add one to i" unless you use the value in an expression, in which case it depends whether the value with the clearest meaning is the one before or after the increment.
Here's real optimization, in my experience.
Someone on SO once remarked that micro-optimization was like "getting a haircut to lose weight". On American TV there is a show called "The Biggest Loser" where obese people compete to lose weight. If they were able to get their body weight down to a few grams, then getting a haircut would help.
Maybe that's overstating the analogy to micro-optimization, because I have seen (and written) code where micro-optimization actually did make a difference, but when starting off there is a lot more to be gained by simply not solving problems you don't have.
/* The classic XOR-swap "trick": exchanges x and y without a temporary.
 * It is rarely faster than a plain swap on modern compilers, and it
 * silently zeroes the value if x and y refer to the same object. */
x ^= y;
y ^= x;
x ^= y;
++i should be preferred over i++ in situations where you don't use the return value, because it better represents the semantics of what you are trying to do (increment i) rather than any possible optimisation (it might be slightly faster, and is probably not worse).
Generally, loops that count towards zero are faster than loops that count towards some other number. I can imagine a situation where the compiler can't make this optimization for you, but you can make it yourself.
Say that you have an array of length x, where x is some very big number, and that you need to perform some operation on each element of the array. Further, let's say that you don't care what order these operations occur in. You might do this...
int i;
for (i = 0; i < x; i++)
    doStuff(array[i]);
But, you could get a little optimization by doing it this way instead -
int i;
for (i = x-1; i != 0; i--)
{
    doStuff(array[i]);
}
doStuff(array[0]);
The compiler doesn't do it for you because it can't assume that order is unimportant.
MaR's example code is better. Consider this, assuming doStuff() returns an int:
int i = x;
while (i != 0)
{
    --i;
    printf("%d\n", doStuff(array[i]));
}
This is ok as long as printing the array contents in reverse order is acceptable, but the compiler can't decide that for you.
This being an optimization is hardware dependent. From what I remember about writing assembler (many, many years ago), counting up rather than counting down to zero requires an extra machine instruction each time you go through the loop.
If your test is something like (x < y), then evaluation of the test goes something like this:
subtract y from x, storing the result in some register r1
test r1, to set the n and z flags
branch based on the values of the n and z flags
If your test is ( x != 0), you can do this:
test x, to set the z flag
branch based on the value of the z flag
You get to skip a subtract instruction for each iteration.
There are architectures where you can have the subtract instruction set the flags based on the result of the subtraction, but I'm pretty sure x86 isn't one of them, so most of us aren't using compilers that have access to such a machine instruction.
We're writing code inside the Linux kernel, so, try as I might, I wasn't able to get PC-Lint/Flexelint working on Linux kernel code. Just too many built-in symbols, etc. But that's a side issue.
We have any number of compilers, starting with gcc, but others also. Their warning options have been getting stronger over time, to the point where they are pretty strong static analysis tools too.
Here is what I want to catch. Yes, I know it violates some things that are easy to catch in code review, such as "no magic numbers", and "beware of bit shifting", but that's only if you happen to look at that section of code. Anyway, here it is:
unsigned long long foo;
unsigned long bar;
[... lots of other code ...]
foo = ~(foo + (1<<bar));
Further UPDATED problem description - even with bar limited to 16, it is still a problem. Clarifying: the problem is the implicit int type of the constant which, unplanned, makes the complex expression violate the rule that all calculations be carried out in the same size and signedness.
Problem: '1' is not long long but, as a small-value constant, defaults to an int. Therefore, even if bar's actual value never exceeds, say, 16, the (1<<bar) expression is still evaluated at int width and can overflow, ruining the entire calculation.
Possibly correct solution: write 1ULL instead.
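In code, the fix is just a suffix on the constant (a sketch using the declarations from the question's snippet):

/* Before: (1 << bar) is evaluated as int, no matter how wide foo is.      */
/* After:  1ULL forces the shift, and then the addition, to unsigned long long. */
unsigned long long fix(unsigned long long foo, unsigned long bar)
{
    return ~(foo + (1ULL << bar));
}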
Is there a well-known compiler and compiler warning flag that will point out this (revised) problem?
I am not sure what criteria you are thinking of to flag this construction as suspicious. There is clearly something wrong if the value of bar is as large as the size (in bits) of an int, but usually the compiler wouldn't know that.
From the point of view of a heuristic, bug-finding tool, having good patterns to separate likely bugs from normal constructions is key to avoiding too many false positives (which make users hate the tool and refuse to use it).
The open-source tool in my URL flags logical shifts by a number larger than the size of the type, but it is primarily a verification tool for critical embedded software, and expect a lot of work to adapt it if you intend to use it on the Linux kernel with its linked structures and other difficulties.