I have been messing around trying to learn C lately. Coming from Java, it surprised me that you can perform certain operations declared as "undefined".
This just seems extremely unsafe to me. I understand it is the programmer's responsibility not to perform undefined operations, but why is it even allowed to start with? Why does the compiler not catch, for instance, array indices out of bounds, or even dangling pointers? You just end up accessing blocks of memory you never should access, with no (apparent) good reason.
As a comparison, Java makes extra sure you don't do anything stupid, throwing Exceptions around like hot cakes.
Surely there must be a reason why this is allowed? What is it?
ANSWER: To my understanding, the main reason is performance. Also, Java does have undefined behaviours, although not labeled as such.
EDIT: restricted question to C
Undefined behavior is not allowed, it's just not caught by the compiler.
The tradeoff here is between speed and safety. Many kinds of undefined behavior could be prevented at the expense of a few additional CPU cycles.
For example, you could prevent the UB that happens when you read from memory that has been allocated but not initialized by having the compiled code write zeros into it first. This, however, costs you an additional write to memory which is otherwise entirely unnecessary.
Similarly, one could prevent reading/writing past the end of an array by checking its bounds inside the [] operator. However, this would cost a few additional CPU cycles on each array access.
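To make that cost concrete, here is a minimal sketch of what hand-written checked access could look like in C; the function name and the abort-on-failure policy are illustrative choices, not anything the language prescribes:

#include <stdio.h>
#include <stdlib.h>

/* Bounds-checked element access: the extra comparison on every call
   is exactly the per-access cost discussed above. */
int checked_get(const int *arr, size_t len, size_t i)
{
    if (i >= len) {
        fprintf(stderr, "index %zu out of bounds (length %zu)\n", i, len);
        abort();
    }
    return arr[i];
}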
The designers of C and C++ decided that it is better to have speed and allow potential UB than to force everyone to pay for checks they do not need. This approach, however, is incompatible with Java's "write once, run anywhere" requirement, so the designers of the Java language insisted on fully defined behavior in nearly all situations.
Originally, most forms of Undefined Behavior represented things which some implementations might trap, but other implementations might not. Because there was no way for the authors of the Standard to predict all the things a platform might do in case of a trap (including, literally, the possibility that a system would sound an alarm and lock up until an operator manually cleared the fault), the consequences of traps fell outside the jurisdiction of the C Standard, and thus almost every action for which some platform might conceivably cause a trap is--from the point of view of the Standard--considered "Undefined Behavior".
That should not be taken to imply that the authors of the Standard didn't believe implementations should try to behave sensibly for such things when practical. The authors of the C89 Standard noted, for example, that the majority of current systems of that era would define behavior for:
/* Assume USmall is half the size of "int" */
unsigned mult(USmall x, USmall y) { return x*y; }
which would in all cases, including those where the mathematical product of x and y was between INT_MAX+1 and UINT_MAX, be equivalent to (unsigned)x*y;. I see no reason to believe they wouldn't have expected that trend to continue.
Unfortunately, a new philosophy has become fashionable, based on the revisionist viewpoint that compiler writers only supported useful behaviors in cases not mandated by the Standard because they were too unsophisticated to do anything else. In gcc, for example, using optimization level 2 but no other non-default options, the above mult routine will sometimes be compiled into bogus code in cases where the product would be between 0x80000000u and 0xFFFFFFFFu, even when targeting platforms where such computations would historically have worked. This is supposedly done in the name of "optimization"; it would be interesting to know how many of the "optimizations" such techniques end up performing are actually useful and could not have been achieved via safer means.
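For completeness, a hedged sketch of how the same computation can be kept fully defined under today's compilers: casting the operands to unsigned keeps the arithmetic in unsigned int, where wraparound is defined by the Standard. The USmall typedef here is an assumption for a platform with 16-bit short and 32-bit int.

typedef unsigned short USmall;   /* assumed: 16-bit short, 32-bit int */

unsigned mult_defined(USmall x, USmall y)
{
    /* Unsigned arithmetic wraps modulo UINT_MAX+1, so no signed overflow
       can occur even when the mathematical product exceeds INT_MAX. */
    return (unsigned)x * (unsigned)y;
}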
Historically, Undefined Behavior was a license for a C compiler to expose the behavior of the underlying platform; in cases where the underlying platform's behavior fit the programmer's needs, this allowed the programmer's requirements to be expressed in machine code more efficiently than if everything had to be done in ways defined by the Standard. Lately, however, it has been interpreted as license for compilers to implement behaviors which not only bear no relation to anything in the underlying platform nor to any plausible programmer expectations, but aren't even bound by laws of time and causality.
Java has a run time environment to take care of you. That's why an exception is thrown when going out of bounds - it's something that can't be figured out at compile time.
There is run-time bounds checking in C++ when using the at() method of a vector; that is what distinguishes at() from the [] operator.
Technically, subtracting a null pointer is undefined behaviour in C. Clang 13 issues a warning for it.
Yet this construct is used anyway, usually to determine the alignment of a pointer. For example, BSD-derived implementations of qsort use it. See here (OpenBSD) and an explanation of what it's for:
Snippet of a code sample with null pointer subtraction from OpenBSD. Please see the link above for full context.
#define TYPE_ALIGNED(TYPE, a, es) \
    (((char *)a - (char *)0) % sizeof(TYPE) == 0 && es % sizeof(TYPE) == 0)
Question: Is such code safe to use on typical modern platforms (64-bit or 32-bit) with typical modern compilers? A lot of prominent production code seems to have used this construct for many years.
I notice that code like this was removed from FreeBSD's qsort (see revision 334928) because GCC miscompiled some of it. However, I do not understand all the details in the discussion of the issue, and I cannot tell whether the problem was a direct consequence of the null pointer subtraction. In any case, their fix essentially eliminates the null pointer subtraction. I would appreciate some clarification on the topic.
When the C Standard was written, many hardware platforms performed pointer arithmetic in such a way that adding zero to a null pointer would yield a null pointer with no side effects, and subtracting one null pointer from another would yield zero with no side effects. These behaviors were often useful, since they could eliminate the need for corner-case code when performing tasks involving N-byte chunks of storage, where N might be zero.
Even though many platforms could support the aforementioned corner cases without having to generate any extra machine code, it was hardly clear that all platforms would be able to do so (I don't know of any particular platforms that couldn't, but wouldn't be at all surprised if some such platforms existed). The Standard thus handled such situations the same way as it handles other situations where almost all implementations would process a construct in the same useful fashion, but it might be impractical for all to do so: it categorized the action as "Undefined Behavior" but allowed implementations to, as a form of "conforming language extension", process it in a manner consistent with the underlying execution environment.
There was never any doubt about how such constructs should be processed on commonplace platforms. The only doubt would have been whether implementations whose target platforms would require extra machine code to yield the commonplace semantics should generate such extra machine code, and classifying such constructs as UB would allow such decisions to be made by people who were working with such platforms, and would thus be better placed than the Committee to weigh the costs and benefits of supporting the commonplace behavior.
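For what it's worth, the usual way to express the same alignment test without a null pointer subtraction is to go through uintptr_t. A minimal sketch, assuming <stdint.h> is available and that on the platforms of interest the integer value of a pointer reflects its alignment (true on typical flat-memory 32-bit and 64-bit targets):

#include <stdint.h>

/* Same idea as OpenBSD's TYPE_ALIGNED, but the modulo is computed on an
   integer, so no arithmetic on a null pointer is involved. */
#define TYPE_ALIGNED_VIA_UINTPTR(TYPE, a, es) \
    (((uintptr_t)(const void *)(a) % sizeof(TYPE)) == 0 && (es) % sizeof(TYPE) == 0)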
The following code produces strange things on my system:
#include <stdio.h>

void f (int x) {
    int y = x + x;
    int v = !y;
    if (x == (1 << 31))
        printf ("y: %d, !y: %d\n", y, !y);
}

int main () {
    f (1 << 31);
    return 0;
}
Compiled with -O1, this prints y: 0, !y: 0.
Now beyond the puzzling fact that removing the int v or the if lines produces the expected result, I'm not comfortable with undefined behavior of overflows translating to logical inconsistency.
Should this be considered a bug, or is the GCC team philosophy that one unexpected behavior can cascade into logical contradiction?
When invoking undefined behavior, anything can happen. There's a reason why it's called undefined behavior, after all.
Should this be considered a bug, or is the GCC team philosophy that one unexpected behavior can cascade into logical contradiction?
It's not a bug. I don't know much about the philosophy of the GCC team, but in general undefined behavior is "useful" to compiler developers to implement certain optimizations: assuming something will never happen makes it easier to optimize code. The reason why anything can happen after UB is exactly because of this. The compiler makes a lot of assumptions and if any of them is broken then the emitted code cannot be trusted.
As I said in another answer of mine:
Undefined behavior means that anything can happen. There is no explanation as to why anything strange happens after invoking undefined behavior, nor there needs to be. The compiler could very well emit 16-bit Real Mode x86 assembly, produce a binary that deletes your entire home folder, emit the Apollo 11 Guidance Computer assembly code, or whatever else. It is not a bug. It's perfectly conforming to the standard.
The 2018 C standard defines, in clause 3.4.3, paragraph 1, “undefined behavior” to be:
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this document imposes no requirements
That is quite simple. There are no requirements from the standard. So, no, the standard does not require the behavior to be “consistent.” There is no requirement.
Furthermore, compilers, operating systems, and other things involved in building and running a program generally do not impose any requirement of “consistency” in the sense asked about in this question.
Addendum
Note that answers that say “anything can happen” are incorrect. The C standard only says that it imposes no requirements when there is behavior that it deems “undefined.” It does not nullify other requirements and has no authority to nullify them. Any specifications of compilers, operating systems, machine architectures, or consumer product laws; or laws of physics; laws of logic; or other constraints still apply. One situation where this matters is simply linking to software libraries not written in C: The C standard does not define what happens, but what does happen is still constrained by the other programming language(s) used and the specifications of the libraries, as well as the linker, operating system, and so on.
Marco Bonelli has given the reasons why such behaviour is allowed; I'd like to attempt an explanation of why it might be practical.
Optimising compilers, by definition, are expected to do various things in order to make programs run faster. They are allowed to delete unused code, unroll loops, rearrange operations and so on.
Taking your code, can the compiler really be expected to perform the !y operation strictly before the call to printf()? I'd say that if you impose such rules, there'll be no place left for any optimisations. So a compiler should be free to rewrite the code as
void f (int x) {
    int y = x + x;
    int notY = !(x + x);
    if (x == (1 << 31))
        printf ("y: %d, !y: %d\n", y, notY);
}
Now, it should be obvious that for any inputs which don't cause overflow the behaviour would be identical. However, in the case of overflow y and notY experience the effects of UB independently, and may both end up as 0, since nothing constrains either result.
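If the doubling itself needs to stay well behaved on such inputs, one option is to do the arithmetic on unsigned values, where wraparound is defined. A minimal sketch; note that the conversion back to int of an out-of-range value is implementation-defined rather than undefined, which on two's-complement machines is usually the wraparound people expect:

int double_wrapping(int x)
{
    /* Unsigned addition wraps modulo UINT_MAX+1, so there is no UB here
       even for x == INT_MIN; only the final conversion is implementation-defined. */
    return (int)((unsigned)x + (unsigned)x);
}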
For some reason, a myth has emerged that the reason the authors of the Standard used the phrase "Undefined Behavior" to describe actions which earlier descriptions of the language by its inventor characterized as "machine dependent" was to allow compilers to infer that various things wouldn't happen. While it is true that the Standard doesn't require that implementations process such actions meaningfully even on platforms where there would be a natural "machine-dependent" behavior, the Standard also doesn't require that any implementation be capable of processing any useful programs meaningfully; an implementation could be conforming without being able to meaningfully process anything other than a single contrived and useless program. That's not a twisting of the Standard's intention: "While a deficient implementation could probably contrive a program that meets this requirement, yet still succeed in being useless, the C89 Committee felt that such ingenuity would probably require more work than making something useful."
In discussing the decision to make short unsigned values promote to signed int, the authors of the Standard observed that most current implementations used quiet wraparound integer-overflow semantics, and having values promote to signed int would not adversely affect behavior if the value was used in overflow scenarios where the upper bits wouldn't matter.
From a practical perspective, guaranteeing clean wraparound semantics costs a little more than allowing integer computations to behave as though they were performed on larger types at unspecified times. Even in the absence of "optimization", straightforward code generation for an expression like long1 = int1*int2+long2; would on many platforms benefit from being able to use the result of a 16x16->32 or 32x32->64 multiply instruction directly, rather than having to sign-extend the lower half of the result. Further, allowing a compiler to evaluate x+1 as a larger type than x at its convenience would allow it to replace x+1 > y with x >= y, generally a useful and safe optimization.
Compilers like gcc go further, however. Even though the authors of the Standard observed that in the evaluation of something like:
unsigned mul(unsigned short x, unsigned short y) { return x*y; }
the Standard's decision to promote x and y to signed int wouldn't adversely affect behavior compared with using unsigned ("Both schemes give the same answer in the vast majority of cases, and both give the same effective result in even more cases in implementations with two’s-complement arithmetic and quiet wraparound on signed overflow—that is, in most current implementations."), gcc will sometimes use the above function to infer within calling code that x cannot possibly exceed INT_MAX/y. I've seen no evidence that the authors of the Standard anticipated such behavior, much less intended to encourage it. While the authors of gcc claim any code that would invoke overflow in such cases is "broken", I don't think the authors of the Standard would agree, since in discussing conformance, they note: "The goal is to give the programmer a fighting chance to make powerful C programs that are also highly portable, without seeming to demean perfectly useful C programs that happen not to be portable, thus the adverb strictly."
Because the authors of the Standard failed to forbid the authors of gcc from processing code nonsensically in case of integer overflow, even on quiet-wraparound platforms, the gcc maintainers insist that their compiler may jump the rails in such cases. No compiler writer who was trying to win paying customers would take such an attitude, but the authors of the Standard failed to realize that compiler writers might value cleverness over customer satisfaction.
I am reviewing some source code and I was wondering if the following is thread safe. I have heard of compiler and CPU instruction read/write reordering (would it have something to do with branch prediction?), and the Data->unsafe_variable variable below can be modified at any time by another thread.
My question is: depending on how the compiler/CPU reorders reads/writes, would it be possible for the code below to fetch Data->unsafe_variable twice? (See the 2nd snippet.)
Note: I do not worry about the first access; any data can be there as long as it does not pass the 'if'. I am just concerned by the possibility that the data would be fetched another time after the 'if'. I was also wondering whether the cast to volatile here would help prevent a double fetch?
int function(void* Data) {
    // Data is allocated on the heap
    // What it contains at this point is not important
    size_t _varSize = ((volatile DATA *)Data)->unsafe_variable;
    if (_varSize > x * y)
    {
        return FALSE;
    }
    // I do not want Data->unsafe_variable to be fetched once this point is reached;
    // I want to use the value "supposedly" stored in _varSize.
    // Would any compiler/CPU reordering allow it to be fetched a second time?
    size_t size = _varSize - t * q;
    function_xy(size);
    return TRUE;
}
Basically I do not want the program to behave like this for security reasons:
_varSize = ((volatile DATA *)Data)->unsafe_variable;
if (_varSize > x * y)
{
    return FALSE;
}
size_t size = ((volatile DATA *)Data)->unsafe_variable - t * q;
function_xy(size);
I am simplifying here, and they cannot use a mutex. However, would it be safer to use _ReadWriteBarrier() or MemoryBarrier() after the first line instead of a volatile cast? (VS compiler)
Edit: Giving slightly more context to the code.
The code is broken for many reasons. I'll just point out one of the more subtle ones as others have pointed out the more obvious ones. The object is not volatile. Casting a pointer to a pointer to a volatile object doesn't make the object volatile, it just lies to the compiler.
But there's a much bigger point -- you are going about this totally the wrong way. You are supposed to be checking whether the code is correct, that is, whether it is guaranteed to work. You aren't clever enough, nobody is, to think of every possible way the system might fail to do what you assume it will do. So instead, just don't make those assumptions.
Thinking about things like CPU read re-ordering is totally wrong. You should expect the CPU to do what, and only what, it is required to do. You should definitely not think about specific mechanisms by which it might fail, but only whether it is guaranteed to work.
What you are doing is like trying to figure out if an employee is guaranteed to show up for work by checking if he had his flu shot, checking if he is still alive, and so on. You can't check for, or even think of, every possible way he might fail to show up. So if you find that you have to check those kinds of things, then it's not guaranteed, and relying on it is broken. Period.
You cannot make reliable code by saying "the CPU doesn't do anything that can break this, so it's okay". You can make reliable code by saying "I make sure my code doesn't rely on anything that isn't guaranteed by the relevant standards."
You are provided with all the tools you need to do the job, including memory barriers, atomic operations, mutexes, and so on. Please use them.
You are not clever enough to think of every way something not guaranteed to work might fail. And you have a plethora of things that are guaranteed to work. Fix this code, and if possible, have a talk with the person who wrote it about using proper synchronization.
This sounds a bit ranty, and I apologize for that. But I've seen too much code that used "tricks" like this that worked perfectly on the test machines but then broke when a new CPU came out, a new compiler, or a new version of the OS. Fixing code like this can be an incredible pain because these hacks hide the actual synchronization requirements. The right answer is almost always to code clearly and precisely what you actually want, rather than to assume that you'll get it because you don't know of any reason you won't.
This is valuable advice from painful experience.
The standard(s) are clear. If any thread may be modifying the object, all accesses, in all threads, must be synchronized, or you have undefined behavior.
The only portable solution for C++ is C++11 atomics, which is available in upcoming VS 2012.
As for C, I do not know if recent C standards bring some portable facilities, I am not following that; but as you are using Visual Studio, it does not matter anyway, as Microsoft is not implementing recent C standards.
Still, if you know you are developing for Visual Studio, you can rely on guarantees provided by this compiler, which apply to both C and C++. Some of them are implicit (accessing volatile variables also implies certain memory barriers), some are explicit, like using the _MemoryBarrier intrinsic.
The whole topic of the memory model is discussed in depth in Lockless Programming Considerations for Xbox 360 and Microsoft Windows, this should give you a good overview. Beware: the topic you are entering is full of hard topics and nasty surprises.
Note: Relying on volatile is not portable, but if you are using old C / C++ standards, there is no portable solution anyway; therefore, be prepared to face the need to reimplement this for a different platform should the need ever arise. When writing portable threaded code, volatile is considered almost useless:
For multi-threaded programming, there are two key issues that volatile is often mistakenly thought to address:
atomicity
memory consistency, i.e. the order of a thread's operations as seen by another thread.
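If a compiler with C11 atomics is available (which, as noted above, older Visual Studio versions are not), a minimal sketch of the single-fetch pattern could look like the following. The structure and names mirror the question's DATA and unsafe_variable; everything else here is an assumption for illustration, not the original code.

#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    _Atomic size_t unsafe_variable;   /* field assumed to be declared atomic */
} DATA;

int function(DATA *data, size_t x, size_t y, size_t t, size_t q)
{
    /* One explicit atomic load; all later uses read the local copy,
       so the shared field cannot legally be re-fetched after the check. */
    size_t var_size = atomic_load(&data->unsafe_variable);

    if (var_size > x * y)
        return 0;   /* FALSE */

    size_t size = var_size - t * q;
    (void)size;     /* would be handed to function_xy(size) in the original */
    return 1;       /* TRUE */
}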
The line
a = a++;
is undefined behaviour in C. The question I am asking is: why?
I mean, I get that it might be hard to provide a consistent order in which things should be done. But, certain compilers will always do it in one order or the other (at a given optimization level). So why exactly is this left up to the compiler to decide?
To be clear, I want to know if this was a design decision and if so, what prompted it? Or maybe there is a hardware limitation of some kind?
UPDATE: This question was the subject of my blog on June 18th, 2012. Thanks for the great question!
Why? I want to know if this was a design decision and if so, what prompted it?
You are essentially asking for the minutes of the meeting of the ANSI C design committee, and I don't have those handy. If your question can only be answered definitively by someone who was in the room that day, then you're going to have to find someone who was in that room.
However, I can answer a broader question:
What are some of the factors that lead a language design committee to leave the behaviour of a legal program (*) "undefined" or "implementation defined" (**)?
The first major factor is: are there two existing implementations of the language in the marketplace that disagree on the behaviour of a particular program? If FooCorp's compiler compiles M(A(), B()) as "call A, call B, call M", and BarCorp's compiler compiles it as "call B, call A, call M", and neither is the "obviously correct" behaviour, then there is a strong incentive for the language design committee to say "you're both right" and make it implementation-defined behaviour. This is particularly the case if FooCorp and BarCorp both have representatives on the committee.
The next major factor is: does the feature naturally present many different possibilities for implementation? For example, in C# the compiler's analysis of a "query comprehension" expression is specified as "do a syntactic transformation into an equivalent program that does not have query comprehensions, and then analyze that program normally". There is very little freedom for an implementation to do otherwise.
By contrast, the C# specification says that the foreach loop should be treated as the equivalent while loop inside a try block, but allows the implementation some flexibility. A C# compiler is permitted to say, for example "I know how to implement foreach loop semantics more efficiently over an array" and use the array's indexing feature rather than converting the array to a sequence as the specification suggests it should.
A third factor is: is the feature so complex that a detailed breakdown of its exact behaviour would be difficult or expensive to specify? The C# specification says very little indeed about how anonymous methods, lambda expressions, expression trees, dynamic calls, iterator blocks and async blocks are to be implemented; it merely describes the desired semantics and some restrictions on behaviour, and leaves the rest up to the implementation.
A fourth factor is: does the feature impose a high burden on the compiler to analyze? For example, in C# if you have:
Func<int, int> f1 = (int x)=>x + 1;
Func<int, int> f2 = (int x)=>x + 1;
bool b = object.ReferenceEquals(f1, f2);
Suppose we require b to be true. How are you going to determine when two functions are "the same"? Doing an "intensionality" analysis -- do the function bodies have the same content? -- is hard, and doing an "extensionality" analysis -- do the functions have the same results when given the same inputs? -- is even harder. A language specification committee should seek to minimize the number of open research problems that an implementation team has to solve!
In C# this is therefore left to be implementation-defined; a compiler can choose to make them reference equal or not at its discretion.
A fifth factor is: does the feature impose a high burden on the runtime environment?
For example, in C# dereferencing past the end of an array is well-defined; it produces an array-index-was-out-of-bounds exception. This feature can be implemented with a small -- not zero, but small -- cost at runtime. Calling an instance or virtual method with a null receiver is defined as producing a null-was-dereferenced exception; again, this can be implemented with a small, but non-zero cost. The benefit of eliminating the undefined behaviour pays for the small runtime cost.
A sixth factor is: does making the behaviour defined preclude some major optimization? For example, C# defines the ordering of side effects when observed from the thread that causes the side effects. But the behaviour of a program that observes side effects of one thread from another thread is implementation-defined except for a few "special" side effects. (Like a volatile write, or entering a lock.) If the C# language required that all threads observe the same side effects in the same order then we would have to restrict modern processors from doing their jobs efficiently; modern processors depend on out-of-order execution and sophisticated caching strategies to obtain their high level of performance.
Those are just a few factors that come to mind; there are of course many, many other factors that language design committees debate before making a feature "implementation defined" or "undefined".
Now let's return to your specific example.
The C# language does make that behaviour strictly defined(†); the side effect of the increment is observed to happen before the side effect of the assignment. So there cannot be any "well, it's just impossible" argument there, because it is possible to choose a behaviour and stick to it. Nor does this preclude major opportunities for optimizations. And there are not a multiplicity of possible complex implementation strategies.
My guess, therefore, and I emphasize that this is a guess, is that the C language committee made the ordering of side effects into undefined behaviour because there were multiple compilers in the marketplace that did it differently, none was clearly "more correct", and the committee was unwilling to tell half of them that they were wrong.
(*) Or, sometimes, its compiler! But let's ignore that factor.
(**) "Undefined" behaviour means that the code can do anything, including erasing your hard disk. The compiler is not required to generate code that has any particular behaviour, and not required to tell you that it is generating code with undefined behaviour. "Implementation defined" behaviour means that the compiler author is given considerable freedom in choice of implementation strategy, but is required to pick a strategy, use it consistently, and document that choice.
(†) When observed from a single thread, of course.
It's undefined because there is no good reason for writing code like that, and by not requiring any specific behaviour for bogus code, compilers can more aggressively optimize well-written code. For example, *p = i++ may be optimized in a way that causes a crash if p happens to point to i, possibly because two cores write to the same memory location at the same time. The fact that this also happens to be undefined in the specific case that *p is explicitly written out as i, to get i = i++, logically follows.
It's ambiguous but not syntactically wrong. What should a be? Both = and ++ have the same "timing", so instead of defining an arbitrary order it was left undefined, since either order would conflict with one of the two operators' definitions.
With a few exceptions, the order in which expressions are evaluated is unspecified; this was a deliberate design decision, and it allows implementations to rearrange the evaluation order from what's written if that will result in more efficient machine code. Similarly, the order in which the side effects of ++ and -- are applied is unspecified beyond the requirement that it happen before the next sequence point, again to give implementations the freedom to arrange operations in an optimal manner.
Unfortunately, this means that the result of an expression like a = a++ will vary based on the compiler, compiler settings, surrounding code, etc. The behavior is specifically called out as undefined in the language standard so that compiler implementors don't have to worry about detecting such cases and issuing a diagnostic against them. Cases like a = a++ are obvious, but what about something like
void foo(int *a, int *b)
{
    *a = (*b)++;
}
If that's the only function in the file (or if its caller is in a different file), there's no way to know at compile time whether a and b point to the same object; what do you do?
Note that it's entirely possible to mandate that all expressions be evaluated in a specific order, and that all side effects be applied at a specific point in evaluation; that's what Java and C# do, and in those languages expressions like a = a++ are always well-defined.
The postfix ++ operator yields the value prior to the increment. So, in the first step, a gets assigned its old value (that is what a++ yields). But it is undefined whether the increment or the assignment takes place first, because both operations modify the same object (a), and the language says nothing about the order in which their side effects are applied.
Somebody may provide another reason, but from an optimization (or rather, code generation) point of view, a needs to be loaded into a CPU register, and the postfix operator's value may be placed either into another register or into the same one.
So the final value of a can depend on whether the optimizer uses one register or two.
Updating the same object twice without an intervening sequence point is undefined behaviour ...
because that makes compiler writers happier
because it allows implementations to define it anyway
because it doesn't force a specific constraint when it isn't needed
Suppose a is a pointer with value 0x0001FFFF. And suppose the architecture is segmented so that the compiler needs to apply the increment to the high and low parts separately, with a carry between them. The optimiser could conceivably reorder the writes so that the final value stored is 0x0002FFFF; that is, the low part before the increment and the high part after the increment.
This value is far from either value you might have expected. It may point to memory not owned by the application, or it may (in general) be a trapping representation. In other words, the CPU may raise a hardware fault as soon as this value is loaded into a register, crashing the application. Even if it doesn't cause an immediate crash, it is a profoundly wrong value for the application to be using.
The same kind of thing can happen with other basic types, and the C language allows even ints to have trapping representations. C tries to allow efficient implementation on a wide range of hardware. Getting efficient code on a segmented machine such as the 8086 is hard. By making this undefined behaviour, a language implementer has a bit more freedom to optimise aggressively. I don't know if it has ever made a performance difference in practice, but evidently the language committee wanted to give every benefit to the optimiser.
I am interested to know on what things I need to concentrate on debugging c code without a debugger. What are the things to look for?
Generally I look for the following:
1. Check whether the correct values and types are being passed to a function.
2. Look for unallocated and uninitialized variables.
3. Check function syntax and that each function is used in the right way.
4. Check return values.
5. Check that locks are used in the right way.
6. Check for string termination.
7. Returning a variable in stack memory from a function.
8. Off-by-one errors.
9. Normal syntax errors.
10. Function declaration errors.
Any structured approach is very much appreciated.
Most of these errors will be picked up by passing the appropriate warning flags to the compiler.
However, from the original list, points 1, 5, 6, 7, and 8 are very much worth checking as a human; some compiler/flag combinations will also pick up on unhandled values, pointers to automatic memory, off-by-one errors in array indexing, etc.
You may want to take a look at such things as mudflap, valgrind, efence and others to catch runtime cases you're unaware of. You might also try splint, to augment your static analysis.
For the unautomated side of things, try statically following the flow of your program for particular cases, especially corner cases, and verify to yourself that it appears to do the right thing. Try writing unit tests/test scripts. Be sure to use some automated checking as discussed above.
If your emphasis is on testing without any test execution, splint might very well be the best place to start. The technique you want to research is called static code analysis.
I recommend trying one of the many static code analyzers. Those that I used personally and can recommend:
cppcheck - free and open-source, has cmd-line program and windows gui
Clang Static Analyzer - Apple's free and open-source analyzer, best supported on Mac, also built into recent Xcode versions
Visual Studio's static checker, only available in Premium and Ultimate (i.e. expensive) versions
Coverity - expensive
If you want more details, you can read an article I wrote on that subject.
A big one you left out is integer overflow. This includes both undefined behavior from overflow of signed expressions, and well-defined but possibly-dangerous behavior of unsigned overflow being reduced mod TYPE_MAX+1. In particular, things like foo=malloc(count*sizeof *foo); can be very dangerous if count came from a potentially untrusted source (like a data file), especially if sizeof *foo is large.
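A common way to guard that multiplication, sketched here under the assumption that <stdint.h>'s SIZE_MAX is available, is to check the bound before multiplying:

#include <stdint.h>
#include <stdlib.h>

/* Returns NULL instead of silently wrapping when count * elem_size
   would not fit in a size_t. */
void *alloc_array(size_t count, size_t elem_size)
{
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;
    return malloc(count * elem_size);
}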
Some others:
mixing of signed and unsigned values in comparisons.
use of functions with locale-specific behavior (e.g. radix character, case mapping, etc.) when well-defined uniform behavior is needed.
use of char when doing anything more than copying values or comparison for equality (otherwise you probably want unsigned char or perhaps in rare cases, signed char).
use of signed expressions with /POWER_OF_2 and %POWER_OF_2 (hint: (-3)%8==-3 but (-3)&7==5).
use of signed division/modulo in general with negative numbers, since C's version of it disagrees with the usual algebraic definition when a negative number is divided by a positive one, and rarely gives the desired result.
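On that last point, a minimal sketch of floor-style division and remainder helpers, for when the algebraic convention is wanted; C's / and % truncate toward zero, so (-3)/8 is 0 and (-3)%8 is -3:

/* Floor division: rounds toward negative infinity instead of toward zero. */
int floor_div(int a, int b)
{
    int q = a / b, r = a % b;
    return (r != 0 && (r < 0) != (b < 0)) ? q - 1 : q;
}

/* Floor modulo: result has the sign of the divisor, e.g. floor_mod(-3, 8) == 5. */
int floor_mod(int a, int b)
{
    int r = a % b;
    return (r != 0 && (r < 0) != (b < 0)) ? r + b : r;
}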