What parts of C are most portable?

I recently read an interview with Lua co-creators Luiz Henrique de Figueiredo and Roberto Ierusalimschy, where they discussed the design and implementation of Lua. It was very intriguing, to say the least. However, one part of the discussion brought something up in my mind. Roberto spoke of Lua as a "freestanding application" (that is, it's pure ANSI C that uses nothing from the OS). He said that the core of Lua was completely portable, and because of its purity it has been ported much more easily, and to platforms never even considered (such as robots and embedded devices).
Now this makes me wonder. C in general is a very portable language. So, what parts of C (namely those in the standard library) are the most unportable? And which can be expected to work on most platforms? Should only a limited set of data types be used (e.g., avoiding short and maybe float)? What about FILE and the stdio system? malloc and free? It seems that Lua avoids all of these. Is that taking things to the extreme? Or are they the root of portability issues? Beyond this, what other things can be done to make code extremely portable?
The reason I'm asking all of this is that I'm currently writing an application in pure C89, and it's optimal that it be as portable as possible. I'm willing to take a middle road in implementing it (portable enough, but not so portable that I have to write everything from scratch). Anyway, I just wanted to see what in general is key to writing the best C code.
As a final note, all of this discussion is related to C89 only.

In the case of Lua, we don't have much to complain about in the C language itself, but we have found that the C standard library contains many functions that seem harmless and straightforward to use, until you consider that they do not check their input for validity (which is fine, if inconvenient). The C standard says that handling bad input is undefined behavior, allowing those functions to do whatever they want, even crash the host program. Consider, for instance, strftime. Some libcs simply ignore invalid format specifiers, but other libcs (e.g., on Windows) crash! Now, strftime is not a crucial function. Why crash instead of doing something sensible? So Lua has to do its own validation of input before calling strftime, and exporting strftime to Lua programs becomes a chore. Hence, we have tried to steer clear of these problems in the Lua core by aiming at freestanding for the core. But the Lua standard libraries cannot do that, because their goal is to export facilities to Lua programs, including what is available in the C standard library.
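As an illustration of the kind of defensive check this forces (a minimal sketch, not Lua's actual code), the helper below whitelists the C89 strftime conversion specifiers and refuses anything else before the call is made; format_is_safe is a hypothetical name:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Hypothetical helper: accept only the conversion specifiers defined
       by C89 strftime, and refuse to call strftime on anything else. */
    static int format_is_safe(const char *fmt)
    {
        const char *ok = "aAbBcdHIjmMpSUwWxXyYZ%";
        while (*fmt) {
            if (*fmt++ == '%') {
                if (*fmt == '\0' || strchr(ok, *fmt) == NULL)
                    return 0;   /* unknown specifier: reject */
                fmt++;
            }
        }
        return 1;
    }

    int main(void)
    {
        const char *fmt = "%Y-%m-%d %q";   /* %q is not a C89 specifier */
        char buf[128];
        time_t t = time(NULL);

        if (!format_is_safe(fmt))
            printf("rejected format: %s\n", fmt);
        else if (strftime(buf, sizeof buf, fmt, localtime(&t)) > 0)
            printf("%s\n", buf);
        return 0;
    }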

"Freestanding" has a particular meaning in the context of C. Roughly, freestanding hosts are not required to provide any of the standard libraries, including the library functions malloc/free, printf, etc. Certain standard headers are still required, but they only define types and macros (for example stddef.h).

C89 allows two types of compilers: hosted and freestanding. The basic difference is that a hosted compiler provides all of the C89 library, while a freestanding compiler need only provide <float.h>, <limits.h>, <stdarg.h>, and <stddef.h>. If you limit yourself to these headers, your code will be portable to any C89 compiler.
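For example, here is a minimal sketch that uses nothing beyond the freestanding headers, so it should compile even where no library exists at all (the function and type names are just for illustration):

    #include <limits.h>
    #include <stddef.h>

    /* Saturating addition for int, using only <limits.h>; no library calls. */
    int sat_add(int a, int b)
    {
        if (a > 0 && b > INT_MAX - a) return INT_MAX;
        if (a < 0 && b < INT_MIN - a) return INT_MIN;
        return a + b;
    }

    /* offsetof comes from <stddef.h>, another freestanding header. */
    struct point { int x; int y; };
    static const size_t y_offset = offsetof(struct point, y);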

This is a very broad question. I'm not going to give the definite answer, instead I'll raise some issues.
Note that the C standard specifies certain things as "implementation-defined"; a conforming program will always compile and run on any conforming platform, but it may behave differently depending on the platform. Specifically, there are:
Word size. sizeof(long) may be four bytes on one platform, eight on another. The sizes of short, int, long etc. each have some minimum (often relative to each other), but otherwise there are no guarantees.
Endianness. int a = 0xff00; int b = ((char *)&a)[0]; may assign 0 to b on one platform, -1 on another.
Character encoding. \0 is always the null byte, but how the other characters show up depends on the OS and other factors.
Text-mode I/O. putchar('\n') may produce a line-feed character on one platform, a carriage return on the next, and a combination of the two on yet another.
Signedness of char. It may or it may not be possible for a char to take on negative values.
Byte size. While nowadays, a byte is eight bits virtually everywhere, C caters even to the few exotic platforms where it is not.
Various word sizes and endiannesses are common. Character encoding issues are likely to come up in any text-processing application. Machines with 9-bit bytes are most likely to be found in museums. This is by no means an exhaustive list.
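A small sketch of how some of these differences show up at run time; the output depends entirely on the platform, and the endianness test only distinguishes the two common layouts:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int x = 1;
        unsigned char first = *(unsigned char *)&x;   /* inspecting bytes via unsigned char is well-defined */

        printf("sizeof(short)=%lu sizeof(int)=%lu sizeof(long)=%lu\n",
               (unsigned long)sizeof(short), (unsigned long)sizeof(int),
               (unsigned long)sizeof(long));
        printf("CHAR_BIT=%d, char is %s\n", CHAR_BIT,
               (CHAR_MIN < 0) ? "signed" : "unsigned");
        printf("%s-endian (probably)\n", first ? "little" : "big");
        return 0;
    }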
(And please don't write C89, that's an outdated standard. C99 added some pretty useful stuff for portability, such as the fixed-width integers int32_t etc.)

C was designed so that a compiler may be written to generate code for any platform and still call the language it compiles "C". Such freedom acts in opposition to C being a language for writing code that can be used on any platform.
Anyone writing code for C must decide (either deliberately or by default) what sizes of int they will support; while it is possible to write C code which will work with any legal size of int, it requires considerable effort and the resulting code will often be far less readable than code which is designed for a particular integer size. For example, if one has a variable x of type uint32_t, and one wishes to multiply it by another y, computing the result mod 4294967296, the statement x*=y; will work on platforms where int is 32 bits or smaller, or where int is 65 bits or larger, but will invoke Undefined Behavior in cases where int is 33 to 64 bits, and the product, if the operands were regarded as whole numbers rather than members of an algebraic ring that wraps mod 4294967296, would exceed INT_MAX. One could make the statement work independent of the size of int by rewriting it as x*=1u*y;, but doing so makes the code less clear, and accidentally omitting the 1u* from one of the multiplications could be disastrous.
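A minimal sketch of that trap and the workaround (uint32_t is C99, but the same hazard exists with any unsigned type narrower than int):

    #include <stdint.h>

    uint32_t wrap_mul_risky(uint32_t x, uint32_t y)
    {
        return x * y;        /* UB if int is 33..64 bits and the product overflows int */
    }

    uint32_t wrap_mul_safe(uint32_t x, uint32_t y)
    {
        return 1u * x * y;   /* forces unsigned arithmetic, wraps mod 2^32 on any platform */
    }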
Under the present rules, C is reasonably portable if code is only used on machines whose integer size matches expectations. On machines where the size of int does not match expectations, code is not likely to be portable unless it includes enough type coercions to render most of the language's typing rules irrelevant.

Anything that is a part of the C89 standard should be portable to any compiler that conforms to that standard. If you stick to pure C89, you should be able to port it fairly easily. Any portability problems would then be due to compiler bugs or places where the code invokes implementation-specific behavior.

Related

C WikiBooks - How is C a small "what you see is all you get" language?

I'm unable to understand the following sentence from WikiBooks:
Why C, and not assembly language?
" C is a compiled language, which creates fast and efficient executable files. It is also a small "what you see is all you get" language: a C statement corresponds to at most a handful of assembly statements, everything else is provided by library functions. "
Website Link : C Programming/Why learn C? - Wikibooks, open books for an open world
Note: I am a complete beginner and I've just started to learn C, so I need a precise explanation of what the above sentence means.
Assembly is the language of a single processor family; it is translated directly into the machine code that the processor runs. If you program in assembly, you need to rewrite the entire code for a different processor family. Phones usually use ARM processors, whereas desktop computers have 32-bit or 64-bit x86-compatible processors. Each of these three potentially needs a completely separately written program, and it is perhaps not even limited to that.
In contrast, standard C is a portable language - if you write so-called strictly conforming programs. C11 4p5:
A strictly conforming program shall use only those features of the language and library specified in this International Standard. (3) It shall not produce output dependent on any unspecified, undefined, or implementation-defined behavior, and shall not exceed any minimum implementation limit.
With footnote 5 noting that:
Strictly conforming programs are intended to be maximally portable among conforming implementations. Conforming programs may depend upon nonportable features of a conforming implementation.
Unlike assembly, whose specifics vary from one processor to another, it is possible to write programs in C and then port them to various platforms without any changes to the source code. These programs will still be compiled down to assembly language, and performance can - and often will - surpass hand-written assembly when using a modern high-quality optimizing compiler.
Additionally, the C standard library, which any conforming hosted implementation must provide, offers a portable way to manage files, dynamic memory, and input and output, all of which are not only processor-specific but also operating-system-specific when using assembly.
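For instance, a sketch like the following relies only on the hosted standard library and should build unchanged on any hosted implementation, whatever the processor or OS (the file name is just an example):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *buf = malloc(32);            /* dynamic memory, no OS-specific call */
        FILE *f = fopen("hello.txt", "w"); /* file management, no OS-specific call */

        if (buf == NULL || f == NULL)
            return EXIT_FAILURE;
        sprintf(buf, "hello, portable world\n");
        fputs(buf, f);
        fclose(f);
        free(buf);
        return EXIT_SUCCESS;
    }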
However, C is still quite close to assembly, to the extent that it has been called a "high-level assembly language" by some.
It makes no sense to speak of a "compiled language" or an "interpreted language".
Statements like that are made by people who haven't studied the foundations of programming languages.
A language is defined mathematically by giving its semantics -- operational, denotational, axiomatic, etc. -- and implementers are then free to implement the language however they wish.
There are machines that run C via interpretation: they dispatch the code at the moment of execution and execute it, instead of accumulating object code that would be executed later by some machine.
It is correct to speak of a compiled implementation or an interpreted implementation of a language, but even that is relative to a given machine, because when you compile for an x86 processor, the compiled code is in turn interpreted by the datapath and controller of the x86 machine.
Basically, the statement "what you see is all you get" means that there is an almost one-to-one correspondence between the operations of the abstract machine defined in the semantics of ISO 9899 and the instructions of the machines currently on the market, like x86, MIPS, etc.
C is nothing more than a platform-independent assembly translator; what you write in C is "translated" into machine code as efficiently as if you had written it directly in assembly. That's the point of:
"what you see is all you get" language: a C statement corresponds to at most a handful of assembly statements
Any C statement you write is transformed directly into assembly by the compiler, without abstraction layers, interpreters, etc., unlike other languages.
By design, C is tiny; it has nothing but the essentials needed to be a Turing-complete language and nothing more. Any additional feature is achieved via libraries. C ships with the standard library (different implementations, though), which packs things like an RNG, memory management, etc.
That's what this means:
everything else is provided by library functions
It's an old and largely outdated claim about C.
C was originally designed as, roughly, a more readable and portable assembler. For this reason, most of the core language features tended - on most target machines - to be easily translated. Generally more complicated functionality was provided by library functions, including the standard library.
Over time, C (both the language and the standard library) have evolved, and become more complicated. Computing hardware has also become more complicated - for example, supporting a set of more advanced instructions - and C constructs which can be implemented in terms of advanced instructions will translate to more complicated assembler on machines that support older and simpler instruction sets.
The distinction between a "small" language and a "large" one is completely subjective - so some people still continue to describe C as small and simple, while others describe it as large and complex. While simpler than some other languages (like C++), C is now also significantly more complex - by various measures - than quite a few other programming languages.
This quote was absolutely true for the good old K&R C implementations of the '70s. In those days, C was indeed a thin wrapper around machine instructions, and the programmer could easily guess how the compiler would translate the source:
for loop: a counter in an appropriate register, a test at the end of the loop, and a goto
function call: push the arguments onto the stack (with no conversion!) and call the subroutine address. On return, put the return value (required to be a scalar or a pointer) in the appropriate register and use the machine return instruction; the caller then cleans up the stack
From a symmetric point of view, anything that could be executed by the processor could be expressed in C. If you have an array of two integers and know that the internal representation is a valid double, just cast a pointer and use it.
That's all wrong with recent versions of the C language and with optimizing compilers. The as-if rule allows the optimizer to do anything, provided the observable results are what a straightforward implementation would have given. Many operations can invoke undefined behaviour. For example, writing a float at a memory location and then reading it as an integer is explicitly UB. The optimizer can assume that no UB exists in the program, so it can simply optimize out any block containing UB (recent versions of gcc are great at that).
Look for example at this function:
#include <stdio.h>

void stopit(void) {
    int i = 0;
    while (1) {
        i += 1;
    }
    printf("done");
}
It contains an infinite loop, so the printf should never be reached. But the loop has no observable result, so the compiler is free to optimize it out and translate it the same as:
void stopit(void) {
    printf("done");
}
Another example:
int i = 12;
float *f = (float *)&i;  /* needs a cast; the access below is still UB */
*f = 12.5;               /* UB: using a float lvalue to access an int object */
printf("0x%04x\n", i);   /* try to dump the representation of 12.5 */
This code can legally display 0x000c, because the compiler is free to assume that *f = 12.5 has not modified i, so it can directly use a cached value and translate the last line as printf("0x%04x\n", 12);
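For contrast, a well-defined way to inspect an object's representation is to copy its bytes rather than alias them; a minimal sketch (it assumes float and unsigned int happen to have the same size, which is common but not guaranteed):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float f = 12.5f;
        unsigned int i = 0;

        memcpy(&i, &f, sizeof i);  /* assumes sizeof(float) == sizeof(unsigned int) */
        printf("0x%08x\n", i);     /* the representation of 12.5f, typically 0x41480000 for IEEE-754 */
        return 0;
    }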
So no, recent versions of the C language are no longer a small "what you see is all you get" language.
What is true is that C is a low-level language. The programmer has full control over allocation and deallocation of dynamic storage. You have natural access at the byte level for any type, and you have the notion of pointers and explicit pointer/integer conversions to allow direct access to well-known memory addresses. That indeed allows you to program embedded systems or microcontrollers in C. The standard even defines two environment levels: a hosted environment, where you have full access to the standard library, and a freestanding environment, where the standard library is not present. The latter can be especially interesting for systems with very little memory.
C provides low-level control of memory and resources at the byte and bit level. For example C and assembly language are very common in the programming of microcontrollers (my area of expertise), which have very little memory and most often require bit-level control of input and output ports.
If you write a C program and build it, then look at your listing file, you'll typically see the very close correspondence between your C statements and the few assembly instructions into which the C is assembled.
Another clue to its simplicity is to look at its grammar definition compared to that of C#, Java, or Python, for example. The C grammar is small, terse, and compact compared to the "fuller" languages, and it's true that there isn't even input or output defined in C itself. That typically comes from including stdio.h or similar. In this way, you only get what you need in your executable. That is in stark contrast to the "big" languages.
While many in the embedded (microcontroller) programming space still prefer assembly, C is a great way to abstract things like flow of control and pointers a little, while still retaining the power to employ practically every instruction the microprocessor or microcontroller is capable of.
Regarding the "what you see is all you get" statement...
C is a "small" language in that provides only a handful of abstractions - that is, high-level language constructs that either hide implementation-specific details (such as I/O, type representations, address representations, etc.) or simplify complex operations (memory management, event processing, etc.). C doesn't provide any support at the language level (either in the grammar or standard library) for things like networking, graphics, sound, etc.; you must use separate, third-party libraries for those tasks, which will vary based on platform (Windows, MacOS, iOS, Linux). Compare that to a language like Java, which provides a class library for just about everything you could ever want to do.
Compared to languages like C++ and Java, not a whole lot of things happen "under the hood" in C. There's no overloading of functions or operators, there are no constructors or destructors that are automatically called when objects are created or destroyed, there's no real support for "generic" programming (writing a function template that can be automatically instantiated for arguments of different types), etc. Because of this, it's often easier to predict how a particular piece of code will perform.
There's no automatic resource management in C - arrays don't grow or shrink as you add or remove elements, there's no automatic garbage collection that reclaims dynamic memory that you aren't using anymore, etc.
The only container provided by the C language is the array - for anything more complex (lists, trees, queues, stacks, etc.) you have to write your own implementation, or use somebody else's library.
C is "close to the machine" in that the types and abstractions it provides are based on what real-world hardware provides. For example, integer and floating-point representations and operations are based on what the native hardware supports. The size of an int is (usually) based on the native CPU's word size, meaning it can only represent a certain range of values (the minimum range required by the language standard is [-32767..32767] for signed integers and [0..65535] for unsigned integers). Operations on int objects are mapped to native ADD/DIV/MUL/SUB opcodes. Languages like Python provide "arbitrary precision" types, which are not limited by what the hardware can natively support - the tradeoff is that operations using these types are often slower, since you're not using native opcodes.

C - Writing On Windows Compiling On UNIX And Vice Versa

I am planning to write an ANSI C program on Windows with NetBeans using the Cygwin suite, and later on I want to compile the source code on a UNIX-family OS and use the program there. Should I worry about any kind of compatibility problems?
If you use only the functionality described in the C standard, the set of possible incompatibilities typically reduces to:
signedness of char
sizes of all types (e.g. int=long=32-bit in Windows, not necessarily so on UNIX), I mean literally all, including pointers and enums
poorly thought out type conversions and casts, especially involving pointers and negative values (mixing of signed and unsigned types in expressions is error-prone too)
alignment of types and their padding in structures/unions
endianness
order of bitfields
implementation-defined/specific behavior, e.g. right shifts of negative values, rounding and signs when dividing signed values
floating-point: different implementation and different optimization
unspecified behavior, e.g. orders of function parameter and subexpression evaluation, the direction in which memcpy() copies data (low to high addresses or the other way around), etc etc
undefined behavior, e.g. i+i++ or a[i]=i++, modifying string literals, dereferencing a pointer when the object it points to is gone (e.g. free()'d), not using or misusing const and volatile, etc. etc.
supplying standard library functions with inappropriate parameters leading to undefined behavior, e.g. calling printf()-like functions with wrong number or kind of parameters
non-ASCII characters/strings
filename formats (special chars, length, case sensitivity)
clock/time/locale formats/ranges/values/configuration
There's much more. You actually have to read the standard and note what's guaranteed to work and what's not and if there are any conditions.
If you use something outside of the C standard, that functionality may not be available or identical on a different platform.
So, yes, it's possible, but you have to be careful. It's usually the assumptions that you make that make your code poorly portable.
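One cheap defence against those assumptions is to state them as compile-time assertions; a minimal C89-compatible sketch using the negative-array-size trick (assert only what your code actually relies on):

    #include <limits.h>

    /* Each typedef fails to compile ("negative array size") if the stated
       assumption is false, so a bad port is caught at build time rather
       than at run time. */
    typedef char assert_int_is_32_bits[(sizeof(int) * CHAR_BIT == 32) ? 1 : -1];
    typedef char assert_plain_char_is_signed[(CHAR_MIN < 0) ? 1 : -1];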
There will be compatibility problems, but as long as you stick to basic UNIX functionality, they ought to be manageable for command-line applications. However, if your app has a GUI or has to interact with other programs in the UNIX environment, you'll probably regret your approach.
Another way to go would be to run the appropriate flavor of UNIX in VirtualBox on your desktop, and then be pretty sure there are no compatibility problems.

How to make C codes in Tru64 Unix to work in Linux 64 bit?

I want to know the probable problems faced while moving C programs (e.g. a server process) from Tru64 UNIX to Linux 64-bit, and why. What modifications would the program probably need, or would simply recompiling the source code in the new environment be enough, given that both are 64-bit platforms? I'm a little confused; I need to know before I start working on it.
I spent a lot of time in the early 90s (OMG I feel old...) porting 32-bit code to the Alpha architecture. This was back when it was called OSF/1.
You are unlikely to have any difficulties relating to the bit-width when going from Alpha to x86_64.
Developers are much more aware of the problems caused by assuming that sizeof(int) == sizeof(void *), for example. That was far and away the most common problem I used to have when porting code to Alpha.
Where you do find differences they will be in how the two systems differ in their conformity to various API specifications, e.g. POSIX, XOpen, etc. That said, such differences are normally easily worked around.
If the Alpha code has used the SVR4-style APIs (e.g. STREAMS), then you may have more difficulty than if it has used the more BSD-like APIs.
"64-bit architecture" is only a rough classification of an architecture.
Ideally your code would have used only "semantic" types for all descriptions of variables, in particular size_t and ptrdiff_t for sizes and pointer arithmetic, and the [u]intXX_t types where a particular width is assumed.
If this is not the case, the main point would be to compare all the standard arithmetic types (all integer types, floating point types and pointers) if they map to the same concept on both platforms. If you find differences, you know the potential trouble spots.
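For illustration, a small sketch of that "semantic types" style; the struct and function here are hypothetical, and the typedefs are the standard ones:

    #include <stddef.h>
    #include <stdint.h>

    /* Widths are explicit where the data format demands them... */
    struct record_header {
        uint32_t magic;    /* exactly 32 bits on every platform */
        uint32_t length;
    };

    /* ...and size_t is used for sizes and counts, so the code does not
       care whether the platform is ILP32 or LP64. */
    size_t count_nonzero(const unsigned char *buf, size_t n)
    {
        size_t i, count = 0;
        for (i = 0; i < n; i++)
            if (buf[i] != 0)
                count++;
        return count;
    }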
Check the 64-bit data model used by both platforms; most 64-bit Unix-like OSes use LP64, so it is likely that your target platforms use the same data model. If that's the case, you should have few problems once the code itself compiles and links.
If you use the same compiler (e.g. GCC) on both platforms you also need not worry about incompatible compiler extensions or differences in undefined or implementation defined behaviour. Such behaviour should be avoided in any case - even if the compilers are the same, since it may differ between target architectures. If you are not using the same compiler, then you need to be cautious about using extensions. #pragma directives are a particular issue since a compiler is allowed to quietly ignore a #pragma it does not recognise.
Finally in order to compile and link, any library dependencies outside the C standard library need to be available on both platforms. Most OS calls will be available since Unix and Linux share the same POSIX API.

Is it possible to dynamically create equivalent of limits.h macros during compilation?

The main reason for this is an attempt to write a perfectly portable C library. After a few weeks I ended up with constants, which are unfortunately not very flexible (using constants to define other constants isn't possible).
Thanks for any advice or criticism.
What you ask for is impossible. As stated before me, any standards-compliant implementation of C will have limits.h correctly defined. If it's incorrect for whatever reason, blame the vendor of the compiler. Any "dynamic" discovery of the true limits wouldn't be possible at compile time, especially if you're cross-compiling for an embedded system, where the target architecture might have smaller integers than the compiling system.
To dynamically discover the limits, you would have to do it at run time by bit shifting, multiplying, or adding until an overflow is encountered, but then you have a variable in memory rather than a constant, which would be significantly slower. (This wouldn't be reliable anyway, since different architectures use different bit-level representations, and arithmetic sometimes gets a bit funky around the limits, especially with signed and abstract number representations such as floats.)
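For unsigned types such probing is at least well defined, because unsigned arithmetic wraps rather than overflowing; a minimal run-time sketch (signed and floating-point limits cannot be probed this way without undefined behaviour):

    #include <stdio.h>

    int main(void)
    {
        unsigned int max = ~0u;   /* all bits set: the maximum value of unsigned int */
        unsigned int v = max;
        unsigned int bits = 0;

        while (v) {               /* count value bits by shifting a copy until it is empty */
            v >>= 1;
            bits++;
        }
        printf("unsigned int: %u value bits, max %u\n", bits, max);
        return 0;
    }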
Just use the standard types and limits as found in stdint.h and limits.h, or try to avoid pushing the limits altogether.
First thing that comes to my mind: have you considered using stdint.h? Thanks to that your library will be portable across C99-compliant compilers.

When should I use type abstraction in embedded systems

I've worked on a number of different embedded systems. They have all used typedefs (or #defines) for types such as UINT32.
This is a good technique as it drives home the size of the type to the programmer and makes you more conscious of chances for overflow etc.
But on some systems you know that the compiler and processor won't change for the life of the project.
So what should influence your decision to create and enforce project-specific types?
EDIT
I think I managed to lose the gist of my question, and maybe it's really two.
With embedded programming you may need types of specific size for interfaces and also to cope with restricted resources such as RAM. This can't be avoided, but you can choose to use the basic types from the compiler.
For everything else the types have less importance.
You need to be careful not to cause overflow and may need to watch out for register and stack usage. Which may lead you to UINT16, UCHAR.
Using types such as UCHAR can add compiler 'fluff' however. Because registers are typically larger, some compilers may add code to force the result into the type.
i++;
can become
ADD REG,1
AND REG, 0xFF
which is unnecessary.
So I think my question should have been :-
Given the constraints of embedded software, what is the best policy to set for a project which will have many people working on it, not all of whom will be at the same level of experience?
I use type abstraction very rarely. Here are my arguments, sorted in increasing order of subjectivity:
Local variables are different from struct members and arrays in the sense that you want them to fit in a register. On a 32b/64b target, a local int16_t can make code slower compared to a local int, since the compiler will have to add operations to /force/ overflow according to the semantics of int16_t. While C99 defines an int_fast16_t typedef, AFAIK a plain int will fit in a register just as well, and it sure is a shorter name.
Organizations which like these typedefs almost invariably end up with several of them (INT32, int32_t, INT32_T, ad infinitum). Organizations using built-in types are thus better off, in a way, having just one set of names. I wish people used the typedefs from stdint.h or windows.h or anything existing; and when a target doesn't have that .h file, how hard is it to add one?
The typedefs can theoretically aid portability, but I, for one, never gained a thing from them. Is there a useful system you can port from a 32b target to a 16b one? Is there a 16b system that isn't trivial to port to a 32b target? Moreover, if most vars are ints, you'll actually gain something from the 32 bits on the new target, but if they are int16_t, you won't. And the places which are hard to port tend to require manual inspection anyway; before you try a port, you don't know where they are. Now, if someone thinks it's so easy to port things if you have typedefs all over the place - when time comes to port, which happens to few systems, write a script converting all names in the code base. This should work according to the "no manual inspection required" logic, and it postpones the effort to the point in time where it actually gives benefit.
Now if portability may be a theoretical benefit of the typedefs, readability sure goes down the drain. Just look at stdint.h: {int,uint}{max,fast,least}{8,16,32,64}_t. Lots of types. A program has lots of variables; is it really that easy to understand which need to be int_fast16_t and which need to be uint_least32_t? How many times are we silently converting between them, making them entirely pointless? (I particularly like BOOL/Bool/eBool/boolean/bool/int conversions. Every program written by an orderly organization mandating typedefs is littered with that).
Of course in C++ we could make the type system more strict, by wrapping numbers in template class instantiations with overloaded operators and stuff. This means that you'll now get error messages of the form "class Number<int,Least,32> has no operator+ overload for argument of type class Number<unsigned long long,Fast,64>, candidates are..." I don't call this "readability", either. Your chances of implementing these wrapper classes correctly are microscopic, and most of the time you'll wait for the innumerable template instantiations to compile.
The C99 standard has a number of standard sized-integer types. If you can use a compiler that supports C99 (gcc does), you'll find these in <stdint.h> and you can just use them in your projects.
Also, it can be especially important in embedded projects to use types as a sort of "safety net" for things like unit conversions. If you can use C++, I understand that there are some "unit" libraries out there that let you work in physical units that are defined by the C++ type system (via templates) that are compiled as operations on the underlying scalar types. For example, these libraries won't let you add a distance_t to a mass_t because the units don't line up; you'll actually get a compiler error.
Even if you can't work in C++ or another language that lets you write code that way, you can at least use the C type system to help you catch errors like that by eye. (That was actually the original intent of Simonyi's Hungarian notation.) Just because the compiler won't yell at you for adding a meter_t to a gram_t doesn't mean you shouldn't use types like that. Code reviews will be much more productive at discovering unit errors then.
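A minimal sketch of that idea in plain C; the typedefs will not make the compiler reject a mix-up, but they make one easy to spot in review (meter_t and gram_t are hypothetical names):

    typedef double meter_t;
    typedef double gram_t;

    meter_t add_distances(meter_t a, meter_t b)
    {
        return a + b;
    }

    /* add_distances(height_m, weight_g) still compiles, because both typedefs
       are really double, but the mismatch now stands out when reading the call. */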
My opinion is that if you are depending on a minimum/maximum/specific size, don't just assume that (say) an unsigned int is 32 bits - use uint32_t instead (assuming your compiler supports C99).
I like using stdint.h types for defining system APIs specifically because they explicitly say how large items are. Back in the old days of Palm OS, the system APIs were defined using a bunch of wishy-washy types like "Word" and "SWord" that were inherited from very classic Mac OS. They did a cleanup to instead say Int16 and it made the API easier for newcomers to understand, especially with the weird 16-bit pointer issues on that system. When they were designing Palm OS Cobalt, they changed those names again to match stdint.h's names, making it even more clear and reducing the amount of typedefs they had to manage.
I believe that MISRA standards suggest (require?) the use of typedefs.
From a personal perspective, using typedefs leaves no confusion as to the size (in bits / bytes) of certain types. I have seen lead developers attempt both ways of developing: using standard types (e.g. int) and using custom types (e.g. UINT32).
If the code isn't portable there is little real benefit to using typedefs. However, if, like me, you work on both types of software (portable and fixed-environment), then it can be useful to keep a standard and use the customised types. At the very least, as you say, the programmer is then very much aware of how much memory they are using. Another factor to consider is how sure you are that the code will not be ported to another environment. I've seen processor-specific code have to be translated because a hardware engineer suddenly had to change a board; that is not a nice situation to be in, but thanks to the custom typedefs it could have been a lot worse!
Consistency, convenience and readability. "UINT32" is much more readable and writeable than "unsigned long", which is the equivalent on some systems.
Also, the compiler and processor may be fixed for the life of a project, but the code from that project may find new life in another project. In this case, having consistent data types is very convenient.
If your embedded systems is somehow a safety critical system (or similar), it's strongly advised (if not required) to use typedefs over plain types.
As TK. has said before, MISRA-C has an (advisory) rule to do so:
Rule 6.3 (advisory): typedefs that indicate size and signedness should be used in place of the basic numerical types.
(from MISRA-C 2004; it's Rule #13 (adv) of MISRA-C 1998)
Same also applies to C++ in this area; eg. JSF C++ coding standards:
AV Rule 209: A UniversalTypes file will be created to define all standard types for developers to use. The types include: [uint16, int16, uint32_t etc.]
Using <stdint.h> makes your code more portable for unit testing on a pc.
It can bite you pretty hard when you have tests for everything, but it still breaks on your target system because an int is suddenly only 16 bits long.
Maybe I'm weird, but I use ub, ui, ul, sb, si, and sl for my integer types. Perhaps the "i" for 16 bits seems a bit dated, but I like the look of ui/si better than uw/sw.
