I'm building an arm-eabi-gcc toolchain with Newlib 2.5.0 as the target C library.
The target embedded system would prefer smaller code size over execution speed. How do I configure newlib to favour smaller code size?
The default build does things like produce a version of strstr that is over 1KB in code size.
There is fat in Newlib that can be addressed with Newlib-nano, which is already part of GCC ARM Embedded, as discussed here. (Note the article is from 2014, so the information may be outdated, but there appears to be Newlib-nano support in the current v6-2017 too.)
It removes some features added after C89 that are rarely used in MCU-based embedded systems, simplifies complex functions such as formatted I/O, and removes wide-character support from non-wide-character-specific functions. Critically, with respect to this question, the default build is already size-optimised (-Os).
Configure newlib like this:
CFLAGS_FOR_TARGET="-DPREFER_SIZE_OVER_SPEED=1 -Os" \
../newlib-2.5.0/configure
(where I've omitted the rest of the arguments I used for configure; they don't change based on this issue).
There isn't a configure flag, but the configure script reads certain variables from the environment. CFLAGS_FOR_TARGET means flags used when building for the target system.
Not to be confused with CFLAGS_FOR_BUILD, which are the flags used if the build system needs to compile any auxiliary executables that run on the build machine to help with the build process.
I couldn't find any official documentation on this, but searching the source code turns up many tests for PREFER_SIZE_OVER_SPEED or __OPTIMIZE_SIZE__. Based on a quick grep, the two flags are almost identical. The only difference I found is a case in the printf family: if a null pointer is passed for %s, the former translates it to (null), while the latter charges on ahead, probably causing a crash.
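For illustration, the pattern you'll find throughout the newlib sources looks roughly like this (a sketch, not an exact excerpt):
#if defined(PREFER_SIZE_OVER_SPEED) || defined(__OPTIMIZE_SIZE__)
/* compact, straightforward implementation */
#else
/* larger, faster implementation (e.g. an unrolled strstr) */
#endif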
I'm developing an embedded system with an 8051 at work. Today, all the code is written with IAR, which manages the micro's different memories with keywords like __xdata, __pdata, etc.
We're starting with unit testing using the Ceedling framework, and I think the best way to test my units is to build a native executable (http://www.throwtheswitch.org/build/which), test it on my Linux machine, and then, once my software is done, compile it for the 8051.
My problem is that I don't know how to map the micro's different memory types without using the IAR keywords. Has anyone dealt with this problem?
You could consider using the C preprocessor to define a set of 'declaration' macros, although this could obfuscate the code, making it more difficult to maintain.
#ifndef UNIT_TEST
#define DECLARE_DATA_VAR(type,name) type __data name
#endif
Then you could define a similar macro for your unit test framework. The following assumes GCC, so that an output section can be specified to the linker to represent __data variables.
#ifdef UNIT_TEST
#define DECLARE_DATA_VAR(type,name) type name __attribute__((section ("__data")))
#endif
Then in your code you would have to replace standard variable declarations using the macros.
DECLARE_DATA_VAR(int,aNumber);
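Putting the two definitions together, a single header might look like this (a minimal sketch; the UNIT_TEST flag and the __data section name are assumptions to adapt to your project):
#ifdef UNIT_TEST
/* Host build (GCC): emulate the memory class with a named output section. */
#define DECLARE_DATA_VAR(type, name) type name __attribute__((section("__data")))
#else
/* Target build (IAR): use the real memory-class keyword. */
#define DECLARE_DATA_VAR(type, name) type __data name
#endif

DECLARE_DATA_VAR(int, aNumber);   /* expands correctly for either build */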
Caveat: If you do use the GCC __attribute__ to place a variable in a named section, you must be careful about the section attributes, as the data and bss sections have different attributes and you will not be able to mix them. (Some architectures and compile tools have additional sections as well!) For example, the following may not be mutually compatible:
DECLARE_DATA_VAR(int,aNumber); // Will be placed in bss
DECLARE_DATA_VAR(int,aNumber) = 0; // May end up in bss or data (it's up to the compiler implementation).
DECLARE_DATA_VAR(int,aNumber) = 1; // Will be placed in data
Alternative: You should consider very carefully exactly what you are 'unit testing'. Think about encapsulating business functionality in functions that are independent of the underlying hardware implementation, and providing a separate hardware abstraction layer for all those parts that really do depend on the target and don't need to be unit tested (i.e. low-level drivers).
You could end up writing your own 8051 simulator or spend more time abstracting your code for the sake of unit testing than you do adding business value to your software.
The memory layout of a struct is up to the compiler. So what happens when some code compiled by one compiler uses a struct generated by code compiled by another compiler?
For example, say I have a header file that declares a struct somestruct, and a function that returns the struct. One source file defines that function and is compiled by compiler A. Another source file uses that function and is compiled by compiler B and links against the binary of the other source file.
If the two compilers create two different layouts for somestruct, then what's the layout of the variable returned by the function? Does it defer to one compiler's layout, or will there be a memory bug when the second source file tries to access elements of the struct returned by the first source file? Is it an error at compile time or link time?
The function will return the structure as laid out by the ABI of the compiler that built it. The calling code will simply assume the function conforms to the ABI of its own compiler.
Assuming the two compilers use a similar ABI, in most cases no errors will be reported at compile time, at link time, or even at runtime. For compatible compilers such as Clang, GCC, and the Intel C Compiler on OS X and Linux, no errors should result (if there are errors, it's a compiler bug). In the real world, however, it is usually difficult to find fully compatible compilers: in most cases their ABIs are similar but not exactly the same, and such ABI mismatches are all the harder to track down because the app appears normal until it crashes at runtime under some really weird circumstances.
Just as Basile said, name mangling in C++ poses an additional ABI difference, but such mismatches are more easily caught, because the linker simply can't find the symbol of the function at all, rather than finding a function that is not compatible.
Also, passing structures is another headache in terms of ABI because there are multiple structure-packing ABIs, sometimes even different in "compatible" compilers like GCC/MinGW and MSVC. (See also the -m[no-]ms-bitfields option in GCC, which forces GCC to use the MSVC ABI for structures.) I have also seen some cases where passing structures by pointer is more reliable than passing structures by value.
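You can see a packing difference directly with a sketch like the following (#pragma pack is supported by GCC, Clang, and MSVC; the exact sizes depend on the target):
#include <stdio.h>

struct natural { char c; int i; };   /* typically 8 bytes: 3 padding bytes after c */

#pragma pack(push, 1)
struct packed { char c; int i; };    /* 5 bytes: padding suppressed */
#pragma pack(pop)

int main(void)
{
    printf("natural: %zu, packed: %zu\n",
           sizeof(struct natural), sizeof(struct packed));
    return 0;
}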
The layout of data (e.g. structures, etc.) and the call protocol (how calls are made at the processor level) are defined in a processor- and operating-system-specific document called the Application Binary Interface. If both compilers follow the same ABI (for the same processor and the same operating system), their generated code should be interoperable.
See e.g. the wikipage for x86 calling conventions and the x86-64 ABI specification.
Name mangling, notably for C++, might also be an issue.
Read also Levine's book, Linkers and Loaders.
So I'm writing portable embedded ANSI C code that attempts to support multiple compilers and hardware targets. Each compiler/hardware vendor supports different math.h functions. Some support only C90, some support a subset of C99, others a full set of C99.
I'm trying to find a way to check, during preprocessing, whether a given function exists, so that I can use a custom macro if it doesn't. Some vendors have extern functions in math.h; some use #define to remap to some internal call. Is there a piece of code that can tell whether something is #defined or an extern function? I can use #ifdef for the define, but what about an actual function call?
The usual solution is instead to look at macros defined by the preprocessor itself, or passed into the build process as -D definitions, which identify the compiler and platform you're running on, and then use those, plus your knowledge of what special assists each environment needs, to configure your code.
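As a sketch of that approach, using the standard __STDC_VERSION__ macro (log2 was added in C99, so the fallback below is one way to cover C90-only toolchains):
#include <math.h>

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
/* C99 (or later) library: log2 is required to exist. */
#define LOG2(x) log2(x)
#else
/* C90 fallback: derive it from log; slightly less accurate. */
#define LOG2(x) (log(x) / log(2.0))
#endif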
I suppose you could write a series of test .c files, try compiling them, look at the error codes coming back, and use those to set appropriate -D flags... but I'm not convinced that would be any cleaner.
Is there any special C standard for microcontrollers?
I ask because so far when I programmed something under Windows OS, it doesn't matter which compiler I used. If I had a compiler for C99, I knew what I could do with it.
But recently I started to program in C for microcontrollers, and I was shocked that, even though it's still C in its basics (loops, variable declarations and so on), there are syntax forms I have never seen in C for desktop computers. Furthermore, the syntax changes from version to version. I use the AVR-GCC compiler, and in previous versions you used a function for port I/O, whereas now you can handle a port like a variable in the new version.
What defines which functions are implemented by a compiler, and how, while it can still be called C?
Is there any special C standard for microcontrollers?
No, there is just the ISO C standard. Because many small devices have special architectural features that need to be supported, many compilers provide language extensions. For example, because an 8051 has bit-addressable RAM, a _bit data type may be provided. It also has a Harvard architecture, so keywords are provided for specifying different memory address spaces, which an address alone does not resolve, since different instructions are required to address these spaces. Such extensions will be clearly indicated in the compiler documentation. Moreover, extensions in a conforming compiler should be prefixed with an underscore; many compilers provide unadorned aliases for backward compatibility, but their use should be deprecated.
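For illustration, 8051 extensions typically look something like this (Keil C51-style spellings are shown as an example; IAR spells them __bit, __code, and so on):
bit  tx_ready;                /* a single bit in the bit-addressable RAM area */
char code banner[] = "OK";    /* constant data placed in code (program) space */
char xdata big_buf[256];      /* external data space; needs different instructions */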
... when I programmed something under Windows OS, it doesn't matter which compiler I used.
That is because the Windows API is standardized (by Microsoft), and it only runs on x86, so there is no architectural variation to consider. That said, you may still see FAR and NEAR macros in APIs; they are a throwback to 16-bit x86 with its segmented addressing, which also required compiler extensions to handle.
... that even it's still C in its basics, like loops, variables creation and so,
I am not sure what that means. A typical microcontroller application has no OS, or only a simple kernel, so you should expect to see a lot more 'bare metal' or 'system-level' code, because there are no extensive OS APIs and device-driver interfaces to do lots of work under the hood for you. All those library calls are just that; they are not part of the language. It is the same C language, just put to different work.
... there is some syntax type I have never seen in C for desktop computers.
For example...?
And furthermore, the syntax is changing from version to version.
I doubt it. Again; for example...?
I use AVR-GCC compiler, and in previous versions, you used a function for port I/O, now you can handle a port like a variable in the new version.
That is not down to changes in the language or the compiler, but more likely simple 'preprocessor magic'. On AVR, all I/O is memory mapped, so if, for example, you include the device support header, it may contain a declaration such as:
#define PORTA (*((volatile char*)0x0100))
You can then write:
PORTA = 0xFF;
to write 0xFF to the memory-mapped register at address 0x100. You could just take a look at the header file and see exactly how it is done.
The GCC documentation describes target-specific variations; AVR is specifically dealt with here, in section 6.36.8 and in 3.17.3. If you compare that with other targets supported by GCC, it has very few extensions, perhaps because the AVR architecture and instruction set were specifically designed for clean and efficient implementation of a C compiler without extensions.
What defines which functions are implemented by a compiler, and how, while it can still be called C?
It is important to realise that the C programming language is a distinct entity from its libraries, and that functions provided by libraries are no different from the ones you might write yourself - they are not part of the language - so it can be C with no library whatsoever. Ultimately, library functions are written using the same basic language elements. You cannot expect the level of abstraction present in, say, the Win32 API to exist in a library intended for a microcontroller. You can in most cases expect at least a subset of the C Standard Library to be implemented since it was designed as a systems level library with few target hardware dependencies.
I have been writing C and C++ for embedded and desktop systems for years and do not recognise the huge differences you seem to perceive, so can only assume that they are the result of a misunderstanding of what constitutes the C language. The following books may help.
C Programming Language (2nd Edition) by Brian W. Kernighan and Dennis M. Ritchie
Embedded C by Michael J. Pont
Embedded systems are weird and sometimes have exceptions to "standard" C.
From system to system you will have different ways to do things like declare interrupts, or define what variables live in different segments of memory, or run "intrinsics" (pseudo-functions that map directly to assembly code), or execute inline assembly code.
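For example, interrupt handlers alone are declared differently by almost every toolchain. Each line below shows one vendor's convention in isolation (vector numbers and function names are illustrative):
void timer_isr(void) __interrupt(1) { /* ... */ }   /* SDCC (8051) */
ISR(TIMER0_OVF_vect) { /* ... */ }                  /* avr-gcc with <avr/interrupt.h> */
void __irq uart_handler(void) { /* ... */ }         /* Arm Compiler (armcc) */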
But the basics of control flow (for/if/while/switch/case) and variable and function declarations should be the same across the board.
and in previous versions, you used a function for port I/O, now you can handle a port like a variable in the new version.
That's not part of the C language; that's part of a device support library. That's something each manufacturer will have to document.
The C language assumes a von Neumann architecture (one address space for all code and data) which not all architectures actually have, but most desktop/server class machines do have (or at least present with the aid of the OS). To get around this without making horrible programs, the C compiler (with help from the linker) often support some extensions that aid in making use of multiple address spaces efficiently. All of this could be hidden from the programmer, but it would often slow down and inflate programs and data.
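avr-gcc's named address spaces are one example of such an extension (a sketch; __flash needs GCC 4.7 or later in GNU C mode and is not available in C++):
const __flash char msg[] = "Hello";   /* object lives in program (flash) memory */

char first_char(void)
{
    return msg[0];                    /* the compiler emits LPM to read flash */
}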
As far as how you access device registers goes - on different desktop/server class machines this is very different as well, but since programs written to run under common modern OSes for these machines (Mac OS X, Windows, BSDs, or Linux) don't normally access hardware directly, this isn't an issue. There is OS code that has to deal with these issues, though. This is usually done by defining macros and/or functions that are implemented differently on different architectures, or that even have multiple versions on a single system, so that a driver can work for a particular device (such as an Ethernet chip) whether it is on a PCI card or a USB dongle (possibly plugged into a USB card plugged into a PCI slot), or directly mapped into the processor's address space.
Additionally, the C standard library makes more assumptions than the compiler (and language proper) about the system that hosts the programs that use it (the C standard library). These things just don't make sense when there isn't a general purpose OS or filesystem. fopen makes no sense on a system without a filesystem, and even printf might not be easily definable.
As far as what AVR-GCC and its libraries do - there is a lot that goes into how this is done. The AVR is a Harvard architecture with memory-mapped device control registers, special function registers, and general purpose registers (memory addresses 0-31), and a different address space for code and constant data. This already falls outside of what standard C assumes. Some of the registers (general, special, and device control) are accessible via special instructions for things like flipping single bits, and writing to some multi-byte registers (a multi-instruction operation) implicitly blocks interrupts for the next instruction (so that the second half of the operation can happen). These are things that desktop C programs don't have to know anything about, and since AVR-GCC comes from regular GCC, it didn't initially understand all of these things either. That meant that the compiler wouldn't always use the best instructions to access control registers, so:
*(DEVICE_REG_ADDR) |= 1; // Set BIT0 of control register REG
would have turned into:
temp_reg = *DEVICE_REG_ADDR;
temp_reg |= 1;
*DEVICE_REG_ADDR = temp_reg;
because the AVR generally has to have things in its general purpose registers to do bit operations on them, though for some memory locations this isn't true. AVR-GCC had to be altered to recognise that when the address of a variable used in certain operations is known at compile time and lies within a certain range, it can use different instructions to perform those operations. Prior to this, AVR-GCC just provided you with some macros (that looked like functions) containing inline assembly to do this (using the single-instruction implementations that GCC now generates itself). If they no longer provide the macro versions of these operations, that's probably a bad choice since it breaks old code, but allowing you to access these registers as though they were normal variables, once the ability to do so efficiently and atomically was implemented, is good.
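For reference, the shift described above looked roughly like this (sbi and cbi were real, since-removed avr-libc macros; PORTB is a real avr-libc register name):
#include <avr/io.h>

void bit_flip_example(void)
{
    /* Old style (deprecated avr-libc macros): sbi(PORTB, 0); cbi(PORTB, 0); */
    PORTB |= (1 << 0);     /* modern style: compiles to a single SBI instruction */
    PORTB &= ~(1 << 0);    /* ... and this to a single CBI */
}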
I have never seen a C compiler for a microcontroller which did not have some controller-specific extensions. Some compilers are much closer to meeting ANSI standards than others, but for many microcontrollers there are tradeoffs between performance and ANSI compliance.
On many 8-bit microcontrollers, and even some 16-bit ones, accessing variables on a stack frame is slow. Some compilers will always allocate automatic variables on a run-time stack despite the extra code required to do so, some will allocate automatic variables at compile time (allowing variables that are never live simultaneously to overlap), and some allow the behavior to be controlled with command-line options or #pragma directives. When coding for such machines, I sometimes like to #define a macro called "auto" which gets redefined to "static" if it will help things work faster, as sketched below.
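That trick looks like this (a sketch; the STATIC_LOCALS build switch is made up, and the substitution is only safe for functions that are never re-entered, since statics are allocated once):
#ifdef STATIC_LOCALS        /* hypothetical build switch */
#define auto static         /* place locals at fixed addresses instead of the stack */
#endif

void filter_step(void)
{
    auto int acc;           /* becomes 'static int acc;' on constrained builds */
    acc = 0;                /* assign rather than initialise, so both builds behave alike */
    /* ... */
}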
Some compilers have a variety of storage classes for memory. You may be able to improve performance greatly by declaring things to be of suitable storage classes. For example, an 8051-based system might have 96 bytes of "data" memory, 224 bytes of "idata" memory which overlaps the first 96 bytes, and 4K of "xdata" memory.
Variables in "data" memory may be accessed directly.
Variables in "idata" memory may only be accessed by loading their address into a one-byte pointer register. There is no extra overhead accessing them in cases where that would be necessary anyway, so idata memory is great for arrays. If array q is stored in idata memory, a reference to q[i] will be just as fast as if it were in data memory, though a reference to q[0] will be slower (in data memory, the compiler could pre-compute the address and access it without a pointer register; in idata memory that is not possible).
Variables in xdata memory are far slower to access than those in other types, but there's a lot more xdata memory available.
If one tells an 8051 compiler to put everything in "data" by default, one will "run out of memory" if one's variables total more than 96 bytes and one hasn't instructed the compiler to put anything elsewhere. If one puts everything in "xdata" by default, one can use a lot more memory without hitting a limit, but everything will run slower. The best is to place frequently-used variables that will be directly accessed in "data", frequently-used variables and arrays that are indirectly accessed in "idata", and infrequently-used variables and arrays in "xdata".
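In Keil C51 syntax, that placement strategy looks something like this (a sketch; the names and sizes are illustrative):
unsigned char data  counter;        /* frequently used, directly accessed */
unsigned char idata ring_buf[32];   /* frequently used array, accessed indirectly */
unsigned char xdata log_buf[1024];  /* infrequently used bulk data */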
The vast majority of the standard C language is common with microcontrollers. Interrupts do tend to have slightly different conventions, although not always.
Treating ports like variables is a result of the fact that the registers are mapped to locations in memory on most microcontrollers, so by writing to the appropriate memory location (defined as a variable with a preset location in memory), you set the value on that port.
As previous contributors have said, there is no standard as such, mainly due to different architectures.
Having said that, Dynamic C (sold by Rabbit Semiconductor) is described as "C with real-time extensions". As far as I know, the compiler only targets Rabbit processors, but there are useful additional keywords (for example, costate, cofunc, and waitfor), some real peculiarities (for example, #use mylib.lib instead of #include mylib.h - and no linker), and several omissions from ANSI C (for example, no file-scope static variables).
It's still described as 'C' though.
Wiring has a C-based language syntax. Perhaps you might want to look at what makes it so.