What does FAR mean in C?

const struct sockaddr FAR* name,

It's an old extension from the era of segmented memory architectures. It basically means "this is a pointer that needs to be able to point at any address, not just things in the same segment as the code using it".
For more, see the Wikipedia page on far pointers.

far doesn't mean anything in C. Check out the C99 standard [PDF] and see if you can find mention of far pointers. Far pointers were an extension added to compilers targeting the 8086/80286 architectures to provide support for the segmented memory model.

It does nothing unless you happen to be using a 16-bit x86 compiler.
If you look in the Win32 header WinDef.h (in Visual Studio, simply right-click the word FAR in the source and select "Go to Definition"), you will see that it is a macro defined as far, which in turn is also a macro defined as nothing at all!
It is only there to allow legacy Win16 source to compile as Win32. In 16-bit x86 compilers, far was a compiler extension keyword to support segment:offset pointers, which resolve to a 20-bit address (16-bit x86 only had a 1 MB address space!). They are distinct from 16-bit near pointers, which comprised only the offset within the current segment.
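For illustration, here is a sketch that only a 16-bit DOS compiler (e.g. Borland or Microsoft C) would accept; near and far are those compilers' extensions, not standard C, and the address values are just the classic CGA example:

int main(void)
{
    /* near pointer: 16-bit offset, implicitly within the current data segment */
    char near *n = (char near *)0x8000;

    /* far pointer: 16-bit segment + 16-bit offset; the physical address is
       segment * 16 + offset, giving the 20-bit (1 MB) real-mode range */
    char far *f = (char far *)0xB8000000UL;  /* segment 0xB800, offset 0x0000 */

    *f = 'A';  /* first character cell of the CGA text buffer */
    return 0;
}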

Related

Should one use Named Address Spaces where they are available?

There are some architectures which have multiple address spaces; notable examples are true Harvard machines, but OpenCL, for example, also has this property.
C compilers may provide solutions to this. One of them is Named Address Spaces: special pointer qualifiers that indicate which address space a pointer points into. Other solutions might also be present.
For GCC, the corresponding documentation is here: https://gcc.gnu.org/onlinedocs/gcc-4.7.0/gcc/Named-Address-Spaces.html
For IAR targeting the AVR, the corresponding documentation is here: https://www.iar.com/support/tech-notes/compiler/strings-with-iccavr-2.x/ (note that this predates GCC's support; GCC likely adapted the idea for its 8-bit AVR target).
For SDCC (Small Device C compiler): http://sdcc.sourceforge.net/doc/sdccman.pdf , starts on Page 36. Covers microcontrollers like the 8051, Z80 and 68HC08.
Some information for OpenCL: https://www.khronos.org/registry/OpenCL/sdk/1.1/docs/man/xhtml/local.html and https://software.intel.com/en-us/articles/the-generic-address-space-in-opencl-20
I didn't know about them before; on the architecture I am using (8-bit AVR), there is another solution to the problem: specialized macros (pgmspace.h) for working with data in ROM. But these have no type checks, and (in my opinion) they make code ugly. So Named Address Spaces seem to me a superior, and possibly even more portable, way to deal with the problem (portable in the sense that such software could easily be ported to a target with a single address space by providing empty definitions for the address-space qualifiers).
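For concreteness, a minimal sketch contrasting the two approaches on the 8-bit AVR (assuming avr-gcc 4.7 or later for __flash, which works in C but not C++, and avr-libc for pgmspace.h):

#include <stdint.h>
#include <avr/pgmspace.h>

/* pgmspace.h approach: PROGMEM places the data in flash, but the pointer
   type says nothing about the address space, so nothing stops you from
   dereferencing it directly (which would silently read the wrong memory). */
static const uint8_t table_pgm[] PROGMEM = {1, 2, 3, 4};
uint8_t read_pgm(uint8_t i) { return pgm_read_byte(&table_pgm[i]); }

/* Named Address Space approach: __flash is part of the type, so mixing
   RAM and flash pointers is a compile-time error. */
static const __flash uint8_t table_nas[] = {1, 2, 3, 4};
uint8_t read_nas(uint8_t i) { return table_nas[i]; }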
However, in a previous question (where I first learned of their availability), answers suggesting the use of Named Address Spaces were severely downvoted: How to make two otherwise identical pointer types incompatible
The downvoters didn't provide any explanation, and I couldn't find one myself; to me, Named Address Spaces seem like a good and perfectly functional way of dealing with the problem.
Could anyone provide an explanation? Why should Named Address Spaces probably not be used (in favor of whatever other method is available on a target with multiple distinct address spaces)?
Another approach is to steal a technique used in things like the Linux kernel and tools like smatch.
Linux has defines like
#define __user
which means the code can say things like int foo(const __user char *p). The compiler ignores the __user, but tools like smatch are then used to make sure that pointers don't accidentally wander between address spaces.
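For reference, a simplified sketch of the shape this takes in the kernel (the real definition lives in include/linux/compiler_types.h; recent kernels spell the attribute slightly differently): under the sparse/smatch checker the macro expands to an address-space attribute, and to nothing for the real compiler.

#ifdef __CHECKER__
# define __user __attribute__((noderef, address_space(1)))
#else
# define __user
#endif

/* sparse/smatch now warn if a plain pointer and a __user pointer mix */
long copy_user_name(char *dst, const __user char *src, unsigned long n);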
The problem with these is obvious: they only work on the gcc compiler.
And in the embedded systems branch there are lots of different compilers, each offering its own unique, non-portable way to do this. Sometimes that is fine (most embedded projects never get ported to different compilers) but from a generic point-of-view, it is not.
(The very same issue also exists with extended addresses, for example if you use an 8- or 16-bit MCU with more than 64 KiB of addressable memory. Compilers then use various non-standard extensions such as near and far.)
One solution to these problems is to make a "wrapper" around the compiler-specific behavior by writing a hardware abstraction layer (HAL): specify that the type used for storing data in flash is flash_byte_t or some such, then from your HAL include a compiler-specific header file containing the actual typedef, such as typedef const __flash uint8_t flash_byte_t;. For example, the application includes "compiler.h", which in turn includes "gcc.h". That way you only need to rewrite one small header file when you switch compilers.
Also, as it turns out, C accepts const flash_byte_t just fine even though the typedef is already const-qualified. Since C99 there's a special rule saying that the same qualifier may appear more than once in a declaration, directly or via typedefs, and the extra occurrences are ignored; so const const int x is equivalent to const int x. This means that if the user adds extra const qualification, that's fine.
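A minimal sketch of that wrapper layout (the file and type names here are hypothetical; __ICCAVR__ is IAR's predefined macro and __AVR__ is avr-gcc's):

/* compiler.h - the only header application code includes */
#if defined(__GNUC__) && defined(__AVR__)
# include "gcc_avr.h"
#elif defined(__ICCAVR__)
# include "iar_avr.h"
#else
# error "port me: define flash_byte_t for this compiler"
#endif

/* gcc_avr.h - the avr-gcc back-end */
#include <stdint.h>
typedef const __flash uint8_t flash_byte_t;

/* application code - compiler-agnostic; the extra const is harmless */
static const flash_byte_t greeting[] = "hello";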
Note that it's mostly AVR being a special exception here, because of its weird Harvard model.
Otherwise, there's an industry de facto standard convention used by most compilers: all const qualified variables with static storage duration should be allocated in flash. Of course the C standard makes no guarantees of this (it is out of scope of the standard), but most embedded compilers behave like that.

how to enable __far in gcc cygwin

I am having trouble compiling code that uses __far.
I have read that __far is not a standard C keyword.
Furthermore, this is in relation to the use of an RL78 compiler.
C implementations for architectures with non-flat address spaces usually had these two pointer "classes":
near pointers that store the offset within a memory segment
far pointers that additionally specify what segment
Many compilers implemented the latter using a non-standard __far specifier. The 16-bit x86 used to be such an architecture.
But Cygwin is only available for 32-bit x86 and x86_64 versions of Windows, and on those there is no concept of near and far pointers anymore.
In order to compile your code, you will need to strip out the __far qualifiers and hope that the code itself isn't too tightly coupled to your original architecture.
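One common way to do that without editing every declaration is to define the keyword away, assuming nothing in the code relies on actual far semantics. You can pass -D__far= to gcc, or keep a guard in a shared header (__RL78__ is the macro the GCC RL78 port predefines; adjust for your original toolchain):

#ifndef __RL78__
#define __far   /* flat address space: the qualifier expands to nothing */
#endif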


Is there any C standard for microcontrollers?

Is there any special C standard for microcontrollers?
I ask because, so far, when I programmed something under Windows, it didn't matter which compiler I used. If I had a C99 compiler, I knew what I could do with it.
But recently I started programming in C for microcontrollers, and I was shocked that, even though it is still C at its core (loops, variable declarations, and so on), there are syntax constructs I have never seen in C for desktop computers. Furthermore, the syntax changes from version to version. I use the AVR-GCC compiler, and in previous versions you used a function for port I/O; now you can handle a port like a variable.
What defines which functions and extensions a compiler may provide, and how can the result still be called C?
Is there any special C standard for microcontrollers?
No, there is just the ISO C standard. Because many small devices have special architectural features that need to be supported, many compilers provide language extensions. For example, because the 8051 has bit-addressable RAM, a _bit data type may be provided. It also has a Harvard architecture, so keywords are provided for specifying different memory address spaces, which an address alone cannot resolve, since different instructions are required to access those spaces. Such extensions will be clearly indicated in the compiler documentation. Moreover, extensions in a conforming compiler should be prefixed with an underscore; many compilers provide unadorned aliases for backward compatibility, but their use should be deprecated.
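As an illustration, here is what two such extensions look like in SDCC's spelling for the 8051 (Keil C51 uses the unprefixed aliases bit and sbit mentioned above); this is non-standard C:

__bit busy_flag;          /* object in the 8051's bit-addressable RAM  */
__sbit __at(0x90) LED;    /* one bit of SFR 0x90, i.e. port P1, bit 0  */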
... when I programmed something under Windows, it didn't matter which compiler I used.
Because the Windows API is standardized (by Microsoft) and runs only on x86, there is no architectural variation to consider. That said, you may still see FAR and NEAR macros in the APIs, and that is a throwback to 16-bit x86 with its segmented addressing, which also required compiler extensions to handle.
... that even though it is still C at its core (loops, variable declarations, and so on),
I am not sure what that means. A typical microcontroller application has no OS, or only a simple kernel, so you should expect to see a lot more 'bare metal' or 'system-level' code, because there are no extensive OS APIs and device-driver interfaces to do lots of work under the hood for you. All those library calls are just that: library calls. They are not part of the language; it is the same C language, just put to different work.
... there are syntax constructs I have never seen in C for desktop computers.
For example...?
Furthermore, the syntax changes from version to version.
I doubt it. Again: for example...?
I use the AVR-GCC compiler, and in previous versions you used a function for port I/O; now you can handle a port like a variable.
That is not down to changes in the language or compiler, but more likely to simple 'preprocessor magic'. On AVR, all I/O is memory mapped, so if, for example, you include the device support header, it may contain a declaration such as:
#define PORTA (*((volatile char*)0x0100))
You can then write:
PORTA = 0xFF;
to write 0xFF to the memory-mapped register at address 0x100. You could just take a look at the header file and see exactly how it does it.
The GCC documentation describes target-specific variations; AVR is dealt with specifically in sections 6.36.8 and 3.17.3 of the GCC manual. If you compare that with other targets supported by GCC, it has very few extensions, perhaps because the AVR architecture and instruction set were specifically designed for clean and efficient implementation of a C compiler without extensions.
What defines which functions and extensions a compiler may provide, and how can the result still be called C?
It is important to realise that the C programming language is a distinct entity from its libraries, and that functions provided by libraries are no different from the ones you might write yourself - they are not part of the language - so it can be C with no library whatsoever. Ultimately, library functions are written using the same basic language elements. You cannot expect the level of abstraction present in, say, the Win32 API to exist in a library intended for a microcontroller. You can in most cases expect at least a subset of the C Standard Library to be implemented since it was designed as a systems level library with few target hardware dependencies.
I have been writing C and C++ for embedded and desktop systems for years and do not recognise the huge differences you seem to perceive, so can only assume that they are the result of a misunderstanding of what constitutes the C language. The following books may help.
C Programming Language (2nd Edition) by Brian W. Kernighan and Dennis M. Ritchie
Embedded C by Michael J. Pont
Embedded systems are weird and sometimes have exceptions to "standard" C.
From system to system you will have different ways to do things like declare interrupts, or define what variables live in different segments of memory, or run "intrinsics" (pseudo-functions that map directly to assembly code), or execute inline assembly code.
But the basics of control flow (for/if/while/switch/case) and variable and function declarations should be the same across the board.
and in previous versions, you used a function for port I/O; now you can handle a port like a variable in the new version.
That's not part of the C language; that's part of a device support library. That's something each manufacturer will have to document.
The C language assumes a von Neumann architecture (one address space for all code and data) which not all architectures actually have, but most desktop/server class machines do have (or at least present with the aid of the OS). To get around this without making horrible programs, the C compiler (with help from the linker) often support some extensions that aid in making use of multiple address spaces efficiently. All of this could be hidden from the programmer, but it would often slow down and inflate programs and data.
As far as how you access device registers -- on different desktop/server class machines this is very different as well, but since programs written to run under common modern OSes for these machines (Mac OS X, Windows, BSDs, or Linux) don't normally access hardware directly, this isn't an issue. There is OS code that has to deal with these issues, though. This is usually done through defining macros and/or functions that are implemented differently on different architectures, or even have multiple versions on a single system, so that a driver could work for a particular device (such as an Ethernet chip) whether it is on a PCI card or a USB dongle (possibly plugged into a USB card plugged into a PCI slot), or directly mapped into the processor's address space.
Additionally, the C standard library makes more assumptions about the system hosting the program than the compiler (and the language proper) do. These assumptions just don't make sense when there isn't a general-purpose OS or filesystem: fopen makes no sense on a system without a filesystem, and even printf might not be easily definable.
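As a concrete example of what "not easily definable" means, avr-libc makes printf meaningful only after you supply the character sink yourself. A sketch assuming avr-libc on an ATmega328P-style part with the UART already initialised:

#include <stdio.h>
#include <avr/io.h>

/* Character sink: busy-wait until the UART data register is free */
static int uart_putchar(char c, FILE *stream)
{
    (void)stream;
    loop_until_bit_is_set(UCSR0A, UDRE0);
    UDR0 = c;
    return 0;
}

static FILE uart_out = FDEV_SETUP_STREAM(uart_putchar, NULL, _FDEV_SETUP_WRITE);

int main(void)
{
    /* ...UART baud-rate setup omitted... */
    stdout = &uart_out;      /* printf now means "send to the UART" */
    printf("hello from bare metal\n");
    for (;;) {}
}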
As far as what AVR-GCC and its libraries do -- there is a lot that goes into how this is done. The AVR is a Harvard architecture with memory-mapped device control registers, special function registers, and general purpose registers (memory addresses 0-31), and a different address space for code and constant data. This already falls outside of what standard C assumes. Some of the registers (general, special, and device control) are accessible via special instructions for things like flipping single bits, and writing to some multi-byte registers (a multi-instruction operation) implicitly blocks interrupts for the next instruction (so that the second half of the operation can complete atomically). These are things that desktop C programs don't have to know anything about, and since AVR-GCC comes from regular GCC, it didn't initially understand all of these things either. That meant that the compiler wouldn't always use the best instructions to access control registers, so:
*(DEVICE_REG_ADDR) |= 1; // Set BIT0 of control register REG
would have turned into:
temp_reg = *DEVICE_REG_ADDR;
temp_reg |= 1;
*DEVICE_REG_ADDR = temp_reg;
because the AVR generally has to have things in its general purpose registers to do bit operations on them, though for some memory locations this isn't true. AVR-GCC had to be altered to recognize that when the address of a variable used in certain operations is known at compile time and lies within a certain range, it can use different instructions to perform these operations. Prior to this, AVR-GCC just provided you with some macros (that looked like functions) containing inline assembly to do this (using the single-instruction implementations that GCC now generates itself). If they no longer provide the macro versions of these operations, that's probably a bad choice, since it breaks old code; but allowing you to access these registers as though they were normal variables, once the ability to do so efficiently and atomically was implemented, is good.
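A sketch of the improved behavior (assuming avr-gcc and an ATmega328P-style part, where PORTB sits in the low I/O range reachable by the single-bit instructions):

#include <avr/io.h>

void led_on(void)
{
    /* The address is a compile-time constant in SBI/CBI range, so modern
       avr-gcc emits a single atomic 'sbi 0x05, 0' for this read-modify-write
       instead of the three-instruction load/or/store sequence shown above. */
    PORTB |= (1 << PB0);
}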
I have never seen a C compiler for a microcontroller which did not have some controller-specific extensions. Some compilers are much closer to meeting ANSI standards than others, but for many microcontrollers there are tradeoffs between performance and ANSI compliance.
On many 8-bit microcontrollers, and even some 16-bit ones, accessing variables on a stack frame is slow. Some compilers will always allocate automatic variables on a run-time stack despite the extra code required to do so, some will allocate automatic variables at compile time (allowing variables that are never live simultaneously to overlap), and some allow the behavior to be controlled with command-line options or #pragma directives. When coding for such machines, I sometimes like to #define a macro called "auto" which gets redefined to "static" if it will help things work faster.
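A sketch of that trick, with the obvious caveats: redefining a keyword is only safe if no standard header is included afterwards, static locals are initialised once rather than per call, and recursion/reentrancy break. STACK_ACCESS_IS_SLOW is a hypothetical configuration macro:

#ifdef STACK_ACCESS_IS_SLOW
# define auto static
#endif

int scale(int x)
{
    auto int tmp;    /* 'static int tmp' on the slow-stack target */
    tmp = x * 2;
    return tmp;
}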
Some compilers have a variety of storage classes for memory. You may be able to improve performance greatly by declaring things to be of suitable storage classes. For example, an 8051-based system might have 96 bytes of "data" memory, 224 bytes of "idata" memory which overlaps the first 96 bytes, and 4K of "xdata" memory.
Variables in "data" memory may be accessed directly.
Variables in "idata" memory may only be accessed by loading their address into a one-byte pointer register. There is no extra overhead accessing them in cases where that would be necessary anyway, so idata memory is great for arrays. If array q is stored in idata memory, a reference to q[i] will be just as fast as if it were in data memory, though a reference to q[0] will be slower (in data memory, the compiler could pre-compute the address and access it without a pointer register; in idata memory that is not possible).
Variables in xdata memory are far slower to access than those in other types, but there's a lot more xdata memory available.
If one tells an 8051 compiler to put everything in "data" by default, one will "run out of memory" if one's variables total more than 96 bytes and one hasn't instructed the compiler to put anything elsewhere. If one puts everything in "xdata" by default, one can use a lot more memory without hitting a limit, but everything will run slower. The best is to place frequently-used variables that will be directly accessed in "data", frequently-used variables and arrays that are indirectly accessed in "idata", and infrequently-used variables and arrays in "xdata".
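In SDCC's spelling (Keil C51 uses the unprefixed data/idata/xdata), that placement advice looks roughly like this:

__data  unsigned char flags;          /* hot scalar: direct-addressed, fastest  */
__idata unsigned char ring[32];       /* hot array: pointer-addressed access    */
__xdata unsigned char log_buf[1024];  /* infrequently used bulk: slow but roomy */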
The vast majority of the standard C language is common with microcontrollers. Interrupts do tend to have slightly different conventions, although not always.
Treating ports like variables is a result of the fact that the registers are mapped to locations in memory on most microcontrollers, so by writing to the appropriate memory location (defined as a variable with a preset location in memory), you set the value on that port.
As previous contributors have said, there is no standard as such, mainly due to different architectures.
Having said that, Dynamic C (sold by Rabbit Semiconductor) is described as "C with real-time extensions". As far as I know, the compiler only targets Rabbit processors, but there are useful additional keywords (for example, costate, cofunc, and waitfor), some real peculiarities (for example, #use mylib.lib instead of #include mylib.h - and no linker), and several omissions from ANSI C (for example, no file-scope static variables).
It's still described as 'C' though.
Wiring has a C-based language syntax. Perhaps you might want to look at what makes it that way.

Are macro definitions compatible between MIPS and Intel C compiler?

I seem to be having a problem with a macro that I have defined in a C program.
I compile this software and run it successfully with the MIPS compiler.
It builds OK but throws "Segmentation fault" at runtime when built with icc.
I compiled both of these on 64-bit architectures (MIPS on SGI with the -64 flag, and icc on an Intel platform).
Is there some magic switch I need to use to make this work correctly on both systems? I turned on warnings for the Intel compiler, and EVERY one of the places in my program where the macro is invoked throws a warning, usually something along the lines of mismatched types on the macro's parameters (int to char *) or some such thing.
Here is the offending macro
#define DEBUG_ENTER(name) {tdepth++; \
    if(tnames[tdepth] == NULL) tnames[tdepth] = memalign(8, sizeof(char)*MAXLEN); \
    strcopy(tnames[tdepth],name); \
    FU_DEBUG("Entering \n");}
This basically is used for debugging - printing to a log file with a set number of tabs in based on how many function calls there are. (tdepth = tab depth)
I did some checking around in the man pages. It seems like memalign is only supported on IRIX. This may be my problem; I am going to track it down.
This might have to do with the system's endianness. MIPS has switchable endianness; I'm not sure whether you are already using the correct endianness, but if you aren't, you will DEFINITELY have problems.
This might be a byte-order issue. MIPS can be big-endian, but Intel is little-endian.
It sounds like the array tnames is an array of int. If you're assigning pointers to it, it should be an array of a pointer type - in this case probably char * is appropriate.
(Also, strcopy() isn't a standard function - are you sure you don't mean strcpy()?)
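A sketch of declarations consistent with that advice (MAXLEN, MAXDEPTH, and FU_DEBUG are guesses/assumptions from the question; memalign is replaced by portable malloc, and strcopy by the standard strcpy):

#include <stdlib.h>   /* malloc */
#include <string.h>   /* strcpy */

#define MAXLEN   128  /* guessed value */
#define MAXDEPTH 64   /* guessed value */

static char *tnames[MAXDEPTH];   /* array of char *, not int */
static int   tdepth = -1;

#define DEBUG_ENTER(name) do {                                       \
        tdepth++;                                                    \
        if (tnames[tdepth] == NULL) tnames[tdepth] = malloc(MAXLEN); \
        strcpy(tnames[tdepth], (name));                              \
        FU_DEBUG("Entering \n");                                     \
    } while (0)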
