Can a 32-bit processor really address 2^32 memory locations?

I feel this might be a weird/stupid question, but here goes...
In the question Is NULL in C required/defined to be zero?, it has been established that the NULL pointer points to an unaddressable memory location, and also that NULL is 0.
Now, supposedly a 32-bit processor can address 2^32 memory locations.
2^32 is only the number of distinct numbers that can be represented using 32 bits. Among those numbers is 0. But since 0, that is, NULL, is supposed to point to nothing, shouldn't we say that a 32-bit processor can only address 2^32 - 1 memory locations (because the 0 is not supposed to be a valid address)?

If a 32-bit processor can address 2^32 memory locations, that simply means that a C pointer on that architecture can refer to 2^32 - 1 locations plus NULL.

the NULL pointer points to an unaddressable memory location
This is not true. From the accepted answer in the question you linked:
Notice that, because of how the rules for null pointers are formulated, the value you use to assign/compare null pointers is guaranteed to be zero, but the bit pattern actually stored inside the pointer can be any other thing
Most platforms of which I am aware do in fact handle this by marking the first few pages of address space as invalid. That doesn't mean the processor can't address such things; it's just a convenient way of making low values an invalid pointer. For instance, several Windows APIs use this to distinguish between a resource ID and a pointer to actual data; everything below a certain value (65k if I recall correctly) is not a valid pointer, but is a valid resource ID.
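For a flavor of that convention, here is a self-contained sketch in C. Windows' real check is the IS_INTRESOURCE macro in <winuser.h>; the stand-alone version below merely mimics its 64K cutoff:

#include <stdint.h>
#include <stdio.h>

/* Mimics Windows' IS_INTRESOURCE: values below 65536 are treated as
   resource IDs, never as pointers to real data. */
static int is_int_resource(const void *p)
{
    return ((uintptr_t)p >> 16) == 0;
}

int main(void)
{
    const void *res = (const void *)(uintptr_t)42;  /* a small integer "handle" */
    int data = 7;

    printf("42    -> %s\n", is_int_resource(res)   ? "resource ID" : "pointer");
    printf("&data -> %s\n", is_int_resource(&data) ? "resource ID" : "pointer");
    return 0;
}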
Finally, just because C says something doesn't mean that the CPU needs to be restricted that way. Sure, C says accessing the null pattern is undefined -- but there's no reason someone writing in assembly need be subject to such limitations. Real machines typically can do much more than the C standard says they have to. Virtual memory, SIMD instructions, and hardware IO are some simple examples.

First, let's note the difference between the linear address (AKA the value of the pointer) and the physical address. While the linear address space is, indeed, 32 bits (AKA 2^32 different bytes), the physical address that goes to the memory chip is not the same. Parts ("pages") of the linear address space might be mapped to physical memory, or to a page file, or to an arbitrary file, or marked as inaccessible and not backed by anything. The zeroth page happens to be the latter. The mapping mechanism is implemented on the CPU level and maintained by the OS.
That said, the zero address being unaddressable memory is just a C convention that's enforced by every protected-mode OS since the first Unices. In MS-DOS-era real-mode operating systems, the null far pointer (0000:0000) was perfectly addressable; however, writing there would ruin system data structures and bring nothing but trouble. The null near pointer (DS:0000) was also perfectly accessible, but the run-time library would typically reserve some space around zero to protect against accidental null pointer dereferencing. Also, in real mode (as in DOS) the address space was not a flat 32-bit one; it was effectively 20-bit.

It depends upon the operating system. It is related to virtual memory and address spaces.
In practice (at least on 32-bit x86 Linux), addresses are byte numbers, but most refer to 4-byte words and so are often multiples of 4.
And more importantly, as seen from a Linux application, at most 3 GB out of the 4 GB address space is visible; a whole gigabyte of address space (including the first and last pages, near the null pointer) is unmapped. In practice the process sees much less than that. See its /proc/self/maps pseudo-file (e.g. run cat /proc/self/maps to see the address map of the cat command on Linux).
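For instance, a process can print its own address map. A minimal sketch (Linux-specific; assumes /proc is mounted):

#include <stdio.h>

int main(void)
{
    /* /proc/self/maps lists every mapped region of this process:
       address range, permissions, and the backing file, if any. */
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) { perror("fopen"); return 1; }
    int c;
    while ((c = getc(f)) != EOF)
        putchar(c);
    fclose(f);
    return 0;
}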

Related

memory starting location in C [duplicate]

This question already has an answer here:
Why do virtual memory addresses for linux binaries start at 0x8048000?
I am looking into the memory layout of a given process. I notice that the starting memory location of each process is not 0. On this website, TEXT starts at 0x08048000. One reason could be to distinguish the address from the NULL pointer. I am just wondering if there are any other good reasons? Thanks.
The null pointer doesn't actually have to be 0. It's guaranteed in the C standard that when a 0 value is given in the context of a pointer it's treated as NULL by the compiler.
But the 0 that you use in your source code is just syntactic sugar that has no relation to the actual physical address the null-pointer value is "pointing" to.
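A small, hedged demonstration of that distinction; on mainstream ABIs the bytes printed happen to be all zero, but the standard does not promise that representation:

#include <stdio.h>
#include <string.h>

int main(void)
{
    int *p = 0;  /* the source-level 0 is a null pointer constant */
    unsigned char bytes[sizeof p];
    memcpy(bytes, &p, sizeof p);
    /* Dump the stored representation. All-zero bytes are typical,
       but the C standard does not require the null pointer to be
       represented by all zero bits. */
    for (size_t i = 0; i < sizeof p; i++)
        printf("%02x ", bytes[i]);
    putchar('\n');
    return p == NULL ? 0 : 1;
}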
For further details see:
Why is NULL/0 an illegal memory location for an object?
Why is address zero used for the null pointer?
An application on your operating system has its own unique address space, which it sees as a contiguous block of memory (the memory isn't physically contiguous; it's just the impression the operating system gives to every program).
For the most part, each process's virtual memory space is laid out in a similar and predictable manner (this is the memory layout in a Linux process, 32-bit mode):
(image from Anatomy of a Program in Memory)
Look at the text segment (the default .text base on x86 is 0x08048000, chosen by the default linker script for static binding).
Why the magical 0x08048000? Likely because Linux borrowed that address from the System V i386 ABI.
... and why then did System V use 0x08048000?
The value was chosen to accommodate the stack below the .text section,
growing downward. The 0x48000 bytes could be mapped by the same page
table already required by the .text section (thus saving a page table
in most cases), while the remaining 0x08000000 would allow more room
for stack-hungry applications.
Is there anything below 0x08048000? There could be nothing (it's only 128M), but you can pretty much map anything you desire there, using the mmap() system call.
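A hedged sketch of doing exactly that on Linux (MAP_FIXED is used here only because the exact address matters; it is dangerous in real code since it silently replaces any existing mapping):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Try to place one anonymous page at 1 MB, well below 0x08048000
       (and above the default vm.mmap_min_addr limit). */
    void *want = (void *)0x00100000;
    void *got = mmap(want, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (got == MAP_FAILED) { perror("mmap"); return 1; }

    *(int *)got = 42;  /* the region below .text is ordinary memory now */
    printf("mapped a page at %p\n", got);
    munmap(got, 4096);
    return 0;
}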
See also:
What's the memory before 0x08048000 used for in 32 bit machine?
Reorganizing the address space
mmap
I think this sums it up:
Each process has its own set of page tables, but there is a catch. Once virtual addresses are enabled, they apply to all software running in the machine, including the kernel itself. Thus a portion of the virtual address space must be reserved to the kernel.
So while the process gets its own address space, without reserving a block for the kernel it would not be able to address kernel code and data.
On common 32-bit systems this reserved block is actually at the top of the address space (the upper 1 GB on Linux, the upper 2 GB on classic Windows), while the page containing address 0 is also kept unmapped. The user-mode space lies in between, and that is where both the stack and the heap reside.
Distinguishing from NULL pointer
Even if the user-mode space started at address 0, no data would be allocated at address 0 itself, since the stack and the heap do not start at the very beginning of the user area. Therefore NULL (with the value 0) could still be used, and this is not the reason for the layout.
However, one benefit of keeping the page containing NULL unmapped for user code is that any attempt to read or write through a NULL pointer raises a segmentation fault.
A loader loads a binary into memory in segments: text (code and constants) and data. There is no need to start at 0, and since C programs are prone to bugs that access memory near null, such as a[i] with a null a, starting at 0 would even be dangerous. Keeping the low addresses unmapped allows (on some processors) such accesses to be intercepted as segmentation faults.
It would be the C runtime that introduces a linear address space starting from 0; that might be imaginable where C is the operating system's implementation language, but having the heap start at 0 serves no purpose. The memory model is one of segments, and a code segment might be protected against modification by some processors.
Within segments, allocation happens in memory blocks managed by the C runtime.
I might add that physical address 0 and upwards is often used by the operating system itself.

What happens in OS when we dereference a NULL pointer in C?

Let's say there is a pointer and we initialize it with NULL.
int* ptr = NULL;
*ptr = 10;
Now, the program will crash, since ptr isn't pointing to any address and we're assigning a value through it, which is an invalid access. So the question is: what happens internally in the OS? Does a page fault / segmentation fault occur? Will the kernel even search the page table? Or does the crash occur before that?
I know I wouldn't do such a thing in any program, but this is just to learn what happens internally in the OS or the compiler in such a case. And it is NOT a duplicate question.
Short answer: it depends on a lot of factors, including the compiler, processor architecture, specific processor model, and the OS, among others.
Long answer (x86 and x86-64): Let's go down to the lowest level: the CPU. On x86 and x86-64, that code will typically compile into an instruction or instruction sequence like this:
movl $10, 0x00000000
Which says to "store the constant integer 10 at virtual memory address 0". The Intel® 64 and IA-32 Architectures Software Developer Manuals describe in detail what happens when this instruction gets executed, so I'm going to summarize it for you.
The CPU can operate in several different modes, several of which are for backwards compatibility with much older CPUs. Modern operating systems run user-level code in a mode called protected mode, which uses paging to convert virtual addresses into physical addresses.
For each process, the OS keeps a page table which dictates how the addresses are mapped. The page table is stored in memory in a specific format (and protected so that they can not be modified by the user code) that the CPU understands. For every memory access that happens, the CPU translates it according to the page table. If the translation succeeds, it performs the corresponding read/write to the physical memory location.
The interesting things happen when the address translation fails. Not all addresses are valid, and if any memory access generates an invalid address, the processor raises a page fault exception. This triggers a transition from user mode (aka current privilege level (CPL) 3 on x86/x86-64) into kernel mode (aka CPL 0) to a specific location in the kernel's code, as defined by the interrupt descriptor table (IDT).
The kernel regains control and, based on the information from the exception and the process's page table, figures out what happened. In this case, it realizes that the user-level process accessed an invalid memory location, and then it reacts accordingly. On Windows, it will invoke structured exception handling to allow the user code to handle the exception. On POSIX systems, the OS will deliver a SIGSEGV signal to the process.
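On a POSIX system that delivery can be observed directly. A minimal sketch (the handler exits instead of returning, since returning would restart the faulting store; fprintf in a handler is fine for a demo but not strictly async-signal-safe):

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_segv(int sig, siginfo_t *info, void *ctx)
{
    (void)ctx;
    /* si_addr is the faulting address; for a null dereference it is 0. */
    fprintf(stderr, "caught signal %d at address %p\n", sig, info->si_addr);
    _exit(1);  /* returning would re-execute the faulting instruction */
}

int main(void)
{
    struct sigaction sa;
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    volatile int *ptr = NULL;  /* the code from the question */
    *ptr = 10;                 /* page fault -> SIGSEGV */
    return 0;
}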
In other cases, the OS will handle the page fault internally and restart the process from its current location as if nothing happened. For example, guard pages are placed at the bottom of the stack to allow the stack to grow on demand up to a limit, instead of preallocating a large amount of memory for the stack. Similar mechanisms are used for achieving copy-on-write memory.
In modern OSes, the page tables are usually set up to make the address 0 an invalid virtual address. But sometimes it's possible to change that, e.g. on Linux by writing 0 to the pseudofile /proc/sys/vm/mmap_min_addr, after which it's possible to use mmap(2) to map the virtual address 0. In that case, dereferencing a null pointer would not cause a page fault.
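A hedged sketch of that experiment (Linux-only; it fails with EPERM under the default vm.mmap_min_addr, and needs privilege plus sysctl -w vm.mmap_min_addr=0 to succeed):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Request one page at virtual address 0. */
    void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) { perror("mmap at 0"); return 1; }

    volatile int *zero_page = (volatile int *)p;  /* address 0, but mapped */
    *zero_page = 10;                              /* no page fault this time */
    printf("wrote 10 through address %p\n", p);
    return 0;
}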
The above discussion is all about what happens when the original code is running in user space. But this could also happen inside the kernel. The kernel can (and is certainly much more likely than user code to) map the virtual address 0, so such a memory access would be normal. But if it's not mapped, what happens is largely similar: the CPU raises a page fault which traps into a predefined point in the kernel, the kernel examines what happened, and reacts accordingly. If the kernel can't recover from the exception, it will typically panic in some fashion (a kernel panic or kernel oops on Linux, or a BSOD on Windows) by printing some debug information to the console or serial port and then halting.
See also Much ado about NULL: Exploiting a kernel NULL dereference for an example of how an attacker could exploit a null pointer dereference bug from inside the kernel in order to gain root privileges on a Linux machine.
As a side note, just to highlight the differences between architectures: a certain OS, developed and maintained by a company known for its three-letter-acronym name and often referred to by a large primary color, has a most fascinating NULL determination.
They utilize a 128-bit linear address space for ALL data (memory AND disk) in one giant "thing". In accordance with their OS, a "valid" pointer must be placed on a 128-bit boundary within that address space. This, by the way, causes fascinating side effects for structs, packed or not, that house pointers. Anyway, tucked away in a dedicated per-process page is a bitmap that assigns one bit for every location in a process address space where a valid pointer can lie. ALL opcodes on their hardware and OS that can generate and return a valid memory address and assign it to a pointer will set the bit that represents the memory address where that pointer (the target pointer) is located.
So why should anyone care? For this simple reason:
int a = 0;
int *p = &a;
int *q = p-1;
if (p)
{
// p is valid, p's bit is lit, this code will run.
}
if (q)
{
// the address stored in q is not valid. q's bit is not lit. this will NOT run.
}
What is truly interesting is this.
if (p == NULL)
{
// p is valid. this will NOT run.
}
if (q == NULL)
{
// q is not valid, and therefore treated as NULL, this WILL run.
}
if (!p)
{
// same as before. p is valid, therefore this won't run
}
if (!q)
{
// same as before, q is NOT valid, therefore this WILL run.
}
It's something you have to see to believe. I can't even imagine the housekeeping done to maintain that bitmap, especially when copying pointer values or freeing dynamic memory.
On CPUs which support virtual memory, a page fault exception will usually be raised if you try to read memory address 0x0. The OS page fault handler will be invoked; the OS will then decide that the page is invalid and abort your program.
Note that on some CPUs you can also safely access memory address 0x0.
As the C standard says dereferencing a null pointer is undefined behavior, if the compiler is able to detect at compile time (or even at runtime) that you are dereferencing a null pointer, it can do whatever it wants, like aborting the program with a verbose error message.
(C99, 6.5.3.2.p4) "If an invalid value has been assigned to the pointer, the behavior of the unary * operator is undefined.87)"
87): "Among the invalid values for dereferencing a pointer by the unary * operator are a null pointer, an address inappropriately aligned for the type of object pointed to, and the address of an object after the end of its lifetime."
In a typical case, int *ptr = NULL; will set ptr to point to address 0. The C standard (and the C++ standard) is very careful to not require that, but it's extremely common nonetheless.
When you do *ptr = 10;, the CPU would normally generate 0 on the address lines, and 10 on the data lines, while setting a R/W line to indicate a write (and, if the bus has such a thing, assert the memory vs. I/O line to indicate a write to memory, not I/O).
Assuming the CPU supports memory protection (and you're using an OS that enables it), the CPU will check the (attempted) access before it happens, though. For example, a modern Intel/AMD CPU will use page tables that map virtual addresses to physical addresses. In a typical case, address 0 won't be mapped to any physical address, and the CPU will generate an access violation exception. For one fairly typical example, Microsoft Windows leaves the first 4 megabytes unmapped, so any address in that range will normally result in an access violation.
On an older CPU (or an older operating system that doesn't enable the CPU's protection features) the attempted write will often succeed. For example, under MS-DOS, writing through a NULL pointer would simply write to address zero. In small or medium model (with 16-bit addresses for data), most compilers would write some known pattern to the first few bytes of the data segment, and when the program ended they'd check whether that pattern remained intact (and do something to indicate that you'd written via a NULL pointer if it hadn't). In compact or large model (20-bit data addresses) they'd generally just write to address zero without warning.
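Here is a hedged, portable emulation of that startup/exit check; the buffer and names are stand-ins, since the real compilers checked the actual bottom of the data segment and printed the classic "Null pointer assignment" message:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned char fake_low_memory[16];  /* stand-in for DS:0000 */
static const unsigned char sentinel[4] = { 0x55, 0xAA, 0x55, 0xAA };

static void check_null_writes(void)
{
    /* Run at exit: if the known pattern was clobbered, something
       wrote through a "null" pointer during the program's lifetime. */
    if (memcmp(fake_low_memory, sentinel, sizeof sentinel) != 0)
        fprintf(stderr, "Null pointer assignment\n");
}

int main(void)
{
    memcpy(fake_low_memory, sentinel, sizeof sentinel);  /* "startup" */
    atexit(check_null_writes);                           /* exit-time check */
    /* ... program runs; a stray write into fake_low_memory would
       corrupt the sentinel and trigger the message above ... */
    return 0;
}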
I imagine that this is platform and compiler dependent. The NULL pointer could be implemented by using a NULL page, in which case you'd have a page fault, or it could be below the segment limit for an expand-down segment, in which case you'd have a segmentation fault.
This is not a definitive answer, just my conjecture.

CPU and memory communication

I'm a programming beginner, but I want to understand things a bit more deeply. I did some research and read quite a lot of text, but I have yet to understand some things..
When coding a basic thing (in C):
int myNumber;
myNumber = 3;
printf("Here's my number: %d", myNumber);
I found out that (mainly on a 32-bit CPU) an integer takes up 32 bits = 4 bytes. So on the first line of my code the CPU goes into the memory. The memory is byte-addressable, so the CPU chooses 4 contiguous bytes for my variable and stores the address of the first (or last) byte.
On the second line of my code the CPU uses its stored address of the myNumber variable, goes to that address in the memory and finds there 32 bits of reserved space. Its task now is to store the number 3 there, so it fills those four bytes with the sequence 00000000-00000000-00000000-00000011.
On the third line it does the same: the CPU goes to that address in memory and loads the number stored at that address.
(First question - Do I understand it right?)
What I don't understand is this:
The size of that address (a pointer to that variable) is 4 bytes on a 32-bit CPU. (That's why a 32-bit CPU can use at most 4 GB of memory: there are only 2^32 different addresses of binary length 32.)
Now, where does the CPU store these addresses? Does it have some sort of its own memory or cache to store them? And why does it store the 32-bit-long address to a 32-bit-long integer? Wouldn't it be better to simply store the actual number in its cache rather than a pointer to it, when the sizes are the same?
And the last one: if it stores the addresses of all those integers somewhere in its own cache and the lengths are the same (4 bytes), it will need exactly as much space for storing the addresses as for the actual variables. But variables can take up to 4 GB of space, so the CPU would need 4 GB of its own space to store the addresses of those variables. And that sounds strange...
Thank you for help!
I'm trying to understand that but it's so tough.. :-[
(First question - Do I understand it right?)
The first thing to recognise is that the value might not be stored in main memory at all. The compiler might decide to store it in a register instead, as this is more optimal.1
The memory is byte-addressable, so the CPU chooses 4 contiguous bytes for my variable and stores the address of the first (or last) byte.
Assuming that the compiler does decide to store it in main memory, then yes, on a 32-bit machine, an int is typically 4 bytes, so 4 bytes will be allocated for storage.
The size of that address (a pointer to that variable) is 4 bytes on a 32-bit CPU. (That's why a 32-bit CPU can use at most 4 GB of memory: there are only 2^32 different addresses of binary length 32.)
Note that the width of an int and the width of a pointer don't have to be the same, so there's not necessarily a connection with the size of the address space.
Now, where does the CPU store these addresses?
In the case of local variables, the address is effectively hardcoded into the executable itself, typically as an offset from the stack pointer.
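As a small illustration of what "hardcoded as an offset" means, the assembly in the comments below is what gcc -O0 typically emits for 32-bit x86; treat it as representative rather than exact:

/* The local variable lives at a fixed offset from the frame pointer;
   that offset is baked into the instructions themselves. */
int f(void)
{
    int myNumber = 3;  /* typically: movl $3, -4(%ebp)   : store at ebp minus 4 */
    return myNumber;   /* typically: movl -4(%ebp), %eax : load it back         */
}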
In the case of dynamically-allocated objects (i.e. stuff that's been malloc-ed), the programmer typically maintains a corresponding pointer variable (otherwise there would be a memory leak!). That pointer might also be dynamically-allocated (in the case of a complex data structure), but if you go back far enough, you'll eventually reach something that's a local variable. In which case, the above rule applies.
But variables can take up to 4 GB of space, so the CPU would need 4 GB of its own space to store the addresses of those variables.
If your program consists of independently malloc-ing millions of ints, then yes, you'd end up with just as much storage required for the pointers. But most programs don't look like that. You typically allocate much bigger objects (like an array, or a big struct).
cache
The specifics of where stuff is stored is architecture-specific. On a modern x86, there's typically 2 or 3 layers of cache sitting between the CPU and main memory. But the cache is not independently addressable; the CPU cannot decide to store the int in cache instead of main memory. Rather, the cache is effectively a redundant copy of a subset of main memory.
Another thing to consider is that the compiler will typically deal with virtual addresses when allocating storage for objects. On a modern x86, these are mapped to physical addresses (i.e. addresses that correspond to physical storage bytes in main memory) by dedicated hardware, along with OS support.
1. Alternatively, the compiler may be able to optimise it away entirely.
On the second line of my code the CPU uses its stored address of the myNumber variable, goes to that address in the memory and finds there 32 bits of reserved space.
Nearly correct. Memory is basically unstructured. The CPU can't see that there are 32 bits of "reserved space". But the CPU was instructed to read 32 bits of data, so it reads 32 bits of data starting from the specified address. Then it just has to hope/assume that those 32 bits actually contain something meaningful.
Now, where does the CPU store these addresses? Does it have some sort of its own memory or cache to store them? And why does it store the 32-bit-long address to a 32-bit-long integer? Wouldn't it be better to simply store the actual number in its cache rather than a pointer to it, when the sizes are the same?
The CPU has a small number of registers, which it can use to store data (common CPUs have 8, 16 or 32 registers, so they can only hold the particular variables that you're working with here and now). So to answer the last part first, yes, the compiler certainly might (and probably will) generate code to just store your int into a register, instead of storing it in a memory, and telling the CPU to load it from a specified address.
As for the other part of the question: ultimately, every part of the program is stored in memory. Part of it is a stream of instructions, and part of it is in chunks of data scattered around memory.
There are a few tricks that help with locating the data the CPU needs: part of the program's memory contains the stack, which typically stores local variables while they're in scope. The CPU always maintains a pointer to the top of the stack in one of its registers, so it can easily locate data on the stack, simply by modifying the stack pointer with a fixed offset. Instructions can directly contain such offsets, so in order to read your int, the compiler could for example generate code which writes the int to the top of the stack when you enter the function, and then, when you need to refer to that variable, have code which reads the data found at the address the stack pointer points to, plus the small offset needed to locate your variable.
And also keep in mind that the addresses your program sees may not be (or rather rarely are) physical addresses starting from the 'beginning of the memory' or 0. Mostly they are offsets into a specific memory block; the memory manager knows the real address and accesses the real data storage via base+offset.
And we do need the memory, since caches are limited ;-)
Mario
Internal to the CPU there is one register that contains the address of the next instruction to be executed. The instructions themselves carry the information about where the variable is. If the variable is optimized, the instruction may point to a register, but in general the instruction will contain the address of the variable being accessed. Your code, after being compiled and loaded into memory, has all of that embedded! I recommend looking into assembly language to get a better understanding of all this. Good luck!

Number of values that a pointer can have in a 32 bit system

I had a query regarding one of the answers of this question.
The answer says:
If a 32-bit processor can address 2^32 memory locations, that simply
means that a C pointer on that architecture can refer to 2^32 - 1
locations plus NULL
Isn't it 2^32 plus NULL? Why is the -1?
EDIT: sorry for not being clear. Edited question.
2^32 - 1 locations plus NULL
That equals 2^32.
In most programming languages and operating systems, NULL is a dedicated pointer value (usually 0) that means "invalid pointer"; that's why it cannot be used to point to a valid memory location.
Because pointers are just integer numbers, there's no other way to signal an invalid pointer than with a dedicated value. Because 32-bit integers can have 2^32 possible values, if you don't count this NULL value you get 2^32 - 1 valid memory locations.
The author of that text is distinguishing NULL as not being a memory location. So you use one of the 2^32 available values for NULL which leaves 2^32-1 available for memory locations.
NULL is actually a value, usually 0, so if you add it you get 2^32.
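The counting itself, as a trivial check (nothing platform-specific here):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 2^32 distinct 32-bit patterns; if one of them (0) is reserved
       for NULL, the rest remain usable as addresses. */
    uint64_t patterns = (uint64_t)1 << 32;
    printf("total patterns:      %llu\n", (unsigned long long)patterns);
    printf("usable as addresses: %llu\n", (unsigned long long)(patterns - 1));
    return 0;
}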
I believe the concept of NULL originally comes from memory allocators signalling that they have failed to allocate a slot of memory. By comparing the returned value with NULL, the programmer can determine whether the pointer may be used as such or not. I don't believe that NULL has any special place in the discussion; it is simply a form of alias for position 0 (0x00000000).
In real-mode systems (lacking a memory protection scheme), the memory at location 0 may usually be read from and sometimes written to (if it isn't read-only). This of course depends on whether physical memory chips are actually wired to that location. In many real-mode systems with a writable 0 location you get funny results if you manipulate that location. On x86 (16-bit real-mode) PC-DOS systems the location holds the first vector of the interrupt vector table, the vector that points to the interrupt handler for divide-by-zero. A programmer could write there by mistake or for some valid reason.
In protected-mode systems, a program accessing location 0 will usually cause a memory protection fault that will terminate it. I should clarify this and say that location 0 as seen by the application is almost certainly not physical location 0, since most protected-mode OSes remap an application's memory area using a virtual addressing mechanism provided by the host processor. The OS itself may, under certain circumstances, give itself or another program permission to access whatever memory location it pleases, but such circumstances are so few and so restricted that an application developer will never encounter them.
With this somewhat lengthy backgrounder, I agree with those who say that for a 32-bit processor there are usually 2^32 addressable locations, ranging from 0 (0x00000000) to 2^32 - 1 (0xffffffff). Exceptions are early 32-bit processors (such as Intel's 386SX and Motorola's 68000) that implemented fewer than 32 address lines.
The author is saying that the C implementation must reserve one of the addresses that the processor can use, so that in C that address is used to represent a null pointer. Hence, C can address one fewer memory locations than the processor can. Convention is to use address 0 for null pointers.
In practice, a 32-bit processor can't really address 2^32 memory locations anyway, since various parts of the address space will be reserved for special purposes rather than mapped to memory.
There's also no actual requirement that the C implementation represent pointers using the same sized addresses that the processor uses. It might be horribly inefficient to do so, but the C implementation could use 33-bit pointers (which would therefore require at least 5 bytes of storage, and wouldn't fit in a CPU register), enabling it to use a value for null pointers that isn't one of the 2^32 addresses the processor can handle.
But assuming nothing like that, it's true that a 32 bit pointer can represent any of 2^32 addresses (whether those addresses refer to memory locations is another matter), and it's also true that since C requires a null pointer, one of those addresses must be used to mean "a null pointer".

What is the difference between far pointers and near pointers?

Can anybody tell me the difference between far pointers and near pointers in C?
On a 16-bit x86 segmented memory architecture, four registers are used to refer to the respective segments:
DS → data segment
CS → code segment
SS → stack segment
ES → extra segment
A logical address on this architecture is written segment:offset. Now to answer the question:
Near pointers refer (as an offset) to the current segment.
Far pointers use segment info and an offset to point across segments. So, to use them, DS or CS must be changed to the specified value, the memory will be dereferenced and then the original value of DS/CS restored. Note that pointer arithmetic on them doesn't modify the segment portion of the pointer, so overflowing the offset will just wrap it around.
And then there are huge pointers, which are normalized to have the highest possible segment for a given address (contrary to far pointers).
On 32-bit and 64-bit architectures, memory models are using segments differently, or not at all.
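For a concrete feel, a hedged 16-bit DOS-era sketch follows; far, near and MK_FP from <dos.h> are compiler extensions (Turbo C / Open Watcom style), not standard C:

#include <dos.h>

int main(void)
{
    /* Far pointer to color text-mode video memory at segment B800h. */
    unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0x0000);
    video[0] = 'A';   /* character cell in the top-left corner */
    video[1] = 0x1F;  /* attribute byte: white on blue         */

    /* A near pointer is just a 16-bit offset in the current data segment. */
    int x = 42;
    int near *np = &x;
    return (*np == 42) ? 0 : 1;
}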
Let's forget about old DOS PC computers for a moment and look at this from a generic point of view. Then, very simplified, it goes like this:
Any CPU has a data bus, which is the maximum amount of data the CPU can process in one single instruction, i.e equal to the size of its registers. The data bus width is expressed in bits: 8 bits, or 16 bits, or 64 bits etc. This is where the term "64 bit CPU" comes from - it refers to the data bus.
Any CPU has an address bus, also with a certain bus width expressed in bits. Any memory cell in your computer that the CPU can access directly has an unique address. The address bus is large enough to cover all the addressable memory you have.
For example, if a computer has 65536 bytes of addressable memory, you can cover these with a 16 bit address bus, 2^16 = 65536.
Most often, but not always, the data bus width is as wide as the address bus width. It is nice if they are of the same size, as it keeps both the CPU instruction set and the programs written for it clearer. If the CPU needs to calculate an address, it is convenient if that address is small enough to fit inside the CPU registers (often called index registers when it comes to addresses).
The non-standard keywords far and near are used to describe pointers on systems where you need to address memory beyond the normal CPU address bus width.
For example, it might be convenient for a CPU with 16 bit data bus to also have a 16 bit address bus. But the same computer may also need more than 2^16 = 65536 bytes = 64kB of addressable memory.
The CPU will then typically have special instructions (that are slightly slower) which allows it to address memory beyond those 64kb. For example, the CPU can divide its large memory into n pages (also sometimes called banks, segments and other such terms, that could mean a different thing from one CPU to another), where every page is 64kB. It will then have a "page" register which has to be set first, before addressing that extended memory. Similarly, it will have special instructions when calling/returning from sub routines in extended memory.
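A hedged sketch of such banking from C; the BANK_SELECT register address and window location below are invented for illustration, since every microcontroller family does this differently:

#include <stdint.h>

#define BANK_SELECT (*(volatile uint8_t *)0x00F0u)  /* hypothetical page register */
#define BANK_WINDOW ((volatile uint8_t *)0x8000u)   /* hypothetical 32 KB window  */

/* Read one byte of "extended" memory: select the bank, then make an
   ordinary near access inside the fixed window the hardware remaps. */
uint8_t read_extended(uint8_t bank, uint16_t offset)
{
    BANK_SELECT = bank;
    return BANK_WINDOW[offset];
}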
In order for a C compiler to generate the correct CPU instructions when dealing with such extended memory, the non-standard near and far keywords were invented. Non-standard as in they aren't specified by the C standard, but they are de facto industry standard and almost every compiler supports them in some manner.
far refers to memory located in extended memory, beyond the width of the address bus. Since it refers to addresses, most often you use it when declaring pointers. For example: int far *x; means "give me a pointer that points to extended memory". And the compiler will then know that it should generate the special instructions needed to access such memory. Similarly, function pointers that use far will generate special instructions to jump to/return from extended memory. If you didn't use far then you would get a pointer to the normal, addressable memory, and you'd end up pointing at something entirely different.
near is mainly included for consistency with far; it refers to anything in the addressable memory and is equivalent to a regular pointer. So it is mainly a useless keyword, save for some rare cases where you want to ensure that code is placed inside the standard addressable memory. You could then explicitly label something as near. The most typical case is low-level hardware programming where you write interrupt service routines. They are called by hardware from an interrupt vector with a fixed width, which is the same as the address bus width. Meaning that the interrupt service routine must be in the standard addressable memory.
The most famous use of far and near is perhaps the mentioned old MS DOS PC, which is nowadays regarded as quite ancient and therefore of mild interest.
But these keywords exist on more modern CPUs too! Most notably in embedded systems where they exist for pretty much every 8 and 16 bit microcontroller family on the market, as those microcontrollers typically have an address bus width of 16 bits, but sometimes more than 64kB memory.
Whenever you have a CPU where you need to address memory beyond the address bus width, you will have the need of far and near. Generally, such solutions are frowned upon though, since it is quite a pain to program on them and to always take the extended memory into account.
One of the main reasons why there was a push to develop the 64-bit PC was actually that 32-bit PCs had come to the point where their memory usage was starting to hit the address bus limit: they could only address 4 GB of RAM. 2^32 = 4.29 billion bytes = 4 GB. In order to enable the use of more RAM, the options were either to resort to some burdensome extended-memory solution like in the DOS days, or to expand the computers, including their address bus, to 64 bits.
Far and near pointers were used on old platforms like DOS.
I don't think they're relevant on modern platforms. But you can learn about them here and here (as pointed out by other answers). Basically, a far pointer is a way to extend the addressable memory in a computer, i.e. to address more than 64 kB of memory on a 16-bit platform.
A pointer basically holds an address. In Intel's 16-bit segmented memory model, memory is addressed through four segment registers.
So when the address pointed to by a pointer is within the current segment, it is a near pointer, and it therefore requires only 2 bytes for the offset.
On the other hand, when a pointer points to an address outside the current segment (that is, in another segment), that pointer is a far pointer. It consists of 4 bytes: two for the segment and two for the offset.
Well, in DOS, dealing with registers and segments was kind of funny; it was all about the maximum amount of RAM that could be addressed.
Today it is pretty much irrelevant. All you need to read about is the difference between virtual user space and kernel space.
Since Windows NT 4 (when they borrowed ideas from *nix), Microsoft programmers started to use what were called user/kernel memory spaces,
and avoided direct access to physical controllers since then. The problem of dealing with direct access to memory segments disappeared as well; everything became read/write through the OS.
However, if you insist on understanding and manipulating far/near pointers, look at the Linux kernel source and how it works; you will never come back, I guess.
And if you still need to use CS (code segment) / DS (data segment) in DOS, look at these:
https://en.wikipedia.org/wiki/Intel_Memory_Model
http://www.digitalmars.com/ctg/ctgMemoryModel.html
I would like to point to the excellent answer from Lundin. I was too lazy to answer properly. Lundin gave a very detailed and sensible explanation; thumbs up!
