Bottom-Level Graphics Programming / Working with Screen Hardware - C

It is said that a 'low level' programming language is one that works 'close' to the hardware. I work with C as well as Python and Java, and it is generally accepted that C is a 'lower level' language, or works 'closer' to the hardware, than these other two languages. This of course makes sense to me because Python and Java are C-derived languages.
So what I want to know is what is the 'lowest level' or 'closest to hardware' way of working with the computer screen to display pixel graphics on a monitor?
Basically, how does the screen work on the programming 'back end'?
Just to be perfectly clear I'll give an analogy. When I first learned about Python lists, I figured they were multiple numbers stored together. What I later learned was that a list is actually a pointer to a location in memory, reserved for the length of the array in units of the data type.
So what I know about the screen is that it's a grid of pixels, x wide and y long. When you refer to this grid with functions that accept some form of vector, a pixel lights up in a specified colour at that pixel's coordinates ((0,0) starts at the top left, of course). What is really going on in those functions?

Machine code is the lowest you can go, and even that still gets interpreted by the CPU (dependent on the CPU, of course).
Generally most people will go only as low as assembler though.
(I didn't mention GPU programming as you mentioned Python, Java, C and made an assumption you are trying to get to grips with the old way of rendering).
EDIT
Missed your other question
Pointers are used when a block of memory needs referencing. Simple data types do not require a pointer (but can still have one). This is dependent on the language and how it allows data type allocation.
In the case of a list/array of data you cannot allocate it on the stack, as it has a varying length, so a block of memory is requested from the operating system, which returns a pointer to that section of memory. The operating system does not care how the block is used; it's up to the application to know how much to allocate and how it should use it.
This is why languages use data types: so the compiler knows how to manipulate that block of memory. Now, nothing stops you manipulating this block of memory directly, but you must follow the rules the compiler uses, since any further code executed could otherwise cause errors.
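A minimal C sketch of that idea: the allocator hands back an untyped block of the requested size, and the declared element type is what tells the compiler how to step through and interpret it.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t count = 8;

    /* Ask the runtime (and ultimately the OS) for a raw block of memory.
       malloc only knows a size in bytes; the int element type is what
       tells the compiler how to index into and interpret the block. */
    int *block = malloc(count * sizeof *block);
    if (block == NULL)
        return 1;

    for (size_t i = 0; i < count; i++)
        block[i] = (int)(i * i);   /* block[i] is just *(block + i) */

    printf("third element: %d\n", block[2]);

    free(block);                   /* hand the block back */
    return 0;
}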
EDIT
Additional based on comment.
When modifying a buffer that is linked to hardware you need to understand the specs for the buffer. Usually you have already requested the device to use a certain MODE of operation, so you know how the hardware will behave. This is why various standards arose to help programmers use these. I am going to point you to a very good old-school hardware programming guide that should help you a lot: the PC Game Programmer's Encyclopedia.
These concepts still apply to operating systems today: in Windows you would use the GDI to allocate these screens, and the GDI interfaces with a driver to present the required image. The guide I linked is about using hardware directly from old operating systems which have no graphics support (MS-DOS, for example).

The word you are looking for is frame buffer. For an introduction see http://en.wikipedia.org/wiki/Framebuffer
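To make that concrete, here is a rough, Linux-only sketch that maps the framebuffer device into the process and sets a single pixel. It assumes /dev/fb0 exists and is configured for 32 bits per pixel; error handling is minimal.

/* Plot one pixel straight into the Linux framebuffer. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* resolution, colour depth */
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* buffer length, row stride */

    /* The framebuffer is just a block of memory; map it into our process. */
    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    /* (0,0) is top-left; each row is fix.line_length bytes wide. */
    unsigned x = 100, y = 50;
    uint32_t *pixel = (uint32_t *)(fb + y * fix.line_length
                                      + x * (var.bits_per_pixel / 8));
    *pixel = 0x00FF0000;   /* red, in the common XRGB8888 layout */

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}

Graphics libraries and window systems ultimately do something similar, just with many layers (drivers, compositors, GPU command queues) in between.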

Related

Why use a low-level language (or something close to it, like C) for embedded systems rather than a high-level language, when everything is compiled to machine code anyway?

I have searched but couldn't find a clear answer. If we compile the code on a (powerful) computer, then we are only sending machine instructions to the memory of the embedded device. To my understanding, it should make no difference which language we use because, in the end, we are only sending machine code to the embedded device; the compilation, which is the expensive phase, is already done by a powerful machine!
Why use a language like C? Why not Java? We are sending machine code in the end.
The answer partly lies in the runtime requirements and platform-provided expectations of a language: The size of the runtime for C is minimal - it needs a stack and that is about it to be able to start running code. For a compliant implementation static data initialisation is required, but you can run code without it - the initialisation itself could even be written in C, and even heap and standard library initialisation are optional, as is the presence of a library at all. It need have no OS dependencies, no interpreter and no virtual machine.
Most other languages require a great deal more runtime support, and this is usually provided by an OS, runtime library, or virtual machine. To operate "stand-alone" these languages would require that support to be "built-in" and would consequently be much larger - so much so that in many cases you may as well deploy a system with an OS and/or JVM anyway.
There are of course other reasons why particular languages are suited to embedded systems, such as hardware level access, performance and deterministic behaviour.
While the issue of a runtime environment and/or OS is a primary reason you do not often see higher-level languages in small embedded systems, it is by no means unheard of. The .Net Micro Framework for example allows C# to be used in embedded systems, and there are a number of embedded JVM implementations, and of course Linux distributions are widely embedded making language choice virtually unlimited. .Net Micro runs on a limited number of processor architectures, and requires a reasonably large memory (>256kb), and JVM implementations probably have similar requirements. Linux will not boot on less than about 16Mb ROM/4Mb RAM. Neither are particularly suited to hard real-time applications with deadlines in the microsecond domain.
C is more-or-less ubiquitous across 8, 16, 32 and 64 bit platforms and normally available for any architecture from day one, while support for other languages (other than perhaps C++ on 32 bit platforms at least) may be variable and patchy, and perhaps only available on more mature or widely used platforms.
From a developer point of view, one important consideration is also the availability of cross-compilation tools for the target platform and language. It is therefore a virtuous circle where developers choose C (or increasingly also C++) because that is the most widely available tool, and tool/chip vendors provide C and C++ tool-chains because that is what developers demand. Add to that the third-party support in the form of libraries, open-source code, debuggers, RTOS etc., and it would be a brave (or foolish) developer to select a language with barely any support. It is not just high level languages that suffer in this way. I once worked on a project programmed in Forth - a language even lower-level than C - it was a lonely experience, and while there were the enthusiastic advocates of the language, they were frankly a bit nuts favouring language evangelism over commercial success. C has in short reached critical mass acceptance and is hard to dislodge. C++ benefits from broad interoperability with C and similarly minimal runtime requirements, and by tool-chains that normally support both languages. So the only barrier to adoption of C++ is largely developer inertia, and to some extent availability on 8 and 16 bit platforms.
You're misunderstanding things a bit. Let's start by explaining the foundation of how computers work internally. I'll use simple and practical concepts here. For the underlying theories, read about Turing machines. So, what's your machine made up of? All computers have two basic components: a processor and a memory.
The memory is a sequential group of "cells" that works sort of like a table. If you "write" a value into the Nth cell, you can then retrieve that same value by "reading" from the Nth cell. This allows computers to "remember" things. If a computer is to perform a calculation, it needs to retrieve input data for it from somewhere, and to output data from it into somewhere. That place is the memory. In practice, the memory is what we call RAM, short for random access memory.
Then we have the processor. Its job is to perform the actual calculations on memory. The actual operations that are to be performed are mandated by a program, that is, a series of instructions that the processor is able to understand and execute. The processor decodes and executes an instruction, then the next one, and so on until the program halts (stops) the machine. If the program is add cell #1 and cell #2 and store result in cell #3, the processor will grab the values at cells 1 and 2, add their values together, and store the result into cell 3.
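As a rough illustration of that fetch/decode/execute cycle, here is a toy machine in C. The instruction set (an ADD and a HALT) and the cell layout are invented purely for this example.

#include <stdio.h>

/* A toy machine: "cells" of memory plus a loop that fetches, decodes
   and executes instructions stored in those same cells. */
enum { OP_HALT = 0, OP_ADD = 1 };   /* made-up opcodes */

int main(void)
{
    int mem[16] = {
        OP_ADD, 8, 9, 10,   /* mem[10] = mem[8] + mem[9] */
        OP_HALT, 0, 0, 0,
        2, 3, 0,            /* the data the program works on */
    };

    int pc = 0;                       /* instruction pointer */
    for (;;) {
        int op = mem[pc];             /* fetch */
        if (op == OP_HALT)            /* decode */
            break;
        if (op == OP_ADD) {           /* execute */
            mem[mem[pc + 3]] = mem[mem[pc + 1]] + mem[mem[pc + 2]];
            pc += 4;
        }
    }

    printf("cell 10 = %d\n", mem[10]);   /* prints 5 */
    return 0;
}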
Now, there's an intrinsic question: where is the program stored, if at all? A program can't simply be hardcoded into the wires; otherwise, the system is no more of a computer than your microwave. There are two distinct approaches to this problem: the Harvard architecture and the Von Neumann architecture.
Basically, in the Harvard architecture, the data (as always has been) is stored in the memory. The code (or program) is stored somewhere else, usually in read-only memory. In the Von Neumann architecture, code is stored in memory, and is just another form of data. As a result, code is data, and data is code. It's worth noting that most modern systems use the Von Neumann architecture for several reasons, including the fact that this is the only way to implement just-in-time compilation, an essential part of runtime systems for modern bytecode-based programming languages, such as Java.
We now know what the machine does, and how it does it. However, how are both data and code stored? What's the "underlying format", and how should it be interpreted? You've probably heard of this thing called the binary numeral system. In our usual decimal numeral system we have ten digits, zero through nine. But why exactly ten digits? Couldn't there be eight, or sixteen, or sixty, or even two? Be aware that it's impractical to build a computational system on a unary base.
Have you heard that computers are "logical and cold"? Both are true... unless your machine has an AMD processor or a special kind of Pentium. The theory states that every logical predicate can be reduced to either "true" or "false"; that is to say, "true" and "false" are the basis of logic. Plus, computers are made up of electrical cruft, no? A light switch is either on or off, no? So, at the electrical level we can easily recognize two voltage levels, right? And we want to handle logical stuff, such as numbers, in computers, right? So zero and one it is; they are the only feasible solution.
Now, taking all the theory into account, let's talk about programming languages and assembly languages. Assembly languages are a way to express binary instructions in a (supposedly) readable way to human programmers. For instance, something like this...
ADD 0, 1 # Add cells 0 and 1 together and store the result in cell 0
Could be translated by an assembler into something like...
110101110000000000000001
Both are equivalent, but humans will only understand the former, and processors will only understand the latter.
A compiler is a program that translates input data that is expected to conform to the rules of a given programming language into another, usually lower-level form. For instance, a C compiler may take this code...
x = some_function(y + z);
And translate it into assembly code such as (of course this is not real assembly, BTW!)...
# Assume x is at cell 1, y at cell 2, and z at cell 3.
# Assume that, when calling a function, the first argument
# is at cell 16, and the result is stored in cell 0.
MOVE 16, 2
ADD 16, 3
CALL some_function
MOVE 1, 0
And the assembler will spit out (this is not random)...
11101001000100000000001001101110000100000000001110111011101101111010101111101111110110100111010010000000100000000
Now, let's talk about another language, namely Java. Java's compiler does not give you assembly/raw binary code, but bytecode. Bytecode is... like a generic, higher-level form of assembly language that the CPU can't understand (there are exceptions), but that another program running directly on the CPU does. This means that the claim some badly educated people spread around, that "both interpreted and compiled programs ultimately boil down to machine code", is false. If, for example, the interpreter is written in C and has this line of code...
Bytecode some_bytecode;
/* ... */
execute_bytecode(&some_bytecode);
(Note: I won't translate that into assembly/binary again!) The processor executes the interpreter, and the interpreter's code executes the bytecode by performing the actions the bytecode specifies. Although this can severely degrade performance if not optimized correctly, that is not the problem per se; the problem is that things such as reflection, garbage collection, and exceptions can add quite some overhead. For embedded systems, whose memories are small and whose processors are slow, this is something you don't want: you'd be wasting precious system resources on things you don't need. If C programs are slow on your Arduino, imagine a full-blown Java/Python program with all sorts of bells and whistles! Even if you translated the bytecode into machine code before putting it on the system, support must still be there for all that extra stuff, which results in basically the same unwanted overhead/waste. You would still need support for reflection, exceptions, garbage collection, etc. It's basically the same thing.
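To make "the interpreter's code executes the bytecode" concrete, here is a minimal dispatch-loop sketch in C. The opcodes and the little stack machine are invented for illustration; real virtual machines are vastly more elaborate.

#include <stdio.h>

/* The CPU runs this C code, and this C code in turn "runs" the bytecode. */
enum { BC_PUSH, BC_ADD, BC_PRINT, BC_HALT };   /* invented opcodes */

static void execute_bytecode(const unsigned char *bc)
{
    int stack[64];
    int sp = 0;        /* the virtual machine's stack pointer */
    int pc = 0;        /* the virtual machine's program counter */

    for (;;) {
        switch (bc[pc++]) {
        case BC_PUSH:  stack[sp++] = bc[pc++];           break;
        case BC_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case BC_PRINT: printf("%d\n", stack[sp - 1]);    break;
        case BC_HALT:  return;
        }
    }
}

int main(void)
{
    /* "push 2, push 3, add, print, halt" */
    const unsigned char program[] = { BC_PUSH, 2, BC_PUSH, 3,
                                      BC_ADD, BC_PRINT, BC_HALT };
    execute_bytecode(program);   /* prints 5 */
    return 0;
}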
In most other environments this is not a big deal, as memory is cheap and abundant, and processors are fast and powerful. Embedded systems have special needs; they're special by themselves, and things are not free in that land.
Why use a language like C? Why not Java? We are sending machine code in the end.
No, Java code does not compile to machine code; it needs a virtual machine (the JVM) on the target system.
You're partly right about the compilation, but even so, "higher-level" languages can result in less efficient machine code. For instance, the language can include garbage collection and run-time correctness checks, may not be able to use all the "native" numeric types, etc.
In general it depends on the target. On small targets (i.e. microcontrollers like the AVR) you don't have complex programs running. Additionally, you need to access the hardware directly (e.g. a UART). High-level languages like Java don't support accessing the hardware directly, so you usually end up with C.
In the case of C versus Java there's a major difference:
With C you compile the code and get a binary that runs on the target. It directly runs on the target.
Java instead creates Java Bytecode. The target CPU cannot process that. Instead it requires running another program: the Java runtime environment. That translates the Java Bytecode to actual machine code. Obviously this is more work and thus requires more processing power. While this isn't much of a concern for standard PCs it is for small embedded devices. (Note: some CPUs do actually have support for running Java bytecode directly. Those are exceptions though.)
Generally speaking, the compile step isn't the issue -- the limited resources and special requirements of the target device are.
You misunderstand something: 'compiling' Java gives a different output than compiling a low-level language. It is true that both produce some form of code, but in C's case the machine code is directly executable by the processor, whereas with Java the output is at an intermediate stage, bytecode, which can't be executed by the processor. It needs some extra work, a translation into machine code, which is the only directly executable format. Since that translation takes extra time, C is an attractive choice because of its speed. With a low-level language you write your code and then compile it for a target machine (you need to specify the target to the compiler, since each processor has its own machine code), and then your code is understandable by the processor.
On the other hand, C allows direct hardware access, which is not possible in Java-like languages, even via an API.
It's an industry thing.
There are three kinds of high-level languages: interpreted (Lua, Python, JavaScript), compiled to bytecode (Java, C#), and compiled to machine code (C, C++, Fortran, COBOL, Pascal).
Yes, C is a high-level language, and closer to Java than to assembly.
High-level languages are popular for two reasons: memory management and a wide standard library.
Managed memory comes with a cost: somebody must manage it. That's an issue not only for Java and C#, where somebody must implement a VM, but also for bare-metal C/C++, where someone must implement the memory allocation functions.
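As a feel for how small "implementing the memory allocation functions" can be on bare metal, here is a minimal bump-allocator sketch; the pool size and names are arbitrary, and there is deliberately no free().

#include <stddef.h>
#include <stdint.h>

/* A "bump" allocator over a fixed static pool: hand out slices of the pool
   until it runs dry. Many small embedded projects never need more. */
#define POOL_SIZE 4096
static _Alignas(max_align_t) uint8_t pool[POOL_SIZE];
static size_t pool_used;

void *simple_alloc(size_t size)
{
    /* Round up so every block stays aligned for the strictest common type
       (sizeof(max_align_t) is a power of two on sane platforms). */
    size = (size + sizeof(max_align_t) - 1) & ~(sizeof(max_align_t) - 1);

    if (pool_used + size > POOL_SIZE)
        return NULL;              /* out of memory: the caller must cope */

    void *block = &pool[pool_used];
    pool_used += size;
    return block;
}

int main(void)
{
    int *counters = simple_alloc(8 * sizeof *counters);
    return counters ? 0 : 1;
}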
A wide standard library can't be supported by all targets because there aren't enough resources; e.g., an AVR Arduino doesn't support the full C++ standard library.
C gained popularity because it can easily be converted to equivalent assembly code. Most statements can be converted, without optimization, to a handful of fixed assembly instructions, so compilers are easy to write. And its standard is compact and easy to implement. C prevailed because it became the de facto standard for the lowest high-level language of any architecture.
So in the end, besides special snowflakes like Cython, Go, Rust, Haskell, etc., industry decided that machine code is compiled from C and C++, and most optimization efforts went that way.
Languages like Java decided to hide memory from the programmer, so good luck trying to interface with low-level stuff there. Since they do that by design, almost nobody bothers trying to bring them to compete with C. Realistically, Java without GC would be C++ with different syntax.
Finally, if all the industry money goes to one language, the cheapest/easiest thing to do is to choose that language.
You are right in that you can use any language that generates machine code. But Java is not one of them. Java, Python, and even some languages that do compile to machine code may have heavy system requirements. You could use Pascal, and some folks do, but C won the C vs Pascal war many years ago. There are some other languages that fell by the wayside which you could use if you had a compiler for them. There are some new languages you can use, but the tools are not as mature and there are not as many targets as one would like. It is very unlikely that they will unseat C. C is just the right amount of power/freedom: low enough and high enough.
Java is an interpreted language and (like all interpreted languages) produces an intermediate code that is not directly executable by the processor. So what you would send to the embedded device is bytecode, and you would need a JVM running on it to interpret your code. Clearly that is often not feasible. As for compiled languages (C, C++...), you are right that in the end you send machine code to the device. However, consider that using high-level features of a language will produce much more machine code than you would expect. If you use polymorphism, for example, you have just a function call, but when you compile, the machine code explodes. Consider also that very often the use of dynamic memory (malloc, new...) is not feasible on an embedded device.

tiny garbage collector in C for embedded devices

Is there some open source tiny GC implementation (preferably as one C source file)?
Google search provides tinygc.sourceforge.net :)
I've got some prototype code that might give you a head start. If all your pointers are "managed" through your interface, you can chop up a heap in any convenient way and use the classic algorithms from 70s dissertations. My adventures with a postscript garbage collector began here.
On reading through it again, the code may not be what you're looking for. It's designed to run on top of an OS. In particular, it uses relative integer locations as much as possible to allow the entire memory space to be moved by the OS if needed for a reallocation. I imagine you don't need to do that (although it guarantees that internal relocations are ok, too). But the code should show that a garbage collector doesn't have to be horribly complicated. It's just a tree traversal. It's futzing with some bits and following some pointers. Keep it simple. You can do it.
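To give a feel for how small such a collector can be, here is a bare-bones mark-and-sweep sketch over a fixed pool of objects with two outgoing references each. Every name and size is illustrative, and this is not the prototype code mentioned above.

#include <stdbool.h>
#include <stddef.h>

#define MAX_OBJECTS 64

typedef struct Object {
    bool marked;
    bool in_use;
    struct Object *ref[2];   /* outgoing references from this object */
} Object;

static Object  heap[MAX_OBJECTS];
static Object *roots[8];
static size_t  root_count;

static Object *gc_alloc(void)
{
    for (size_t i = 0; i < MAX_OBJECTS; i++)
        if (!heap[i].in_use) {
            heap[i].in_use = true;
            heap[i].marked = false;
            heap[i].ref[0] = heap[i].ref[1] = NULL;
            return &heap[i];
        }
    return NULL;   /* a real collector would run gc() here and retry */
}

static void mark(Object *obj)
{
    if (obj == NULL || obj->marked)
        return;
    obj->marked = true;     /* reachable: keep it */
    mark(obj->ref[0]);      /* plain tree/graph traversal */
    mark(obj->ref[1]);
}

static void gc(void)
{
    for (size_t i = 0; i < root_count; i++)    /* mark phase */
        mark(roots[i]);

    for (size_t i = 0; i < MAX_OBJECTS; i++) { /* sweep phase */
        if (heap[i].in_use && !heap[i].marked)
            heap[i].in_use = false;            /* unreachable: reclaim */
        heap[i].marked = false;                /* reset for the next cycle */
    }
}

int main(void)
{
    Object *a = gc_alloc();      /* reachable: registered as a root */
    a->ref[0] = gc_alloc();      /* reachable through a */
    roots[root_count++] = a;

    gc_alloc();                  /* garbage: nothing points at it */
    gc();                        /* the sweep reclaims that last object */
    return 0;
}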

Getting into Embedded [closed]

I'm trying to familiarize myself with the embedded field, but also have limited resources in terms of time and equipment to buy.
What's a good language to wrap my head around embedded, without investing too much time learning an embedded-specific language? I'm most familiar with PHP, Java, and ActionScript, but unfortunately know very little C. I remember reading somewhere that someone used Perl to program embedded systems, but I'm not sure if that's really possible.
Can learning be done without needing to buy chips, etc. via simulators or such?
Can someone recommend a simplified roadmap to show how one would get started? I'm a little unsure where to even start.
You need to know C (but then, every programmer needs to know C!).
Most of these platforms have a simulator/emulator, but since the point is to learn real applications and real problems (which are all to do with real-world timing issues), you want a real board.
You probably also want an oscilloscope (a very cheap n'th hand slow analogue scope will do) and have some idea how to use it.
The easiest way in is probably the Arduino; perhaps more professional, but a little harder, is the TI LaunchPad MSP430.
There are a few embedded programming lessons that carry over from one platform and style to another, but it is really a broad field. Different processors can require very different tactics, and different applications can dictate both different firmware design tactics and different microcontrollers. Here's some stuff to get you started....
msp430
Texas Instruments has several very inexpensive USB development kits which they call EZ430 and which are based on their MSP430 family of microcontrollers. The simplest one has an MSP430F2013, which has 2K of flash program space, 3x128 bytes of usable user flash (another 128-byte page exists, but it's special), 128 bytes of RAM (yes, 128 bytes, but it's enough for lots of things), and 16 CPU registers (some of these are special-purpose, like the stack pointer, instruction pointer, status register, and maybe one or two more). MSP430s also have several memory-mapped special function registers which are used for configuring and controlling the built-in peripherals. MSP430s are von Neumann processors, so everything lives within one address space. These cost about $20 US for both the programmer and a removable tab (PC board) containing the MSP430F2013. For about $10 US you can get 3 replacement tabs with the MSP430F2012, which is pin-compatible with the 2013 (mostly) and has a few different peripherals. These tabs have an LED, a button, and several large vias (holes in the PC board) which are connected to the pins of the processor. These vias are easy to solder wires into even if you have never soldered before -- due to capillary action the vias just suck the molten solder up, and while it's hot you can just jab the end of your wire in there.
They also have a couple of similar kits with 802.15.4 radios. Even if you aren't interested in the radio you may still be interested in these, because their programmer also has a UART pulled over from the removable tab and is compatible with the tabs used on the other kits mentioned above. These kits also contain at least one extra programmable board and a battery pack for it. (One of these kits may contain more, but I don't have mine with me right now, and I'm not going to look it up.)
They also have a kit that has a programmable watch as the target platform. I've never had one of these, but they have a display, accelerometers, and several other cool things, but this may overwhelm you for your first project. I'd suggest one of the previous kits to get you started with MSP430s.
You can get free C compilers and development environments for MSP430s in the form of IAR's Embedded Workbench Kickstart IDE (limited to 4 KB of program space), Code Composer Studio (also limited in program size, but with a higher limit, I think), and gcc/gdb for the MSP430. IAR's Kickstart is pretty easy to get started with quickly, though it's not perfect. You may find that you have to shut it down, unplug your USB EZ430, restart IAR, and plug back in to get it going again. Or maybe some different order will work better for you.
TI also provides many examples in badly named files (all of their downloadable files go out of their way to be badly named). Be warned -- similar MSP430s may have different device control register interfaces for similar peripherals, which can be confusing. Make sure that any document or example you are reading really does apply to the microcontroller you are using.
other small systems
There are many many other processors families and kits that you can go with, and you should probably at least know a little bit about them.
AVR -- Atmel's 8/16 bit Harvard architecture. Harvard refers to separate address spaces for code and working memory. It has 32 8 bit registers, some of which may be used in pairs as 16 bit registers. It's a very popular and pretty cool processor. Some of the smallest ones only have registers with no extra RAM, which is scary. Atmel also has an AVR32 which isn't at all the same as the AVR. Unless you make use of an existing bootloader capable of loading your new code you will need to get a JTAG unit for these.
8051 -- This is old as the hills and a pain in the butt to use until you finally understand it. It is an 8/16-bit processor, with many more limits on how you go about doing 16-bit math, and it only has 1 pair of registers which can act as a pointer. It has 3 separate address spaces (stack, global memory, and code) and lots of odd (compared to other architectures) features. The low-level stuff might not mean much to you if you are programming in C, except that very simple C operations can turn into much more code than you thought they would. You don't want to start on one of these, most likely.
Propeller -- Parallax's very interesting multi-core processor, which is very unlike other processors. It has several cores which act mostly independently and can be used to simulate peripherals or do more traditional computational tasks. I've never used one of these, though I'd like to; I've just never had a task that seemed to fit it. They have their own high-level language to program them, as well as the processor's assembly language.
larger systems
After you get out of the 8/16/24 bit processors you start to blur the lines between embedded and desktop level programming, even if it is technically embedded.
AVR32 -- There are 2 main versions of these. One is a Harvard architecture and the other is von Neumann. The von Neumann version is essentially a better ARM than ARM, but it's not as popular as ARM. As near as I can tell it was designed with "run Linux" in mind, though not tied to it in any crazy way. You used to be able to get cheap development boards for these, and code is often almost as easy to load as copying files from one PC to another, though you will probably make use of u-boot and tftp to do some work. JTAG is only needed when you mess up the boot loader. I think all of these have support for native Java acceleration. www.AVR32.org
ARM -- The most popular embedded processor. There are many versions of these. Some don't have an MMU (memory management unit) and some do. There's too much to say about them. Some versions have native Java acceleration, though I think the ARM lords don't freely tell all of the details of how to use it, so you have to find a JVM which knows how to use it. Many vendors make them, including Atmel, Freescale, Intel, and many others.
MIPS -- A very RISC processor. The RISCiest.
There are many others.
Programming styles
I could write 3 books on this, but the general rule is to make things as simple as the application lets you. An exception to this is that if you can easily make use of an operating system, you might want to use it if it simplifies your task.
The first thing you need to know when answering this question is: WHAT IS an embedded system? A GENERAL definition would be a computer system which is dedicated to a single specific purpose. This doesn't limit the type of hardware you can use; as a matter of fact, "embedded PCs" have been used for years. The QNX real-time OS has existed since the early 80s and has been used in industrial PCs for embedded applications for years. I've personally used it in control systems for steel mill X-ray thickness gauges. On the other hand, I currently use TI DSPs without any OS support, using only 256K of RAM. Another example would be the key fob for your car. Old ones used to use a PIC microcontroller from Microchip. (That's actually the company's name.)
Some people call the iPhone an embedded system, but due to the fact that you can load applications to do just about anything, I tend to say it's a palmtop computer with phone capabilities. An OLD, DUMB cell phone that is just a phone, not a PDA, is an embedded system. That's just a bit of philosophy.
As a general rule there are a handful of concepts you need to grasp for embedded systems programming, and most of them can be explored on a PC.
EDIT:
The REASON why C or C++ is recommended is that C itself was designed to do systems programming. C++ maintains all its advantages but adds capabilities for OOP. Some assembly may be required on some systems. However, a lot of chip vendors, such as TI, provide tools that basically make it possible to do your entire system in C++.
:END EDIT
A LOT of simple embedded systems look more or less like this:
while (1)  // LOOP FOREVER... There is no command prompt
{
    // Typically you want I/O to occur on a fixed "timebase."
    wait(timerTick);
    readDigitalIO(&dioStruct);
    readAnalogIO(&aioStruct);
    // Combine the current system state with the input values
    // and do some useful calculations (e.g. analog input to temperature calc).
    Process(dioStruct, aioStruct, &CurrentState);
    // This can be a serial output/audio buzzer/LEDs/motor controller
    // or whatever the system REQUIREMENTS call for.
    driveOutputs(CurrentState);
    // The watchdog timer resets your system if it gets stuck.
    petWatchDogTimer();
}
There is nothing here that you can't do using a PC (well, a PC that still has a parallel port anyway, which is more or less just a DIO port). On a simple system without an OS this might be all there is. On an RTOS-based system you may have several tasks that all look somewhat similar to this, but pass data back and forth between the tasks.
The interesting parts come when you have to interface to the hardware on your own; the first job I had out of college was writing a device driver for a data acquisition board under QNX.
On to the basic concepts of dealing with hardware, or device drivers (which you can experiment with by hacking Linux device driver code, which is freely available): most hardware looks to the programmer like just another memory address. This is called "memory-mapped I/O." What does this mean? Let's use a serial port as an example:
// Serial port registers definition:
typedef struct
{
    unsigned int control;  // Control bits for the port.
    unsigned int baudDiv;  // Baud rate divider.
    unsigned int status;   // READ: status bits / WRITE: resets fifos.
    char TXdata;           // The head of the hardware TX fifo.
    char RXdata;           // The tail of the hardware RX fifo.
} serRegs;

// Using the volatile keyword to indicate the hardware can change the value
// independently from the software.
volatile serRegs *Ser1 = (serRegs *)0x8000; // Hardware exists at a specific location in memory.
volatile serRegs *Ser2 = (serRegs *)0x8010; // Hardware exists at a specific location in memory.

// Bits 15-12 enable interrupts and select the interrupt vector,
// bits 11-8 enable, bits 7-4 parity, bits 3-0 stop bits.
Ser1->status  = 1;                 // Reset fifos.
Ser1->baudDiv = CLOCKVALUE / 9600; // Set the baud rate to 9600.
Ser1->control = 0x1801;            // Enable, 8 data, no parity, 1 stop bit.

// Write out an "OK\r\n" message. (Normally this would be a loop.)
Ser1->TXdata = 'O';  // First byte in fifo; transmission starts.
Ser1->TXdata = 'K';  // Second byte in fifo, still transmitting first byte.
Ser1->TXdata = '\r'; // Third byte in fifo, still transmitting first byte.
Ser1->TXdata = '\n'; // Fourth byte in fifo, still transmitting first byte.
Normally you would have a function or an interrupt handler to handle TXing the data, but for this example I wanted to point out that the hardware keeps working while the software keeps going. Basically hardware works like this: I write a value to an address and "STUFF" happens independently of the software. This is perhaps one of the key concepts of embedded programming: how to make the computer effect a change in the real world.
EDIT:
If you actually want to get a cheap board, the current trend from microcontroller vendors is to put a dev kit on a USB thumb stick. This page has info on several, ranging from 8 bits up to ARM architectures: http://dev.emcelettronica.com/microcontrollers-usb-stick-tool
The Cypress PSoC was one of the first to do this with the "FirstTouch Starter Kit." The PSoC is a very unique part in that it has a microcontroller and "configurable analog and digital blocks" that allow you to plop down an ADC, serial port, or digital I/O using a GUI, and it automatically configures your C app to use it. PSoCs are also available in DIP packages, which makes them easy to use on a prototyper's breadboard.
Picture your embedded controller sitting in a switched-off circuit...
The Vcc power is applied and the reset circuit asserts reset signal.
Clocks have reached running speeds and voltages stabilized, so reset is de-asserted.
Now your controller sets its instruction pointer to the "reset vector," which is physical address 0xE0000000 on this particular chip. The controller fetches the instruction at that location.
Interrupts are disabled, and the first order of business is to initialize registers such as the stack pointer. On some chips, there are flags bits (e.g., x86 direction flag) which need to be cleared or set.
Once the registers and flag bits are set up correctly, it becomes possible for interrupt service routines to run. By now we must have run code up to about location 0xE0000072 when we get to the code which enables interrupts, first by toggling some GPIO pins to the external interrupt controller, then by enabling the CPU interrupt mask.
At this point, the equivalent of "device drivers" are running in the form of interrupt service routines. Assuming the C environment has a library which matches the interfaces of these routines' data structures, by now our boot-loader code can jump to the main() function of some C object code.
In other words, the code which brought us from power-on to main(), and which handles the low-level I/O, is written in the assembler peculiar to the chip you choose. This means that if you want to be versatile at embedded programming, you must know how to implement assembly code starting at the reset vector.
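For flavour, here is a rough C sketch of that power-on-to-main() path as it might look on a Cortex-M-style part. The symbol names (_estack, _sdata, and so on) follow common linker-script conventions and are assumptions here, as is the vector-table layout; it is not the chip described above.

#include <stdint.h>

extern uint32_t _estack;            /* top of RAM, defined by the linker script */
extern uint32_t _sidata, _sdata, _edata, _sbss, _ebss;
extern int main(void);

void Reset_Handler(void)
{
    /* Copy initialised data from flash to RAM. */
    uint32_t *src = &_sidata, *dst = &_sdata;
    while (dst < &_edata)
        *dst++ = *src++;

    /* Zero the .bss section. */
    for (dst = &_sbss; dst < &_ebss; dst++)
        *dst = 0;

    /* Clock, watchdog and interrupt-controller setup would go here. */

    main();
    for (;;) ;                      /* main() should never return */
}

/* The table the hardware reads at reset: first the initial stack pointer,
   then the address of the first instruction to execute. */
__attribute__((section(".isr_vector")))
const void *vector_table[] = {
    &_estack,
    (void *)Reset_Handler,
};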
The reality is that hobbyist embedded programming doesn't allow time for implementing all the ISRs and the boot-loader code. For this reason, many people use standard software frameworks available for specific chips. Others use custom-language chips such as the BASICstamp. The BASICstamp is an embedded chip which hosts a BASIC language interpreter on-board. The interpreter and all the ISRs are pre-written for you. The BASIC environment gives you the ability to control I/O pins, read voltages, everything you could do from assembly with an embedded controller, but a bit slower.
As for the language, C is probably the most important language to know. From Java you should be able to adapt, but just remember that a lot of high level Java will not be available to you. Loads of textbooks out there but I'd recommend the original C programming book by Kernighan and Ritchie http://en.wikipedia.org/wiki/The_C_Programming_Language_(book)
For a good introduction to embedded C you could try a book by Michael J Pont:
http://www.amazon.com/Embedded-C-Michael-J-Pont/dp/020179523X
As for the embedded side of things, you could start with Microchip: the IDE is OK to develop in, with a reasonable simulator, and the C compilers (the C18 and C30 compilers) are free in slightly limited student editions. The IDE installer will also ask if you want to install a third-party HI-TECH C compiler, which you could use. As for the processor, I'd recommend selecting a standard 18-series PIC such as the PIC18F4520.
http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en019469&part=SW007002
Whatever the chip manufacturer, you have to get to know the datasheets. You don't have to learn it all at once but will need it to hand!
Embedded, like most programming, tends to revolve around:
1) Initialising a resource; in this case, rather than the data coming from a data store, it comes from processor registers. Just include the processor header file (.h) and it will allow you to access these as ports (usually bytes) or pins (bits). Microprocessors also come with useful resources on the chip, such as timers, analogue-to-digital converters (ADCs) and serial communications systems (UARTs). Remember that the chip itself is a resource and needs initialising before anything else.
2) Using the resource. C will let you make data as global as possible, and everything can access everything at any time! Avoid this temptation and keep it modular, as Java will have encouraged you to (though for speed you may need to be a little looser with these rules).
But embedded systems do have an extra weapon called interrupts, which can be used to provide real-time behaviour. These can be thought of a bit like OnClick() events. Interrupts can be generated by external events (e.g. buttons or receiving a byte from another device) and internal ones (timers, transmissions completed, ADC conversions completed). Keep interrupt service routines (ISRs) short and sweet; use them to handle real-time events (e.g. take a received byte and store it in a buffer, then raise a flag), but let background code deal with it (e.g. check a received-byte flag, and if set, read the byte). And remember the all-important volatile for variables shared between ISRs and background routines!
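A minimal sketch of that ISR-plus-flag pattern is below. The register address is made up and the ISR hookup is omitted; real addresses and vector names come from your chip vendor's header.

#include <stdbool.h>
#include <stdint.h>

#define UART_DATA_REG (*(volatile uint8_t *)0x4000u)  /* hypothetical register */

static volatile bool    rx_flag;   /* volatile: shared with the ISR */
static volatile uint8_t rx_byte;

/* In a real project this would be registered in the interrupt vector table. */
void uart_rx_isr(void)             /* keep it short and sweet */
{
    rx_byte = UART_DATA_REG;       /* grab the byte ... */
    rx_flag = true;                /* ... and just raise a flag */
}

static void handle_received_byte(uint8_t b)
{
    (void)b;                       /* echo it, buffer it, parse it ... */
}

int main(void)
{
    for (;;) {                     /* background loop does the real work */
        if (rx_flag) {
            rx_flag = false;
            handle_received_byte(rx_byte);
        }
        /* other background work here */
    }
}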
Anyway, read around, I recommend www.ganssle.com for advice in general.
Good luck!
The scope of embedded computing has grown very broad, so the answers somewhat depend on what kind of device you're aiming at. On one end, there are 8-bit controllers with only a few KB of memory, usually programmed entirely in assembly or C. On the other end, processors such as those in your router are fairly powerful (200 MHz and a few MB of RAM is not uncommon) and often run an OS like Linux, which means you can use pretty much any language, though C and Java are the most common.
It's best to buy a real chip and experiment. Most of the work involved is usually in getting to know a device and how to interface with it, so using a simulator kind of defeats the purpose.
What's a good language to wrap my head around embedded, without investing too much time leaning an embedded-specific language?
As everyone will suggest: C. Depending on how deep you are going to dig into your platform of choice, you may also need some assembly, but don't be scared about that: typically you'll use just a little.
If you are learning C, my personal suggestion is: work as you would in assembly; the language won't give you many abstractions, so think in terms of memory management. When you've learnt how to do that, move up towards abstractions and live happily.
C++ is also popular on embedded platforms, but IMHO it is difficult unless you know C well enough to understand what's under the hood of its abstractions.
When you feel confident with C/C++ you can start to mess with embedded operating systems. You'll notice that they can be totally different from your OS of choice (for example, not all operating systems have a C standard library, processes, or a split between userspace and kernelspace).
You'll learn how to build a cross compiler, how to mess with linker scripts, the tricks of binary formats and a lot of cool stuff.
From the theoretical point of view there's a lot of stuff as well: if you study computer science you can get a master's degree in embedded systems.
Can learning be done without needing to buy chips, etc. via simulators or such?
Yes: many operating systems can be run on simulators like qemu.
Can someone recommend a simplified roadmap to show how one would get started? I'm a little unsure where to even start.
Try to get a simple operating system which can be run on emulators, hack it and follow your curiosity. Don't be scared of messing with knotty code.
1) Most of the time, for most (and usually lower-end) embedded systems, you need to know C.
I would still recommend getting a vanilla development board to get yourself familiar with the workflow and the tricky parts of working with embedded systems, like debugging and cross-compiling. You will run into trouble if you rely only on an emulator.
You can try out the Linux Stamp; it is not expensive and is good for a beginner, but you do need some prior knowledge of Linux.
2) For high-end embedded systems, a good example is a smartphone from HTC (CPU speeds can reach 1 GHz) or some other Android phone; it runs fast and you can even code Java on it.
C and the assembly specific to your chip.
No, you really need a real chip. Simulators aren't the real thing. You need to be able to deal with keypress jitter, voltage funkiness, etc.
The Arduino is the current fad for embedded hobbyists. I'm not a big fan of Harvard architecture, personally. But you will find oodles of help out there for it. I use an XCore for my thesis work and I have found it super easy to program multicore stuff. I would suggest starting with an AVR32 and going from there.
As everyone else is saying, you need to know C.
Have a look at AVR butterfly for a cheap development board.
Smileymicros have a simple kit with dev board and book:
http://www.smileymicros.com/index.php?module=pagemaster&PAGE_user_op=view_page&PAGE_id=41
The soon to ship Raspberry Pi board looks like an incredibly cheap way to get into this field.
Yup, Arduino would be the way to go. Agreed. It's cheap (about $20 to start) and has a great API to get started with high-level functions. C is a must though; you can't avoid it. But if you can program in other languages you'll be all good.
My recommendation is to start shopping at http://www.sparkfun.com lots of examples to work from and helpful hints of what devices to buy.

What specific examples are there of knowing C making you a better high level programmer?

I know about the existence of questions such as this one and this one. Let me explain.
After reading Joel's article Back to Basics and seeing many similar questions on SO, I've begun to wonder what specific examples there are of situations where knowing stuff like C can make you a better high-level programmer.
What I want to know is whether there are many examples of this. Many times, the answer to this question is something like "Knowing C gives you a better feel for what's happening under the covers" or "You need a solid foundation for your program", and these answers don't carry much meaning. I want to understand the different specific ways in which you will benefit from knowing low-level concepts.
Joel gave a couple of examples: Binary databases vs XML, and strings. But two examples don't really justify learning C and/or Assembly. So my question is this: What specific examples are there of knowing C making you a better high level programmer?
My experience with teaching students and working with people who only studied high-level languages is that they tend to think at a certain high level of abstraction, and they assume that "everything comes for free". They can become very competent programmers, but eventually they have to deal with some code that has performance issues and then it comes to bite them.
When you work a lot with C, you do think about memory allocation. You often think about memory layout (and cache locality if that's an issue). You understand how and why certain graphics operations just cost a lot. How efficient or inefficient certain socket behaviors are. How buffers work, etc. I feel that using the abstractions in a higher level language when you do know how it is implemented below the covers sometimes gives you "that extra secret sauce" when thinking about performance.
For example, Java has a garbage collector and you can't assign things to memory directly. And yet, you can make certain design choices (e.g., with custom data structures) that affect performance, for the same reasons this would be an issue in C.
Also, and more generally, I feel that it is important for a power programmer to not only know big-O notation (which most schools teach), but that in real-life applications the constant is also important (which schools try to ignore). My anecdotal experience is that people with skills in both language levels tend to have a better understanding of the constant, perhaps because of what I described above.
In addition, many higher-level systems that I have seen interface with lower-level libraries and infrastructures. For instance, some communications, database, or graphics libraries; some drivers for certain devices, etc. If you are a power programmer, you may eventually have to venture out there, and it helps to at least have an idea of what is going on.
Knowing low level stuff can help a lot.
To become a racing driver, you have to learn and understand the basic physics of how tyres grip the road. Anyone can learn to drive pretty fast, but you need a good understanding of the "low level" stuff (forces and friction, racing lines, fine throttle and brake control, etc) to get those last few percent of performance that will allow you to win the race.
For example, if you understand how the CPU architecture works in your computer, you can write code that works better with it (e.g. if you know you have a certain CPU cache size or a certain number of bytes in each CPU cache line, you can arrange your data structures and the way that you access them to make the best use of the cache - for example, processing many elements of an array in order is often faster than processing random elements, due to the CPU cache). If you have a multi-core computer, then understanding how low level techniques like threading work can gave huge benefits (just as not understanding the low level can lead to disaster in threading).
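As a rough, self-contained demonstration of the cache-locality point above (the matrix size is arbitrary, and timings will vary by machine), compare summing a large matrix row by row against column by column:

#include <stdio.h>
#include <time.h>

#define N 2048

static double grid[N][N];

int main(void)
{
    double sum = 0.0;
    clock_t t0, t1;

    t0 = clock();
    for (int i = 0; i < N; i++)      /* row-major: walks memory sequentially */
        for (int j = 0; j < N; j++)
            sum += grid[i][j];
    t1 = clock();
    printf("row-major:    %.3fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int j = 0; j < N; j++)      /* column-major: strided, cache-hostile */
        for (int i = 0; i < N; i++)
            sum += grid[i][j];
    t1 = clock();
    printf("column-major: %.3fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    printf("sum = %f\n", sum);       /* keeps the compiler from eliding the loops */
    return 0;
}

Both loops do the same arithmetic; the only difference is the order in which memory is touched, which is exactly the kind of thing a purely high-level view hides.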
If you understand how Disk I/O and caching works, you can modify file operations to work well with it (e.g. if you read from one file and write to another, working on large batches of data in RAM can help reduce I/O contention between the reading and writing phases of your code, and vastly improve throughput)
If you understand how virtual functions work, you can design high-level code that uses virtual functions well. If used incorrectly they can severely hamper performance.
If you understand how drawing is handled, you can use clever tricks to improve drawing speed. e.g. You can draw a chessboard by alternately drawing 64 white and black squares. But it is often faster to draw 32 white sqares and then 32 black ones (because you only have to change the drawing colour twice instead of 64 times). But you can actually draw the whole board black, then XOR 4 stripes across the board and 4 stripes down the board in white, and this can be much faster still (2 colour changes, and only 9 rectangles to draw instead of 64). This chessboard trick teaches you a very important programming skill: Lateral thinking. By designing your algorithm well, you can often make a big difference to how well your program operates.
Understanding C, or for that matter, any low level programming language, gives you an opportunity to understand things like memory usage (i.e. why is it a bad thing to create several million heavy objects), how pointers/object references work, etc.
The problem is that as we've created ever increasing levels of abstraction, we find ourselves doing a lot of 'lego block' programming, without understanding how the legos actually function. And by having almost infinite resources, we start treating memory and resources like water, and tend to solve problems by throwing more iron at the situation.
While not limited to C, there's a tremendous benefit to working at a low level with much smaller, memory constrained systems like the Arduino or old-school 8-bit processors. It lets you experience close to the metal coding in a much more approachable package, and after spending time squeezing apps into 512K, you will find yourself applying these skills at a larger level within your day to day programming.
So the language itself is not important, but having a deeper appreciation for how all of the bits come together, and how to work effectively at a level closer to the hardware is a set of skills beneficial to any software developer.
For one, knowing C helps you understand how memory works in the OS and in other high-level languages. When your C# or Java program balloons in memory usage, understanding that references (which are basically just pointers) take memory too, and understanding how many of the data structures are implemented (which you get from making your own in C), helps you realise that your dictionary is reserving huge amounts of memory that aren't actually used.
For another, knowing C can help you understand how to make use of lower level operating system features. You don't need this often, but sometimes you may need memory mapped files, or to use marshalling in C#, and C will greatly help understand what you're doing when that happens.
I think C has also helped my understanding of network protocols, but I can't put my finger on specific examples. I was reading another SO question the other day where someone was complaining about how C's bit-fields are 'basically useless' and I was thinking how elegantly C bit fields represent low-level network protocols. High level languages dealing with structures of bits always end up a mess!
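As a sketch of what that looks like, here is a bit-field description of the start of an IPv4-style header. Keep in mind that bit-field ordering is implementation-defined, so portable protocol code usually shifts and masks explicitly; this is for illustration only.

#include <stdio.h>

/* First 32 bits of an IPv4-style header described with bit-fields. */
struct ipv4_header_start {
    unsigned int version : 4;        /* IP version, 4 for IPv4 */
    unsigned int ihl     : 4;        /* header length in 32-bit words */
    unsigned int dscp    : 6;
    unsigned int ecn     : 2;
    unsigned int total_length : 16;  /* whole datagram length in bytes */
};

int main(void)
{
    struct ipv4_header_start h = { .version = 4, .ihl = 5,
                                   .dscp = 0, .ecn = 0, .total_length = 20 };
    printf("version %u, header %u words, %u bytes total\n",
           (unsigned)h.version, (unsigned)h.ihl, (unsigned)h.total_length);
    return 0;
}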
In general, the more you know, the better programmer you will be.
However, sometimes knowing another language, such as C, can make you do the wrong thing, because there might be an assumption that is not true in a higher-level language (such as Python, or PHP). For example, one might assume that finding the length of a list might be O(N) where N is the length of the list. However, this is probably not the case in many high-level language instances. In Python, for most list-like things the cost is O(1).
Knowing more about the specifics of a language will help, but knowing more in general might lead one to make incorrect assumptions.
Just "knowing" C would not make you better.
But if you understand the whole thing, how native binaries work, how the CPU works with them, and what the architecture's limitations are, you may write code that is easier on the CPU.
For example, how the L1/L2 caches affect your work, and how you should write your code to get more hits in the L1/L2 caches. When working with C/C++ and doing heavy optimization, you will have to go down to that kind of thing.
It isn't so much knowing C as it is that C is closer to the bare metal than many other languages. You need to be more aware of how to allocate/deallocate memory because you have to do it yourself. Doing it yourself helps you understand the implications of many decisions that you make.
To me any language is acceptable as long as you understand how the compiler/interpreter (basically) maps your code onto the machine. It's a bit easier to do in a language that exposes this directly, but you should be able to, with a bit of reading, figure out how memory is allocated and organized, what sort of indexing patterns are more optimal than others, what constructs are more efficient for particular applications, etc.
More important, I think, is a good understanding of operating systems, memory architectures, and algorithms. If you understand how your algorithm works, why it would be better to choose one algorithm or data structure over another (e.g., HashSet vs. List), and how your code maps onto the machine, it shouldn't matter what language you are using.
This is my experience of how I learnt and taught myself programming, specifically understanding C. This goes back to the early 1990s, so it may be a bit antique, but the passion and the drive are important:
Learn to understand the low-level principles of the computer, such as EGA/VGA programming; here's a link to the Simtel archive of the C programmer's guide to the PC.
Understand how TSRs work.
Download the whole archive of Bob Stout's snippets, a big collection of C code where each piece does one thing only; study them and understand them. On top of that, the collection of snippets strives to be portable.
Browse the International Obfuscated C Code Contest (IOCCC) online to see how C code can be abused, and understand the intricacies of the language. The worst abuse of the code is the winner! Download the archives and study them.
I loved the infamous Ponzo's C Tutorial, which helped me immensely; unfortunately, the archive is very hard to find. If anyone knows where to obtain it, please leave a comment and I will amend this answer to include the link. There is another one that I can remember - Coronado's [Generic?] C Tutorial - but my memory of that one is hazy...
Look at Dr. Dobb's Journal and the C Users Journal here - I do not know if you can still get them in print, but they were classics. I can remember the feeling of holding a printed copy in my hand and tearing off home to type in the code to see what happened!
Grab an ancient copy of Turbo C v2, which I believe you can get from borland.com, and just play with 16-bit C programming to get a feel for it and mess with pointers... sure, it is ancient and old, but playing with pointers on it is fine.
Understand and learn pointers - link here to the legacy Simtel.net - a crucial step towards achieving C guru-ship, for want of a better word. You will also find a host of downloads pertaining to the C programming language - I remember actually ordering the Simtel CD archive and looking for the C stuff...
A couple of things that you have to deal directly with in C that other languages abstract away from you include explicit memory management (malloc) and dealing directly with pointers.
My girlfriend is one semester from graduating MIT (where they mainly use Java, Scheme, and Python) with a Computer Science degree, and she is currently working at a company whose codebase is in C++. For the first few days she had a difficult time understanding all the pointers/references/etc.
On the other hand, I found moving from C++ to Java very easy, because I was never confused about pass-references-by-value vs pass-by-reference.
Similarly, in C/C++ it is much more apparent that primitives are just the compiler treating the same sets of bits in different ways, as opposed to a language like Python or Ruby where everything is an object with its own distinct properties.
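A small C illustration of that point, assuming the common case of 32-bit IEEE-754 floats: the same four bytes mean entirely different things depending on the type you view them through.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 1.0f;
    uint32_t bits;

    memcpy(&bits, &f, sizeof bits);   /* copy the raw bytes, no conversion */
    printf("float 1.0 is stored as 0x%08X\n", (unsigned)bits);   /* 0x3F800000 */

    bits += 1;                        /* nudge the bit pattern by one ... */
    memcpy(&f, &bits, sizeof f);
    printf("...and 0x%08X reads back as %.9g\n", (unsigned)bits, f);
    return 0;
}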
A simple (not entirely realistic) example to illustrate some of the advice above. Consider the seemingly harmless
while (true)
    for (Iterator iter = foo.iterator(); iter.hasNext(); )
        bar.doSomething(iter.next());
or the even higher level
while (true)
    for (Baz b : foo)
        bar.doSomething(b);
A possible problem here is that each time round the while loop a new object (the iterator) is created. If all you care about is programmer convenience, then the latter is definitely better. But if the loop has to be efficient or the machine is resource constrained then you are pretty much at the mercy of the designers of your high level language.
For example, a typical complaint for doing high-performance Java is having execution stop while garbage (such as all those allocated Iterator objects) is reclaimed. Not very good if your software is charged with tracking incoming missiles, auto-piloting a passenger jet, or just not leaving the user wondering why the GUI has stopped responding.
One possible solution (still in the higher-level language) would be to weaken the convenience of the iterator to something like
Iterator iter = new Iterator();
while (true)
    for (foo.initAlreadyAllocatedIterator(iter); iter.hasNext(); )
        bar.doSomething(iter.next());
But this would only make sense if you had some idea about memory allocation...otherwise it just looks like a nasty API. Convenience always costs somewhere, and knowing lower-level stuff can help you identify and mitigate those costs.

Run time Data and Code memory size estimate

I am working on a project, in the C programming language, to develop an application that can be ported to a number of different microcontroller platforms, such as ARM/Freescale/PIC microcontrollers. I am developing this application on Linux now, and then I will have to port it to the above-mentioned platforms.
I would like to know whether there are any tools (open source preferably) with which I can determine the "code" and data memory footprint/size before porting it to the new platform.
I have been searching on Google for this and have not found anything so far, not even for Linux.
Any help from you will be greatly appreciated.
-Vikas
For a small program, much of the size is determined by the libraries/DLLs your program depends on. Since you refer to ARM/Freescale/PIC, I assume you're dealing with compact, embedded applications where data size is measured in bytes rather than MBytes.
For your own code, size differences will be determined by:
word size (i.e. 32-bit programs tend to be a bit larger / have more data than 8-bit ones)
architecture (i.e. Intel code versus ARM, Freescale, PIC)
In your case, I expect that the PIC is the most critical part (for RAM/ROM constraints), so probably monitoring the PIC compile size during PC development is sufficient. The linker output will contain info on TEXT/DATA/BSS sizes, which you can monitor.
I generally work on embedded systems. In my work, much of the data size is known at design time (i.e. number of buffers * buffer size). For code size, I have rules of thumb on different architectures which help me to do a sanity check at design time. For instance, I define a suite of existing-code libraries for which I know performance and size numbers on each architecture. This way I know what kind of ratio I can expect at design time. If the PC program has 1 MByte of data, it won't fit in an 8-bit PIC.....
Nothing can tell you how much memory your application will need. You'll have to make some assumptions about how it will be used and try your application under different scenarios.
As you're testing, you can monitor the memory usage stats in the /proc file system or use the ps command to do the same.
The size of your text/code segment will depend on optimization level and back-end. GCC can be configured to generate that information for you.
Run-time is a little more difficult as Jeremy said. Besides his suggestion, you also might want to try gcov and/or gprof in order to analyse your program in the context of your most common use scenarios. This kind of instrumentation is focussed on complexity rather than size but at least you'll know better where to focus your memory analysis.
Your compiler can/will generate a map file. The map file will, generally speaking, have code and data size (or location ranges). There may be differences between different compilers for the different targets. And as pointed out in other posts here, your dependencies on supplied libraries will also impact overall memory usage.

Resources